Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

We’ve all been there. That moment when your monitoring dashboard shows a beautiful green circle for your Nutanix cluster, while in reality, one of the nodes is struggling. That’s exactly what happened to me recently.

When I integrated Nutanix into my infrastructure, my first instinct was to pull out Centreon. Why? Because it’s my Swiss Army knife for monitoring. But I quickly realized that the “standard” method of adding a cluster locks us into an illusion of security. We see the “whole,” but we miss the “detail.”

In this feedback report, I’ll share my experience with Nutanix Centreon monitoring and explain why you should stop monitoring your cluster solely through its Virtual IP (VIP) and switch to a granular node-by-node strategy.

Why the “default” configuration left me wanting more

When installing the Nutanix Plugin Pack on Centreon, the documentation naturally guides you toward adding a single host representing the cluster.

How the standard Nutanix Plugin Pack works

The classic method involves querying the cluster’s Virtual IP (VIP) or the IP of one of the CVMs (Controller VM). It’s simple and fast: you enter the SNMP community, apply the template, and the services appear. You then monitor global CPU usage, average storage latency, and the general status reported by Prism.
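Under the hood, those services boil down to the Centreon Nutanix SNMP plugin being run against the VIP. Here is a rough sketch of what the global check looks like from the poller's shell; the plugin path, mode name, VIP, and community below are assumptions to adapt to your Pack version:

```shell
# Hypothetical values: adapt the VIP, community, and plugin path to your setup
VIP="192.168.10.50"     # cluster Virtual IP (placeholder)
COMMUNITY="public"      # SNMP v2c community configured in Prism (placeholder)

# Global, cluster-wide check: this is the aggregated "black box" view
CMD="/usr/lib/centreon/plugins/centreon_plugins.pl \
--plugin=apps::nutanix::snmp::plugin --mode=cluster-usage \
--hostname=$VIP --snmp-version=2c --snmp-community=$COMMUNITY"
echo "$CMD"
```

Running this kind of check by hand on the poller is the quickest way to see exactly which aggregated values the VIP exposes.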

The “Black Box” problem

This is where the trouble starts. By querying only the VIP, you are actually querying an SNMP agent that aggregates data. If you have a 3-node cluster, the monitoring will tell you that the cluster-wide memory is “OK.” But what about the memory load on node #3?

This is what I call the “black box” effect. Nutanix’s Shared Nothing architecture is a strength for resilience, but it can become a blind spot for monitoring if you don’t drill down to the physical layer. For an expert, knowing the cluster is “Up” is not enough; we need to know which specific physical component requires intervention before redundancy is compromised.

Decoupling monitoring for granular visibility

To break out of this deadlock, I changed my approach: treating each node as its own entity in Centreon. Here’s how I did it.

Step 1: Setting the stage on Prism Element

Before touching Centreon, you must ensure Nutanix is ready to talk. Head to Prism Element, under the SNMP settings. Here, I configured SNMP v2c access (or v3 if you want stronger security).

Check out my dedicated articles if you need details on how to configure SNMP v2c or SNMP v3 on your Nutanix cluster.

Step 2: The “Node by Node” addition strategy in Centreon

This is where the magic happens. Instead of creating a single “Cluster-Nutanix” host, I created as many hosts as I have physical nodes (e.g., cluster-2170_n1, cluster-2170_n2, etc.).

Host Configuration: Each host points to the cluster’s VIP address or the specific node’s CVM IP. By default, this will pull the same global information, but stay tuned.

Applying Templates: I apply the Virt-Nutanix-Hypervisor-Snmp-Custom template.

Surgical Filtering: This is the key step. In the “Host check options,” I set the custom FILTERNAME macro to the exact name of the node to monitor. The plugin then filters the SNMP data returned by the VIP to keep only what concerns my specific node.
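Concretely, the macro ends up being passed to the plugin as a filter. A sketch of the resulting call (the mode and option names may vary with your Pack version; the VIP, community, and node name are placeholders):

```shell
VIP="192.168.10.50"        # still the cluster VIP (placeholder)
COMMUNITY="public"         # SNMP v2c community (placeholder)
NODE="cluster-2170_n1"     # the value set in the FILTERNAME macro

# Same plugin as the global check, but filtered down to a single node
CMD="/usr/lib/centreon/plugins/centreon_plugins.pl \
--plugin=apps::nutanix::snmp::plugin --mode=hypervisor-usage \
--hostname=$VIP --snmp-version=2c --snmp-community=$COMMUNITY \
--filter-name=$NODE"
echo "$CMD"
```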

Step 3: The trick to maintaining Cluster consistency

To keep an overview, I use Host Groups in Centreon. I created a group named HG-Cluster-Nutanix-Prod containing my 3 nodes. This allows me to create aggregated dashboards while keeping the “drill-down” capability (clicking to see details) for each physical machine.
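If you have many nodes, declaring the hosts one by one in the UI gets tedious. Centreon’s CLAPI can script the whole thing; the sketch below only prints the commands it would run, and the credentials, host names, and exact CLAPI field layout are assumptions to check against your Centreon version:

```shell
CLAPI="centreon -u admin -p 'MyPassword'"   # hypothetical credentials
VIP="192.168.10.50"                         # cluster VIP (placeholder)

for i in 1 2 3; do
  NAME="cluster-2170_n${i}"
  # Create the host with the Nutanix template, the poller, and the host group
  echo "$CLAPI -o HOST -a ADD -v '$NAME;$NAME;$VIP;Virt-Nutanix-Hypervisor-Snmp-Custom;Central;HG-Cluster-Nutanix-Prod'"
  # Point the FILTERNAME macro at this specific node
  echo "$CLAPI -o HOST -a setmacro -v '$NAME;FILTERNAME;$NAME'"
done
```

As with any Centreon change, remember to export the configuration to the pollers afterwards.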

Immediate benefits: Dashboarding and Peace of Mind

Since I switched to this configuration, my daily life as a sysadmin has radically changed:

Granular performance analysis: I can now spot a node consuming abnormally high RAM or CPU compared to its neighbors. It’s the perfect tool for detecting a hot spot or a VM distribution issue.

Increased responsiveness: When something goes wrong, Centreon sends me an alert with the specific node name (n1, n2, etc.). No more guessing games in Prism Element to find out where to focus my search.

Clean history: I have metric graphs per physical server, which greatly facilitates Capacity Planning and troubleshooting.

Conclusion

If you manage Nutanix, don’t settle for the superficial view offered by the VIP alone. By taking 10 minutes to declare your hosts individually in Centreon with the FILTERNAME macro, you move from “passive” monitoring to a true control tower.

My verdict is clear: node-level monitoring is the only way to guarantee true high availability and sleep soundly at night.


In the previous blog post, I explained how to monitor your Nutanix cluster with Centreon using SNMP v2c.

In this new blog post, I’ll explain how to monitor your Nutanix cluster with Centreon using SNMP v3.

Prerequisites

There are a few prerequisites you must meet to add your Nutanix cluster to the Centreon solution. Here’s a list of what you need:

  • A Nutanix cluster with admin access to the web interface
  • A running Centreon server with the Nutanix connector installed
  • SSH access to the Centreon VM
  • SNMP flows (161/UDP) open in the firewall

Configuring SNMP v3 on the Nutanix Cluster

To configure SNMP on your Nutanix cluster, start by connecting to the Prism Element and then going to “Settings > SNMP”. Check “Enable SNMP” and click “+ New Transport” to add port 161 in UDP:

Then, in “Users,” click “New User” and enter a username, a privacy key (AES), and an authentication key (SHA):

In my case, I entered the following values because this is for lab purposes only; in production, I recommend much stronger secrets:

  • Username: snmp-centreon
  • Priv Key: snmp-priv-key
  • Auth Key: snmp-auth-key

Make a note of the Username, Priv Key, and Auth Key; we’ll need them later. The configuration is complete on the Nutanix side; now let’s move on to the Centreon configuration.

Adding a Nutanix Cluster to Centreon

To add your Nutanix cluster to Centreon, log in to your monitoring system’s web interface, go to “Configuration > Hosts” and click “Add”:

On the page that appears, there is a first block of information to fill in:

  • 1: Cluster name
  • 2: Cluster IP address
  • 3: SNMP version 3
  • 4: The Centreon server that will monitor the cluster
  • 5: The time zone associated with your cluster
  • 6: The templates you wish to add
  • 7: Check “Yes” to ensure that all services associated with the previously added templates are automatically created

On the second part of the page, there are a few things to configure, including the interval and frequency of the checks, and especially the “SNMPEXTRAOPTIONS” field.

The command line syntax to enter in SNMPEXTRAOPTIONS is:

--snmp-username='snmp-centreon' --authprotocol='SHA' --authpassphrase='snmp-auth-key' --privprotocol='AES' --privpassphrase='snmp-priv-key'

Remember to check the “Password” box to hide sensitive information:

Once all the information has been entered, confirm so that the new host is created on the server. You must then export the configuration to the pollers. To do this, click on “Pollers” in the top left corner, then on “Export configuration”:

Then click on “Export & Reload” in the small window that appears:

To check that your host has been taken into account, go to “Monitoring > Resources Status”; your first checks should start to come in:

If all goes as planned, you should have all your probes green within minutes!

Troubleshooting

If you unfortunately have a monitor that looks like this:

I recommend checking the following:

  • Open SNMP streams (port 161/UDP) in the firewall
  • Configure the AuthKey/PrivKey pair and username
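To rule Centreon out, you can also test the v3 credentials directly from the Centreon server with net-snmp. A sketch using the lab values above (the cluster IP is a placeholder; .1.3.6.1.4.1.41263 is Nutanix’s enterprise OID subtree):

```shell
CLUSTER_IP="192.168.10.50"   # placeholder: your cluster VIP or a CVM IP

# Walk the Nutanix subtree with the SNMP v3 user created in Prism
CMD="snmpwalk -v3 -l authPriv -u snmp-centreon \
-a SHA -A 'snmp-auth-key' -x AES -X 'snmp-priv-key' \
$CLUSTER_IP .1.3.6.1.4.1.41263"
echo "$CMD"
```

If this returns OID values, SNMP is healthy and the problem lies on the Centreon side; a timeout rather points to the firewall or the transport configuration.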

Continuously monitoring your cluster is the best option to ensure everything is running as you expect.

In this blog post, I’ll explain how to monitor your Nutanix cluster on Centreon using SNMP v2c.

Prerequisites

There are a few prerequisites you must meet to add your Nutanix cluster to the Centreon solution. Here’s a list of what you need:

  • A Nutanix cluster with admin access to the web interface
  • A running Centreon server with the Nutanix connector installed
  • SSH access to the Centreon VM
  • SNMP flows (161/UDP) open in the firewall

Configuring SNMP v2c on the Nutanix Cluster

To configure SNMP on your Nutanix cluster, start by connecting to the Prism Element and then going to “Settings > SNMP”. Check “Enable SNMP” and click “+ New Transport” to add port 161 in UDP:

Then, under “Traps,” click “+ New Trap Receiver” and fill in the following fields:

  • Receiver Name: The name you wish to assign to your Receiver
  • Check v2c
  • Community: Indicate the SNMP community you wish to use
  • Address: The address of your Centreon server
  • Port: 161
  • Transport protocol: UDP

Click “Save” to save the configuration.

Adding a Nutanix Cluster to Centreon

To add your Nutanix cluster to Centreon, log in to your monitoring system’s web interface, go to “Configuration > Hosts” and click “Add”:

On the page that appears, there is a first block of information to fill in:

  • 1: Cluster name
  • 2: Cluster IP address
  • 3: The community you specified on the trap receiver
  • 4: SNMP version 2c
  • 5: The Centreon server that will monitor the cluster
  • 6: The time zone associated with your cluster
  • 7: The templates you wish to add
  • 8: Check “Yes” to ensure that all services associated with the previously added templates are automatically created.

On the second part of the page, there are a few things to configure, including the interval and frequency of the checks:

Once all the information has been entered, confirm so that the new host is created on the server. You must then export the configuration to the pollers. To do this, click on “Pollers” in the top left corner, then on “Export configuration”:

Then click on “Export & Reload” in the small window that appears:

To check that your host has been taken into account, go to “Monitoring > Resources Status”; your first checks should start to come in:

If all goes as planned, you should have all your probes green within minutes!

Troubleshooting

If you unfortunately have a monitor that looks like this:

I recommend checking the following:

  • Opening SNMP streams (port 161/UDP) in the firewall
  • Configuring the Traps Receiver on the Nutanix cluster
  • Configuring the community on the Centreon server
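A quick way to isolate the problem is to query the cluster directly from the Centreon server with net-snmp. A sketch (the IP and community are placeholders; .1.3.6.1.4.1.41263 is Nutanix’s enterprise OID subtree):

```shell
CLUSTER_IP="192.168.10.50"   # placeholder: your cluster VIP
COMMUNITY="public"           # the community declared on the trap receiver

# A successful walk confirms the community and the firewall path
CMD="snmpwalk -v2c -c $COMMUNITY $CLUSTER_IP .1.3.6.1.4.1.41263"
echo "$CMD"
```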

We often need long-term ping statistics on equipment that does not support SNMP, or that sits in an environment where SNMP cannot be set up.

In this article, I explain how to install and configure Smokeping, an ideal ally for building availability statistics for this equipment. Here are the prerequisites:

  • An SSH server installed
  • A fixed IP address

First of all, start by updating the system:

sudo apt update && sudo apt upgrade

Then you need to install some dependencies:

sudo apt install rrdtool fping

And finally Smokeping:

sudo apt install smokeping

Smokeping is now installed; you can access it via the IP address of the server it is installed on.

You can jump to the Targets configuration by editing the Targets file:

sudo vi /etc/smokeping/config.d/Targets

Inside, here is the syntax to follow:

++ SEPARATOR
menu = NAME_OF_MENU
title = TITLE_OF_THE_PAGE
host = IP_ADDRESS

Replace each uppercase element with your own values.

Here is an example for monitoring my Freebox:

++ Freebox
menu = Freebox
title = Freebox
host = 192.168.1.1

Once your file is populated, you can close the editor and restart the Smokeping service:

sudo service smokeping restart

After a few minutes, the graphs will start appearing on the web interface.
