Julien DUMUR
Infrastructure in a Nutshell

In this new blog post, we’ll cover all the main Nutanix AHV CLI commands that allow you to perform some checks on your virtual machines using the command line.

All the commands in this article can be run via SSH from any CVM in the cluster.

Display the list of virtual machines

To display the list of virtual machines on the Nutanix cluster, simply run the following command:

acli vm.list

This will show all the VMs present on the cluster, excluding the CVMs:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.list
VM name VM UUID
LINUX 88699c96-11a5-49ce-9d1d-ac6dfeff913d
NTNX-192-168-84-200-PCVM-1760699089 f659d248-9ece-4aa0-bb0c-22a3b3abbe12
vm_test 9439094a-7b6b-48ca-9821-a01310763886

As you can see, I only have three virtual machines on my cluster:

  • My Prism Central
  • A newly deployed “LINUX” virtual machine
  • A test virtual machine
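
If you only need the entry for a specific VM, you can filter this output with grep, which is available on any CVM. A quick sketch using my "LINUX" VM:

acli vm.list | grep LINUX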

A handy command to quickly retrieve all virtual machines and their respective UUIDs. Now let’s see how to retrieve information about a specific virtual machine.

Retrieving Virtual Machine Information

To display detailed information about a virtual machine, use the following command:

acli vm.get VM_NAME

Using the example of my “LINUX” virtual machine, this returns the following information:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.get LINUX
LINUX {
config {
agent_vm: False
allow_live_migrate: True
apc_config {
apc_enabled: False
}
bios_uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"
boot {
boot_device_order: "kCdrom"
boot_device_order: "kDisk"
boot_device_order: "kNetwork"
hardware_virtualization: False
secure_boot: False
uefi_boot: True
}
cpu_hotplug_enabled: True
cpu_passthrough: False
disable_branding: False
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "fae2ee55-8736-4f3a-9b2c-7d5f5770bf33"
empty: True
iso_type: "kOther"
}
disk_list {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: "f9a8a84c-6937-4d01-bfd2-080271c44916"
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: "215ba83c-44cb-4c41-bddc-1aa3a44d41c7"
vmdisk_size: 42949672960
vmdisk_uuid: "42a18a62-861a-497a-9d73-e959513ce709"
}
generation_uuid: "9c018794-a71a-45ae-aeca-d61c5dd6d11a"
gpu_console: False
hwclock_timezone: "UTC"
machine_type: "pc"
memory_mb: 8192
memory_overcommit: False
name: "LINUX"
ngt_enable_script_exec: False
ngt_fail_on_script_failure: False
nic_list {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
power_state_mechanism: "kHard"
scsi_controller_enabled: True
vcpu_hard_pin: False
vga_console: True
vm_type: "kGuestVM"
vtpm_config {
is_enabled: False
}
}
is_ngt_ipless_reserved_sp_ready: True
is_rf1_vm: False
logical_timestamp: 1
state: "kOff"
uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"

As you can see, this returns all the information about a virtual machine. If you only need part of it, more specific commands can return just that subset. Here are the ones I use most often:

acli vm.disk_get VM_NAME : to retrieve detailed information about all the disks of a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.disk_get LINUX
ide.0 {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: fae2ee55-8736-4f3a-9b2c-7d5f5770bf33
empty: True
iso_type: "kOther"
}
scsi.0 {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: f9a8a84c-6937-4d01-bfd2-080271c44916
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: 215ba83c-44cb-4c41-bddc-1aa3a44d41c7
vmdisk_size: 42949672960
vmdisk_uuid: 42a18a62-861a-497a-9d73-e959513ce709
}

acli vm.nic_get VM_NAME : to retrieve the detailed list of network cards attached to a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.nic_get LINUX
50:6b:8d:fb:a1:4c {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}

acli vm.snapshot_list VM_NAME : to retrieve the list of snapshots associated with a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.snapshot_list LINUX
Snapshot name Snapshot UUID
SNAPSHOT_BEFORE_UPGRADE e7c1e84e-7087-42fd-9e9e-2b053f0d5714
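
If you want to run one of these checks against every VM at once, you can loop over the UUIDs returned by acli vm.list. A minimal sketch in bash (it assumes, as is the case for most acli subcommands, that a VM UUID can be used in place of the VM name):

# iterate over all VM UUIDs (skip the header line, ignore blank lines)
for uuid in $(acli vm.list | awk 'NR>1 && NF {print $NF}'); do
  echo "=== $uuid ==="
  acli vm.snapshot_list $uuid
done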

You now know almost everything about verifying your virtual machines.

For the complete list of commands, I invite you to consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v7_3:man-ncli-c.html

In the next article, we’ll tackle a big task: creating virtual machines using CLI commands.

Read More

In the Maxi Best Of Nutanix CLI series, the previous two articles covered checking the network configuration of a Nutanix cluster and managing subnets.

In this new article, we’ll cover managing storage containers via CLI commands on your Nutanix clusters…

All the commands in this article must be executed from one of the cluster’s CVMs and work on a cluster running AOS 6.10+.

Check the containers

To check the status of your storage containers, the simplest command is:

ncli container list

This command will allow you to display all the information related to all the containers in your cluster.

If you want to display a specific container, you can pass its name (the simplest method) or its ID as a parameter:

ncli container list name=NAME
ncli container list id=ID

Finally, one last command to display only the usage statistics of your containers:

ncli container list-stats

Renaming a Container

To rename a storage container, it must be completely empty.

Renaming a storage container can be done using the following command:

ncli container edit name=ACTUALNAME new-name=NEWNAME

For example, renaming the default container would look like this:

ncli container edit name=default-container-21425105524428 new-name=ntnx-lab-container

WARNING: There are two containers created by default when deploying your cluster: “SelfServiceContainer” and “NutanixManagementShare”. Do not attempt to rename them!

Creating a Container

It’s also possible to create storage containers using the CLI:

ncli container create name=NAME sp-name=STORAGE-POOL-NAME

The “name” and “sp-name” parameters are the only required parameters when running the command. This will allow you to create a base container on the selected storage pool with the following parameters:

  • No data optimization mechanism
  • No restrictions/reservations
  • The default replication factor

But the container creation command can be very useful if you need to create storage containers in batches, for example, if you’re hosting multiple clients on a cluster, each with an allocated amount of storage space!

For example, to create a storage container with the following parameters:

  • Container name “client-alpha”
  • Reserved capacity: 64GB
  • Maximum capacity: 64GB
  • With real-time compression enabled

Here’s the command you would need to run:

ncli container create name=client-alpha res-capacity=64 adv-capacity=64 enable-compression=true compression-delay=0 sp-name=default-storage-pool-21425105524428

A container with these characteristics will then be created.
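
If you have many containers to create, a small shell loop over a CSV file can generate the commands for you. A minimal sketch, assuming a hypothetical file clients.csv with one "name,capacity" pair per line (adapt the storage pool name to your own cluster):

# clients.csv: one line per container, e.g. "client-alpha,64"
while IFS=, read -r name capacity; do
  ncli container create name="$name" res-capacity="$capacity" adv-capacity="$capacity" \
    enable-compression=true compression-delay=0 sp-name=default-storage-pool-21425105524428
done < clients.csv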

Modifying Container Settings

An existing container can also be modified. You can modify almost everything in terms of settings, from data optimization mechanisms to reserved/allocated sizes, replication factors, and more.

For all the settings, please refer to the official documentation (link at the bottom of the page).

Deleting a Container

Deleting a container is quite simple, but requires that all files stored within it be deleted or moved first. Deleting a container is done using the following command:

ncli container remove name=NAME

It may happen that despite deleting or moving your VM’s vdisks, the deletion is still refused. This is often due to small residual files.

You must then add the “ignore-small-files” parameter to force the deletion:

ncli container remove name=NAME ignore-small-files=true

For example:

ncli container remove name=ntnx-lab-container ignore-small-files=true

WARNING: There are two containers created by default when deploying your cluster: “SelfServiceContainer” and “NutanixManagementShare”. Do not attempt to delete them!

Official Documentation

To learn more about some of the command options presented, please consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_10:acl-ncli-container-auto-r.html

Read More

In the previous blog post on the Maxi Best Of Nutanix CLI menu, I presented you with the best commands for checking the entire network configuration of your Nutanix cluster.

In this new article, we’ll now see how CLI commands can help us create or modify networks in our Nutanix cluster…

All the commands in this article must be executed at one of the CVMs in the cluster.

Creating an Unmanaged Subnet on Nutanix AHV

To create a new unmanaged subnet (without IPAM) across the AHV cluster, the command is very simple:

acli net.create NAME vlan=VLAN_ID

Replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID

Here’s an example command that creates the “NUTANIX” network with VLAN ID 84:

acli net.create NUTANIX vlan=84

By default, the VLAN will be created on the virtual switch “vs0”, but if you want to create it on another virtual switch, you can specify it as a parameter:

acli net.create NAME vlan=VLAN_ID virtual_switch=VSWITCH

In this case, replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID
  • VSWITCH with the name of the bridge on which you want to create the subnet

Here is an example of a command that creates the “NUTANIX” network with VLAN ID 84 on the vswitch “vs0”:

acli net.create NUTANIX vlan=84 virtual_switch=vs0

You can then run the “acli net.list” command and check that your new subnet appears in the list.

Creating a Managed Subnet on Nutanix AHV

This command creates a new managed subnet (using IPAM) across the AHV cluster with basic gateway and subnet mask options.

acli net.create NAME vlan=VLAN_ID virtual_switch=vs0 ip_config=GATEWAY/MASK

Replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID
  • vs0 with the name of the bridge on which you want to create the subnet
  • GATEWAY with the IP address of the subnet’s gateway
  • MASK with the subnet mask

Here is an example of a command that creates the “NUTANIX” network with VLAN ID 84 on the vswitch “vs0”, with the gateway address “10.0.84.254” for the network “10.0.84.0/24”:

acli net.create NUTANIX vlan=84 virtual_switch=vs0 ip_config=10.0.84.254/24

Deleting an Existing Subnet

Deleting an existing subnet on a Nutanix AHV cluster is easy! Simply run the following command:

acli net.delete NAME 

Replace NAME with the name of the subnet you wish to delete. For the subnet created earlier, for example:

acli net.delete NUTANIX

Nothing could be simpler!

Bulk Subnet Creation/Deletion

To make it easier to import large quantities of subnets, I created several CSV files that I can then convert into a list of commands to create multiple subnets in batches.

Everything is on my Github: https://github.com/Exe64/NUTANIX

For unmanaged subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-unmanaged-subnets.csv

For managed subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-managed-subnets.csv

For deleting subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-subnets-delete.csv
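
If you prefer to generate the commands yourself, a small shell loop can turn a CSV file into a batch of acli net.create calls. A minimal sketch, assuming a hypothetical file subnets.csv with "name,vlan" columns and no header (not the exact format of the files above):

# subnets.csv: one line per subnet, e.g. "NUTANIX,84"
while IFS=, read -r name vlan; do
  acli net.create "$name" vlan="$vlan" virtual_switch=vs0
done < subnets.csv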

To learn more about using these files, I invite you to consult my dedicated article:

Official Documentation

Complete command documentation is available on the publisher’s official website: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_10:man-acli-c.html

Read More

Whether you need to perform specific or repetitive operations, troubleshoot, or gain a more detailed view, the CLI commands for a Nutanix cluster will be your best allies.

In this article, I offer a summary of the best commands for performing all network configuration checks on a Nutanix cluster, whether at the cluster, host, CVM, or virtual machine level.

You must have a cluster running AOS 6.10+ to execute some of the commands in this guide.

A. Using Nutanix acli commands from any CVM in the Nutanix AHV cluster

List the status of the physical interfaces of an AHV host (replace AHV_HOST_IP with the host’s IP address, here 192.168.84.11):

acli net.list_host_nic AHV_HOST_IP

Result:

List all vSwitches currently configured on the cluster:

acli net.list_virtual_switch 

Result:

You can list the configuration of a particular vSwitch by passing it as an argument to the command:

acli net.list_virtual_switch vs1

List all subnets created on the cluster:

acli net.list 

Result:

List the VMs attached to a particular subnet:

acli net.list_vms SUBNET
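
To get this view for every subnet at once, you can loop over the output of acli net.list. A quick sketch, assuming the subnet name is in the first column and contains no spaces:

for net in $(acli net.list | awk 'NR>1 && NF {print $1}'); do
  echo "=== $net ==="
  acli net.list_vms "$net"
done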

B. Via the Nutanix manage_ovs script from any CVM in the Nutanix AHV cluster

List the interface status of an AHV host:

manage_ovs show_interfaces

Result:

You can also list the interface status of all hosts in the cluster:

allssh "manage_ovs show_interfaces"

List the status of an AHV host’s uplinks (bonds):

manage_ovs show_uplinks

Result:

You can also list the uplink (bond) status of all AHV hosts in the cluster:

allssh "manage_ovs show_uplinks"

Display LLDP information of an AHV host’s interfaces:

manage_ovs show_lldp

Result:

You can also view LLDP information for the interfaces of all AHV hosts in the cluster:

allssh "manage_ovs show_lldp"

Show currently created bridges on an AHV host:

manage_ovs show_bridges

Result:

You can also view the bridges currently created on all AHV hosts in the cluster:

allssh "manage_ovs show_bridges"

Show the mapping of CVM interfaces to those of AHV hosts:

manage_ovs show_cvm_uplinks_map

Result:

You can also view the interface mapping of CVMs on all AHV hosts in the cluster:

allssh "manage_ovs show_cvm_uplinks_map"

 

C. Using the Open vSwitch command from any host in a Nutanix AHV cluster

List the existing bridges of an AHV host:

ovs-vsctl list-br

Result:

List all interfaces attached to a particular bridge of an AHV host:

ovs-vsctl list-ifaces br0

Result:

Display the configuration of an AHV host’s port bond:

ovs-vsctl list port br0-up

Result:

Display the configuration and status of a bond on an AHV host:

ovs-appctl bond/show br0-up

Result:

Display information about the status of a LACP-configured bond on an AHV host:

ovs-appctl lacp/show br1-up
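
These ovs-vsctl and ovs-appctl commands are run directly on an AHV host. If you prefer to stay on a CVM, the hostssh wrapper can execute them on every host of the cluster at once, for example:

hostssh "ovs-vsctl list-br"
hostssh "ovs-appctl bond/show br0-up"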

Many thanks to Yohan for the article idea and the helping hand!

Read More

A new version of VirtIO (1.2.5) is available, and here are the new features and bug fixes!

What’s New

The “null” QEMU driver has been replaced with a fully functional FwCfg device, allowing the collection of Windows virtual machine core dumps directly on AHV hosts.

Added a new configuration option: you can now adjust the number of I/O requests processed before retrying when the virtual queue is full, giving you greater control over I/O behavior.

Bug Fixes

Fixed an annoying issue that caused Windows virtual machines to hang after unmounting the disk. Operations will now be smooth.

Good to Know

This new version is fully compatible with all supported AOS and AHV versions.

It is available for download here: https://portal.nutanix.com/page/downloads/?product=ahv&version=1.2.5&bit=VirtIO

Older versions are still available on the Nutanix Support Portal, under Downloads > AHV > VirtIO > Other Versions.

For installation assistance, see the AHV Administration Guide.

Update now!

Read More
Nutanix Foundation on a Steamdeck

In one of my previous articles, I talked about my Nutanix Foundation installation on my Steamdeck. Unfortunately, I hadn’t yet had the opportunity to run an installation with the setup due to a lack of a server to image.

But things have changed since I recently acquired a Supermicro SuperServer 5019D-FN8TP! I also wrote an article about implementing Nutanix Foundation on unofficially supported hardware:

Hardware Preparation for Foundation

To be able to image my node with Nutanix Foundation from my Steamdeck, I absolutely needed to be connected to the network via RJ45, a type of connection missing from Valve’s console…

So I purchased an external dock with several USB ports and an RJ45 port that can be connected via USB-C.

I made the following connections:

  • Connecting the external dock to the Steamdeck’s USB-C port
  • Connecting the power supply to the dock’s USB-C port
  • Connecting the network cable to the dock’s RJ45 port
  • Connecting the external SSD (on which Windows is installed) to one of the dock’s USB ports

This allowed me to boot the Steamdeck from the SSD to start Windows 11!


Nutanix Foundation for a Node with a Steamdeck

As I mentioned in the previous article, I already have Nutanix Foundation installed, so I won’t dwell on that part and will move directly to the Foundation section!

For this Foundation, I used the latest version available, namely Foundation 5.9 with AOS 10 and AHV 7!


The Foundation process starts flawlessly as expected and completes after a while:


As you can see, imaging a node with the Steamdeck is possible! Is it useful? Not necessarily, but I can at least say that “I did it!”

Read More

Whether you’re selling the cluster to a third party or repurposing it, sometimes you need to destroy a Nutanix cluster. Here’s how to do it…

Preparing the Cluster for Destruction

Before destroying a cluster, some preparations must be made.

Among the necessary prerequisites, it is imperative that no virtual machines are still running on the cluster. Make sure to migrate, shut down, or delete (as appropriate) all the virtual machines on the cluster.
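
If some virtual machines are still powered on, you can shut them down from a CVM before going any further. A quick sketch using acli (vm.shutdown sends an ACPI shutdown to the guest, vm.off forces a power-off):

acli vm.shutdown VM_NAME
acli vm.off VM_NAME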

Note: All the commands in this article must be entered on one of the cluster’s CVMs.

Once this prerequisite is met, we begin by checking the cluster’s status:

cluster status

You should get a return like this:

nutanix@NTNX-5f832032-A-CVM:192.168.84.22:~$ cluster status
2025-07-24 07:20:18,663Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-07-24 07:20:18,666Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 4439894058604263884, cluster incarnation id: 1753169113232129
2025-07-24 07:20:18,666Z INFO MainThread zookeeper_session.py:270 cluster is attempting to connect to Zookeeper, host port list zk1:9876
2025-07-24 07:20:18,676Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x198310781ce5e38, negotiated timeout=20 secs
2025-07-24 07:20:18,678Z INFO MainThread cluster:3303 Executing action status on SVMs 192.168.84.22
2025-07-24 07:20:18,682Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x198310781ce5e38
2025-07-24 07:20:18,683Z INFO Dummy-2 zookeeper_session.py:941 Calling zookeeper_close and invalidating zhandle
The state of the cluster: start
Lockdown mode: Disabled

        CVM: 192.168.84.22 Up, ZeusLeader
                              Xmount   UP       [459073, 459235, 459236, 459311]
                           IkatProxy   UP       [458789, 458917, 458918, 458919]
                                Zeus   UP       [454133, 454189, 454190, 454191, 454201, 454218]
                           Scavenger   UP       [459084, 459296, 459297, 459298]
                    SysStatCollector   UP       [464017, 464089, 464090, 464091]
                    IkatControlPlane   UP       [464039, 464218, 464219, 464220]
                       SSLTerminator   UP       [464170, 464323, 464324]
                      SecureFileSync   UP       [464362, 464646, 464647, 464648]
                              Medusa   UP       [468604, 469223, 469224, 469395, 470062]
                  DynamicRingChanger   UP       [476814, 476897, 476898, 476920]
                              Pithos   UP       [476843, 477058, 477060, 477086]
                          InsightsDB   UP       [476918, 477131, 477132, 477155]
                              Athena   UP       [477152, 477270, 477271, 477273]
                             Mercury   UP       [513735, 513803, 513804, 513808]
                              Mantle   UP       [477391, 477551, 477552, 477562]
                          VipMonitor   UP       [485663, 485664, 485665, 485666, 485670]
                            Stargate   UP       [477857, 477995, 477996, 477997, 477998]
                InsightsDataTransfer   UP       [478768, 478929, 478930, 478934, 478935, 478936, 478937, 478938, 478939]
                             GoErgon   UP       [478834, 479020, 479021, 479039]
                             Cerebro   UP       [478950, 479138, 479139, 479306]
                             Chronos   UP       [479088, 479286, 479287, 479310]
                             Curator   UP       [479234, 479406, 479407, 483968]
                               Prism   UP       [479436, 479600, 479601, 479650, 480643, 480885]
                                Hera   UP       [479602, 479917, 479918, 479919]
                        AlertManager   UP       [479860, 480436, 480438, 480555]
                            Arithmos   UP       [480751, 481566, 481567, 481765]
                             Catalog   UP       [481670, 482699, 482700, 482701, 483502]
                           Acropolis   UP       [483575, 484493, 484494, 488301]
                              Castor   UP       [484403, 484877, 484878, 484911, 484972]
                               Uhura   UP       [484912, 485066, 485067, 485300]
                   NutanixGuestTools   UP       [485132, 485254, 485255, 485284, 485611]
                          MinervaCVM   UP       [491046, 491263, 491264, 491265]
                       ClusterConfig   UP       [491188, 491361, 491362, 491363, 491381]
                         APLOSEngine   UP       [491374, 491650, 491651, 491652]
                               APLOS   UP       [495252, 496063, 496064, 496065]
                     PlacementSolver   UP       [497033, 497330, 497331, 497332, 497341]
                               Lazan   UP       [497256, 497568, 497569, 497570]
                             Polaris   UP       [498016, 498620, 498621, 498911]
                              Delphi   UP       [498765, 499238, 499239, 499240, 499332]
                            Security   UP       [500506, 501578, 501579, 501581]
                                Flow   UP       [501478, 502168, 502169, 502171, 502178]
                             Anduril   UP       [510708, 511248, 511249, 511252, 511335]
                              Narsil   UP       [502382, 502472, 502473, 502474]
                               XTrim   UP       [502488, 502629, 502630, 502631]
                       ClusterHealth   UP       [502656, 502774, 503156, 503158, 503166, 503174, 503183, 503351, 503352, 503359, 503384, 503385, 503396, 503401, 503402, 503420, 503421, 503444, 503445, 503468, 503469, 503474, 503752, 503753, 503785, 503786, 503817, 503818, 528495, 528533, 528534, 530466, 530467, 530468, 530469, 530474, 530475, 530488, 530512, 530522, 530571, 530576, 530684, 530773, 530791, 531349, 531357]
2025-07-24 07:20:20,740Z INFO MainThread cluster:3466 Success!

Since the cluster is currently started, I first need to stop it with the following command:

cluster stop

Please note: to shut down the cluster, there must be no virtual machines running on the cluster except the CVM.

The command will shut down the cluster and associated services after you confirm the operation with “I agree” and should return something like this:

The state of the cluster: stop
Lockdown mode: Disabled

        CVM: 192.168.84.22 Up, ZeusLeader
                              Xmount   UP       [1761344, 1761418, 1761419, 1761475]
                           IkatProxy   UP       [458789, 458917, 458918, 458919]
                                Zeus   UP       [454133, 454189, 454190, 454191, 454201, 454218]
                           Scavenger   UP       [459084, 459296, 459297, 459298]
                    SysStatCollector DOWN       []
                    IkatControlPlane DOWN       []
                       SSLTerminator DOWN       []
                      SecureFileSync DOWN       []
                              Medusa DOWN       []
                  DynamicRingChanger DOWN       []
                              Pithos DOWN       []
                          InsightsDB DOWN       []
                              Athena DOWN       []
                             Mercury DOWN       []
                              Mantle DOWN       []
                          VipMonitor   UP       [485663, 485664, 485665, 485666, 485670]
                            Stargate DOWN       []
                InsightsDataTransfer DOWN       []
                             GoErgon DOWN       []
                             Cerebro DOWN       []
                             Chronos DOWN       []
                             Curator DOWN       []
                               Prism DOWN       []
                                Hera DOWN       []
                        AlertManager DOWN       []
                            Arithmos DOWN       []
                             Catalog DOWN       []
                           Acropolis DOWN       []
                              Castor DOWN       []
                               Uhura DOWN       []
                   NutanixGuestTools DOWN       []
                          MinervaCVM DOWN       []
                       ClusterConfig DOWN       []
                         APLOSEngine DOWN       []
                               APLOS DOWN       []
                     PlacementSolver DOWN       []
                               Lazan DOWN       []
                             Polaris DOWN       []
                              Delphi DOWN       []
                            Security DOWN       []
                                Flow DOWN       []
                             Anduril DOWN       []
                              Narsil DOWN       []
                               XTrim DOWN       []
                       ClusterHealth DOWN       []
2025-07-24 07:23:57,716Z INFO MainThread cluster:2194 Cluster has been stopped via 'cluster stop' command, hence stopping all services.
2025-07-24 07:23:57,716Z INFO MainThread cluster:3466 Success!

Now we can move on to destroying the cluster.

Destroying the Cluster

Destroying the cluster requires running the following command:

cluster destroy

The system will then ask you for confirmation before proceeding to delete all configurations and data:

2025-07-24 07:35:45,898Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-07-24 07:35:45,900Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 4439894058604263884, cluster incarnation id: 1753169113232129
2025-07-24 07:35:45,900Z INFO MainThread zookeeper_session.py:270 cluster is attempting to connect to Zookeeper, host port list zk1:9876
2025-07-24 07:35:45,916Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x198310781ce5e6e, negotiated timeout=20 secs
2025-07-24 07:35:45,918Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x198310781ce5e6e
2025-07-24 07:35:45,918Z INFO Dummy-2 zookeeper_session.py:941 Calling zookeeper_close and invalidating zhandle
2025-07-24 07:35:45,921Z INFO MainThread cluster:3303 Executing action destroy on SVMs 192.168.84.22
2025-07-24 07:35:45,922Z WARNING MainThread genesis_utils.py:348 Deprecated: use util.cluster.info.get_node_uuid() instead
2025-07-24 07:35:45,928Z INFO MainThread cluster:3350

***** CLUSTER NAME *****
Unnamed

This operation will completely erase all data and all metadata, and each node will no longer belong to a cluster. Do you want to proceed? (Y/[N]): Y

The cluster destruction operation will take a few minutes, during which time all remaining data will be completely erased.

Once the cluster destruction is complete, running “cluster status” again will show that the node is unconfigured and waiting for a new cluster to be created:

nutanix@NTNX-5f832032-A-CVM:192.168.84.22:~$ cluster status
2025-07-24 07:42:50,694Z CRITICAL MainThread cluster:3242 Cluster is currently unconfigured. Please create the cluster.

There you have it, your cluster is destroyed and all you have to do is recreate it.

For those who prefer to follow the procedure via video, here’s my associated YouTube video:

Read More

Nutanix X-Ray is a testing and benchmarking tool designed by Nutanix to evaluate the performance and resilience of hyperconverged infrastructures (HCI). It allows companies to simulate real-world workloads and test the robustness of their infrastructures before deploying them in production.

Why use Nutanix X-Ray?

The primary reason for using X-Ray is to evaluate performance before deployment. Indeed, before putting a hyperconverged infrastructure into production, it is essential to ensure that it meets the performance requirements defined at the start of the project.

Nutanix X-Ray addresses this first issue by offering concrete scenarios that allow you to:

  • Simulate real-world workloads, such as those in a data center or a hybrid cloud environment.
  • Measure performance in terms of IOPS, latency, and throughput.
  • Identify potential bottlenecks and areas for improvement.

Thanks to these tests, companies can validate their technological choices before investing heavily in an HCI solution.

The second reason to use X-Ray is to test a cluster’s resilience, a crucial characteristic for avoiding service interruptions. With Nutanix X-Ray, it is possible to:

  • Simulate node, disk, or network failures to see how the infrastructure reacts.
  • Test failover and recovery mechanisms.
  • Measure the time required to restore a service in the event of a failure.

These tests help ensure high availability and service continuity even in the event of a major problem once the cluster is in production.

X-Ray also allows you to compare several HCI infrastructures and choose the most appropriate one for your needs. To this end, it offers:

  • Comparative benchmarks between different solutions (e.g., Nutanix vs. VMware vSAN).
  • A neutral and impartial performance analysis.
  • A better understanding of the strengths and weaknesses of each infrastructure.

The results collected will enable companies to make the right decisions regarding the evolution or replacement of their infrastructure.

The results provided by X-Ray will also facilitate the optimization of HCI environments by:

  • Adjusting configurations to maximize performance.
  • Identifying potential improvements in storage, network, and CPU.
  • Planning infrastructure upgrades based on future needs.

The tool thus helps reduce costs and improve operational efficiency by reducing errors in sizing or technology choices.

As you can see, Nutanix X-Ray is an essential tool for any company wishing to test, compare, and optimize its HCI infrastructure before and after deployment.

In the next article, I will explain how to implement the tool on a Nutanix cluster.

The official Nutanix X-Ray documentation: https://portal.nutanix.com/page/documents/details?targetId=X-Ray-Guide-v5_3:X-Ray-Guide-v5_3

Read More

I worked with a customer who has a large number of fairly old nodes in production at their remote sites. Unfortunately, they are facing a problem performing AOS and AHV installations on them because the hardware is not officially supported by Nutanix. Having recovered a few identical nodes, I looked into the problem to find a solution…

Node Hardware Configuration

These nodes are Supermicro SuperServer 5019D-FN8TP nodes:

They are perfect for creating home labs with their 1U form factor and half-depth design, allowing them to fit into any home rack.

The hardware configuration is as follows:

  • Processor: Intel® Xeon® processor D-2146NT, 8 cores / 16 threads
  • 128GB RAM (expandable up to 512GB)
  • 1 M.2 boot disk
  • 2 1TB SSDs (2 additional SSDs can be added)
  • 4 1G RJ45 ports
  • 2 10G RJ45 ports
  • 2 10G SFP ports
  • 1 RJ45 port dedicated to IPMI
  • 1 NVidia Tesla P40 graphics card

Did I mention these nodes are perfect for home labs?

First Foundation Tests

For my first Foundation tests, I chose to start with a very old version of Foundation: 4.6.

Software-wise, I also started with an old release: AOS 5.5.9.5 with the AHV version bundled in the package. Since most of the client nodes were also running older versions, I figured it should work.

First failure… of a long series!

I tested many possible combinations with Foundation versions 4.6 / 5.0 / 5.4 / 5.9, AOS versions 5.5.9.5 / 5.6.1 / 5.20 / 6.10, AHV bundled, not bundled… and even a custom Phoenix image generated from one of the recovered nodes… And absolutely no success, often with error messages that differed depending on the combinations used.

But one message still came up more frequently than the others…

Hardware Compatibility Check

During the Foundation process, there is a step in which the Phoenix system generates a hardware configuration file for the node(s) to be imaged: hardware_config.json.

Once this file is generated, Foundation compares it to its list of known hardware to verify that the node can be imaged… And this is where my problem arises:

2025-06-17 11:55:58,642Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-17 11:55:58,942Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-17 11:56:02,383Z foundation_tools.py:334 ERROR Command '.local/bin/layout_finder.py local' returned error code 1
stdout:

stderr:
Traceback (most recent call last):
  File "/root/.local/bin/layout_finder.py", line 297, in <module>
    write_layout("hardware_config.json", 1)
  File "/root/.local/bin/layout_finder.py", line 238, in write_layout
    top = get_layout(node_position)
  File "/root/.local/bin/layout_finder.py", line 130, in get_layout
    vpd_info = vpd_info_override or get_vpd_info(system_info_override)
  File "/root/.local/bin/layout_finder.py", line 249, in get_vpd_info
    module, model, model_string, hardware_id = _find_model_match(
  File "/root/.local/bin/layout_finder.py", line 78, in _find_model_match
    raise exceptions[0]
__main__.NoMatchingModule: Raw FRU: FRU Device Description : Builtin FRU Device (ID 0)
 Chassis Type          : Other
 Chassis Part Number   : CSE-505-203B
 Chassis Serial        : C5050LH47NA0950
 Board Mfg Date        : Wed Oct 31 16:00:00 2018
 Board Mfg             : Supermicro
 Board Serial          : ZM18AS036679
 Board Part Number     : X11SDV-8C-TP8F
 Product Manufacturer  : Supermicro
 Product Name          : 
 Product Part Number   : SYS-5019D-FN8TP-1-NI22
 Product Version       : 
 Product Serial        : S348084X9211699
Product Name: SYS-5019D-FN8TP-1-NI22
Unable to match system information to layout module. Please refer KB-7138 to resolve the issue. 

Foundation is very kind to point out that there’s a KB available, as this is clearly a recurring problem!

Link to the Nutanix KB: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW

Now let’s see how to solve my problem…

FRU Modification

The Nutanix KB indicates that you must edit your hardware’s FRU to match hardware on the compatibility list.

To do this, use the SMCIPMITool utility provided by Supermicro and available here: https://www.supermicro.com/en/solutions/management-software/ipmi-utilities

Once the utility is downloaded, you need to launch it from the command line with the correct parameters:

./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fru

The parameters are as follows:

  • The IP address of your node’s IPMI interface
  • The administrator account login (ADMIN by default)
  • The associated password

The command will query the IPMI and return information about the hardware:

Getting FRU ...
Chassis Type (CT)              = Other (01h)
Chassis Part Number (CP)       = CSE-505-203B
Chassis Serial Number (CS)     = XXXXXXXXXXXXXXX
Board mfg. Date/Time (BDT)     = 2018/10/31 16:00:00 (A0 3E B7)
Board Manufacturer Name (BM)   = Supermicro
Board Product Name (BPN)       =
Board Serial Number (BS)       = XXXXXXXXXXXX
Board Part Number (BP)         = X11SDV-8C-TP8F
Board FRU File ID              =
Product Manufacturer Name (PM) = Supermicro
Product Name (PN)              =
Product PartModel Number (PPM) = SYS-5019D-FN8TP-1-NI22
Product Version (PV)           =
Product Serial Number (PS)     = XXXXXXXXXXXXXXX
Product Asset Tag (PAT)        =
Product FRU File ID            =

It is then possible to access each of the elements via different commands, for example:

SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PM "param"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PN "NONE"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PPM "param"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PV "NONE"

Obviously, replace “param” with the desired value. Now that I have a way to “lie” to the system, I need to come up with a good lie…

Looking for the lost model…

The difficulty in our case is ending up with a FRU that matches a model on the compatibility list built into Phoenix…

I tested more or less at random with similar hardware, replacing the existing PPM value with:

  • SYS-5019D-FN8TP-1-NI22 (the original one)
  • X11SDV-8C-TP8F (this is the model recognized by Nutanix on the client nodes)
  • NX-1120S-G7
  • NX-1065-G7

The first is not recognized during the Foundation process, and neither is the second. The next two are recognized, but a slightly different error message is displayed…

stderr:
Traceback (most recent call last):
  File "/root/.local/bin/layout_finder.py", line 297, in <module>
    write_layout("hardware_config.json", 1)
  File "/root/.local/bin/layout_finder.py", line 238, in write_layout
    top = get_layout(node_position)
  File "/root/.local/bin/layout_finder.py", line 146, in get_layout
    module.populate_layout(layout_api, layout_api.discovery_info, layout,
  File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout
    data_hbas = api.find_devices(pci_ids=["1000:0097"], min_=1, max_=1,
  File "/root/.local/lib/python3.9/site-packages/layout/layout_api.py", line 300, in find_devices
    raise Exception(msg)
Exception: This node is expected to have exactly 1 SAS3008. But phoenix could not find any such device
2025-06-17 12:22:11,405Z imaging_step.py:123 DEBUG Setting state of ) @c2b0> from RUNNING to FAILED
2025-06-17 12:22:11,409Z imaging_step.py:123 DEBUG Setting state of ) @ca90> from PENDING to NR
2025-06-17 12:22:11,410Z imaging_step.py:182 WARNING Skipping ) @ca90> because dependencies not met, failed tasks: [) @c2b0>]
2025-06-17 12:22:11,412Z imaging_step.py:123 DEBUG Setting state of ) @c940> from PENDING to NR
2025-06-17 12:22:11,413Z imaging_step.py:182 WARNING Skipping ) @c940> because dependencies not met
2025-06-17 12:22:11,413Z imaging_step.py:123 DEBUG Setting state of ) @c2e0> from PENDING to NR
2025-06-17 12:22:11,414Z imaging_step.py:182 WARNING Skipping ) @c2e0> because dependencies not met

The node model is recognized by the Foundation process, but the node’s hardware configuration is also checked! Therefore, finding a similar model isn’t enough; the model AND the hardware configuration must be similar…

But how do I find the right model? And then I had an idea: search the Phoenix files mounted during installation to find out which models it expects to find…

A quick SSH into the node booted on Phoenix, whose installation failed, and here I am, wandering through the system’s intricacies to find what I’m looking for…

The information about supported templates is located in the /root/.local/lib/python3.9/site-packages/layout/modules folder. How do I know this? Because the logs generated during my previous attempts indicated:

File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout

And in this module folder, there is absolutely something for everyone:

Since the nodes in question are Supermicro, I focused my research on the “smc” prefix in order to reduce the range of possibilities:

To narrow things down further, I eliminated everything related to multi-node chassis (2 and 4 nodes), which left me with only about ten candidates. Going through them in order, I immediately found the right module: smc_e300_gen11.py!
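
Rather than scanning the modules one by one, you could also search the folder directly for the board part number (assuming grep is available in the Phoenix environment, which it normally is):

grep -rl "X11SDV-8C-TP8F" /root/.local/lib/python3.9/site-packages/layout/modules/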

Inside the file, I immediately spot the same motherboard: X11SDV-8C-TP8F

It comes in two models: the SMC-E300-2, which has two drives, and the SMC-E300-4, which has four. So it’s the first one that interests me, and while searching online, I came across another Supermicro system, the SuperServer E300-9D-8CN8TP: https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E300-9D-8CN8TP.cfm

This system is extremely similar to the one I own, so I think I’ve finally found the right model! I note the important details:

  • X11SDV-8C-TP8F (board part number)
  • SMC-E300-2 (model)
  • CSE-E300 (chassis part number)

The final stretch: the custom FRU

Now that I have the missing information, I need to modify my FRU to match the model Foundation expects.

Here are the commands I ran:

./SMCIPMITool.exe ip_address ADMIN password ipmi fruw CP "CSE-E300"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PPM "SMC-E300-2"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PN "NONE"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PV "NONE"
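
You can then re-run the read command shown earlier to confirm that the new values have been written:

./SMCIPMITool.exe ip_address ADMIN password ipmi fru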

Then I relaunched a Foundation 5.9 with an AOS 6.10.1.6 and an AHV 20230302.103014 in order to validate that what I found works:

2025-06-18 07:35:49,786Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-18 07:35:50,071Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-18 07:35:54,153Z imaging_step_misc_hw_checks.py:168 DEBUG Not an NX G7+ node with RAID boot drives. Skipping RAID checks.
2025-06-18 07:35:54,156Z imaging_step.py:123 DEBUG Setting state of ) @dee0> from RUNNING to FINISHED
2025-06-18 07:35:54,157Z imaging_step.py:162 INFO Completed ) @dee0>
2025-06-18 07:35:54,159Z imaging_step.py:123 DEBUG Setting state of ) @deb0> from PENDING to RUNNING
2025-06-18 07:35:54,162Z imaging_step.py:159 INFO Running ) @deb0>
2025-06-18 07:35:54,165Z imaging_step_pre_install.py:364 INFO Rebooting into staging environment
2025-06-18 07:35:54,687Z cache_manager.py:142 DEBUG Cache HIT: key(get_nos_version_from_tarball_()_{'nos_package_path': '/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz'})
2025-06-18 07:35:54,690Z imaging_step_pre_install.py:389 INFO NOS version is 6.10.1.6
2025-06-18 07:35:54,691Z imaging_step_pre_install.py:392 INFO Preparing NOS package (/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz)
2025-06-18 07:35:54,691Z phoenix_prep.py:82 INFO Unzipping NOS package

It passed the hardware validation without a hitch, and the installation eventually went through.

Of course, this is a workaround to allow my client to redeploy their nodes and extend their lifespan. The ideal solution would have been to be able to create a custom .py file that perfectly matches my model without having to modify anything, which, to my knowledge, is unfortunately currently impossible.

One problem persists, however: the cluster can be created in RF2, but Data Resiliency will be critical… I’m still looking for a solution to this problem…

Thanks to Théo and Jeroen for their ideas, which showed me the beginning of the path that led me to the solution!

Link to the Nutanix KB used: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW

Read More

In the previous blog post, I explained how to monitor your Nutanix cluster with Centreon using SNMP v2c.

In this new blog post, I’ll explain how to monitor your Nutanix cluster with Centreon using SNMP v3.

Prerequisites

There are a few prerequisites you must meet to add your Nutanix cluster to the Centreon solution. Here’s a list of what you need:

  • A Nutanix cluster with admin access to the web interface
  • A running Centreon server with the Nutanix connector installed
  • SSH access to the Centreon VM
  • The required network flows (SNMP, UDP port 161) must be open in the firewall

Configuring SNMP v3 on the Nutanix Cluster

To configure SNMP on your Nutanix cluster, start by connecting to Prism Element and going to “Settings > SNMP”. Check “Enable SNMP” and click “+ New Transport” to add port 161 in UDP:

Then, in “Users”, click “New User” and enter a username, an AES privacy key, and a SHA authentication key:

In my case, I’ve entered the following information because it’s for lab purposes only, but I recommend you enter much more complex information:

  • Username: snmp-centreon
  • Priv Key: snmp-priv-key
  • Auth Key: snmp-auth-key

Make a note of the Username, Priv Key, and Auth Key; we’ll need them later. The configuration is complete on the Nutanix side; now let’s move on to the Centreon configuration.

Adding a Nutanix Cluster to Centreon

To add your Nutanix cluster to Centreon, log in to your monitoring system’s web interface, go to “Configuration > Hosts” and click “Add”:

On the page that appears, there is a first block of information to fill in:

  • 1: Cluster name
  • 2: Cluster IP address
  • 3: SNMP version 3
  • 4: The Centreon server that will monitor the cluster
  • 5: The time zone associated with your cluster
  • 6: The templates you wish to add
  • 7: Check “Yes” to ensure that all services associated with the previously added templates are automatically created

On the second part of the page, there are a few things to configure, including the frequency and timing of checks, and especially the “SNMPEXTRAOPTIONS” field.

The command line syntax to enter in SNMPEXTRAOPTIONS is:

--snmp-username='snmp-centreon' --authprotocol='SHA' --authpassphrase='snmp-auth-key' --privprotocol='AES' --privpassphrase='snmp-priv-key'

Remember to check the “Password” box to hide sensitive information:

Once all the information has been entered, confirm so that the new host is created on the server. You must then export the configuration to the pollers. To do this, click on “Pollers” in the top left corner, then on “Export configuration”:

Then click on “Export & Reload” in the small window that appears:

To check that your host is being taken into account, go to “Monitoring > Resources Status”, your first checks should start to come up:

If all goes as planned, you should have all your probes green within minutes!

Troubleshooting

If you unfortunately end up with a host whose status looks like this:

I recommend checking the following (the snmpwalk test shown after this list can help validate both points):

  • That the SNMP flows (UDP port 161) are open in the firewall
  • That the username and the AuthKey/PrivKey pair are configured correctly
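
You can test the SNMP v3 credentials directly from the Centreon server with snmpwalk. A quick sketch, assuming the net-snmp tools are installed (replace CLUSTER_IP with your cluster virtual IP):

snmpwalk -v3 -l authPriv -u snmp-centreon -a SHA -A 'snmp-auth-key' -x AES -X 'snmp-priv-key' CLUSTER_IP
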
Read More