
Julien DUMUR
Infrastructure in a Nutshell
Nutanix AHV CLI Reference Guide

In this new blog post, we’ll cover the main Nutanix AHV CLI commands that let you run checks on your virtual machines from the command line.

All the commands in this article can be run via SSH from any CVM in the cluster.

Display the list of virtual machines

To display the list of virtual machines on the Nutanix cluster, simply run the following command:

acli vm.list

This will show you all the VMs present on the cluster, excluding the CVMs:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.list
VM name VM UUID
LINUX 88699c96-11a5-49ce-9d1d-ac6dfeff913d
NTNX-192-168-84-200-PCVM-1760699089 f659d248-9ece-4aa0-bb0c-22a3b3abbe12
vm_test 9439094a-7b6b-48ca-9821-a01310763886

As you can see, I only have three virtual machines on my cluster:

  • My Prism Central
  • A newly deployed “LINUX” virtual machine
  • A test virtual machine

A handy command to quickly retrieve all virtual machines and their respective UUIDs.
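If the cluster hosts many VMs, you can filter the output with standard shell tools to grab a single UUID, for example (the VM name here is just an illustration):

acli vm.list | grep -i LINUX | awk '{print $NF}'

Now let’s see how to retrieve information about a specific virtual machine.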

Retrieving Virtual Machine Information

To display detailed information about a virtual machine, use the following command:

acli vm.get VM_NAME

Using the example of my “LINUX” virtual machine, this returns the following information:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.get LINUX
LINUX {
config {
agent_vm: False
allow_live_migrate: True
apc_config {
apc_enabled: False
}
bios_uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"
boot {
boot_device_order: "kCdrom"
boot_device_order: "kDisk"
boot_device_order: "kNetwork"
hardware_virtualization: False
secure_boot: False
uefi_boot: True
}
cpu_hotplug_enabled: True
cpu_passthrough: False
disable_branding: False
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "fae2ee55-8736-4f3a-9b2c-7d5f5770bf33"
empty: True
iso_type: "kOther"
}
disk_list {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: "f9a8a84c-6937-4d01-bfd2-080271c44916"
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: "215ba83c-44cb-4c41-bddc-1aa3a44d41c7"
vmdisk_size: 42949672960
vmdisk_uuid: "42a18a62-861a-497a-9d73-e959513ce709"
}
generation_uuid: "9c018794-a71a-45ae-aeca-d61c5dd6d11a"
gpu_console: False
hwclock_timezone: "UTC"
machine_type: "pc"
memory_mb: 8192
memory_overcommit: False
name: "LINUX"
ngt_enable_script_exec: False
ngt_fail_on_script_failure: False
nic_list {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
power_state_mechanism: "kHard"
scsi_controller_enabled: True
vcpu_hard_pin: False
vga_console: True
vm_type: "kGuestVM"
vtpm_config {
is_enabled: False
}
}
is_ngt_ipless_reserved_sp_ready: True
is_rf1_vm: False
logical_timestamp: 1
state: "kOff"
uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"
}

As you can see, this returns the full configuration of the virtual machine. More targeted commands can return specific subsets of this information. Here are the ones I use most often:

acli vm.disk_get VM_NAME: to retrieve detailed information about all the disks of a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.disk_get LINUX
ide.0 {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: fae2ee55-8736-4f3a-9b2c-7d5f5770bf33
empty: True
iso_type: "kOther"
}
scsi.0 {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: f9a8a84c-6937-4d01-bfd2-080271c44916
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: 215ba83c-44cb-4c41-bddc-1aa3a44d41c7
vmdisk_size: 42949672960
vmdisk_uuid: 42a18a62-861a-497a-9d73-e959513ce709
}

acli vm.nic_get VM_NAME: to retrieve the detailed list of network cards attached to a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.nic_get LINUX
50:6b:8d:fb:a1:4c {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}

acli vm.snapshot_list VM_NAME: to retrieve the list of snapshots associated with a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.snapshot_list LINUX
Snapshot name Snapshot UUID
SNAPSHOT_BEFORE_UPGRADE e7c1e84e-7087-42fd-9e9e-2b053f0d5714
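For completeness, snapshots can also be created from the same CLI. If I recall the syntax correctly, it looks like the line below (treat it as a sketch and confirm with acli’s built-in help before relying on it):

acli vm.snapshot_create LINUX snapshot_name_list=SNAPSHOT_BEFORE_UPGRADE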

You now know almost everything about checking your virtual machines from the CLI.

For the complete list of commands, I invite you to consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v7_3:man-ncli-c.html

In the next article, we’ll tackle a big task: creating virtual machines using CLI commands.


In a previous article, we covered how to deploy and perform the basic configuration of a Palo Alto gateway to replace the basic gateway supplied with your OVHcloud Nutanix cluster.

I will now show you how to connect this gateway to the RTvRack supplied with your cluster to connect it to the internet.

Connecting the Gateway to the RTvRack

In “Network > Zones”, we start by creating a new “Layer3” zone, which we’ll call “WAN” for simplicity:

You can also create one or more other zones to connect your other interfaces (e.g., an “INTERNAL” zone).

Next, in “Network > Interfaces,” edit the ethernet1/1 interface. If you’ve successfully created your VM on Nutanix, it will correspond to the WAN output interface. It will be a “Layer3” interface:

On the “Config” tab, select the “default” Virtual Router and select the “WAN” security zone.

On the “IPv4” tab, add the available public IP address in the range provided to you by OVHcloud with your cluster, making sure to include a /32 mask at the end:

You can find the network information for your public IP address on your OVHcloud account in “Hosted Private Cloud > Network > IP”: https://www.ovh.com/manager/#/dedicated/ip

Using the public IP address and its associated network mask, you can deduce:

  • The public IP address to assign to the WAN port of your gateway
  • The IP address of the WAN gateway

Example with the network 6.54.32.10/30:

  • Network address (not usable): 6.54.32.8
  • First usable address (public address of the PA-VM): 6.54.32.9
  • Last usable address (WAN gateway address): 6.54.32.10
  • Broadcast address (not usable): 6.54.32.11
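If you want to double-check this arithmetic for your own range, a quick one-liner does the job (assuming python3 is available on whatever machine you’re working from):

python3 -c "import ipaddress; n = ipaddress.ip_network('6.54.32.10/30', strict=False); print(n.network_address, list(n.hosts()), n.broadcast_address)"

This prints the network address, the usable host addresses, and the broadcast address of the /30.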

Repeat the operation with the interface corresponding to the subnet of your Nutanix cluster, using the IP address of the gateway you specified when deploying your cluster.

However, make sure to set the mask corresponding to that of the network in which the interface is located as indicated in the documentation: https://docs.paloaltonetworks.com/pan-os/11-0/pan-os-networking-admin/configure-interfaces/layer-3-interfaces/configure-layer-3-interfaces#iddc65fa08-60b8-47b2-a695-2e546b4615e9.

In “Network > Virtual Routers”, edit the default router. You should find your “ethernet1/1” interface at a minimum, as well as any other interfaces you may have already configured:

Then, in the “Static Routes” submenu, create a new route with a meaningful name and a destination of 0.0.0.0/0, select the “ethernet1/1” interface, and set the Next Hop to the IP address of the public network gateway provided by OVHcloud:

Finally, go to the “Device > Setup > Services” tab and edit the “Service Route Configuration” option in “Services Features” to specify the output interface and the associated /32 IP address for some of the services:

The list of services to configure at a minimum is as follows:

  • DNS
  • External Dynamic Lists
  • NTP
  • Palo Alto Networks Services
  • URL Updates

You can validate and commit. Your PA-VM gateway is now communicating with the OVHcloud RTvRack. All that’s left is to finalize the configuration to secure the installation and create the firewall rules that allow your cluster to access the internet.

Nutanix on OVHcloud Hosted Private Cloud

In this article, I share my feedback on the complete reinstallation of a Nutanix cluster at OVHcloud.

Once logged in to the OVHcloud management interface, go to “Hosted Private Cloud”:

In the left drop-down menu, click on the cluster you want to redeploy:

On the page that appears, click on “Redeploy my cluster”: 

Click on “Continue”:

Automatic redeployment

The first option is to revert to the default settings provided by OVHcloud to completely reinstall the cluster in its basic configuration:

A summary of the settings is displayed before you finally confirm the reinstallation of your cluster:

Custom redeployment

You can fully customize your cluster’s IP network configuration during its installation phase. When choosing the cluster deployment method, select “Customize configuration” and click “Next”:

Fill in the various fields with the information you want to assign to your cluster and click on “Redeploy”:

Type “REDEPLOY” in the field provided and click “Confirm” to start the reinstallation procedure:

On your cluster’s overview page, a message indicates that cluster redeployment is in progress: 

All that’s left is to wait until the cluster is completely redeployed. All the basic configuration is already done; you just have to finalize the specifics, such as authentication, SMTP relay, monitoring, etc.

Nutanix AHV CLI Reference Guide

In the Maxi Best Of Nutanix CLI series, the previous two articles covered checking the network configuration of a Nutanix cluster and managing subnets.

In this new article, we’ll cover managing storage containers via CLI commands on your Nutanix clusters…

All the commands in this article must be executed from one of the cluster’s CVMs and work on a cluster running AOS 6.10+.

Check the containers

To check the status of your storage containers, the simplest command is:

ncli container list

This command displays all the information related to all the containers in your cluster.

If you want to display a specific container, you can pass its name (the simplest method) or its ID as a parameter:

ncli container list name=NAME
ncli container list id=ID

Finally, one last command to display only the usage statistics of your containers:

ncli container list-stats

Renaming a Container

To rename a storage container, it must be completely empty.

Renaming a storage container can be done using the following command:

ncli container edit name=CURRENT-NAME new-name=NEW-NAME

For the default container, this would give, for example, the following command:

ncli container edit name=default-container-21425105524428 new-name=ntnx-lab-container

WARNING: There are two containers created by default when deploying your cluster: “SelfServiceContainer” and “NutanixManagementShare”. Do not attempt to rename them!

Creating a Container

It’s also possible to create storage containers using the CLI:

ncli container create name=NAME sp-name=STORAGE-POOL-NAME

The “name” and “sp-name” parameters are the only required ones. This creates a basic container on the selected storage pool with the following defaults:

  • No data optimization mechanism
  • No restrictions/reservations
  • The default replication factor

But the container creation command can be very useful if you need to create storage containers in batches, for example, if you’re hosting multiple clients on a cluster, each with an allocated amount of storage space!

For example, to create a storage container with the following parameters:

  • Container name “client-alpha”
  • Reserved capacity: 64GB
  • Maximum capacity: 64GB
  • With real-time compression enabled

Here’s the command you would need to run:

ncli container create name=client-alpha res-capacity=64 adv-capacity=64 enable-compression=true compression-delay=0 sp-name=default-storage-pool-21425105524428

A container with the associated characteristics will then be created:
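To illustrate the batch scenario mentioned above, here is a minimal bash sketch run from a CVM; the client names are hypothetical, and the storage pool name is the one from my example:

# Create one 64 GB compressed container per client
for client in alpha beta gamma; do
  ncli container create name=client-${client} res-capacity=64 adv-capacity=64 enable-compression=true compression-delay=0 sp-name=default-storage-pool-21425105524428
done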

Modifying Container Settings

An existing container can also be modified. You can modify almost everything in terms of settings, from data optimization mechanisms to reserved/allocated sizes, replication factors, and more.

For all the settings, please refer to the official documentation (link at the bottom of the page).

Deleting a Container

Deleting a container is quite simple, but requires that all files stored within it be deleted or moved first. Deleting a container is done using the following command:

ncli container remove name=NAME

It may happen that, even after deleting or moving your VMs’ vdisks, the deletion is still refused. This is often due to small residual files.

You must then add the “ignore-small-files” parameter to force the deletion:

ncli container remove name=NAME ignore-small-files=true

For example:

ncli container remove name=ntnx-lab-container ignore-small-files=true

WARNING: There are two containers created by default when deploying your cluster: “SelfServiceContainer” and “NutanixManagementShare”. Do not attempt to delete them!

Official Documentation

To learn more about some of the command options presented, please consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_10:acl-ncli-container-auto-r.html

Nutanix AHV CLI Reference Guide

In the previous post of the Maxi Best Of Nutanix CLI series, I presented the best commands for checking the entire network configuration of your Nutanix cluster.

In this new article, we’ll now see how CLI commands can help us create or modify networks in our Nutanix cluster…

All the commands in this article must be executed from one of the CVMs in the cluster.

Creating an Unmanaged Subnet on Nutanix AHV

To create a new unmanaged subnet (without IPAM) across the AHV cluster, the command is very simple:

acli net.create NAME vlan=VLAN_ID

Replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID

Here’s an example command that creates the VLAN “NUTANIX” with VLAN ID “84”:

acli net.create NUTANIX vlan=84

By default, the VLAN will be created on the virtual switch “vs0”, but if you want to create it on another virtual switch, you can specify it as a parameter:

acli net.create NAME vlan=VLAN_ID virtual_switch=VSWITCH

In this case, replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID
  • VSWITCH with the name of the bridge on which you want to create the subnet

Here is an example of a command that creates the “NUTANIX” VLAN with VLAN ID “84” on the vswitch “vs0”:

acli net.create NUTANIX vlan=84 virtual_switch=vs0

You can then run the “acli net.list” command and check that your new subnet appears in the list.

Creating a Managed Subnet on Nutanix AHV

This command creates a new managed subnet (using IPAM) across the AHV cluster with basic gateway and subnet mask options.

acli net.create NAME vlan=VLAN_ID virtual_switch=vs0 ip_config=GATEWAY/MASK

Replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID
  • vs0 with the name of the bridge on which you want to create the subnet
  • GATEWAY with the IP address of the subnet’s gateway
  • MASK with the subnet mask

Here is an example of a command that creates the VLAN “NUTANIX” with VLAN ID “84” on the vswitch “vs0”, with a gateway address “10.0.84.254” on the network “10.0.84.0/24”:

acli net.create NUTANIX vlan=84 virtual_switch=vs0 ip_config=10.0.84.254/24
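Note that ip_config only defines the gateway and mask; for IPAM to actually hand out addresses, you typically also add a DHCP pool to the subnet. If I remember the syntax correctly, it looks like the line below (a sketch with an example range; verify with acli’s built-in help):

acli net.add_dhcp_pool NUTANIX start=10.0.84.10 end=10.0.84.100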

Deleting an Existing Subnet

Deleting an existing subnet on a Nutanix AHV cluster is easy! Simply run the following command:

acli net.delete NAME 

You must replace NAME with the name of the subnet you wish to delete, which would give, for example, for the previously created subnet:

acli net.delete NUTANIX

Nothing could be simpler!

Bulk Subnet Creation/Deletion

To make it easier to import large quantities of subnets, I created several CSV files that I can then convert into a list of commands to create multiple subnets in batches.

Everything is on my Github: https://github.com/Exe64/NUTANIX

For unmanaged subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-unmanaged-subnets.csv

For managed subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-managed-subnets.csv

For deleting subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-subnets-delete.csv
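As a rough illustration of the CSV-to-commands conversion, here is a minimal bash sketch; the "name,vlan" column layout is an assumption, so adapt it to the actual files:

# Turn "name,vlan" lines into unmanaged subnet creations
while IFS=, read -r name vlan; do
  acli net.create "${name}" vlan="${vlan}" virtual_switch=vs0
done < nutanix-unmanaged-subnets.csv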

To learn more about using these files, I invite you to consult my dedicated article:

Official Documentation

Complete command documentation is available on the publisher’s official website: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_10:man-acli-c.html

Nutanix AHV CLI Reference Guide

Whether you need to perform specific or repetitive operations, troubleshoot, or gain a more detailed view, the CLI commands for a Nutanix cluster will be your best allies.

In this article, I offer a summary of the best commands for performing all network configuration checks on a Nutanix cluster, whether at the cluster, host, CVM, or virtual machine level.

You must have an AOS 6.10+ cluster to run some of the commands in this guide.

A. Using Nutanix acli commands from any CVM in the Nutanix AHV cluster

List the status of the host interfaces:

acli net.list_host_nic AHV_HOST_IP

For example: acli net.list_host_nic 192.168.84.11

List all vSwitches currently configured on the cluster:

acli net.list_virtual_switch 


You can list the configuration of a particular vSwitch by passing it as an argument to the command:

acli net.list_virtual_switch vs1

List all subnets created on the cluster:

acli net.list 


List the VMs attached to a particular subnet:

acli net.list_vms SUBNET

B. Via the Nutanix manage_ovs script from any CVM in the Nutanix AHV cluster

List the interface status of an AHV host:

manage_ovs show_interfaces


You can also list the interface status of all hosts in the cluster:

allssh "manage_ovs show_interfaces"

List the status of an AHV host’s uplinks (bonds):

manage_ovs show_uplinks


You can also list the uplink (bond) status of all AHV hosts in the cluster:

allssh "manage_ovs show_uplinks"

Display LLDP information of an AHV host’s interfaces:

manage_ovs show_lldp


You can also view LLDP information for the interfaces of all AHV hosts in the cluster:

allssh "manage_ovs show_lldp"

Show currently created bridges on an AHV host:

manage_ovs show_bridges


You can also view the bridges currently created on all AHV hosts in the cluster:

allssh "manage_ovs show_bridges"

Show the mapping of CVM interfaces to those of AHV hosts:

manage_ovs show_cvm_uplinks_map


You can also view the interface mapping of CVMs on all AHV hosts in the cluster:

allssh "manage_ovs show_cvm_uplinks_map"

 

C. Using the Open vSwitch command from any host in a Nutanix AHV cluster

List the existing bridges of an AHV host:

ovs-vsctl list-br

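If you’d rather not SSH to each host individually, the hostssh wrapper available on the CVMs can run the same Open vSwitch commands on every AHV host of the cluster at once, for example:

hostssh "ovs-vsctl list-br"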

List all interfaces attached to a particular bridge of an AHV host:

ovs-vsctl list-ifaces br0


Display the configuration of an AHV host’s bond port:

ovs-vsctl list port br0-up


Display the configuration and status of a bond on an AHV host:

ovs-appctl bond/show br0-up


Display information about the status of a LACP-configured bond on an AHV host:

ovs-appctl lacp/show br1-up

Many thanks to Yohan for the article idea and the helping hand!


The new version of VirtIO is available, and here are the new features and bug fixes!

What’s New

The “null” QEMU driver has been replaced with a fully functional FwCfg device, allowing the collection of Windows virtual machine core dumps directly on AHV hosts.

Added a new configuration option: you can now adjust the number of I/O requests processed before retrying when the virtual queue is full, giving you greater control over I/O behavior.

Bug Fixes

Fixed an annoying issue that caused Windows virtual machines to hang after unmounting the disk. Operations will now be smooth.

Good to Know

This new version is fully compatible with all supported AOS and AHV versions.

It is available for download here: https://portal.nutanix.com/page/downloads/?product=ahv&version=1.2.5&bit=VirtIO

Older versions are still available on the Nutanix Support Portal, under Downloads > AHV > VirtIO > Other Versions.

For installation assistance, see the AHV Administration Guide.

Update now!

Nutanix Foundation on a Steamdeck

In one of my previous articles, I talked about my Nutanix Foundation installation on my Steamdeck. Unfortunately, I hadn’t yet had the opportunity to run an installation with the setup due to a lack of a server to image.

But things have changed since I recently acquired a Supermicro SuperServer 5019D-FN8TP! I also wrote an article about implementing Nutanix Foundation on unofficially supported hardware:

Hardware Preparation for Foundation

To be able to image my node with Nutanix Foundation from my Steamdeck, I absolutely needed to be connected to the network via RJ45, a type of connection missing from Valve’s console…

So I purchased an external dock with several USB ports and an RJ45 port that can be connected via USB-C.

I made the following connections:

  • Connecting the external dock to the Steamdeck’s USB-C port
  • Connecting the power supply to the dock’s USB-C port
  • Connecting the network cable to the dock’s RJ45 port
  • Connecting the external SSD (on which Windows is installed) to one of the dock’s USB ports

This allowed me to boot the Steamdeck from the SSD to start Windows 11!


Nutanix Foundation for a Node with a Steamdeck

As I mentioned in the previous article, I already have Nutanix Foundation installed, so I won’t dwell on that part and will move directly to the Foundation section!

For this Foundation, I used the latest version available, namely Foundation 5.9 with AOS 10 and AHV 7!


The Foundation process starts flawlessly as expected and the process completes after a while:


As you can see, imaging a node with the Steamdeck is possible! Is it relevant? No certainty, but I can at least say that “I did it!”

Destroying a Nutanix Cluster

Whether you’re selling the cluster to a third party or repurposing it for another purpose, sometimes you need to destroy a Nutanix cluster. Here’s how to do it…

Preparing the Cluster for Destruction

Before destroying a cluster, some preparations must be made.

Among the necessary prerequisites, it is imperative that no virtual machines are still running on the cluster. Make sure to migrate, shut down, or delete (as appropriate) all virtual machines on the cluster.

Note: All the commands in this article must be entered on one of the cluster’s CVMs.

Once this prerequisite is met, we begin by checking the cluster’s status:

cluster status

You should get a return like this:

nutanix@NTNX-5f832032-A-CVM:192.168.84.22:~$ cluster status
2025-07-24 07:20:18,663Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-07-24 07:20:18,666Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 4439894058604263884, cluster incarnation id: 1753169113232129
2025-07-24 07:20:18,666Z INFO MainThread zookeeper_session.py:270 cluster is attempting to connect to Zookeeper, host port list zk1:9876
2025-07-24 07:20:18,676Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x198310781ce5e38, negotiated timeout=20 secs
2025-07-24 07:20:18,678Z INFO MainThread cluster:3303 Executing action status on SVMs 192.168.84.22
2025-07-24 07:20:18,682Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x198310781ce5e38
2025-07-24 07:20:18,683Z INFO Dummy-2 zookeeper_session.py:941 Calling zookeeper_close and invalidating zhandle
The state of the cluster: start
Lockdown mode: Disabled

        CVM: 192.168.84.22 Up, ZeusLeader
                              Xmount   UP       [459073, 459235, 459236, 459311]
                           IkatProxy   UP       [458789, 458917, 458918, 458919]
                                Zeus   UP       [454133, 454189, 454190, 454191, 454201, 454218]
                           Scavenger   UP       [459084, 459296, 459297, 459298]
                    SysStatCollector   UP       [464017, 464089, 464090, 464091]
                    IkatControlPlane   UP       [464039, 464218, 464219, 464220]
                       SSLTerminator   UP       [464170, 464323, 464324]
                      SecureFileSync   UP       [464362, 464646, 464647, 464648]
                              Medusa   UP       [468604, 469223, 469224, 469395, 470062]
                  DynamicRingChanger   UP       [476814, 476897, 476898, 476920]
                              Pithos   UP       [476843, 477058, 477060, 477086]
                          InsightsDB   UP       [476918, 477131, 477132, 477155]
                              Athena   UP       [477152, 477270, 477271, 477273]
                             Mercury   UP       [513735, 513803, 513804, 513808]
                              Mantle   UP       [477391, 477551, 477552, 477562]
                          VipMonitor   UP       [485663, 485664, 485665, 485666, 485670]
                            Stargate   UP       [477857, 477995, 477996, 477997, 477998]
                InsightsDataTransfer   UP       [478768, 478929, 478930, 478934, 478935, 478936, 478937, 478938, 478939]
                             GoErgon   UP       [478834, 479020, 479021, 479039]
                             Cerebro   UP       [478950, 479138, 479139, 479306]
                             Chronos   UP       [479088, 479286, 479287, 479310]
                             Curator   UP       [479234, 479406, 479407, 483968]
                               Prism   UP       [479436, 479600, 479601, 479650, 480643, 480885]
                                Hera   UP       [479602, 479917, 479918, 479919]
                        AlertManager   UP       [479860, 480436, 480438, 480555]
                            Arithmos   UP       [480751, 481566, 481567, 481765]
                             Catalog   UP       [481670, 482699, 482700, 482701, 483502]
                           Acropolis   UP       [483575, 484493, 484494, 488301]
                              Castor   UP       [484403, 484877, 484878, 484911, 484972]
                               Uhura   UP       [484912, 485066, 485067, 485300]
                   NutanixGuestTools   UP       [485132, 485254, 485255, 485284, 485611]
                          MinervaCVM   UP       [491046, 491263, 491264, 491265]
                       ClusterConfig   UP       [491188, 491361, 491362, 491363, 491381]
                         APLOSEngine   UP       [491374, 491650, 491651, 491652]
                               APLOS   UP       [495252, 496063, 496064, 496065]
                     PlacementSolver   UP       [497033, 497330, 497331, 497332, 497341]
                               Lazan   UP       [497256, 497568, 497569, 497570]
                             Polaris   UP       [498016, 498620, 498621, 498911]
                              Delphi   UP       [498765, 499238, 499239, 499240, 499332]
                            Security   UP       [500506, 501578, 501579, 501581]
                                Flow   UP       [501478, 502168, 502169, 502171, 502178]
                             Anduril   UP       [510708, 511248, 511249, 511252, 511335]
                              Narsil   UP       [502382, 502472, 502473, 502474]
                               XTrim   UP       [502488, 502629, 502630, 502631]
                       ClusterHealth   UP       [502656, 502774, 503156, 503158, 503166, 503174, 503183, 503351, 503352, 503359, 503384, 503385, 503396, 503401, 503402, 503420, 503421, 503444, 503445, 503468, 503469, 503474, 503752, 503753, 503785, 503786, 503817, 503818, 528495, 528533, 528534, 530466, 530467, 530468, 530469, 530474, 530475, 530488, 530512, 530522, 530571, 530576, 530684, 530773, 530791, 531349, 531357]
2025-07-24 07:20:20,740Z INFO MainThread cluster:3466 Success!

Since the cluster is currently started, I first need to stop it with the following command:

cluster stop

Please note: to shut down the cluster, there must be no virtual machines running on the cluster except the CVM.

The command will shut down the cluster and associated services after you confirm the operation with “I agree” and should return something like this:

The state of the cluster: stop
Lockdown mode: Disabled

        CVM: 192.168.84.22 Up, ZeusLeader
                              Xmount   UP       [1761344, 1761418, 1761419, 1761475]
                           IkatProxy   UP       [458789, 458917, 458918, 458919]
                                Zeus   UP       [454133, 454189, 454190, 454191, 454201, 454218]
                           Scavenger   UP       [459084, 459296, 459297, 459298]
                    SysStatCollector DOWN       []
                    IkatControlPlane DOWN       []
                       SSLTerminator DOWN       []
                      SecureFileSync DOWN       []
                              Medusa DOWN       []
                  DynamicRingChanger DOWN       []
                              Pithos DOWN       []
                          InsightsDB DOWN       []
                              Athena DOWN       []
                             Mercury DOWN       []
                              Mantle DOWN       []
                          VipMonitor   UP       [485663, 485664, 485665, 485666, 485670]
                            Stargate DOWN       []
                InsightsDataTransfer DOWN       []
                             GoErgon DOWN       []
                             Cerebro DOWN       []
                             Chronos DOWN       []
                             Curator DOWN       []
                               Prism DOWN       []
                                Hera DOWN       []
                        AlertManager DOWN       []
                            Arithmos DOWN       []
                             Catalog DOWN       []
                           Acropolis DOWN       []
                              Castor DOWN       []
                               Uhura DOWN       []
                   NutanixGuestTools DOWN       []
                          MinervaCVM DOWN       []
                       ClusterConfig DOWN       []
                         APLOSEngine DOWN       []
                               APLOS DOWN       []
                     PlacementSolver DOWN       []
                               Lazan DOWN       []
                             Polaris DOWN       []
                              Delphi DOWN       []
                            Security DOWN       []
                                Flow DOWN       []
                             Anduril DOWN       []
                              Narsil DOWN       []
                               XTrim DOWN       []
                       ClusterHealth DOWN       []
2025-07-24 07:23:57,716Z INFO MainThread cluster:2194 Cluster has been stopped via 'cluster stop' command, hence stopping all services.
2025-07-24 07:23:57,716Z INFO MainThread cluster:3466 Success!

Now we can move on to destroying the cluster.

Destroying the Cluster

Destroying the cluster requires running the following command:

cluster destroy

The system will then ask you for confirmation before proceeding to delete all configurations and data:

2025-07-24 07:35:45,898Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-07-24 07:35:45,900Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 4439894058604263884, cluster incarnation id: 1753169113232129
2025-07-24 07:35:45,900Z INFO MainThread zookeeper_session.py:270 cluster is attempting to connect to Zookeeper, host port list zk1:9876
2025-07-24 07:35:45,916Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x198310781ce5e6e, negotiated timeout=20 secs
2025-07-24 07:35:45,918Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x198310781ce5e6e
2025-07-24 07:35:45,918Z INFO Dummy-2 zookeeper_session.py:941 Calling zookeeper_close and invalidating zhandle
2025-07-24 07:35:45,921Z INFO MainThread cluster:3303 Executing action destroy on SVMs 192.168.84.22
2025-07-24 07:35:45,922Z WARNING MainThread genesis_utils.py:348 Deprecated: use util.cluster.info.get_node_uuid() instead
2025-07-24 07:35:45,928Z INFO MainThread cluster:3350

***** CLUSTER NAME *****
Unnamed

This operation will completely erase all data and all metadata, and each node will no longer belong to a cluster. Do you want to proceed? (Y/[N]): Y

The cluster destruction operation will take a few minutes, during which time all remaining data will be completely erased.

Once the cluster destruction is complete, a “cluster status” will allow you to verify that AHV is waiting for the cluster to be created:

nutanix@NTNX-5f832032-A-CVM:192.168.84.22:~$ cluster status
2025-07-24 07:42:50,694Z CRITICAL MainThread cluster:3242 Cluster is currently unconfigured. Please create the cluster.

There you have it, your cluster is destroyed and all you have to do is recreate it.
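For reference, recreating a cluster afterwards is done from a CVM with the cluster create command. If I remember the syntax correctly, a three-node creation looks like the sketch below (the IPs are hypothetical; check the official documentation for the options relevant to your setup):

cluster -s 192.168.84.22,192.168.84.23,192.168.84.24 create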

For those who prefer to follow the procedure via video, here’s my associated YouTube video:

Nutanix on OVHcloud

This is one of the operations I recommend performing on an OVHcloud cluster immediately after delivery: replacing the pre-deployed gateway that will allow your cluster to connect to the internet.

In this article, we’ll see how to deploy a Palo Alto PA-VM and how to perform its basic configuration so that it’s ready to be connected to the OVHcloud RTvRack (which will be the subject of another article).

Prerequisites

Here is the list of prerequisites for deployment:

  • A Nutanix OVHcloud cluster deployed
  • The required subnets created on the cluster
  • A jump VM deployed on the cluster
  • A Palo Alto account with access to image downloads

Retrieving the PA-VM Image

The first step is to retrieve, from the Palo Alto site, the qcow2 image that will allow us to deploy the PA-VM: https://support.paloaltonetworks.com/Updates/SoftwareUpdates/64685971

NOTE: You must have a registered Palo Alto account with the correct access rights; there is no “Community” or “Free” version.

VM Deployment

After transferring the newly downloaded image to the cluster, we create a VM with the following characteristics:

For VM sizing, I invite you to consult the documentation to adapt it to your context: https://docs.paloaltonetworks.com/vm-series/11-0/vm-series-deployment/license-the-vm-series-firewall/vm-series-models/vm-series-system-requirements

The disk to add is the one downloaded in qcow2 format from the Palo Alto website.

Also select the subnets that will be connected to your gateway. The first interface you add will always be the PA-VM’s management interface, so make sure you select the correct subnet, which ideally will be a subnet dedicated to management interfaces. Your jump VM must have an interface in this subnet to access the PA-VM’s web interface. Here, for example, is what I would recommend for configuring the interfaces:

  • Management (the dedicated management subnet)
  • ethernet1/1 (subnet 0 created by default on the cluster, for the WAN output)
  • ethernet1/2 (internal subnet 1, often the one corresponding to your Nutanix infrastructure)
  • ethernet1/3 (internal subnet 2)

It’s important to select “Legacy BIOS Mode” when creating the VM, otherwise it won’t boot!

Select “Use this VM as an Agent VM” so that it boots first.

Validate the settings, the virtual machine is ready to be started.

Initializing the PA-VM

Start the VM and launch the console from the Nutanix interface. Wait while the operating system boots.

The first login is via the CLI with the following credentials:

  • Username: admin
  • Password: admin

The system will ask you to change the default password. We then switch to configuration mode:

configure

Next, configure the management IP in static mode:

set deviceconfig system type static

Then configure the management interface parameters:

set deviceconfig system ip-address <Firewall-IP> netmask <netmask> default-gateway <gateway-IP> dns-setting servers primary <DNS-IP>

At this point, the firewall can be accessed from the jump VM’s web browser at: https://<Firewall-IP>

CAUTION: This only works if the jump VM has an interface in the same subnet as the Management interface.

Don’t forget to commit, either from the web interface or from the command line:

commit

You can now continue the configuration on the web interface.

Basic PA-VM Configurations

Let’s start with the basic PA-VM configuration.

On the web interface, in “Device > Setup”, edit the “General Settings” widget to enter at least the Hostname and the Timezone:

Then go to the “Services” tab and edit the “Services” widget to add DNS servers and NTP servers:
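For those who prefer the CLI, the same DNS and NTP settings can also be pushed from configuration mode. Here is a sketch assuming PAN-OS 11 syntax, with example server addresses to replace with your own (check the CLI reference for your version):

set deviceconfig system dns-setting servers primary 8.8.8.8 secondary 8.8.4.4
set deviceconfig system ntp-servers primary-ntp-server ntp-server-address pool.ntp.org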

All that’s left is to commit the changes; the basic configuration of the Palo Alto gateway is complete.

I want to point out that this is a basic configuration, and there are many other configuration points to complete to ensure a perfectly configured and secure gateway that allows your cluster to access the internet, including authentication, password complexity, VPN, firewall rules, and more.

In a future article, we’ll see how to connect your Palo Alto PA-VM gateway to the OVHcloud RTvRack to allow your cluster to access the internet.

Read More