Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

Whether you’re selling the cluster to a third party or repurposing the hardware, sometimes you need to destroy a Nutanix cluster. Here’s how to do it…

Preparing the Cluster for Destruction

Before destroying a cluster, some preparations must be made.

The most important prerequisite is that no virtual machines are still running on the cluster. Make sure to migrate, shut down, or delete (as appropriate) all virtual machines on the cluster.

Note: All the commands in this article must be entered on one of the cluster’s CVMs.
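
If you want a quick way to check for, and stop, any remaining VMs directly from a CVM, acli can do it. The VM name below is a placeholder and the exact output varies between AOS versions:

acli vm.list                  # list the VMs known to the cluster
acli vm.shutdown MyAppVM      # graceful (ACPI) shutdown of a given VM
acli vm.off MyAppVM           # force power-off if the guest does not respond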

Once this prerequisite is met, we begin by checking the cluster’s status:

cluster status

You should get a return like this:

nutanix@NTNX-5f832032-A-CVM:192.168.84.22:~$ cluster status
2025-07-24 07:20:18,663Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-07-24 07:20:18,666Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 4439894058604263884, cluster incarnation id: 1753169113232129
2025-07-24 07:20:18,666Z INFO MainThread zookeeper_session.py:270 cluster is attempting to connect to Zookeeper, host port list zk1:9876
2025-07-24 07:20:18,676Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x198310781ce5e38, negotiated timeout=20 secs
2025-07-24 07:20:18,678Z INFO MainThread cluster:3303 Executing action status on SVMs 192.168.84.22
2025-07-24 07:20:18,682Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x198310781ce5e38
2025-07-24 07:20:18,683Z INFO Dummy-2 zookeeper_session.py:941 Calling zookeeper_close and invalidating zhandle
The state of the cluster: start
Lockdown mode: Disabled

        CVM: 192.168.84.22 Up, ZeusLeader
                              Xmount   UP       [459073, 459235, 459236, 459311]
                           IkatProxy   UP       [458789, 458917, 458918, 458919]
                                Zeus   UP       [454133, 454189, 454190, 454191, 454201, 454218]
                           Scavenger   UP       [459084, 459296, 459297, 459298]
                    SysStatCollector   UP       [464017, 464089, 464090, 464091]
                    IkatControlPlane   UP       [464039, 464218, 464219, 464220]
                       SSLTerminator   UP       [464170, 464323, 464324]
                      SecureFileSync   UP       [464362, 464646, 464647, 464648]
                              Medusa   UP       [468604, 469223, 469224, 469395, 470062]
                  DynamicRingChanger   UP       [476814, 476897, 476898, 476920]
                              Pithos   UP       [476843, 477058, 477060, 477086]
                          InsightsDB   UP       [476918, 477131, 477132, 477155]
                              Athena   UP       [477152, 477270, 477271, 477273]
                             Mercury   UP       [513735, 513803, 513804, 513808]
                              Mantle   UP       [477391, 477551, 477552, 477562]
                          VipMonitor   UP       [485663, 485664, 485665, 485666, 485670]
                            Stargate   UP       [477857, 477995, 477996, 477997, 477998]
                InsightsDataTransfer   UP       [478768, 478929, 478930, 478934, 478935, 478936, 478937, 478938, 478939]
                             GoErgon   UP       [478834, 479020, 479021, 479039]
                             Cerebro   UP       [478950, 479138, 479139, 479306]
                             Chronos   UP       [479088, 479286, 479287, 479310]
                             Curator   UP       [479234, 479406, 479407, 483968]
                               Prism   UP       [479436, 479600, 479601, 479650, 480643, 480885]
                                Hera   UP       [479602, 479917, 479918, 479919]
                        AlertManager   UP       [479860, 480436, 480438, 480555]
                            Arithmos   UP       [480751, 481566, 481567, 481765]
                             Catalog   UP       [481670, 482699, 482700, 482701, 483502]
                           Acropolis   UP       [483575, 484493, 484494, 488301]
                              Castor   UP       [484403, 484877, 484878, 484911, 484972]
                               Uhura   UP       [484912, 485066, 485067, 485300]
                   NutanixGuestTools   UP       [485132, 485254, 485255, 485284, 485611]
                          MinervaCVM   UP       [491046, 491263, 491264, 491265]
                       ClusterConfig   UP       [491188, 491361, 491362, 491363, 491381]
                         APLOSEngine   UP       [491374, 491650, 491651, 491652]
                               APLOS   UP       [495252, 496063, 496064, 496065]
                     PlacementSolver   UP       [497033, 497330, 497331, 497332, 497341]
                               Lazan   UP       [497256, 497568, 497569, 497570]
                             Polaris   UP       [498016, 498620, 498621, 498911]
                              Delphi   UP       [498765, 499238, 499239, 499240, 499332]
                            Security   UP       [500506, 501578, 501579, 501581]
                                Flow   UP       [501478, 502168, 502169, 502171, 502178]
                             Anduril   UP       [510708, 511248, 511249, 511252, 511335]
                              Narsil   UP       [502382, 502472, 502473, 502474]
                               XTrim   UP       [502488, 502629, 502630, 502631]
                       ClusterHealth   UP       [502656, 502774, 503156, 503158, 503166, 503174, 503183, 503351, 503352, 503359, 503384, 503385, 503396, 503401, 503402, 503420, 503421, 503444, 503445, 503468, 503469, 503474, 503752, 503753, 503785, 503786, 503817, 503818, 528495, 528533, 528534, 530466, 530467, 530468, 530469, 530474, 530475, 530488, 530512, 530522, 530571, 530576, 530684, 530773, 530791, 531349, 531357]
2025-07-24 07:20:20,740Z INFO MainThread cluster:3466 Success!

Since the cluster is currently started, I first need to stop it with the following command:

cluster stop

Please note: to shut down the cluster, there must be no virtual machines running on the cluster except the CVM.

The command will shut down the cluster and associated services after you confirm the operation with “I agree” and should return something like this:

The state of the cluster: stop
Lockdown mode: Disabled

        CVM: 192.168.84.22 Up, ZeusLeader
                              Xmount   UP       [1761344, 1761418, 1761419, 1761475]
                           IkatProxy   UP       [458789, 458917, 458918, 458919]
                                Zeus   UP       [454133, 454189, 454190, 454191, 454201, 454218]
                           Scavenger   UP       [459084, 459296, 459297, 459298]
                    SysStatCollector DOWN       []
                    IkatControlPlane DOWN       []
                       SSLTerminator DOWN       []
                      SecureFileSync DOWN       []
                              Medusa DOWN       []
                  DynamicRingChanger DOWN       []
                              Pithos DOWN       []
                          InsightsDB DOWN       []
                              Athena DOWN       []
                             Mercury DOWN       []
                              Mantle DOWN       []
                          VipMonitor   UP       [485663, 485664, 485665, 485666, 485670]
                            Stargate DOWN       []
                InsightsDataTransfer DOWN       []
                             GoErgon DOWN       []
                             Cerebro DOWN       []
                             Chronos DOWN       []
                             Curator DOWN       []
                               Prism DOWN       []
                                Hera DOWN       []
                        AlertManager DOWN       []
                            Arithmos DOWN       []
                             Catalog DOWN       []
                           Acropolis DOWN       []
                              Castor DOWN       []
                               Uhura DOWN       []
                   NutanixGuestTools DOWN       []
                          MinervaCVM DOWN       []
                       ClusterConfig DOWN       []
                         APLOSEngine DOWN       []
                               APLOS DOWN       []
                     PlacementSolver DOWN       []
                               Lazan DOWN       []
                             Polaris DOWN       []
                              Delphi DOWN       []
                            Security DOWN       []
                                Flow DOWN       []
                             Anduril DOWN       []
                              Narsil DOWN       []
                               XTrim DOWN       []
                       ClusterHealth DOWN       []
2025-07-24 07:23:57,716Z INFO MainThread cluster:2194 Cluster has been stopped via 'cluster stop' command, hence stopping all services.
2025-07-24 07:23:57,716Z INFO MainThread cluster:3466 Success!

Now we can move on to destroying the cluster.

Destroying the Cluster

Destroying the cluster requires running the following command:

cluster destroy

The system will then ask you for confirmation before proceeding to delete all configurations and data:

2025-07-24 07:35:45,898Z INFO MainThread zookeeper_session.py:136 Using multithreaded Zookeeper client library: 1
2025-07-24 07:35:45,900Z INFO MainThread zookeeper_session.py:248 Parsed cluster id: 4439894058604263884, cluster incarnation id: 1753169113232129
2025-07-24 07:35:45,900Z INFO MainThread zookeeper_session.py:270 cluster is attempting to connect to Zookeeper, host port list zk1:9876
2025-07-24 07:35:45,916Z INFO Dummy-1 zookeeper_session.py:840 ZK session establishment complete, sessionId=0x198310781ce5e6e, negotiated timeout=20 secs
2025-07-24 07:35:45,918Z INFO Dummy-2 zookeeper_session.py:940 Calling c_impl.close() for session 0x198310781ce5e6e
2025-07-24 07:35:45,918Z INFO Dummy-2 zookeeper_session.py:941 Calling zookeeper_close and invalidating zhandle
2025-07-24 07:35:45,921Z INFO MainThread cluster:3303 Executing action destroy on SVMs 192.168.84.22
2025-07-24 07:35:45,922Z WARNING MainThread genesis_utils.py:348 Deprecated: use util.cluster.info.get_node_uuid() instead
2025-07-24 07:35:45,928Z INFO MainThread cluster:3350

***** CLUSTER NAME *****
Unnamed

This operation will completely erase all data and all metadata, and each node will no longer belong to a cluster. Do you want to proceed? (Y/[N]): Y

The cluster destruction operation will take a few minutes, during which time all remaining data will be completely erased.

Once the cluster destruction is complete, running “cluster status” again lets you verify that the node is unconfigured and waiting for a new cluster to be created:

nutanix@NTNX-5f832032-A-CVM:192.168.84.22:~$ cluster status
2025-07-24 07:42:50,694Z CRITICAL MainThread cluster:3242 Cluster is currently unconfigured. Please create the cluster.

There you have it, your cluster is destroyed and all you have to do is recreate it.

For those who prefer to follow the procedure via video, here’s my associated YouTube video:

Read More

This is one of the operations I recommend performing on an OVHcloud cluster immediately after delivery: replacing the pre-deployed gateway that will allow your cluster to connect to the internet.

In this article, we’ll see how to deploy a Palo Alto PA-VM and how to perform its basic configuration so that it’s ready to be connected to the OVHcloud RTvRack (which will be the subject of another article).

Prerequisites

Here is the list of prerequisites for deployment:

  • A Nutanix OVHcloud cluster deployed
  • The required subnets created on the cluster
  • A jump (bastion) VM deployed on the cluster, used to reach the PA-VM’s web interface
  • A Palo Alto account with access to image downloads

Retrieving the PA-VM Image

The first step is to retrieve, from the Palo Alto support site, the qcow2 image that will allow us to deploy the PA-VM: https://support.paloaltonetworks.com/Updates/SoftwareUpdates/64685971

NOTE: You must have a registered account with them with the correct access rights; there is no “Community” or “Free” version.

VM Deployment

After transferring the newly downloaded image to the cluster, we create a VM with the following characteristics:

For VM sizing, I invite you to consult the documentation to adapt it to your context: https://docs.paloaltonetworks.com/vm-series/11-0/vm-series-deployment/license-the-vm-series-firewall/vm-series-models/vm-series-system-requirements

The disk to add is the one downloaded in qcow2 format from the Palo Alto website.

Also select the subnets that will be connected to your gateway. The first interface you add will always be the PA-VM’s management interface, so make sure you select the correct subnet, ideally one dedicated to management interfaces. Your jump VM must have an interface in this subnet to reach the PA-VM’s web interface. Here, for example, is what I would recommend for the interface layout:

  • Management (dedicated management subnet, first interface added to the VM)
  • ethernet1/1 (subnet 0 created by default on the cluster, for the WAN output)
  • ethernet1/2 (internal subnet 1, often the one corresponding to your Nutanix infrastructure)
  • ethernet1/3 (internal subnet 2)

It’s important to select “Legacy BIOS Mode” when creating the VM, otherwise it won’t boot!

Select “Use this VM as an Agent VM” so that it boots first.

Validate the settings, the virtual machine is ready to be started.
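
For reference, here is a minimal sketch of an equivalent deployment done from a CVM with acli, assuming the qcow2 image is reachable over HTTP and that subnets named “management”, “subnet0” and “subnet1” already exist (the names, sizing and image URL are illustrative, and parameter syntax can vary slightly between AOS versions):

acli image.create pa-vm-image source_url=http://<your-web-server>/pa-vm.qcow2 container=default image_type=kDiskImage
acli vm.create pa-vm memory=16G num_vcpus=4          # AHV VMs use Legacy BIOS by default
acli vm.update pa-vm agent_vm=true                   # boot the gateway before the other VMs
acli vm.disk_create pa-vm clone_from_image=pa-vm-image
acli vm.nic_create pa-vm network=management          # first NIC = PA-VM management interface
acli vm.nic_create pa-vm network=subnet0             # ethernet1/1 (WAN)
acli vm.nic_create pa-vm network=subnet1             # ethernet1/2 (internal)

The same settings can of course be made entirely from Prism Element, as described above.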

Initializing the PA-VM

Start the VM and launch the console from the Nutanix interface. Wait while the operating system boots.

The first login is via the CLI with the following credentials:

  • Username: admin
  • Password: admin

The system will ask you to change the default password. Then switch to configuration mode:

configure

Next, configure the management IP in static mode:

set deviceconfig system type static

Then configure the management interface parameters:

set deviceconfig system ip-address <Firewall-IP> netmask <netmask> default-gateway <gateway-IP> dns-setting servers primary <DNS-IP>
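
For example, with purely illustrative values (adapt them to your own management subnet and DNS), the command could look like this:

set deviceconfig system ip-address 192.168.84.10 netmask 255.255.255.0 default-gateway 192.168.84.1 dns-setting servers primary 1.1.1.1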

At this point, the firewall can be accessed from the jump VM’s web browser at: https://<Firewall-IP>

CAUTION: This only works if the jump VM has an interface in the same subnet as the Management interface.

Don’t forget to commit, either from the web interface or from the command line:

commit

You can now continue the configuration on the web interface.

Basic PA-VM Configurations

Let’s start with the basic PA-VM configuration.

On the web interface, in “Device > Setup”, edit the “General Settings” widget to enter at least the Hostname and the Timezone:

Then go to the “Services” tab and edit the “Services” widget to add DNS servers and NTP servers:

All that’s left is to commit the changes; the basic configuration of the Palo Alto gateway is complete.

I want to point out that this is a basic configuration, and there are many other configuration points to complete to ensure a perfectly configured and secure gateway that allows your cluster to access the internet, including authentication, password complexity, VPN, firewall rules, and more.

In a future article, we’ll see how to connect your Palo Alto PA-VM gateway to the OVHcloud RTvRack to allow your cluster to access the internet.

Read More

Nutanix X-Ray is a testing and benchmarking tool designed by Nutanix to evaluate the performance and resilience of hyperconverged infrastructures (HCI). It allows companies to simulate real-world workloads and test the robustness of their infrastructures before deploying them in production.

Why use Nutanix X-Ray?

The primary reason for using X-Ray is to evaluate performance before deployment. Indeed, before putting a hyperconverged infrastructure into production, it is essential to ensure that it meets the performance requirements defined upstream of the project.

Nutanix X-Ray addresses this first issue by offering concrete scenarios that allow you to:

  • Simulate real-world workloads, such as those in a data center or a hybrid cloud environment.
  • Measure performance in terms of IOPS, latency, and throughput.
  • Identify potential bottlenecks and areas for improvement.

Thanks to these tests, companies can validate their technological choices before investing heavily in an HCI solution.

The second reason to use X-Ray is to test a cluster’s resilience, a crucial characteristic for avoiding service interruptions. With Nutanix X-Ray, it is possible to:

  • Simulate node, disk, or network failures to see how the infrastructure reacts.
  • Test failover and recovery mechanisms.
  • Measure the time required to restore a service in the event of a failure.

These tests help ensure high availability and service continuity even in the event of a major problem once the cluster is in production.

X-Ray also allows you to compare several HCI infrastructures and choose the most appropriate one based on your needs. To this end, it offers:

  • Comparative benchmarks between different solutions (e.g., Nutanix vs. VMware vSAN).
  • A neutral and impartial performance analysis.
  • A better understanding of the strengths and weaknesses of each infrastructure.

The results collected will enable companies to make the right decisions regarding the evolution or replacement of their infrastructure.

The results provided by X-Ray will also facilitate the optimization of HCI environments by:

  • Adjusting configurations to maximize performance.
  • Identifying potential improvements in storage, network, and CPU.
  • Planning infrastructure upgrades based on future needs.

The tool thus helps reduce costs and improve operational efficiency by reducing errors in sizing or technology choices.

As you can see, Nutanix X-Ray is an essential tool for any company wishing to test, compare, and optimize its HCI infrastructure before and after deployment.

In the next article, I will explain how to implement the tool on a Nutanix cluster.

The official Nutanix X-Ray documentation: https://portal.nutanix.com/page/documents/details?targetId=X-Ray-Guide-v5_3:X-Ray-Guide-v5_3

Read More

From May 7th to 9th, 2025, I was invited as a Nutanix Technology Champion to the Nutanix .NEXT conference in Washington, DC.

On the second day of the event, we had a luncheon planned with:

  • Rajiv Ramaswami, President & CEO of Nutanix
  • Thomas Cornely, Senior Vice President of Product Management at Nutanix
  • Jason Longpre, Vice President of Worldwide Support at Nutanix

This luncheon was an opportunity to interact with them and ask questions directly, but not only that…

An Almost Surprise Ceremony

Angelo Luciani, the Nutanix Technology Champion program manager, had warned us that he was preparing awards to be presented to some of us at .NEXT, without giving us any further details.

Once lunch was over, he took the stage and launched the ceremony, which would recognize three program members for their involvement. The awards were as follows:

  • Nutanix Community Excellence Award, recognizing involvement on the Nutanix forums
  • Nutanix NTC Storyteller Award, recognizing involvement on the blog and social media
  • Nutanix User Group Champion Award, recognizing involvement in the NUG

Nutanix NTC Storyteller: Meaningful Recognition

When I started blogging, my goal was simple: to share. My tests, my struggles, my discoveries, my tech favorites. I had no ambitions or specific roadmap, just the desire to share in my own way.

The blog now has over 110 articles, each written in French and fully translated into English. It’s a colossal and long-term undertaking, requiring hours of research, testing, brainstorming, and more, culminating in simple posts or entire guides.

And there are moments along the way when all this behind-the-scenes work, all these words aligned together, end up resonating beyond the screen.

Today, I have the immense pleasure (and a certain emotion, I admit) of sharing with you that I have received the “Nutanix NTC Storyteller” trophy.

Being recognized as a “Storyteller” by the Nutanix Technology Champions (NTC) program isn’t just a trophy that will adorn my shelf; it’s recognition of my ongoing commitment to making technology more readable, accessible, and understandable.

For me, this trophy is:

  • Recognition of my commitment to the technical community
  • A strong signal that quality, authentic, and consistent content matters more than quantity
  • A spotlight on the importance of the role of communication in our profession
  • But above all, it provides additional motivation to continue creating, sharing, and connecting ideas

Thank you to everyone who reads, comments, shares, and challenges me (you’ll recognize yourselves).

Thank you to Nutanix and especially Angelo Luciani, without whom none of this would have been possible.

Thank you to the NTC community, which has provided me with so much inspiration, connections, and learning, and especially to my two “partners in crime”: Jeroen and Maroane.

This trophy isn’t an end, but a new starting point. It pushes me to go further, to explore new formats (including a YouTube channel), and to continue talking about infrastructure and hyperconvergence with the same passion and energy.

Read More

I worked with a customer who has a large number of fairly old nodes in production at their remote sites. Unfortunately, they are facing a problem performing AOS and AHV installations on them because the hardware is not officially supported by Nutanix. Having recovered a few identical nodes, I looked into the problem to find a solution…

Node Hardware Configuration

These nodes are Supermicro SuperServer 5019D-FN8TP nodes:

They are perfect for creating home labs with their 1U form factor and half-depth design, allowing them to fit into any home rack.

The hardware configuration is as follows:

  • Processor: Intel® Xeon® processor D-2146NT, 8 cores / 16 threads
  • 128GB RAM (expandable up to 512GB)
  • 1 M.2 boot disk
  • 2 1TB SSDs (2 additional SSDs can be added)
  • 4 1G RJ45 ports
  • 2 10G RJ45 ports
  • 2 10G SFP ports
  • 1 RJ45 port dedicated to IPMI
  • 1 NVidia Tesla P40 graphics card

Did I mention that these nodes are perfect for home labs?

First Foundation Tests

For my first Foundation tests, I chose to start with a very old version of Foundation: 4.6.

Software-wise, I also started with an old AOS version, 5.5.9.5, with the AHV bundled in the package. Since most of the client nodes were also running older versions, I figured it should work.

First failure… of a long series!

I tested many possible combinations with Foundation versions 4.6 / 5.0 / 5.4 / 5.9, AOS versions 5.5.9.5 / 5.6.1 / 5.20 / 6.10, AHV bundled, not bundled… and even a custom Phoenix image generated from one of the recovered nodes… And absolutely no success, often with error messages that differed depending on the combinations used.

But one message still came up more frequently than the others…

Hardware Compatibility Check

During the Foundation process, there is a step in which the Phoenix system generates a hardware configuration file for the node(s) to be imaged: hardware_config.json.

Once this file is generated, Foundation compares it to its list of known hardware to verify that it is a node it knows how to image… And this is where my problem arises:

2025-06-17 11:55:58,642Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-17 11:55:58,942Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-17 11:56:02,383Z foundation_tools.py:334 ERROR Command '.local/bin/layout_finder.py local' returned error code 1
stdout:

stderr:
Traceback (most recent call last):
  File "/root/.local/bin/layout_finder.py", line 297, in <module>
    write_layout("hardware_config.json", 1)
  File "/root/.local/bin/layout_finder.py", line 238, in write_layout
    top = get_layout(node_position)
  File "/root/.local/bin/layout_finder.py", line 130, in get_layout
    vpd_info = vpd_info_override or get_vpd_info(system_info_override)
  File "/root/.local/bin/layout_finder.py", line 249, in get_vpd_info
    module, model, model_string, hardware_id = _find_model_match(
  File "/root/.local/bin/layout_finder.py", line 78, in _find_model_match
    raise exceptions[0]
__main__.NoMatchingModule: Raw FRU: FRU Device Description : Builtin FRU Device (ID 0)
 Chassis Type          : Other
 Chassis Part Number   : CSE-505-203B
 Chassis Serial        : C5050LH47NA0950
 Board Mfg Date        : Wed Oct 31 16:00:00 2018
 Board Mfg             : Supermicro
 Board Serial          : ZM18AS036679
 Board Part Number     : X11SDV-8C-TP8F
 Product Manufacturer  : Supermicro
 Product Name          : 
 Product Part Number   : SYS-5019D-FN8TP-1-NI22
 Product Version       : 
 Product Serial        : S348084X9211699
Product Name: SYS-5019D-FN8TP-1-NI22
Unable to match system information to layout module. Please refer KB-7138 to resolve the issue. 

Foundation is very kind to point out that there’s a KB available, as this is clearly a recurring problem!

Link to the Nutanix KB: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW

Now let’s see how to solve my problem…

FRU Modification

The Nutanix KB indicates that you must edit your hardware’s FRU to match hardware on the compatibility list.

To do this, use the SMCIPMITool utility provided by Supermicro and available here: https://www.supermicro.com/en/solutions/management-software/ipmi-utilities

Once the utility is downloaded, you need to launch it from the command line with the correct parameters:

./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fru

The parameters are as follows:

  • IP address of your node’s IPMI
  • Administrator account login (default is ADMIN)
  • The associated password

The command will query the IPMI and return information about the hardware:

Getting FRU ...
Chassis Type (CT)              = Other (01h)
Chassis Part Number (CP)       = CSE-505-203B
Chassis Serial Number (CS)     = XXXXXXXXXXXXXXX
Board mfg. Date/Time (BDT)     = 2018/10/31 16:00:00 (A0 3E B7)
Board Manufacturer Name (BM)   = Supermicro
Board Product Name (BPN)       =
Board Serial Number (BS)       = XXXXXXXXXXXX
Board Part Number (BP)         = X11SDV-8C-TP8F
Board FRU File ID              =
Product Manufacturer Name (PM) = Supermicro
Product Name (PN)              =
Product PartModel Number (PPM) = SYS-5019D-FN8TP-1-NI22
Product Version (PV)           =
Product Serial Number (PS)     = XXXXXXXXXXXXXXX
Product Asset Tag (PAT)        =
Product FRU File ID            =

It is then possible to access each of the elements via different commands, for example:

SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PM "param"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PN "NONE"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PPM "param"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PV "NONE"

Obviously, I replace “param” with the desired parameter. Now that I have a technique to “lie” to the system, I need to come up with a good lie…

Looking for the lost model…

The problem in our case is to come up with a FRU that matches a piece of hardware on the compatibility list embedded in Phoenix…

I tested a few guesses at similar hardware by replacing the existing PPM:

  • SYS-5019D-FN8TP-1-NI22 (the original one)
  • X11SDV-8C-TP8F (this is the model recognized by Nutanix on the client nodes)
  • NX-1120S-G7
  • NX-1065-G7

The first is not recognized during the Foundation process, and the same goes for the second. The next two are recognized, but a slightly different error message appears…

stderr:
Traceback (most recent call last):
  File "/root/.local/bin/layout_finder.py", line 297, in <module>
    write_layout("hardware_config.json", 1)
  File "/root/.local/bin/layout_finder.py", line 238, in write_layout
    top = get_layout(node_position)
  File "/root/.local/bin/layout_finder.py", line 146, in get_layout
    module.populate_layout(layout_api, layout_api.discovery_info, layout,
  File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout
    data_hbas = api.find_devices(pci_ids=["1000:0097"], min_=1, max_=1,
  File "/root/.local/lib/python3.9/site-packages/layout/layout_api.py", line 300, in find_devices
    raise Exception(msg)
Exception: This node is expected to have exactly 1 SAS3008. But phoenix could not find any such device
2025-06-17 12:22:11,405Z imaging_step.py:123 DEBUG Setting state of ) @c2b0> from RUNNING to FAILED
2025-06-17 12:22:11,409Z imaging_step.py:123 DEBUG Setting state of ) @ca90> from PENDING to NR
2025-06-17 12:22:11,410Z imaging_step.py:182 WARNING Skipping ) @ca90> because dependencies not met, failed tasks: [) @c2b0>]
2025-06-17 12:22:11,412Z imaging_step.py:123 DEBUG Setting state of ) @c940> from PENDING to NR
2025-06-17 12:22:11,413Z imaging_step.py:182 WARNING Skipping ) @c940> because dependencies not met
2025-06-17 12:22:11,413Z imaging_step.py:123 DEBUG Setting state of ) @c2e0> from PENDING to NR
2025-06-17 12:22:11,414Z imaging_step.py:182 WARNING Skipping ) @c2e0> because dependencies not met

The node model is recognized by the Foundation process, but the node’s hardware configuration is also checked! Therefore, finding a similar model isn’t enough; the model AND the hardware configuration must be similar…

But how do I find the right model? And then I had an idea: search the Phoenix files mounted during installation to find out which models it expects to find…

A quick SSH into the node booted on Phoenix, whose installation failed, and here I am, wandering through the system’s intricacies to find what I’m looking for…

The information about the supported models is located in the /root/.local/lib/python3.9/site-packages/layout/modules folder. How do I know this? Because the logs generated during my previous attempts indicated:

File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout

And in this module folder, there is absolutely something for everyone:

Since the nodes in question are Supermicro, I focused my research on the “smc” prefix in order to reduce the range of possibilities:

To narrow things down further, I eliminated everything that concerned more than one node (the 2-node and 4-node layouts), which left me with only about ten candidates. Going through them in order, I quickly found the right module: smc_e300_gen11.py!
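
If you want to shortcut the manual browsing, a simple grep from the Phoenix shell can narrow down the candidate modules. The board part number below is the one from my FRU; adapt it to yours:

grep -ril "X11SDV" /root/.local/lib/python3.9/site-packages/layout/modules/
grep -n "E300" /root/.local/lib/python3.9/site-packages/layout/modules/smc_e300_gen11.py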

Inside the file, I immediately spot the same motherboard: X11SDV-8C-TP8F

The module covers two models: the SMC-E300-2, which has two drives, and the SMC-E300-4, which has four. It’s the first one that interests me, and while searching online, I came across the corresponding Supermicro system, the SuperServer E300-9D-8CN8TP: https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E300-9D-8CN8TP.cfm

This system is extremely similar to the hardware I own, so I think I’ve finally found the right model! I note the important details and shut down my machine:

  • X11SDV-8C-TP8F (board part number)
  • SMC-E300-2 (model)
  • CSE-E300 (chassis part number)

The final stretch: the custom FRU

Now that I have the missing information, I need to modify my FRU to match the model Foundation expects.

Here are the commands I ran:

./SMCIPMITool.exe ip_address ADMIN password ipmi fruw CP "CSE-E300"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PPM "SMC-E300-2"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PN "NONE"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PV "NONE"
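
To confirm that the changes were written correctly, you can simply re-run the read command from earlier and check the CP, PPM, PN and PV fields:

./SMCIPMITool.exe ip_address ADMIN password ipmi fru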

Then I relaunched a Foundation 5.9 with an AOS 6.10.1.6 and an AHV 20230302.103014 in order to validate that what I found works:

2025-06-18 07:35:49,786Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-18 07:35:50,071Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-18 07:35:54,153Z imaging_step_misc_hw_checks.py:168 DEBUG Not an NX G7+ node with RAID boot drives. Skipping RAID checks.
2025-06-18 07:35:54,156Z imaging_step.py:123 DEBUG Setting state of ) @dee0> from RUNNING to FINISHED
2025-06-18 07:35:54,157Z imaging_step.py:162 INFO Completed ) @dee0>
2025-06-18 07:35:54,159Z imaging_step.py:123 DEBUG Setting state of ) @deb0> from PENDING to RUNNING
2025-06-18 07:35:54,162Z imaging_step.py:159 INFO Running ) @deb0>
2025-06-18 07:35:54,165Z imaging_step_pre_install.py:364 INFO Rebooting into staging environment
2025-06-18 07:35:54,687Z cache_manager.py:142 DEBUG Cache HIT: key(get_nos_version_from_tarball_()_{'nos_package_path': '/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz'})
2025-06-18 07:35:54,690Z imaging_step_pre_install.py:389 INFO NOS version is 6.10.1.6
2025-06-18 07:35:54,691Z imaging_step_pre_install.py:392 INFO Preparing NOS package (/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz)
2025-06-18 07:35:54,691Z phoenix_prep.py:82 INFO Unzipping NOS package

It passed the hardware validation without a hitch, and the installation eventually went through.

Of course, this is a workaround to allow my client to redeploy their nodes and extend their lifespan. The ideal solution would have been to be able to create a custom .py file that perfectly matches my model without having to modify the FRU, which, to my knowledge, is unfortunately currently impossible.

One problem persists, however: the cluster can be created in RF2, but Data Resiliency will be critical… I’m still looking for a solution to this problem…

Thanks to Théo and Jeroen for their ideas, which showed me the beginning of the path that led me to the solution!

Link to the Nutanix KB used: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW

Read More

In the previous blog post, I explained how to monitor your Nutanix cluster with Centreon using SNMP v2c.

In this new blog post, I’ll explain how to monitor your Nutanix cluster with Centreon using SNMP v3.

Prerequisites

There are a few prerequisites you must meet to add your Nutanix cluster to the Centreon solution. Here’s a list of what you need:

  • A Nutanix cluster with admin access to the web interface
  • A running Centreon server with the Nutanix connector installed
  • SSH access to the Centreon VM
  • The required network flows must be open in the firewall (SNMP uses port 161/UDP)

Configuring SNMP v3 on the Nutanix Cluster

To configure SNMP on your Nutanix cluster, start by connecting to the Prism Element and then going to “Settings > SNMP”. Check “Enable SNMP” and click “+ New Transport” to add port 161 in UDP:

Then, in “Users”, click on “New User” and enter a username, an AES privacy key, and a SHA authentication key:

In my case, I’ve entered the following values because this is for lab purposes only, but I recommend using much more complex ones:

  • Username: snmp-centreon
  • Priv Key: snmp-priv-key
  • Auth Key: snmp-auth-key

Make a note of the Username, Priv Key, and Auth Key; we’ll need them later. The configuration is complete on the Nutanix side; now let’s move on to the Centreon configuration.
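
Before going any further, you can optionally validate the SNMP v3 configuration from the Centreon VM’s shell with a quick snmpwalk. The OID below should correspond to the Nutanix enterprise branch, and the credentials are the lab values created above; adjust them to your own:

snmpwalk -v3 -l authPriv -u snmp-centreon -a SHA -A 'snmp-auth-key' -x AES -X 'snmp-priv-key' <cluster-IP> .1.3.6.1.4.1.41263

If the credentials and firewall rules are correct, the command should return a series of Nutanix-specific OIDs.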

Adding a Nutanix Cluster to Centreon

To add your Nutanix cluster to Centreon, log in to your monitoring system’s web interface, go to “Configuration > Hosts” and click “Add”:

On the page that appears, there is a first block of information to fill in:

  • 1: Cluster name
  • 2: Cluster IP address
  • 3: SNMP version 3
  • 4: The Centreon server that will monitor the cluster
  • 5: The time zone associated with your cluster
  • 6: The templates you wish to add
  • 7: Check “Yes” to ensure that all services associated with the previously added templates are automatically created

On the second part of the page, there are a few things to configure, including the check period and frequency, and especially the SNMPEXTRAOPTIONS field.

The command line syntax to enter in SNMPEXTRAOPTIONS is:

--snmp-username='snmp-centreon' --authprotocol='SHA' --authpassphrase='snmp-auth-key' --privprotocol='AES' --privpassphrase='snmp-priv-key'

Remember to check the “Password” box to hide sensitive information:

Once all the information has been entered, confirm so that the new host is created on the server. You must then export the configuration to the pollers. To do this, click on “Pollers” in the top left corner, then on “Export configuration”:

Then click on “Export & Reload” in the small window that appears:

To check that your host is being taken into account, go to “Monitoring > Resources Status”, your first checks should start to come up:

If all goes as planned, you should have all your probes green within minutes!

Troubleshooting

If you unfortunately have a monitor that looks like this:

I recommend checking the following:

  • That the SNMP flow (port 161/UDP) is open in the firewall
  • That the AuthKey/PrivKey pair and the username match on both sides
Read More

Continuously monitoring your cluster is the best option to ensure everything is running as you expect.

In this blog post, I’ll explain how to monitor your Nutanix cluster on Centreon using SNMP v2c.

Prerequisites

There are a few prerequisites you must meet to add your Nutanix cluster to the Centreon solution. Here’s a list of what you need:

  • A Nutanix cluster with admin access to the web interface
  • A running Centreon server with the Nutanix connector installed
  • SSH access to the Centreon VM
  • The required network flows must be open in the firewall (SNMP uses port 161/UDP)

Configuring SNMP v2c on the Nutanix Cluster

To configure SNMP on your Nutanix cluster, start by connecting to the Prism Element and then going to “Settings > SNMP”. Check “Enable SNMP” and click “+ New Transport” to add port 161 in UDP:

Then, under “Traps,” click “+ New Trap Receiver” and fill in the following fields:

  • Receiver Name: The name you wish to assign to your Receiver
  • Check v2c
  • Community: Indicate the SNMP community you wish to use
  • Address: The address of your Centreon server
  • Port: 161
  • Transport protocol: UDP

Click “Save” to save the configuration.
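
Optionally, you can verify SNMP connectivity from the Centreon VM’s shell before configuring the host. Replace the community and IP with your own values; the OID below should correspond to the Nutanix enterprise branch:

snmpwalk -v2c -c <community> <cluster-IP> .1.3.6.1.4.1.41263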

Adding a Nutanix Cluster to Centreon

To add your Nutanix cluster to Centreon, log in to your monitoring system’s web interface, go to “Configuration > Hosts” and click “Add”:

On the page that appears, there is a first block of information to fill in:

  • 1: Cluster name
  • 2: Cluster IP address
  • 3: The community you specified on the trap receiver
  • 4: SNMP version 2c
  • 5: The Centreon server that will monitor the cluster
  • 6: The time zone associated with your cluster
  • 7: The templates you wish to add
  • 8: Check “Yes” to ensure that all services associated with the previously added templates are automatically created.

On the second part of the page, there are a few things to configure, including the check period and frequency:

Once all the information has been entered, confirm so that the new host is created on the server. You must then export the configuration to the pollers. To do this, click on “Pollers” in the top left corner, then on “Export configuration”:

Then click on “Export & Reload” in the small window that appears:

To check that your host is being taken into account, go to “Monitoring > Resources Status”, your first checks should start to come up:

If all goes as planned, you should have all your probes green within minutes!

Troubleshooting

If you unfortunately have a monitor that looks like this:

I recommend checking the following:

  • Opening SNMP streams (port 161/UDP) in the firewall
  • Configuring the Traps Receiver on the Nutanix cluster
  • Configuring the community on the Centreon server
Read More

More and more businesses are adopting multicloud infrastructures to benefit from flexibility, agility, and security. To meet this need, OVHcloud has partnered with Nutanix to offer optimized solutions for managing hybrid cloud solutions.

I invite you to discover the Nutanix offerings on OVHcloud and how they can help transform business infrastructures.

OVHcloud and Nutanix: A Strategic Collaboration

OVHcloud, a major European cloud provider, and Nutanix, a leader in hyperconverged solutions, are collaborating to offer high-performance, secure, and enterprise-grade services. This partnership aims to provide an integrated and secure cloud platform, allowing businesses to focus on their applications without worrying about managing the underlying infrastructure.

Integrating Nutanix solutions into the OVHcloud cloud creates a simplified multi-cloud environment, offering IT teams greater flexibility. Customers can deploy their applications across hybrid and multi-cloud environments while benefiting from unified management, enhanced security, and reduced operational costs.

Nutanix solutions at OVHcloud

Nutanix offerings on OVHcloud include several essential services for businesses looking to modernize and simplify their infrastructure:

Nutanix Cloud Platform on OVHcloud: This platform provides a scalable and integrated cloud infrastructure with a Nutanix hyperconverged infrastructure (HCI) solution. It can run a variety of workloads, such as databases, productivity applications, and mission-critical applications, while ensuring high security and optimal performance.

HYCU Backup: OVHcloud also offers a backup solution for your Nutanix infrastructure through the HYCU Backup solution, a comprehensive backup software solution that is seamlessly integrated with Nutanix.

The advantages of OVHcloud’s Nutanix offerings

Adopting Nutanix offerings on OVHcloud offers several advantages:

Simplicity and centralized management: Nutanix solutions provide a centralized management interface allowing IT teams to manage their resources in a multicloud environment without additional complexity.

Data sovereignty: OVHcloud complies with European data protection standards. Combined with Nutanix solutions, businesses benefit from high levels of security and enhanced access controls.

Licensing flexibility: All hardware and software licenses can be provided by OVHcloud, helping to eliminate complexity and hidden costs, or you can bring your own Nutanix license to facilitate the provisioning of OVHcloud resources.

Performance and scalability: Nutanix solutions on OVHcloud offer a high-performance and scalable infrastructure, adapted to the growing needs of businesses. With the flexibility of Nutanix solutions, businesses can easily adjust their resources as needed by adding nodes on demand to increase the hardware resources of their clusters.

Cost Reduction: Nutanix’s hyperconverged infrastructure reduces operational costs by simplifying infrastructure management and reducing the need for physical servers. OVHcloud customers can thus optimize their IT spending while benefiting from high performance.

Use cases: How do businesses benefit from Nutanix on OVHcloud offerings?

Nutanix on OVHcloud offerings are particularly suited for the following use cases:

Use cases reinforced by the options offered by OVHcloud:

Conclusion

Nutanix on OVHcloud offers a comprehensive solution for businesses looking to efficiently manage their multicloud infrastructures. By combining the performance of Nutanix solutions with the flexibility and security of OVHcloud, businesses can benefit from a scalable, high-performance infrastructure that complies with European regulations.

By adopting Nutanix on OVHcloud solutions, businesses can simplify their infrastructure, strengthen their security, optimize their costs, and focus on growth.

Add to this additional services such as KMS key management and the HYCU backup solution, and we clearly have a serious European competitor to Google Cloud, AWS, and Azure.

Read More

After a rather short night due to the previous evening and jet lag, I set off for the Washington Convention Center for the launch of the first day of Nutanix .NEXT 2025.

NTCs! Gathering!

The first mission was to find a coffee, a large coffee, a very large coffee, because I already knew it was going to be a long day… Once I had my precious coffee, I headed straight to the Nutanix Technology Champions’ meeting point: the Community Lounge.

I had the pleasure of meeting Angelo, Jeroen, Brad, Jason, Avi, Kim, Bart, Angela, and many others there. We made sure everyone had had a good trip, discussed the 3 days to come, and exchanged the stickers specially designed for the occasion, which helped pass the time until the keynote…

WELCOME TO .NEXT 2025 !!!

Traditionally, the first .NEXT keynote is the one that reveals the majority of the publisher’s big announcements, and this edition was no exception!

To kick off the show, we were able to count on Mandy Dhaliwal who, after warmly thanking the event’s sponsors, quickly handed over to Rajiv Ramaswami, CEO of Nutanix.

After a short introduction in which he presented a muscular, AI-generated “starter pack” version of himself, .NEXT 2025 was officially launched and the keynote could really get started, with the following announcements and their dedicated articles:

Keynote #1: Closing Clap

Once all the technical announcements had been made, Rajiv Ramaswami handed the stage over to Mandy Dhaliwal, who, with undisguised pleasure, welcomed Chef José Andres. With his trademark sense of humor, he spoke to us about his personal vision of the values Nutanix embodies: Hungry, Humble, Honest, and with Heart.

Making a decision, whether good or bad, is always better than not making a decision at all – José Andres

He also told us about his work against world hunger through his organization “World Central Kitchen” and his various activities (writing books, opening restaurants, television shows, etc.).

Once the keynote was over, I rushed out to grab a quick meal before heading off to my certification…

Certification: NCM MCI 6.10 Beta, a Real Mess

The exam was scheduled for 1:10 PM, but nothing was ready! The examiners hadn’t anticipated the workstation preparation and had to do it urgently using USB drives…

I had to wait until 2:10 PM before being able to start the exam, and I wasn’t done with the surprises… Between missing information in the exam questions, information that was present but unclear, various typos, missing features in the interface (local account management on Prism Central, for example), and even licensing issues… I think for a beta, it lived up to its name!

I won’t have the results until August, so stay tuned, but don’t expect much!

End of the Day

We ended the day in the exhibitor hall with drinks and some canapés at the OVHcloud booth. We then headed to the Nutanix EMEA evening, where we met partners and competitors in a rather pleasant setting.

The noise level being quite high, we ended up changing locations and went to Jaleo, the restaurant of chef… José Andres! We were able to share tapas and drinks there before finally returning to our respective hotels to regain some strength for the 2nd day…

Read More

It wouldn’t be a keynote without mentioning the NC2-validated cloud providers (Nutanix Cloud Clusters) and making an announcement!

An NC2 instance generally refers to Nutanix Cloud Clusters (NC2), a solution developed by Nutanix to deploy and run hyperconverged infrastructures (HCI) in the public cloud while maintaining the same tools and practices as in a private data center.

To date, there are two main players:

  • AWS NC2, available and certified since 2020
  • NC2 on Azure, available and certified since 2022

During the keynote, the availability of NC2 on Google Cloud was announced for the end of 2025, as well as seamless migration between an on-prem infrastructure and a Google Cloud infrastructure with Flow Virtual Networking.

That makes 3 US cloud providers, and no European ones… But a little bird tells me that OVHcloud is in the starting blocks, ready to play in the big leagues… Stay tuned!

Official Announcement: https://www.nutanix.com/blog/announcing-nutanix-cloud-clusters-on-google-cloud-public-preview#

Another announcement, concerning AWS this time: the availability of Cloud Native AOS!

Nutanix Cloud Native AOS (CN-AOS) integrates Nutanix storage directly into Kubernetes, without a hypervisor. This facilitates the management, portability, and resilience of containerized applications across the cloud, edge, or on-premises. CN-AOS unifies data and application management, simplifies disaster recovery, and leverages the robust Nutanix AOS architecture, suitable for modern hybrid environments.

Official announcement: https://www.nutanix.com/blog/nutanix-cloud-native-aos-brings-enterprise-storage-to-diverse-kubernetes-environments#

Finally, one last announcement: Nutanix is partnering with Canonical, the publisher of Ubuntu, to offer Ubuntu Pro as an integrated option for its Kubernetes Platform (NKP). This collaboration aims to simplify deployment and strengthen the security of Kubernetes clusters, with features such as Livepatch (reboot-free kernel updates) and long-term support. Ubuntu Pro will be available with the NKP Pro and Ultimate editions. NKP remains flexible, also allowing the use of other Linux systems depending on customer needs.

Official Announcement: https://www.nutanix.com/blog/nutanix-and-canonical-partner-to-simplify-kubernetes-deployments-with-nkp-and-ubuntu-pro#

Read More