Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell
Nutanix Foundation on a Steamdeck

In one of my previous articles, I talked about my Nutanix Foundation installation on my Steamdeck. Unfortunately, I hadn’t yet had the opportunity to run an actual installation with that setup, for lack of a server to image.

But things have changed since I recently acquired a Supermicro SuperServer 5019D-FN8TP! I also wrote an article about implementing Nutanix Foundation on unofficially supported hardware:

Hardware Preparation for Foundation

To be able to image my node with Nutanix Foundation from my Steamdeck, I absolutely needed to be connected to the network via RJ45, a type of connection missing from Valve’s console…

So I purchased an external dock with several USB ports and an RJ45 port that can be connected via USB-C.

I made the following connections:

  • Connecting the external dock to the Steamdeck’s USB-C port
  • Connecting the power supply to the dock’s USB-C port
  • Connecting the network cable to the dock’s RJ45 port
  • Connecting the external SSD (on which Windows is installed) to one of the dock’s USB ports

This allowed me to boot the Steamdeck from the SSD to start Windows 11!


Nutanix Foundation for a Node with a Steamdeck

As I mentioned in the previous article, I already have Nutanix Foundation installed, so I won’t dwell on that part and will move directly to the Foundation section!

For this Foundation, I used the latest version available, namely Foundation 5.9 with AOS 10 and AHV 7!


The Foundation process starts flawlessly as expected and the process completes after a while:


As you can see, imaging a node with the Steamdeck is possible! Is it relevant? No certainty, but I can at least say that “I did it!”

Read More

I worked with a customer who has a large number of fairly old nodes in production at their remote sites. Unfortunately, they are facing a problem performing AOS and AHV installations on them because the hardware is not officially supported by Nutanix. Having recovered a few identical nodes, I looked into the problem to find a solution…

Node Hardware Configuration

These nodes are Supermicro SuperServer 5019D-FN8TP nodes:

They are perfect for creating home labs with their 1U form factor and half-depth design, allowing them to fit into any home rack.

The hardware configuration is as follows:

  • Processor: Intel® Xeon® D-2146NT, 8 cores / 16 threads
  • 128 GB RAM (expandable up to 512 GB)
  • 1 M.2 boot disk
  • 2 × 1 TB SSDs (2 additional SSDs can be added)
  • 4 × 1 GbE RJ45 ports
  • 2 × 10 GbE RJ45 ports
  • 2 × 10 GbE SFP+ ports
  • 1 RJ45 port dedicated to IPMI
  • 1 NVIDIA Tesla P40 graphics card

Did I tell you these nodes are perfect for home labs?

First Foundation Tests

For my first Foundation tests, I chose to start with a very old version of Foundation: 4.6.

Software-wise, I also started old, with AOS 5.5.9.5 and the AHV bundled in the package. Since most of the client’s nodes were also running older versions, I figured it should work.

First failure… of a long series!

I tested many possible combinations with Foundation versions 4.6 / 5.0 / 5.4 / 5.9, AOS versions 5.5.9.5 / 5.6.1 / 5.20 / 6.10, AHV bundled, not bundled… and even a custom Phoenix image generated from one of the recovered nodes… And absolutely no success, often with error messages that differed depending on the combinations used.

But one message still came up more frequently than the others…

Hardware Compatibility Check

During the Foundation process, there is a step in which the Phoenix system generates a hardware configuration file for the node(s) to be imaged: hardware_config.json.

Once this file is generated, Foundation compares it to its list of known hardware to verify that the node can be imaged… And this is where my problem arises:

2025-06-17 11:55:58,642Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-17 11:55:58,942Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-17 11:56:02,383Z foundation_tools.py:334 ERROR Command '.local/bin/layout_finder.py local' returned error code 1
stdout:

stderr:
Traceback (most recent call last):
  File "/root/.local/bin/layout_finder.py", line 297, in <module>
    write_layout("hardware_config.json", 1)
  File "/root/.local/bin/layout_finder.py", line 238, in write_layout
    top = get_layout(node_position)
  File "/root/.local/bin/layout_finder.py", line 130, in get_layout
    vpd_info = vpd_info_override or get_vpd_info(system_info_override)
  File "/root/.local/bin/layout_finder.py", line 249, in get_vpd_info
    module, model, model_string, hardware_id = _find_model_match(
  File "/root/.local/bin/layout_finder.py", line 78, in _find_model_match
    raise exceptions[0]
__main__.NoMatchingModule: Raw FRU: FRU Device Description : Builtin FRU Device (ID 0)
 Chassis Type          : Other
 Chassis Part Number   : CSE-505-203B
 Chassis Serial        : C5050LH47NA0950
 Board Mfg Date        : Wed Oct 31 16:00:00 2018
 Board Mfg             : Supermicro
 Board Serial          : ZM18AS036679
 Board Part Number     : X11SDV-8C-TP8F
 Product Manufacturer  : Supermicro
 Product Name          : 
 Product Part Number   : SYS-5019D-FN8TP-1-NI22
 Product Version       : 
 Product Serial        : S348084X9211699
Product Name: SYS-5019D-FN8TP-1-NI22
Unable to match system information to layout module. Please refer KB-7138 to resolve the issue. 

Foundation is very kind to point out that there’s a KB available, as this is clearly a recurring problem!
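
The matching step that fails here is simple in spirit: Phoenix extracts the product part number from the FRU and looks it up against the model strings compiled into its layout modules. Here is a minimal sketch of that idea, with a purely illustrative model list and function names (this is not Nutanix’s actual code):

```python
class NoMatchingModule(Exception):
    """Raised when the FRU product name matches no known layout module."""

# Tiny illustrative stand-in for the model list baked into Phoenix's layout
# modules; the real list lives in .py files under layout/modules.
KNOWN_MODELS = {
    "SMC-E300-2": "smc_e300_gen11",
    "SMC-E300-4": "smc_e300_gen11",
    "NX-1120S-G7": "smc_gen11_4node",
}

def find_model_match(product_part_number: str) -> str:
    """Return the layout module that claims this FRU product name."""
    try:
        return KNOWN_MODELS[product_part_number]
    except KeyError:
        raise NoMatchingModule(
            f"Unable to match {product_part_number!r} to a layout module"
        ) from None
```

With my node’s original PPM (SYS-5019D-FN8TP-1-NI22) the lookup finds nothing, which is exactly the NoMatchingModule error in the log above.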

Link to the Nutanix KB: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW

Now let’s see how to solve my problem…

FRU Modification

The Nutanix KB indicates that you must edit your hardware’s FRU to match hardware on the compatibility list.

To do this, use the SMCIPMITools utility provided by Supermicro and available here: https://www.supermicro.com/en/solutions/management-software/ipmi-utilities

Once the utility is downloaded, you need to launch it from the command line with the correct parameters:

./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fru

The parameters are as follows:

  • The IP address of your node’s IPMI
  • The administrator account login (default is ADMIN)
  • The associated password

The command will query the IPMI and return information about the hardware:

Getting FRU ...
Chassis Type (CT)              = Other (01h)
Chassis Part Number (CP)       = CSE-505-203B
Chassis Serial Number (CS)     = XXXXXXXXXXXXXXX
Board mfg. Date/Time (BDT)     = 2018/10/31 16:00:00 (A0 3E B7)
Board Manufacturer Name (BM)   = Supermicro
Board Product Name (BPN)       =
Board Serial Number (BS)       = XXXXXXXXXXXX
Board Part Number (BP)         = X11SDV-8C-TP8F
Board FRU File ID              =
Product Manufacturer Name (PM) = Supermicro
Product Name (PN)              =
Product PartModel Number (PPM) = SYS-5019D-FN8TP-1-NI22
Product Version (PV)           =
Product Serial Number (PS)     = XXXXXXXXXXXXXXX
Product Asset Tag (PAT)        =
Product FRU File ID            =
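
To compare FRUs before and after rewriting them, it can help to parse this dump into a dictionary keyed by the field abbreviations (CP, BP, PPM…). Here is a best-effort helper of my own, not part of any official tool:

```python
import re

def parse_fru(dump: str) -> dict:
    """Parse SMCIPMITool 'ipmi fru' output into {abbreviation: value}."""
    fields = {}
    for line in dump.splitlines():
        # Lines look like: Chassis Part Number (CP)  = CSE-505-203B
        m = re.match(r".*\((\w+)\)\s*=\s*(.*)$", line)
        if m:
            fields[m.group(1)] = m.group(2).strip()
    return fields

# Shortened sample of the dump above (serials redacted).
sample = """\
Chassis Part Number (CP)       = CSE-505-203B
Board Part Number (BP)         = X11SDV-8C-TP8F
Product Part Model Number (PPM) = SYS-5019D-FN8TP-1-NI22
Product Name (PN)              =
"""
fru = parse_fru(sample)
```

Diffing two such dictionaries makes it immediately obvious which fields a fruw command actually changed.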

It is then possible to access each of the elements via different commands, for example:

./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fruw PM "param"
./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fruw PN "NONE"
./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fruw PPM "param"
./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fruw PV "NONE"

Obviously, I replace “param” with the desired parameter. Now that I have a technique to “lie” to the system, I need to come up with a good lie…

Looking for the lost model…

The challenge in our case is to end up with a FRU that matches a piece of hardware on the compatibility list built into Phoenix…

I tested candidates somewhat at random, picking similar hardware and replacing the existing PPM:

  • SYS-5019D-FN8TP-1-NI22 (the original one)
  • X11SDV-8C-TP8F (this is the model recognized by Nutanix on the client nodes)
  • NX-1120S-G7
  • NX-1065-G7

The first is not recognized during the Foundation process, and the same goes for the second. The next two are recognized, but a slightly different error message appears…

stderr:
Traceback (most recent call last):
  File "/root/.local/bin/layout_finder.py", line 297, in <module>
    write_layout("hardware_config.json", 1)
  File "/root/.local/bin/layout_finder.py", line 238, in write_layout
    top = get_layout(node_position)
  File "/root/.local/bin/layout_finder.py", line 146, in get_layout
    module.populate_layout(layout_api, layout_api.discovery_info, layout,
  File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout
    data_hbas = api.find_devices(pci_ids=["1000:0097"], min_=1, max_=1,
  File "/root/.local/lib/python3.9/site-packages/layout/layout_api.py", line 300, in find_devices
    raise Exception(msg)
Exception: This node is expected to have exactly 1 SAS3008. But phoenix could not find any such device
2025-06-17 12:22:11,405Z imaging_step.py:123 DEBUG Setting state of ) @c2b0> from RUNNING to FAILED
2025-06-17 12:22:11,409Z imaging_step.py:123 DEBUG Setting state of ) @ca90> from PENDING to NR
2025-06-17 12:22:11,410Z imaging_step.py:182 WARNING Skipping ) @ca90> because dependencies not met, failed tasks: [) @c2b0>]
2025-06-17 12:22:11,412Z imaging_step.py:123 DEBUG Setting state of ) @c940> from PENDING to NR
2025-06-17 12:22:11,413Z imaging_step.py:182 WARNING Skipping ) @c940> because dependencies not met
2025-06-17 12:22:11,413Z imaging_step.py:123 DEBUG Setting state of ) @c2e0> from PENDING to NR
2025-06-17 12:22:11,414Z imaging_step.py:182 WARNING Skipping ) @c2e0> because dependencies not met

The node model is recognized by the Foundation process, but the node’s hardware configuration is also checked! Finding a similar model therefore isn’t enough; the model AND the hardware configuration must match…
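
The check that fails can be pictured as a count constraint on PCI devices: the smc_gen11_4node module insists on exactly one LSI SAS3008 HBA (PCI id 1000:0097), which my node doesn’t have. Here is an illustrative sketch with made-up device lists and a signature loosely inferred from the traceback (again, not Nutanix’s actual code):

```python
def find_devices(present, pci_ids, min_=1, max_=1):
    """Return devices whose PCI id is in pci_ids; fail if the count is out of bounds."""
    found = [d for d in present if d in pci_ids]
    if not (min_ <= len(found) <= max_):
        raise Exception(
            f"This node is expected to have between {min_} and {max_} "
            f"device(s) matching {pci_ids}, but found {len(found)}"
        )
    return found

# Placeholder PCI ids standing in for my node's NICs and GPU - no SAS3008 here,
# so a layout module that demands one will always fail on this hardware.
node_devices = ["8086:15c2", "10de:1b38"]
```

This is why lying about the model only moves the failure one step later: the layout module I pick has to describe hardware my node actually contains.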

But how do I find the right model? And then I had an idea: search the Phoenix files mounted during installation to find out which models it expects to find…

A quick SSH into the node booted on Phoenix, whose installation failed, and here I am, wandering through the system’s intricacies to find what I’m looking for…

The information about supported models lives in the /root/.local/lib/python3.9/site-packages/layout/modules folder. How do I know? Because the logs from my previous attempts pointed there:

File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout

And this modules folder has something for just about every platform.

Since the nodes in question are Supermicro, I focused my search on the “smc” prefix to narrow down the possibilities.

To narrow them down further, I eliminated every module covering more than one node (the 2-node and 4-node variants), which left me with only about ten candidates. Going through them in order, I almost immediately found the right one: smc_e300_gen11.py!
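
The manual filtering above can be sketched in a few lines; the file names below are a small illustrative subset, not the folder’s real contents:

```python
def candidate_modules(filenames):
    """Keep Supermicro single-node layout modules (drop 2-node/4-node variants)."""
    return sorted(
        f for f in filenames
        if f.startswith("smc_")
        and "2node" not in f
        and "4node" not in f
    )

# Illustrative subset of what ls layout/modules might return.
modules = [
    "smc_e300_gen11.py",
    "smc_gen11_4node.py",
    "smc_gen10_2node.py",
    "dell_gen14.py",
]
```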

Inside the file, I immediately spot the same motherboard: X11SDV-8C-TP8F

The module covers two models: the SMC-E300-2, which has two drives, and the SMC-E300-4, which has four. The first one is the one that interests me, and while searching online I came across a matching Supermicro system, the SuperServer E300-9D-8CN8TP: https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E300-9D-8CN8TP.cfm

It is extremely similar to the hardware I own, so I think I’ve finally found the right model! I note the important details and shut down my machine:

  • X11SDV-8C-TP8F (board part number)
  • SMC-E300-2 (model)
  • CSE-E300 (chassis part number)

The final stretch: the custom FRU

Now that I have the missing information, I need to modify my FRU to match the model Foundation expects.

Here are the commands I ran:

./SMCIPMITool.exe ip_address ADMIN password ipmi fruw CP "CSE-E300"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PPM "SMC-E300-2"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PN "NONE"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PV "NONE"
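
For repeatability, the four rewrites can also be generated from a simple field-to-value mapping. The fruw_commands helper below is my own convenience wrapper around the command syntax shown earlier, not part of SMCIPMITool:

```python
def fruw_commands(ip, user, password, fields):
    """Build one 'ipmi fruw' command line per FRU field to rewrite."""
    return [
        f'./SMCIPMITool.exe {ip} {user} {password} ipmi fruw {key} "{value}"'
        for key, value in fields.items()
    ]

# Values taken from smc_e300_gen11.py; IP_ADDRESS/password are placeholders.
overrides = {
    "CP": "CSE-E300",     # chassis part number expected by the layout module
    "PPM": "SMC-E300-2",  # model name Foundation matches on
    "PN": "NONE",
    "PV": "NONE",
}
cmds = fruw_commands("IP_ADDRESS", "ADMIN", "password", overrides)
```

Running the same mapping through the parser shown earlier, after a fresh `ipmi fru` dump, is an easy way to confirm the rewrite took effect.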

Then I relaunched Foundation 5.9 with AOS 6.10.1.6 and AHV 20230302.103014 to validate my findings:

2025-06-18 07:35:49,786Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-18 07:35:50,071Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-18 07:35:54,153Z imaging_step_misc_hw_checks.py:168 DEBUG Not an NX G7+ node with RAID boot drives. Skipping RAID checks.
2025-06-18 07:35:54,156Z imaging_step.py:123 DEBUG Setting state of ) @dee0> from RUNNING to FINISHED
2025-06-18 07:35:54,157Z imaging_step.py:162 INFO Completed ) @dee0>
2025-06-18 07:35:54,159Z imaging_step.py:123 DEBUG Setting state of ) @deb0> from PENDING to RUNNING
2025-06-18 07:35:54,162Z imaging_step.py:159 INFO Running ) @deb0>
2025-06-18 07:35:54,165Z imaging_step_pre_install.py:364 INFO Rebooting into staging environment
2025-06-18 07:35:54,687Z cache_manager.py:142 DEBUG Cache HIT: key(get_nos_version_from_tarball_()_{'nos_package_path': '/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz'})
2025-06-18 07:35:54,690Z imaging_step_pre_install.py:389 INFO NOS version is 6.10.1.6
2025-06-18 07:35:54,691Z imaging_step_pre_install.py:392 INFO Preparing NOS package (/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz)
2025-06-18 07:35:54,691Z phoenix_prep.py:82 INFO Unzipping NOS package

It passed the hardware validation without a hitch, and the installation eventually went through.

Of course, this is a workaround to let my client redeploy their nodes and extend their lifespan. The ideal solution would be to create a custom .py layout file that perfectly matches my model, without having to modify anything, which to my knowledge is currently impossible.

One problem persists, however: the cluster can be created in RF2, but Data Resiliency will be critical… I’m still looking for a solution to this problem…

Thanks to Théo and Jeroen for their ideas, which showed me the beginning of the path that led me to the solution!

Link to the Nutanix KB used: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW

Read More

A few months ago, I bought myself a Steamdeck to pass the time during my convalescence after a foot operation that left me couch-locked. I had already shown you that you can manage a cluster from the Steamdeck and I wanted to push the experience a little further…

Multi-boot on the Steamdeck

To run Foundation for Windows on the Steamdeck, I had to find a way to run Windows 11 in place of the natively embedded SteamOS.

To carry out the operation, I had several options:

  • replace the embedded operating system, switching from SteamOS to Windows 11, but that would make it much harder to play my Steam games on the console
  • set up a multi-boot system with an external drive on which I would install Windows 11, but that could make the device cumbersome to carry around…

Since the goal was to have an additional boot option for my Steamdeck, so I could experiment with Windows in various situations without removing the natively embedded operating system, I opted for the second option and started looking for an external drive that would do the trick.

While browsing the Internet, I finally came across a Kickstarter project “Genki SavePoint”: https://www.kickstarter.com/projects/humanthings/genki-savepoint?lang=fr

The Genki SavePoint is a mini SSD enclosure designed for portable use. On paper, here is what it promises:

  • Compatible with M.2 2230 SSDs
  • Max capacity of 2 TB
  • Transfer speed of 10 Gb/s
  • 100 W charging
  • Integrated heat sink
  • Integrated protection capacitor

So I rushed to support the project by ordering two enclosures, which I finally received after a few weeks of waiting. I added a 1 TB M.2 2230 SSD to have enough space for whatever use I find for it…

Exit SteamOS, hello Windows 11!

Once the enclosure and SSD arrived, I mounted the SSD in the enclosure: simply unscrew the heat sink to reveal the M.2 connector and insert the SSD. Once connected to a computer, the enclosure is detected as an external hard drive.

I now had to prepare the SSD by installing Windows 11 on it using Rufus. I won’t detail the process here since the enclosure’s manufacturer has already documented it: https://www.genkithings.com/blogs/blog/installing-windows-on-savepoint

Once Windows 11 was installed, I downloaded all the available drivers (https://help.steampowered.com/fr/faqs/view/6121-ECCD-D643-BAA8) and the various software I wanted to install next (including Foundation for Windows), and staged everything on the disk. The serious stuff could begin…

Installing Nutanix Foundation

On first boot from the enclosure, I obviously had to go through the operating system setup and install all the drivers previously downloaded.

Then, deploying Foundation on the Steamdeck was simple, since I just had to run the file downloaded from the official website (https://portal.nutanix.com/page/downloads?product=foundation).

Once the installation was complete, I opened the web browser and navigated to http://localhost:8000/gui/index.html to access the Nutanix Foundation interface:

Unsurprisingly, Foundation for Windows runs flawlessly on the Steamdeck, but what about a deployment without an onboard RJ45 network port? To solve this problem, I just had to purchase a mini USB-C dock with:

  • 1 RJ45
  • 3 USB2
  • 1 USB-C
  • 2 HDMI

At this stage, the Steamdeck is “Foundation Ready” and ready to deploy clusters. However, the last question that remains is: does it work? In all honesty, I don’t know because unfortunately I didn’t have a cluster available to allow me to do a full-scale test, but as soon as the opportunity arises it will be done!

Read More