
I worked with a customer who has a large number of fairly old nodes in production at their remote sites. Unfortunately, AOS and AHV installations fail on them because the hardware is not officially supported by Nutanix. Having recovered a few identical nodes, I looked into the problem to find a solution…
Node Hardware Configuration
These nodes are Supermicro SuperServer 5019D-FN8TP nodes:

They are perfect for creating home labs with their 1U form factor and half-depth design, allowing them to fit into any home rack.
The hardware configuration is as follows:
- Processor: Intel® Xeon® D-2146NT, 8 cores / 16 threads
- 128 GB RAM (expandable up to 512 GB)
- 1 M.2 boot disk
- 2 × 1 TB SSDs (2 additional SSDs can be added)
- 4 × 1 GbE RJ45 ports
- 2 × 10 GbE RJ45 ports
- 2 × 10 GbE SFP+ ports
- 1 RJ45 port dedicated to IPMI
- 1 NVIDIA Tesla P40 graphics card

Did I tell you these nodes are perfect for home labs?
First Foundation Tests
For my first Foundation tests, I chose to start with a very old version of Foundation: 4.6.
Software-wise, I also started with an old AOS version, 5.5.9.5, with AHV bundled in the package. Since most of the client's nodes were also running older versions, I figured it should work.
First failure… of a long series!
I tested many possible combinations with Foundation versions 4.6 / 5.0 / 5.4 / 5.9, AOS versions 5.5.9.5 / 5.6.1 / 5.20 / 6.10, AHV bundled, not bundled… and even a custom Phoenix image generated from one of the recovered nodes… And absolutely no success, often with error messages that differed depending on the combinations used.
But one message still came up more frequently than the others…
Hardware Compatibility Check
During the Foundation process, there is a step in which the Phoenix system generates a hardware configuration file for the node(s) to be imaged: hardware_config.json.
Once this file is generated, Foundation compares it to its list of known hardware to verify that it is a node capable of imaging… And this is where my problem arises:
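Conceptually, this check boils down to looking up the node's FRU identity in a table of known models. The sketch below is my own illustration of that idea, not Nutanix's actual layout_finder code; the table contents and module names are assumptions:

```python
# Illustrative sketch of the kind of lookup layout_finder.py performs:
# map known FRU "Product Part Number" strings to layout modules, and
# fail when the node's FRU matches none of them.
# (Hypothetical table, not Nutanix's real data.)

class NoMatchingModule(Exception):
    """Raised when no layout module matches the node's FRU."""
    pass

KNOWN_MODELS = {
    "SMC-E300-2": "smc_e300_gen11",
    "SMC-E300-4": "smc_e300_gen11",
    "NX-1120S-G7": "smc_gen11_1node",
}

def find_model_match(fru: dict) -> str:
    """Return the layout module name for this FRU, or raise NoMatchingModule."""
    ppm = fru.get("Product Part Number", "")
    try:
        return KNOWN_MODELS[ppm]
    except KeyError:
        raise NoMatchingModule(f"Raw FRU: {fru}") from None
```

A node whose Product Part Number (like my `SYS-5019D-FN8TP-1-NI22`) is absent from the table falls straight into the `NoMatchingModule` case, which is exactly the error in the log below.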
2025-06-17 11:55:58,642Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-17 11:55:58,942Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-17 11:56:02,383Z foundation_tools.py:334 ERROR Command '.local/bin/layout_finder.py local' returned error code 1
stdout:
stderr:
Traceback (most recent call last):
File "/root/.local/bin/layout_finder.py", line 297, in <module>
write_layout("hardware_config.json", 1)
File "/root/.local/bin/layout_finder.py", line 238, in write_layout
top = get_layout(node_position)
File "/root/.local/bin/layout_finder.py", line 130, in get_layout
vpd_info = vpd_info_override or get_vpd_info(system_info_override)
File "/root/.local/bin/layout_finder.py", line 249, in get_vpd_info
module, model, model_string, hardware_id = _find_model_match(
File "/root/.local/bin/layout_finder.py", line 78, in _find_model_match
raise exceptions[0]
__main__.NoMatchingModule: Raw FRU: FRU Device Description : Builtin FRU Device (ID 0)
Chassis Type : Other
Chassis Part Number : CSE-505-203B
Chassis Serial : C5050LH47NA0950
Board Mfg Date : Wed Oct 31 16:00:00 2018
Board Mfg : Supermicro
Board Serial : ZM18AS036679
Board Part Number : X11SDV-8C-TP8F
Product Manufacturer : Supermicro
Product Name :
Product Part Number : SYS-5019D-FN8TP-1-NI22
Product Version :
Product Serial : S348084X9211699
Product Name: SYS-5019D-FN8TP-1-NI22
Unable to match system information to layout module. Please refer KB-7138 to resolve the issue.
Foundation is very kind to point out that there’s a KB available, as this is clearly a recurring problem!
Link to the Nutanix KB: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW
Now let’s see how to solve my problem…
FRU Modification
The Nutanix KB indicates that you must edit your hardware’s FRU to match hardware on the compatibility list.
To do this, use the SMCIPMITools utility provided by Supermicro and available here: https://www.supermicro.com/en/solutions/management-software/ipmi-utilities
Once the utility is downloaded, you need to launch it from the command line with the correct parameters:
./SMCIPMITool.exe IP_ADDRESS ADMIN PASSWORD ipmi fru
The parameters are as follows:
- IP address of your node’s IPMI
- Administrator account login (default is ADMIN)
- The associated password
The command will query the IPMI and return information about the hardware:
Getting FRU ...
Chassis Type (CT) = Other (01h)
Chassis Part Number (CP) = CSE-505-203B
Chassis Serial Number (CS) = XXXXXXXXXXXXXXX
Board mfg. Date/Time (BDT) = 2018/10/31 16:00:00 (A0 3E B7)
Board Manufacturer Name (BM) = Supermicro
Board Product Name (BPN) =
Board Serial Number (BS) = XXXXXXXXXXXX
Board Part Number (BP) = X11SDV-8C-TP8F
Board FRU File ID =
Product Manufacturer Name (PM) = Supermicro
Product Name (PN) =
Product PartModel Number (PPM) = SYS-5019D-FN8TP-1-NI22
Product Version (PV) =
Product Serial Number (PS) = XXXXXXXXXXXXXXX
Product Asset Tag (PAT) =
Product FRU File ID =
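Each line of this output follows a `Label (CODE) = value` pattern, which makes it easy to pick out fields programmatically. Here is a small helper to parse it into a dict; this is my own convenience script, not part of SMCIPMITool:

```python
import re

def parse_fru_output(text: str) -> dict:
    """Parse SMCIPMITool 'ipmi fru' output into {code: value} pairs,
    e.g. {'BP': 'X11SDV-8C-TP8F', 'PPM': 'SYS-5019D-FN8TP-1-NI22'}."""
    fields = {}
    for line in text.splitlines():
        # Match the last parenthesized code before the '=' sign.
        m = re.match(r".*\((\w+)\)\s*=\s*(.*)", line)
        if m:
            fields[m.group(1)] = m.group(2).strip()
    return fields
```

This makes it easy to record the original values before overwriting them, so the FRU can be restored later if needed.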
It is then possible to access each of the elements via different commands, for example:
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PM "param"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PN "NONE"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PPM "param"
SMCIPMITool.exe IP_ADDRESS ADMIN password ipmi fruw PV "NONE"
Obviously, I replace “param” with the desired value. Now that I have a technique to “lie” to the system, I need to come up with a good lie…
Looking for the lost model…
The challenge in our case is to craft a FRU that matches hardware on the compatibility list built into Phoenix…
I experimented by replacing the existing PPM with models of similar hardware:
- SYS-5019D-FN8TP-1-NI22 (the original one)
- X11SDV-8C-TP8F (this is the model recognized by Nutanix on the client nodes)
- NX-1120S-G7
- NX-1065-G7
The first is not recognized during the Foundation process, and neither is the second. The next two are recognized, but a slightly different error message appears…
stderr:
Traceback (most recent call last):
File "/root/.local/bin/layout_finder.py", line 297, in <module>
write_layout("hardware_config.json", 1)
File "/root/.local/bin/layout_finder.py", line 238, in write_layout
top = get_layout(node_position)
File "/root/.local/bin/layout_finder.py", line 146, in get_layout
module.populate_layout(layout_api, layout_api.discovery_info, layout,
File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout
data_hbas = api.find_devices(pci_ids=["1000:0097"], min_=1, max_=1,
File "/root/.local/lib/python3.9/site-packages/layout/layout_api.py", line 300, in find_devices
raise Exception(msg)
Exception: This node is expected to have exactly 1 SAS3008. But phoenix could not find any such device
2025-06-17 12:22:11,405Z imaging_step.py:123 DEBUG Setting state of ) @c2b0> from RUNNING to FAILED
2025-06-17 12:22:11,409Z imaging_step.py:123 DEBUG Setting state of ) @ca90> from PENDING to NR
2025-06-17 12:22:11,410Z imaging_step.py:182 WARNING Skipping ) @ca90> because dependencies not met, failed tasks: [) @c2b0>]
2025-06-17 12:22:11,412Z imaging_step.py:123 DEBUG Setting state of ) @c940> from PENDING to NR
2025-06-17 12:22:11,413Z imaging_step.py:182 WARNING Skipping ) @c940> because dependencies not met
2025-06-17 12:22:11,413Z imaging_step.py:123 DEBUG Setting state of ) @c2e0> from PENDING to NR
2025-06-17 12:22:11,414Z imaging_step.py:182 WARNING Skipping ) @c2e0> because dependencies not met
The node model is recognized by the Foundation process, but the node’s hardware configuration is also checked! Therefore, finding a similar model isn’t enough; the model AND the hardware configuration must be similar…
But how do I find the right model? And then I had an idea: search the Phoenix files mounted during installation to find out which models it expects to find…

A quick SSH into the node booted on Phoenix, whose installation failed, and here I am, wandering through the system’s intricacies to find what I’m looking for…
The information about supported models is located in the /root/.local/lib/python3.9/site-packages/layout/modules folder. How do I know this? Because the logs generated during my previous attempts indicated:
File "/root/.local/lib/python3.9/site-packages/layout/modules/smc_gen11_4node.py", line 104, in populate_layout
And in this modules folder, there is something for every platform:

Since the nodes in question are Supermicro, I focused my research on the “smc” prefix in order to reduce the range of possibilities:

To narrow things down further, I eliminated everything concerning more than one node (2- and 4-node chassis), which left me with only about ten possibilities. Going through them in order, I immediately found the right module: smc_e300_gen11.py!
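Rather than opening the files one by one, the same search can be scripted: scan every module file for the board part number. A quick sketch (the Phoenix path is the one from the logs above; the helper itself is my own):

```python
import pathlib

def find_modules_mentioning(modules_dir: str, needle: str) -> list:
    """Return the layout module files containing the given string,
    e.g. a board part number like 'X11SDV-8C-TP8F'."""
    hits = []
    for path in sorted(pathlib.Path(modules_dir).glob("*.py")):
        if needle in path.read_text(errors="ignore"):
            hits.append(path.name)
    return hits

# On the Phoenix-booted node this would be:
# find_modules_mentioning(
#     "/root/.local/lib/python3.9/site-packages/layout/modules",
#     "X11SDV-8C-TP8F")
```

A plain `grep -l X11SDV-8C-TP8F *.py` in that folder achieves the same thing from the Phoenix shell.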

Inside the file, I immediately spot the same motherboard: X11SDV-8C-TP8F.
It comes in two models: the SMC-E300-2, which has two drives, and the SMC-E300-4, which has four. So it's the first one that interests me, and while searching online, I came across another Supermicro system with this motherboard, the SuperServer E300-9D-8CN8TP: https://www.supermicro.com/en/products/system/Mini-ITX/SYS-E300-9D-8CN8TP.cfm
Extremely similar to the hardware I own; I think I've finally found the right model! I note the important details and close the module file:
- X11SDV-8C-TP8F (board part number)
- SMC-E300-2 (model)
- CSE-E300 (chassis part number)
The final stretch: the custom FRU
Now that I have the missing information, I need to modify my FRU to match the model Foundation expects.
Here are the commands I ran:
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw CP "CSE-E300"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PPM "SMC-E300-2"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PN "NONE"
./SMCIPMITool.exe ip_address ADMIN password ipmi fruw PV "NONE"
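When applying this to a whole batch of nodes, these four writes can be scripted. The helper below only assembles the command lines (the tool, credentials, and field codes are as described above; the wrapper itself is my own convenience code):

```python
def build_fru_commands(ipmi_ip, user, password, overrides):
    """Build one SMCIPMITool 'fruw' command per FRU field override.

    overrides: dict of FRU field code -> new value,
    e.g. {"CP": "CSE-E300", "PPM": "SMC-E300-2"}.
    Returns a list of argv lists ready for subprocess.run().
    """
    return [
        ["./SMCIPMITool.exe", ipmi_ip, user, password,
         "ipmi", "fruw", code, value]
        for code, value in overrides.items()
    ]
```

Looping `subprocess.run()` over the result (one node IP at a time) would apply the same FRU overrides across all nodes of a remote site.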
Then I relaunched Foundation 5.9 with AOS 6.10.1.6 and AHV 20230302.103014 to validate that what I found works:
2025-06-18 07:35:49,786Z foundation_tools.py:1634 INFO Node with ip 192.168.84.22 is in phoenix. Generating hardware_config.json
2025-06-18 07:35:50,071Z foundation_tools.py:1650 DEBUG Running command .local/bin/layout_finder.py local
2025-06-18 07:35:54,153Z imaging_step_misc_hw_checks.py:168 DEBUG Not an NX G7+ node with RAID boot drives. Skipping RAID checks.
2025-06-18 07:35:54,156Z imaging_step.py:123 DEBUG Setting state of ) @dee0> from RUNNING to FINISHED
2025-06-18 07:35:54,157Z imaging_step.py:162 INFO Completed ) @dee0>
2025-06-18 07:35:54,159Z imaging_step.py:123 DEBUG Setting state of ) @deb0> from PENDING to RUNNING
2025-06-18 07:35:54,162Z imaging_step.py:159 INFO Running ) @deb0>
2025-06-18 07:35:54,165Z imaging_step_pre_install.py:364 INFO Rebooting into staging environment
2025-06-18 07:35:54,687Z cache_manager.py:142 DEBUG Cache HIT: key(get_nos_version_from_tarball_()_{'nos_package_path': '/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz'})
2025-06-18 07:35:54,690Z imaging_step_pre_install.py:389 INFO NOS version is 6.10.1.6
2025-06-18 07:35:54,691Z imaging_step_pre_install.py:392 INFO Preparing NOS package (/home/nutanix/foundation/nos/nutanix_installer_package-release-fraser-6.10.1.6-stable-a5f69491f9523eef80d3c703f2ad4d2156e71eeb-x86_64.tar.gz)
2025-06-18 07:35:54,691Z phoenix_prep.py:82 INFO Unzipping NOS package
It passed the hardware validation without a hitch, and the installation eventually went through.
Of course, this is a workaround to allow my client to redeploy their nodes and extend their lifespan. The ideal solution would have been to create a custom .py layout file that perfectly matches my model without having to modify the FRU, which, to my knowledge, is unfortunately currently impossible.
One problem persists, however: the cluster can be created in RF2, but Data Resiliency will be critical… I’m still looking for a solution to this problem…
Thanks to Théo and Jeroen for their ideas, which showed me the beginning of the path that led me to the solution!
Link to the Nutanix KB used: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVxTCAW