Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

You might think that over time, you get used to it. That after two years, opening the email announcing the results becomes a mere administrative formality. Well, I must confess: not at all.

It is with immense pride – and undisguised relief – that I announce my nomination as a Nutanix Technology Champion (NTC) for the year 2026. This is the third consecutive year that I have the honor of joining this group of passionate experts.

To be completely transparent, I never take this distinction for granted. In the IT world, technologies evolve fast, and so do we. Staying relevant requires work, curiosity, and above all, the desire to share. Seeing my name once again on the official NTC 2026 list is a beautiful validation of the efforts put into the blog throughout the year.

What is an “NTC”? (Spoiler: It’s not just a LinkedIn badge)

I am often asked if it is an exam I passed, like an NCP-MCI certification. The answer is no, and that is precisely the beauty of this program.

The Nutanix Technology Champion program does not just reward passing a technical multiple-choice quiz. It is a distinction that recognizes community engagement. Basically, Nutanix spots those who spend their free time testing, breaking, fixing, and above all explaining their technologies to others. Whether through blog posts (like here), forum contributions, or talks at events.

For the purists, it is the equivalent of the vExpert at VMware or the MVP at Microsoft. It is the validation of what we call technical “Soft Skills”: the ability to evangelize a solution not because we are paid to do so, but because we master its intricacies and we love it. It is a recognition by peers and by the vendor, and that is what makes it so rewarding.

Under the Hood: Why this nomination matters for the blog

Beyond the shiny logo to put in a signature, being an NTC has a direct impact on the quality of what I can offer you on juliendumur.fr. It is not an honorary title devoid of meaning; it is a key that opens interesting doors.

Concretely, this status gives me privileged access behind the scenes. I have the opportunity to exchange directly with Product Managers and Nutanix engineering teams. This means that when I write a technical article, I can validate my hypotheses at the source, avoiding approximations.

Furthermore, we have access to roadmap briefings and Beta versions. Even if this information is often under NDA (I can’t reveal everything to you in advance!), it allows me to understand the direction the technology is taking. I can thus better anticipate topics to cover and offer you more relevant analyses as soon as features reach General Availability (GA). It is the assurance for you to read content that is not only technically accurate but also in phase with market reality.

Retrospective and 2026 Goals: Full Steam Ahead

This third nomination is the fruit of consistency. But above all, it marks the beginning of a new year of “lab”. The goal is not to collect stars, but to continue exploring the Nutanix Cloud Platform from every angle.

For 2026, I intend to keep offering practical tutorials and field feedback. While the AHV hypervisor remains the unavoidable foundation, I really want to move up the software stack a bit more this year. Expect to see topics covering container orchestration with NKP (Nutanix Kubernetes Platform), automation, and probably a stronger focus on security with Flow. The objective remains the same: dissecting the tech to make it accessible.

A huge thank you to the community for the daily exchanges, and of course to the NTC program team (shout out to Angelo Luciani) for their renewed trust. It is a pleasure to be part of this virtual family.

Now, the ball is also in your court: are there specific topics or features of the Nutanix ecosystem that you would like to see me cover this year? The comments are open!

Read More

I won’t lie to you: when you’ve had a taste of gold, bronze has a peculiar flavor. Last year, I had the immense pride of finishing first in the “Top Bloggers” ranking of the Nutanix Technology Champion (NTC) program.

This year, the verdict is in on the official community blog: I ranked 3rd.

Did I slow down? No. Did I share less? On the contrary. But in tech, just like in sports, staying at the top is often harder than getting there. This 3rd place is, above all, a signal that the competition has intensified. And honestly? It’s exactly what I needed to motivate me to get back in the fight for 2026.

The NTC Program is Not Just a Badge

For those new to the ecosystem, being a Nutanix Technology Champion (NTC) isn’t just about slapping a logo on your LinkedIn profile. It is a commitment. It means being part of a technical vanguard that tests, breaks, fixes, and—above all—documents Nutanix solutions. The “Top Blogger” ranking is the barometer of this activity.

1st in 2024, 3rd in 2025: Analyzing the Logs

So, what happened? I pulled my logs to compare. If my performance had dropped, I would have accepted this 3rd place with a shrug. But the data shows otherwise: my publication volume is equivalent to last year’s. Even better, my strategy was cleaner: instead of doing “bursts” (flurries of articles), I maintained a metronomic consistency, spread evenly over the 12 months.

The conclusion is simple and undeniable: the overall bar has been raised. My peers were absolute beasts this year. They produced more. This is excellent news for the Nutanix community: the ecosystem is alive, dense, and increasingly sharp. But for the competitor in me, it’s a wake-up call. Consistency is no longer enough; just like in cycling, I’m going to have to up the intensity.

Why Publish?

Beyond the rankings and the competition, why continue writing with such discipline? The answer is pragmatic. My blog is primarily my external memory. In our line of work, we don’t remember everything. We test, we configure, we hit a critical error, we resolve it… and six months later, we’ve forgotten how we did it. Blogging is about documenting my own “struggles” so I never have to look for the solution twice. It’s about transforming obscure troubleshooting into a clear tutorial.

But make no mistake: every article is born from a real technical need, from a real infra that I built or fixed. No fluffy theory, just experience from the field. The icing on the cake: the feedback from our clients who stumble upon my blog and tell me, “We found a solution on your site.” That is the real reward.

Conclusion: See You at the Finish Line

Bravo to the two peers who finished ahead of me this year. You set the bar very high, and that is exactly what I like. The level of the NTC program is what makes it credible. But the message has been received. The consistency of 2025 was a good foundation, but for 2026, I’m shifting gears. I’m going to chase more specific topics, dig deeper into the guts of Nutanix AOS and AHV, and perhaps explore use cases that no one has documented yet.

The bronze medal is nice. But it will serve primarily as a reminder on my desk: next year, I’m aiming for the yellow jersey.

See you soon for the next technical article.

Read More

It’s one of those mornings where the coffee tastes a little different. The taste of major announcements that are bound to change our habits as administrators. Nutanix has just released a trio of major updates into the wild: AOS 7.5, AHV 11.0, and Prism Central 7.5.

Let’s be clear from the start: I’ve combed through the Release Notes for you, and this isn’t just a simple “Patch Tuesday.” It is a structural overhaul. Nutanix is no longer content with just improving its HCI; the vendor is breaking its own dogmas (hello external storage and compute-only nodes) and drastically tightening security, even if it shakes up our old reflexes.

While on paper, the promises of performance (AES everywhere) and flexibility (Elastic Storage) are enticing, my field experience dictates a certain prudence. When you mess with the storage engine and SSH access at the same time, you don’t rush into production without reading the fine print carefully. That is exactly what I’m proposing here: an unfiltered technical analysis of what awaits you.

AOS 7.5: Performance & Architecture

Let’s start with the core of the reactor: AOS 7.5. If you thought the Nutanix storage architecture was set in stone, think again. This version marks a turning point in hot data and disk space management.

The Key Concept: AES Becomes the Absolute Standard

Until now, the Autonomous Extent Store (AES) was often reserved for high-performance All-Flash environments. With 7.5, that’s over: AES becomes the default architecture for all deployments, whether All-Flash or Hybrid.

Why is this important? Because AES improves metadata locality and reduces CPU consumption for I/O. But be careful, the critical novelty here is the automatic migration. If you upgrade an existing hybrid cluster to 7.5, AOS will launch a background conversion task to switch to AES.

Do not underestimate the I/O impact of this “transparent” conversion. Even if Nutanix handles it in the background, metadata restructuring is never trivial on a loaded cluster. Furthermore, Nutanix introduces a revamped Garbage Collection (GC) (“Accelerated Data Reclamation”). It is now capable of cleaning multiple “holes” in an Erasure Coding stripe in a single pass and merging inefficient stripes. It’s brilliant for efficiency, but it confirms that the engine is working much more “intelligently” under the hood.

The Unexpected Opening: Pure Storage and Dense Nodes

This might be the strongest sign of this release: Nutanix is officially opening up to third-party storage. AOS 7.5 supports connecting to Pure Storage FlashArray systems via NVMe-oF/TCP for capacity storage. Nutanix handles the compute, Pure handles the data. For HCI purists like me, this is a paradigm shift, but one that meets a real need for disaggregation.

Finally, for those managing storage monsters, note that existing All-Flash nodes can be upgraded to support up to 185 TB per node, while maintaining aggressive RPOs (NearSync/Sync).

AHV 11.0 & Flexibility: The Era of “Compute-Only” and Elastic Storage

If AOS 7.5 boosts the engine, AHV 11.0 changes the bodywork. For a long time, Nutanix preached the dogma of strict hyperconvergence: “You buy identical nodes, you expand storage and compute at the same time.” With this version, I feel like Nutanix is finally listening to those who, like me, found themselves with too much CPU and not enough disk (or vice versa).

The Key Concept: Official Disaggregation

It’s a small revolution: Nutanix now allows the deployment of “Compute-Only” nodes much more flexibly. We are seeing the arrival of a standalone AHV installer. Concretely, you can manually install AHV via an ISO on a server, without going through the heaviness of a full re-imaging via Foundation.

For labs or rapid compute power expansions, this is a phenomenal time-saver. But be careful, this requires increased rigor regarding hardware compatibility management, as Foundation will no longer be there to act as a safeguard during installation.

The Awaited Feature: Elastic VM Storage

This is undoubtedly the feature I was waiting for the most to break down silos. With Elastic VM Storage, available starting with AHV 11.0 and AOS 7.5, you can finally share a storage container from one AHV cluster to another AHV cluster within the same Prism Central.

Imagine: your Cluster A is bursting at the seams storage-wise, but your Cluster B is sleeping half-empty. Before, you had to move VMs. Now, you can mount the container from Cluster B onto Cluster A and deploy your VMs directly on it.

It’s great, but caution: it’s not magic. You are introducing a critical network dependency between two clusters that were previously isolated. If your inter-cluster network fails, the VMs running on Cluster A with their disks hosted on Cluster B go down. Moreover, Nutanix clearly states that this allows “serving storage from a remote cluster,” which necessarily implies additional network latency compared to native data locality. Reserve this for workloads that are not sensitive to disk latency or for temporary overflow.

Finally, note the arrival of Dual Stack IPv6. AHV can now talk to your DNS, NTP, and Syslog servers in IPv6. A necessary update to align with modern network standards.

Security and Governance: Locking Everything Down (SSH, vTPM, Profiles)

Let’s move on to the part that will make command-line regulars (myself included) grind their teeth. Nutanix has decided to tighten the screws on security, and they aren’t kidding around.

The Key Concept: The Digital Fortress

The goal is clear: reduce the attack surface, especially against ransomware that often attempts to propagate via lateral movements on management interfaces. Nutanix is therefore introducing mechanisms to limit direct human access to infrastructure components (CVM and Hosts).

The Critical Change: CVM Secure Access (The End of SSH is coming)

This is the number one vigilance point of this article. With AOS 7.5, you now have the option (and strong incentive) to totally disable SSH access to CVMs and AHV hosts.

On paper, this is excellent for security (a drastically reduced attack surface). In operational reality, it is a violent cultural change. No more quick ssh nutanix@cvm to check a log or run a quick diagnostic script. Everything must go through APIs or the console.

Danger Warning! Before checking that “Disable SSH” box, check your migration procedures. The Release Notes are formal: disabling SSH breaks Cross-Cluster Live Migration (CCLM) workflows, whether in On-Demand mode (OD-CCLM) or Disaster Recovery (DR-CCLM). These operations still rely on SSH tunnels between source and destination hosts. If you cut SSH, your migrations will fail. You will have to re-enable SSH to make them work. This is a major operational constraint to anticipate.

Governance: vTPM & Guest Profiles

For highly sensitive environments, AHV now supports storing vTPM encryption keys in an external KMS. This allows centralizing key management and aligning the vTPM security policy with the cluster’s “Data-at-Rest” encryption policy.

On the quality of life side, I welcome the arrival of reusable Guest Customization Profiles. No more tedious copy-pasting of Sysprep scripts with every VM clone. You create a profile (Windows + NGT 4.5 min required), store it, and apply it on the fly to clones or templates. It’s simple, efficient, and avoids input errors.

Prism Central 7.5: The Interface That Makes Life Easier (NIM & Policies)

We finish this overview with Prism Central 7.5 (pc.7.5). If AOS is the engine and AHV the chassis, PC is the dashboard. And believe me, it is fleshing out considerably to spare us thankless manual tasks.

The Key Concept: Intelligent Orchestration

The major addition is the arrival of VM Startup Policies. This is a feature I’ve been waiting for for years to replace my cobbled-together startup scripts. Concretely, you can now define the exact restart order of VMs during an HA event (node failure) or a cluster restart.

This allows managing application dependencies cleanly: “Start the Database, wait for it to be UP, then start the Application Server”. It’s native, integrated into the interface, and greatly secures recovery plans.
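Nutanix drives this from the Prism Central interface; purely to illustrate the ordering logic (the VM names and the dependency map below are invented for the example, not a Nutanix API), here is how a boot order can be derived from declared dependencies:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map (not the Nutanix API): each VM lists
# the VMs that must be up before it is allowed to start.
depends_on = {
    "db01": [],
    "app01": ["db01"],   # the application server waits for the database
    "web01": ["app01"],  # the web tier waits for the application server
}

boot_order = list(TopologicalSorter(depends_on).static_order())
print(boot_order)  # ['db01', 'app01', 'web01']
```

The policy engine does the equivalent for you at HA time, with the added “wait for it to be UP” health checks.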

For large-scale environments, note the appearance of NIM (Nutanix Infrastructure Manager). It is a new orchestrator designed to provision, configure, and manage your datacenters in a standardized way, aligning with the famous “Nutanix Validated Designs” (NVD). It is clearly oriented for very large deployments that want to avoid configuration drift.

Enhanced Resilience: PC Backup & Restore

Until now, restoring a crashed Prism Central could be an adventure, especially if the original cluster was itself down. Nutanix has lifted a major technical constraint: you can now recover a Prism Central instance from a backup located on any Prism Element cluster.

This is a detail that changes everything in case of a total site disaster. Previously, recovery from a Prism Element backup was restricted to the specific cluster where PC was registered. This new flexibility, coupled with the ability to backup to a generic S3 Object Store, makes the management architecture much more robust. We are no longer putting all our eggs in one basket.

Conclusion & Recommendations: Maturity Has a Price

After dissecting these three release notes, my feeling is clear: Nutanix is reaching an impressive level of maturity. The generalization of AES and the opening to external storage show that the platform is ready for the most demanding workloads and the most complex architectures.

However, as a “Prudent Ghost Writer,” I must raise a final red flag before you click “Upgrade.”

⚠️ Watch out for prerequisites: Do not rush headlong into the Prism Central update. Version pc.7.5 requires your Prism Element clusters to run at least AOS 7.0.1.9. If you are on an earlier version, deployment will be blocked. You will have to plan your migration path rigorously.

This is an unavoidable update for the performance and security gains, but it is also a structural update. The AES conversion, the potential SSH deactivation, and the new network dependencies for elastic storage require validating these changes in a pre-production environment.

Take the time to test, check your compatibility matrices, and above all, do not cut SSH before verifying that you do not have any planned inter-cluster migration (CCLM)!

To your keyboards, and happy upgrading!

Read More

In this new blog post, we’ll cover all the main Nutanix AHV CLI commands that allow you to perform some checks on your virtual machines using the command line.

All the commands in this article can be run via SSH from any CVM in the cluster.

Display the list of virtual machines

To display the list of virtual machines on the Nutanix cluster, simply run the following command:

acli vm.list

This will show you all the VMs present on the cluster, without the CVMs:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.list
VM name VM UUID
LINUX 88699c96-11a5-49ce-9d1d-ac6dfeff913d
NTNX-192-168-84-200-PCVM-1760699089 f659d248-9ece-4aa0-bb0c-22a3b3abbe12
vm_test 9439094a-7b6b-48ca-9821-a01310763886

As you can see, I have three virtual machines on my cluster:

  • My Prism Central
  • A newly deployed “LINUX” virtual machine
  • A test virtual machine
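Since this output is plain text, it is easy to post-process. Here is a minimal sketch (assuming, as in the sample above, that the UUID is always the last whitespace-separated token on each row) that turns the vm.list output into a name-to-UUID mapping:

```python
# Parse the 'acli vm.list' sample output shown above into a dict.
# Assumption: the UUID is the last whitespace-separated token per row.
sample = """\
VM name VM UUID
LINUX 88699c96-11a5-49ce-9d1d-ac6dfeff913d
NTNX-192-168-84-200-PCVM-1760699089 f659d248-9ece-4aa0-bb0c-22a3b3abbe12
vm_test 9439094a-7b6b-48ca-9821-a01310763886
"""

vms = {}
for line in sample.splitlines()[1:]:      # skip the header row
    name, _, uuid = line.rpartition(" ")  # UUID = last token
    vms[name] = uuid

print(vms["LINUX"])  # 88699c96-11a5-49ce-9d1d-ac6dfeff913d
```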

A handy command to quickly retrieve all virtual machines and their respective UUIDs. Now let’s see how to retrieve information about a specific virtual machine.

Retrieving Virtual Machine Information

To display detailed information about a virtual machine, use the following command:

acli vm.get VM_NAME

Using the example of my “LINUX” virtual machine, this returns the following information:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.get LINUX
LINUX {
config {
agent_vm: False
allow_live_migrate: True
apc_config {
apc_enabled: False
}
bios_uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"
boot {
boot_device_order: "kCdrom"
boot_device_order: "kDisk"
boot_device_order: "kNetwork"
hardware_virtualization: False
secure_boot: False
uefi_boot: True
}
cpu_hotplug_enabled: True
cpu_passthrough: False
disable_branding: False
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "fae2ee55-8736-4f3a-9b2c-7d5f5770bf33"
empty: True
iso_type: "kOther"
}
disk_list {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: "f9a8a84c-6937-4d01-bfd2-080271c44916"
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: "215ba83c-44cb-4c41-bddc-1aa3a44d41c7"
vmdisk_size: 42949672960
vmdisk_uuid: "42a18a62-861a-497a-9d73-e959513ce709"
}
generation_uuid: "9c018794-a71a-45ae-aeca-d61c5dd6d11a"
gpu_console: False
hwclock_timezone: "UTC"
machine_type: "pc"
memory_mb: 8192
memory_overcommit: False
name: "LINUX"
ngt_enable_script_exec: False
ngt_fail_on_script_failure: False
nic_list {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
power_state_mechanism: "kHard"
scsi_controller_enabled: True
vcpu_hard_pin: False
vga_console: True
vm_type: "kGuestVM"
vtpm_config {
is_enabled: False
}
}
is_ngt_ipless_reserved_sp_ready: True
is_rf1_vm: False
logical_timestamp: 1
state: "kOff"
uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"

As you can see, this returns all the information about a virtual machine. It is possible to filter some of the information returned with certain commands. Here are the ones I use most often:

acli vm.disk_get VM_NAME: to retrieve detailed information about all the disks of a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.disk_get LINUX
ide.0 {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: fae2ee55-8736-4f3a-9b2c-7d5f5770bf33
empty: True
iso_type: "kOther"
}
scsi.0 {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: f9a8a84c-6937-4d01-bfd2-080271c44916
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: 215ba83c-44cb-4c41-bddc-1aa3a44d41c7
vmdisk_size: 42949672960
vmdisk_uuid: 42a18a62-861a-497a-9d73-e959513ce709
}
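Note that vmdisk_size is reported in bytes. A quick conversion to binary units confirms the disk in the output above is 40 GiB:

```python
# vmdisk_size from the acli output above, in bytes
vmdisk_size = 42949672960

print(vmdisk_size / 2**30)  # 40.0 (GiB)
```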

acli vm.nic_get VM_NAME: to retrieve the detailed list of network cards attached to a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.nic_get LINUX
50:6b:8d:fb:a1:4c {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}

acli vm.snapshot_list VM_NAME: to retrieve the list of snapshots associated with a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.snapshot_list LINUX
Snapshot name Snapshot UUID
SNAPSHOT_BEFORE_UPGRADE e7c1e84e-7087-42fd-9e9e-2b053f0d5714

You now know almost everything about verifying your virtual machines.

For the complete list of commands, I invite you to consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v7_3:man-ncli-c.html

In the next article, we’ll tackle a big task: creating virtual machines using CLI commands.

Read More

In a previous article, we covered how to deploy and perform the basic configuration of a Palo Alto gateway to replace the basic gateway supplied with your OVHcloud Nutanix cluster.

I will now show you how to connect this gateway to the RTvRack supplied with your cluster to connect it to the internet.

Connecting the Gateway to the RTvRack

In “Network > Zones”, we start by creating a new “Layer3” zone, which we’ll call “WAN” for simplicity:

You can also create one or more other zones to connect your other interfaces (e.g., an “INTERNAL” zone).

Next, in “Network > Interfaces,” edit the ethernet1/1 interface. If you’ve successfully created your VM on Nutanix, it will correspond to the WAN output interface. It will be a “Layer3” interface:

On the “Config” tab, select the “default” Virtual Router and select the “WAN” security zone.

On the “IPv4” tab, add the available public IP address in the range provided to you by OVHcloud with your cluster, making sure to include a /32 mask at the end:

You can find the network information for your public IP address on your OVHcloud account under “Hosted Private Cloud > Network > IP”: https://www.ovh.com/manager/#/dedicated/ip

Using the public IP address and its associated network mask, you can deduce:

The public IP address to assign to the WAN port of your gateway

The IP address of the WAN gateway

Example with the network 6.54.32.8/30:

Network address (not usable): 6.54.32.8
First address (public address of the PA-VM): 6.54.32.9
Last address: 6.54.32.10 (WAN gateway address)
Broadcast address (not usable): 6.54.32.11
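If you’d rather not do the binary math by hand, Python’s standard ipaddress module gives the same breakdown (using the example above, where OVHcloud hands out the interface address 6.54.32.10 in a /30):

```python
import ipaddress

# The /30 example above, checked with the standard library.
iface = ipaddress.ip_interface("6.54.32.10/30")
net = iface.network
hosts = list(net.hosts())

print(net.network_address)    # 6.54.32.8  (network address, not usable)
print(hosts[0])               # 6.54.32.9  (public address of the PA-VM)
print(hosts[-1])              # 6.54.32.10 (WAN gateway address)
print(net.broadcast_address)  # 6.54.32.11 (broadcast, not usable)
```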

Repeat the operation with the interface corresponding to the subnet of your Nutanix cluster, using the IP address of the gateway you specified when deploying your cluster.

However, make sure to set the mask corresponding to that of the network in which the interface is located as indicated in the documentation: https://docs.paloaltonetworks.com/pan-os/11-0/pan-os-networking-admin/configure-interfaces/layer-3-interfaces/configure-layer-3-interfaces#iddc65fa08-60b8-47b2-a695-2e546b4615e9.

In “Network > Virtual Routers”, edit the default router. You should find your “ethernet1/1” interface at a minimum, as well as any other interfaces you may have already configured:

Then, in the “Static Routes” submenu, create a new route with a name that speaks to you, a destination of 0.0.0.0/0, select the “ethernet1/1” interface and as Next Hop the IP address of the public network gateway provided to you by OVHcloud:

Finally, go to the “Device > Setup > Services” tab and edit the “Service Route Configuration” option in “Services Features” to specify the output interface and the associated /32 IP address for some of the services:

The list of services to configure at a minimum is as follows:

  • DNS
  • External Dynamic Lists
  • NTP
  • Palo Alto Networks Services
  • URL Updates

You can validate and commit. Your PA-VM gateway is now communicating with the OVHcloud RTvRack. All that’s left is to finalize the configurations to secure the installation and create your firewall rules to allow your cluster to access the internet.

Read More

A quick blog post to share that registrations for the Nutanix Technology Champion (NTC) program are now open!

From today, October 1st, until October 31, you can fill out the form and apply to be part of this program.

Applications will be reviewed in November, and the NTC 2026 members will be announced in December.

Do you have a blog, and do you want to share Nutanix knowledge with other experts? Fill out the form on the official webpage: https://next.nutanix.com/community-blog-154/step-into-the-spotlight-nutanix-technology-champion-2026-applications-now-open-44876

My application is already sent, and I hope to be part of this wonderful program for the third year in a row!

Read More

Behind this musical reference lies the annual event organized by Nutanix France: Nutanix .NEXT on Tour!

Nutanix .NEXT on Tour Paris

Like last year, Nutanix is once again organizing .NEXT on Tour in Paris on October 2, 2025, at the CNIT La Défense.

The program for this day includes plenary sessions, keynotes, and feedback sessions. Some partners will also have a booth, providing the perfect opportunity for the publisher’s French customers to spend a full day engaging with hyperconverged infrastructure professionals.

Topics to be covered include:

  • Migration to Nutanix
  • Management and automation of your hybrid cloud with Nutanix Cloud Manager
  • Nutanix Kubernetes Platform
  • AI
  • and many more!

You can find the detailed program here: https://www.nutanix.com/fr/go/next-on-tour-paris

Come meet us at the Mikadolabs booth!

As a Nutanix Pure Player, Mikadolabs will have a booth again this year at the show. I’ll be there for a good part of the day to welcome you and answer your questions about Nutanix and hyperconvergence with part of the team.

Don’t hesitate to stop by and say hello, and if you haven’t yet registered for the event, you can still do so via this link: Event Registration

See you in two weeks!

Read More

In this article, I share my feedback on the complete reinstallation of a Nutanix cluster at OVHcloud.

Once logged in to the OVHcloud management interface, go to “Hosted Private Cloud”:

In the left drop-down menu, click on the cluster you want to redeploy:

On the page that appears, click on “Redeploy my cluster”: 

Click on “Continue”:

Automatic redeployment

The first option is to revert to the default settings provided by OVHcloud to completely reinstall the cluster in its basic configuration:

A summary of the settings is displayed before you finally confirm the reinstallation of your cluster:

Custom redeployment

You can fully customize your cluster’s IP network configuration during its installation phase. When choosing the cluster deployment method, select “Customize configuration” and click “Next”:

Fill in the various fields with the information you want to assign to your cluster and click on “Redeploy”:

Type “REDEPLOY” in the field provided and click “Confirm” to start the reinstallation procedure:

On your cluster’s overview page, a message indicates that cluster redeployment is in progress: 

All that’s left is to wait until the cluster is completely redeployed. All the basic configurations are already done; you just have to finalize the specific ones, such as authentication, SMTP relay, monitoring, etc.

Read More

In the Maxi Best Of Nutanix CLI series, the previous two articles covered checking the network configuration of a Nutanix cluster and managing subnets.

In this new article, we’ll cover managing storage containers via CLI commands on your Nutanix clusters…

All the commands in this article must be executed from one of the cluster’s CVMs and work on a cluster running AOS 6.10+.

Check the containers

To check the status of your storage containers, the simplest command is:

ncli container list

This command will allow you to display all the information related to all the containers in your cluster.

If you want to display a specific container, you can pass its name (the simplest method) or its ID as a parameter:

ncli container list name=NAME
ncli container list id=ID

Finally, one last command to display only the usage statistics of your containers:

ncli container list-stats

Renaming a Container

To rename a storage container, it must be completely empty.

Renaming a storage container can be done using the following command:

ncli container edit name=ACTUALNAME new-name=NEWNAME

On the default container, this would give for example the following command:

ncli container edit name=default-container-21425105524428 new-name=ntnx-lab-container

WARNING: There are two containers created by default when deploying your cluster: “SelfServiceContainer” and “NutanixManagementShare”. Do not attempt to rename them!

Creating a Container

It’s also possible to create storage containers using the CLI:

ncli container create name=NAME sp-name=STORAGE-POOL-NAME

The “name” and “sp-name” parameters are the only required parameters when running the command. This will allow you to create a base container on the selected storage pool with the following parameters:

  • No data optimization mechanism
  • No restrictions/reservations
  • The default replication factor

But the container creation command can be very useful if you need to create storage containers in batches, for example, if you’re hosting multiple clients on a cluster, each with an allocated amount of storage space!

For example, to create a storage container with the following parameters:

  • Container name “client-alpha”
  • Reserved capacity: 64GB
  • Maximum capacity: 64GB
  • With real-time compression enabled

Here’s the command you would need to run:

ncli container create name=client-alpha res-capacity=64 adv-capacity=64 enable-compression=true compression-delay=0 sp-name=default-storage-pool-21425105524428

A container with the associated characteristics will then be created:
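To script the batch case, you can generate one ncli command per tenant. A minimal sketch (the tenant names and sizes below are invented; the flags simply mirror the example command above):

```python
# Hypothetical tenants: container name -> size to reserve/allocate
tenants = {"client-alpha": 64, "client-beta": 128}
sp_name = "default-storage-pool-21425105524428"  # storage pool from the example

# Build one 'ncli container create' command per tenant,
# reusing the flags from the example command above.
cmds = [
    f"ncli container create name={name} "
    f"res-capacity={size} adv-capacity={size} "
    f"enable-compression=true compression-delay=0 sp-name={sp_name}"
    for name, size in tenants.items()
]

for cmd in cmds:
    print(cmd)  # run each line from a CVM shell
```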

Modifying Container Settings

An existing container can also be modified. You can modify almost everything in terms of settings, from data optimization mechanisms to reserved/allocated sizes, replication factors, and more.

For all the settings, please refer to the official documentation (link at the bottom of the page).

Deleting a Container

Deleting a container is quite simple, but all files stored within it must first be deleted or moved. The deletion is done using the following command:

ncli container remove name=NAME

It may happen that the deletion is still refused even after deleting or moving your VMs’ vdisks. This is often due to small residual files.

You must then add the “ignore-small-files” parameter to force the deletion:

ncli container remove name=NAME ignore-small-files=true

For example:

ncli container remove name=ntnx-lab-container ignore-small-files=true
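That two-step dance (try a plain remove, fall back to ignore-small-files) can be wrapped in a small helper. This is a sketch, not an official Nutanix tool: the NCLI variable defaults to “echo ncli” so the script only prints the commands as a dry run; set NCLI="ncli" on a CVM to actually execute them.

```shell
# Dry run by default: every command is prefixed with "echo".
# On a CVM, run with NCLI="ncli" to execute for real.
NCLI="${NCLI:-echo ncli}"

remove_container() {
  name="$1"
  # First attempt: a plain remove.
  if ! $NCLI container remove name="$name"; then
    # Fallback: force past small residual files.
    $NCLI container remove name="$name" ignore-small-files=true
  fi
}

remove_container ntnx-lab-container
```

In dry-run mode the first attempt always “succeeds” (echo returns 0), so only the plain command is printed; on a real CVM the fallback branch fires only when the plain remove actually fails.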

WARNING: There are two containers created by default when deploying your cluster: “SelfServiceContainer” and “NutanixManagementShare”. Do not attempt to delete them!

Official Documentation

To learn more about some of the command options presented, please consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_10:acl-ncli-container-auto-r.html

Nutanix AHV CLI Reference Guide

In the previous blog post on the Maxi Best Of Nutanix CLI menu, I presented you with the best commands for checking the entire network configuration of your Nutanix cluster.

In this new article, we’ll now see how CLI commands can help us create or modify networks in our Nutanix cluster…

All the commands in this article must be executed on one of the cluster’s CVMs.

Creating an Unmanaged Subnet on Nutanix AHV

To create a new unmanaged subnet (without IPAM) across the AHV cluster, the command is very simple:

acli net.create NAME vlan=VLAN_ID

Replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID

Here’s an example command that creates the network “NUTANIX” with VLAN ID 84:

acli net.create NUTANIX vlan=84

By default, the VLAN will be created on the virtual switch “vs0”, but if you want to create it on another virtual switch, you can specify it as a parameter:

acli net.create NAME vlan=VLAN_ID virtual_switch=VSWITCH

In this case, replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID
  • VSWITCH with the name of the virtual switch on which you want to create the subnet

Here is an example of a command that creates the “NUTANIX” VLAN with VLAN ID 84 on the vswitch “vs0”:

acli net.create NUTANIX vlan=84 virtual_switch=vs0

You can then run the “acli net.list” command and check that your new subnet appears in the list.

Creating a Managed Subnet on Nutanix AHV

This command creates a new managed subnet (using IPAM) across the AHV cluster with basic gateway and subnet mask options.

acli net.create NAME vlan=VLAN_ID virtual_switch=vs0 ip_config=GATEWAY/MASK

Replace:

  • NAME with the name you want to assign to your subnet
  • VLAN_ID with the VLAN ID
  • vs0 with the name of the virtual switch on which you want to create the subnet
  • GATEWAY with the IP address of the subnet’s gateway
  • MASK with the subnet mask

Here is an example of a command that creates the VLAN “NUTANIX” with VLAN ID 84 on the vswitch “vs0”, with the gateway address “10.0.84.254” on the network “10.0.84.0/24”:

acli net.create NUTANIX vlan=84 virtual_switch=vs0 ip_config=10.0.84.254/24

Deleting an Existing Subnet

Deleting an existing subnet on a Nutanix AHV cluster is easy! Simply run the following command:

acli net.delete NAME 

Replace NAME with the name of the subnet you wish to delete. For the subnet created earlier, that gives:

acli net.delete NUTANIX

Nothing could be simpler!

Bulk Subnet Creation/Deletion

To make it easier to import large quantities of subnets, I created several CSV files that I can then convert into a list of commands to create multiple subnets in batches.

Everything is on my Github: https://github.com/Exe64/NUTANIX

For unmanaged subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-unmanaged-subnets.csv

For managed subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-managed-subnets.csv

For deleting subnets: https://github.com/Exe64/NUTANIX/blob/main/nutanix-subnets-delete.csv
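As a rough idea of how such a CSV can be turned into commands, here is a minimal sketch for the unmanaged case. The column layout (name,vlan_id) and the file path are assumptions for this example and may differ from the actual files on GitHub; the script only prints the acli commands rather than running them.

```shell
# Hypothetical CSV: one unmanaged subnet per row (name,vlan_id)
cat > /tmp/subnets.csv <<'EOF'
name,vlan_id
NUTANIX,84
BACKUP,85
EOF

# Skip the header, then build one "acli net.create" command per row.
# Paste the output into a CVM shell once you have reviewed it.
tail -n +2 /tmp/subnets.csv | while IFS=, read -r name vlan; do
  echo "acli net.create $name vlan=$vlan virtual_switch=vs0"
done
```

The managed variant would just add an ip_config column and append “ip_config=GATEWAY/MASK” to the echoed command.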

To learn more about using these files, I invite you to consult my dedicated article:

Official Documentation

Complete command documentation is available on the publisher’s official website: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v6_10:man-acli-c.html
