Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

I still remember my first time entering a “serious” server room back in the mid-2000s. What struck me wasn’t so much the deafening roar of the air conditioning, but the physical density of the infrastructure.

Back then, to run a few hundred virtual machines, you didn’t just need “a cluster.” You needed entire rows. Power-hungry Blade Centers, monstrous Fibre Channel switches with their characteristic orange cables, and above all, sitting in the center of the room like a sacred totem: the Storage Array. Entire cabinets filled with 10k RPM mechanical disks, weighing as much as a small car and devouring rack units (‘U’) by the dozen.

This is what we call the 3-Tier architecture. While Hyperconvergence (HCI) and Public Cloud seem to be the norm today, it is crucial to understand that 3-Tier was the backbone of enterprise IT for nearly 20 years. To understand this architecture is to understand where we come from, and why we sought to change it.

In this article, the first in a series charting the evolution from 3-tier virtualization infrastructures to Nutanix hyperconverged infrastructures, we will factually dissect this standard: how it works, why it dominated the market, and the technical limits that eventually rendered it obsolete for modern workloads.

Genesis: Why Did We Build It This Way?

To understand 3-Tier, you have to go back to the pre-virtualization era. A physical server hosted a single application (Windows + SQL, for example). It was the “Silo” model. Inefficient, expensive, and a nightmare to manage.

Virtualization (led by VMware) arrived with a promise: consolidate multiple virtual servers onto a single physical server. But for this magic to happen, there was an absolute technical condition: mobility.

For a VM to move from physical server A to physical server B without service interruption (the famous vMotion), both servers had to see exactly the same data, at the same moment.

This is where the architecture split into three distinct layers:

  1. We removed the disks from the servers (which now only do computing).
  2. We centralized all data in external shared storage (the Array).
  3. We connected everything via a dedicated ultra-fast network (the SAN).

It was a revolution: the server became “disposable,” or at least interchangeable, because it no longer held the data. But this centralization created a single point of complexity and performance: shared storage. It is the heart of the reactor, but also its Achilles’ heel.

The Anatomy of 3-Tier: Decoupling the Layers

If we were to draw this architecture, it would look like a three-layer cake, where each layer speaks a different language.

1. The Compute Layer

At the very top, we have the physical servers (Hosts). They run the hypervisor (ESXi, Hyper-V, KVM). Their role is purely computational: providing CPU and RAM to the virtual machines.

These servers are “Stateless”. They store nothing persistent. If a server burns out, it doesn’t matter: we restart the VMs on its neighbor (HA).

This logic was pushed to the extreme with “Boot from SAN”. We even ended up removing the small local disks (SD cards or SATA DOM) that contained the hypervisor OS, so that the server was a totally empty shell, loading its own operating system from the remote storage array. A technical feat, but a nightmare in case of SAN connectivity loss.

2. The Network Layer (SAN)

In the middle sits the Storage Area Network. It is the highway that transports data between the servers and the array. Historically, this didn’t run over classic Ethernet (too lossy and unpredictable at the time), but over a dedicated protocol: Fibre Channel (FC).

It is a deterministic and lossless network. Unlike Ethernet, which does “best effort,” FC guarantees that frames arrive, in order and without loss.

If you have ever administered a SAN, you know the pain of Zoning. You had to manually configure, on the switches, which port (WWN) was allowed to talk to which other port. A single wrong digit in a 16-character hexadecimal address, and your production cluster would stop dead. It was a task so complex that it often required a dedicated team (“The SAN Team”).

3. The Storage Layer

At the very bottom, the Storage Array. It is a giant computer specialized in writing and reading blocks of data. It contains controllers (the brains) and disk shelves (the capacity).

The array aggregates dozens or even hundreds of physical disks to create large virtual volumes (LUNs) that it presents to the servers. It ensures data protection via hardware RAID.

All the intelligence resides in two controllers (often in Active/Passive or Asymmetric Active/Active mode). This is an architectural bottleneck: it doesn’t matter if you have 500 ultra-fast SSDs behind them; if your two controllers saturate on CPU or cache, the entire infrastructure slows down. This is called the “front-end bottleneck”.

The Strengths: Why This Model Ruled the World

It’s easy to criticize 3-Tier with our 2024 eyes, but we must recognize that it brought incredible stability.

  1. Robustness and Maturity: This is hardware designed never to fail. Storage arrays have redundant components everywhere (power supplies, fans, controllers, access paths). We talk about “Five Nines” (99.999% availability).
  2. Fault Isolation: If a server crashes, the storage lives on. If a disk breaks, hardware RAID rebuilds it without the server even noticing (or almost).
  3. Scale-Up Independence: This was the king argument. Running out of space while your CPUs are idling? You just buy an extra disk shelf. Running out of compute power while space abounds? You add a server. You could size each tier independently.
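For scale, the “Five Nines” figure quoted above translates into a very concrete downtime budget. A quick Python sketch of the arithmetic (nothing assumed beyond the availability percentages themselves):

```python
# "N nines" of availability expressed as allowed downtime per year.
# Pure arithmetic on the availability figures quoted above.

for availability, label in [(0.999, "Three Nines"),
                            (0.9999, "Four Nines"),
                            (0.99999, "Five Nines")]:
    downtime_min = (1 - availability) * 365.25 * 24 * 60
    print(f"{label}: ~{downtime_min:.1f} minutes of downtime per year")

# Five Nines works out to roughly 5.3 minutes per year.
```

Five Nines leaves barely five minutes of unplanned downtime per year, which is why every component in these arrays is doubled.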

The Weaknesses: The Other Side of the Coin

Despite its robustness, the 3-Tier model began to show serious signs of fatigue in the face of modern virtualization. For us admins, this translated into shortened nights and a few premature gray hairs.

Operational Complexity

The greatest enemy of 3-tier is not failure, it’s the update. Imagine having to update your hypervisor version (ESXi). You can’t just click “Update.” You have to consult the HCL (Hardware Compatibility List). Is my new HBA card driver compatible with my Fibre Channel switch firmware, which itself must be compatible with my storage array OS version? It’s a house of cards. I’ve seen entire infrastructures become unstable simply because a network card firmware was 3 months behind the one recommended by the array manufacturer.

The Bottleneck (The “I/O Blender Effect”)

This is a fascinating and destructive phenomenon. Imagine 50 VMs on a host.

  • VM 1 writes a large sequential file.
  • VM 2 reads from a database.
  • VM 3 boots up.

At the VM level, operations are clean. But when all these operations arrive at the same time in the storage controller’s funnel, they get mixed together. What was a nice sequential write becomes a mush of random I/O. Traditional array controllers, originally designed for single physical servers, often collapse under this type of load, creating latency that is perceptible to the end user.
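To make the effect concrete, here is a toy Python sketch. It is a deliberately simplified model of my own making: round-robin interleaving of three VMs’ I/O streams, made-up logical block addresses, no real storage semantics.

```python
# Toy model of the I/O blender effect (illustrative assumptions:
# round-robin interleaving, invented LBAs, no real storage semantics).
import itertools

vm1 = [(1, lba) for lba in range(1000, 1008)]    # sequential write
vm2 = [(2, lba) for lba in [57, 4012, 57, 890]]  # database reads
vm3 = [(3, lba) for lba in range(0, 8, 2)]       # boot-time reads

# The controller does not see three tidy streams; it sees them blended:
blended = [io for trio in itertools.zip_longest(vm1, vm2, vm3)
           for io in trio if io is not None]

def sequential_fraction(stream):
    """Fraction of I/Os whose LBA immediately follows the previous one."""
    lbas = [lba for _, lba in stream]
    hits = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
    return hits / max(1, len(lbas) - 1)

print(f"VM1 alone:      {sequential_fraction(vm1):.0%} sequential")  # 100%
print(f"Blended stream: {sequential_fraction(blended):.0%} sequential")
```

The stream that was 100% sequential inside VM 1 loses most of its sequentiality once interleaved, and that blended mess is exactly what the array controllers have to absorb.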

The Hidden Cost

Finally, 3-Tier is expensive. Very expensive.

  • Licensing & Support: You pay for server support, SAN switch support, and array support (often indexed to data volume!).
  • Footprint: As mentioned in the introduction, this equipment consumes enormous amounts of space and electricity.
  • Human Expertise: It often requires a team for compute, a team for network, and a team for storage. Incident resolution times explode (“It’s not the network, it’s storage!” – “No, it’s the hypervisor!”).

Conclusion: A Necessary Foundation

The 3-Tier architecture is not dead. It remains relevant for very specific needs, like massive monolithic databases that require dedicated physical performance guarantees.

However, its management complexity and inability to scale linearly paved the way for a new approach. We started asking the forbidden question: “What if, instead of specializing hardware, we used standard servers and managed everything via software?”

It was this reflection that gave birth to Software-Defined Storage (SDS) and Hyperconvergence (HCI). But that is a topic for our next article.

Read More

October 2023. The situation is tense: the price per kWh is skyrocketing, and my newsfeed is flooded with ads for solar panels promising “total autonomy” or “free energy.” Being naturally suspicious (and a bit of a geek), I took out my Excel spreadsheets before taking out my checkbook. I invested €13,900 for 6kWp of power. Two years later, with 17.4 MWh produced, was it worth it? Spoiler: real-life figures beat my simulations, but the devil is in the details.

1. The Genesis: Why Turn My Roof into a Power Plant?

You don’t wake up one morning deciding to drop nearly 14,000 euros (before subsidies) just to please the planet. It is a calculation. My goal was twofold: to secure part of my energy costs for the next 20 years and, let’s admit it, the technical pleasure of managing my own production.

The Context: Buying Electricity in Advance

For the neophyte, seeing this as an expense is a mistake. It is a pre-purchase. By installing panels, I decided to buy a stock of electricity at a fixed price (the cost of installation divided by future production) rather than renting this energy from a supplier whose rates are indexed to geopolitical crises I cannot control.

But be careful, for this calculation to work, you shouldn’t size the installation based on guesswork.
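To illustrate the “pre-purchase” framing with my own numbers, here is a deliberately rough sketch: it assumes flat production over a 20-year horizon and ignores panel degradation and maintenance costs.

```python
# Rough cost of each "pre-purchased" kWh (simplified: flat production
# over 20 years, no degradation, no maintenance costs).

investment = 13_900          # € before subsidies (article figure)
annual_production = 8_700    # kWh/year actually observed on average
horizon_years = 20

cost_per_kwh = investment / (annual_production * horizon_years)
print(f"Pre-purchased electricity: ~€{cost_per_kwh:.3f}/kWh")  # ~€0.080/kWh
print("vs a grid price of around €0.25/kWh")
```

Even with pessimistic rounding, each pre-purchased kWh costs roughly a third of the grid price, which is the whole economic case, provided the sizing matches real consumption.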

Consumption Analysis: The Essential Prerequisite

Before even contacting an installer, I audited my own home. Many make the mistake of looking at their annual global bill. That’s insufficient.

You need to understand when you consume.

Solar only produces during the day (no kidding). If 80% of your consumption happens at night (electric heating without thermal inertia, evening living habits), solar without batteries will be a financial failure.

I extracted my hourly data via the Enedis website (thanks to the Linky smart meter) to isolate my energy “background noise,” what we call the baseload.

This is the house’s incompressible consumption when “nothing” is on: fridge, internet box, mechanical ventilation (VMC), devices on standby. For me, this baseload justified a production base, but to reach profitability on 6 kWp, I had to be able to shift my heavy consumers (washing machine, dishwasher, water heater) to the daytime. It was this “load shifting” potential that validated the project.

2. The Technical Study: Choosing Without Getting Scammed

Once the need was validated, I had to choose the hardware. The solar market is a jungle where passionate artisans rub shoulders with eco-scammers. Here are my technical choices and, above all, why I made them.

Self-Consumption with Surplus Sale: The Logical Choice

I opted for the standard model: I consume what I produce first, and what I don’t consume is automatically injected into the grid and sold to EDF OA (Purchase Obligation).

In October 2023, this contract guaranteed a fixed feed-in tariff (around 13 cts/kWh) for 20 years. This is a major financial security that allows amortizing the installation even if I am not home to consume.

The Hardware: DualSun and Enphase, the French-American Couple

For my 6 kWp installation, I selected:

  • 16 DualSun FLASH 375 Half-Cut Panels (Total: 6000 Wp)
  • 16 Enphase IQ8M Micro-inverters
  • One Enphase Envoy S-Metered communication gateway

Why this choice?

  1. DualSun Panels (375 Wp): It’s a French brand (manufactured in Asia, let’s be honest, but French engineering). The “Half-Cut” technology (half-cells) allows better management of partial shading and reduces resistive losses. They are robust and aesthetically sober (black frame).
  2. Micro-inverters vs Central Inverter: This was the big debate. I chose Enphase IQ8M micro-inverters.

Why the IQ8M?

Unlike a central inverter (like SMA or Fronius) which manages the entire string in series (if one panel fails or is shaded, the whole string drops), the micro-inverter manages each panel independently.

But why the IQ8M model? It is Enphase’s latest generation, capable of creating a micro-grid (although I don’t yet use the “Sunlight Backup” mode without a battery). The “M” suffix indicates an output power matched to my 375 Wp panels. With a peak output power of 330 VA, the DC/AC ratio is about 1.14, which is excellent for avoiding clipping while maximizing production in low light.

The “Plug & Play Kits” Parenthesis and the Efficiency Trap

Before signing my quote, I obviously looked at the “Plug & Play” kits found in DIY stores. On paper, it’s seductive: no craftsman, plug it into a socket, and you’re good to go. But for 6kWp, this solution was not viable, and one must warn against a frequent marketing mirage.

A 400W panel will never produce 400W if it is poorly oriented. Balcony kits, often placed vertically (90°) or with approximate tilt, lose a huge amount of efficiency compared to an optimized roof installation (generally 30-35°).

The trap is confusing Peak Power (what the panel can output in a lab) and Real Production (useful energy).

On many kits, the inverter is deliberately undersized (to respect the injection limit on a simple socket). You buy a 420Wp panel, but the micro-inverter caps at 350VA. This is called clipping. It’s not serious in itself, but it’s a net loss in the middle of summer. For my 6kWp project, I wanted total coherence between the capacity of the DualSun panels and that of the IQ8Ms to milk every available photon.
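To visualize the clipping trap, here is a small sketch. Only the panel/inverter ratings come from the setups discussed above; the hourly power profile is invented for illustration.

```python
# Energy lost to inverter clipping: any DC power above the inverter's
# AC cap is simply thrown away. Hourly samples, so watts ~ watt-hours.

def clipped_energy(dc_watts: list[float], ac_cap: float) -> float:
    """Energy (Wh) lost when panel output exceeds the inverter cap."""
    return sum(max(0.0, p - ac_cap) for p in dc_watts)

# Invented sunny-day profile for one well-oriented 375 Wp panel (W):
profile = [0, 50, 150, 280, 340, 365, 375, 360, 330, 200, 80, 0]

# My pairing: 375 Wp panel on an IQ8M capped at 330 VA
print(f"375 Wp / 330 VA: {clipped_energy(profile, 330):.0f} Wh clipped")
# → 120 Wh on this invented day

# The balcony-kit pairing from the example above: 420 Wp on a 350 VA cap
kit_profile = [p * 420 / 375 for p in profile]
print(f"420 Wp / 350 VA: {clipped_energy(kit_profile, 350):.0f} Wh clipped")
```

The exact figures depend entirely on the profile you feed in; the point is that an undersized inverter shaves the top off every sunny midday, all summer long.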

3. Installation and Commissioning (October 2023)

Once the hardware was validated, time for action. The installation took place in late October 2023.

Installing 16 panels is not trivial. You have to manage the layout on the roof, the routing of DC cables under the tiles, and the run down to the electrical panel. The advantage of Enphase micro-inverters here is safety: you don’t run high-voltage DC (dangerous in case of an electric arc) down into the house, but 230 V AC directly.

On the administrative side, do not underestimate the delays. Between the prior declaration at the town hall (DP), the grid connection request to Enedis, and the Consuel inspection (mandatory to validate electrical safety before injecting), it is a journey that requires patience. In my case, everything was wrapped up for effective commissioning at the end of 2023.

4. Production and Monitoring: The Truth of Numbers (2024-2025)

This is where the geek takes over. After more than two years of perspective, I can look up from theoretical estimates to give you the reality of the field.

Monitoring Tools: Enphase Enlighten

To manage it all, I use the Envoy S-Metered gateway. Note the “Metered.” Unlike the standard version which only measures production, this one uses measurement toroids (current clamps) placed on the house’s main supply.

Result: I see what I produce, but more importantly what I consume and what I import/export in real-time.

Without this visibility, self-consumption is done blindly.

Gross Production: 17.4 MWh in Two Years!

Here is the data extracted from my tracking for an installed power of 6 kWp:

Year   Total Production   Performance Ratio (kWh/kWp)
2024   8.9 MWh            ~1,483
2025   8.5 MWh            ~1,416

Data Analysis:

  1. Weather Variability: There is a production drop of about 4.5% between 2024 and 2025. This is normal. The sun is not an absolute constant from one year to another.
  2. Formidable Efficiency: With a ratio approaching 1,500 kWh produced per kWp installed in 2024, my installation performs extremely well (the national average is often between 1,100 and 1,300 depending on the region). The DualSun + Enphase combination works wonders, helped by a near-perfect orientation and tilt (33° tilt, facing almost due south) and good panel ventilation (panels lose efficiency when they run too hot).
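The ratios in the table can be recomputed directly from the raw figures (all values come from my own tracking above):

```python
# Performance ratio = annual production / installed peak power.
# All figures come from the production table above.

installed_kwp = 6.0
production_kwh = {2024: 8_900, 2025: 8_500}

for year, kwh in production_kwh.items():
    print(f"{year}: ~{int(kwh / installed_kwp):,} kWh/kWp")

drop = (production_kwh[2024] - production_kwh[2025]) / production_kwh[2024]
print(f"Year-over-year variation: -{drop:.1%}")  # ≈ -4.5%
```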

Self-Consumption: The Sinews of War

Producing is good. Consuming is better (financially).

  • Self-consumption rate 2024: 46%
  • Self-consumption rate 2025: 44%

Concretely, this means that I directly consume about 45% of my production. The rest (55%) is injected into the grid and sold.

Despite my efforts (delayed start of machines, water heater during the day), I stagnate below the 50% mark. Why? Because in summer, days are long and production explodes (sometimes 40 kWh/day), far beyond the house’s needs. Without a physical battery or an electric vehicle to charge on the weekend, it is difficult to go higher on a 6kWp installation. This is where selling the surplus becomes vital for profitability.

5. The Time for Assessment: Profitability and Real ROI

Let’s talk cash. We hear everything and anything about solar profitability. Here are my real figures, bill in hand, unfiltered.

The Final Cost of the Operation

For a turnkey 6 kWp installation (hardware + labor + procedures), the bill amounted to:

  • Initial Investment: €13,900 incl. VAT
  • Self-consumption Bonus (State): – €2,000
  • FINAL REAL COST: €11,900

Scenario 1: My Reality (2023 Contract)

With a self-consumption rate of about 45% (the remaining 55% being sold to the grid) and a surplus feed-in tariff fixed at €0.13/kWh (the rate in force at the time of my request), here is my average annual yield (based on an average of 8.7 MWh/year):

  1. Bill Savings (Self-consumption): ~3,915 kWh that I did not buy from EDF (base rate €0.25/kWh) = €978 in savings.
  2. Surplus Sale (Injection): ~4,785 kWh sold to EDF OA (€0.13/kWh) = €622 in income.

Total Annual Gain: ~€1,600

Return on Investment (ROI): €11,900 / €1,600 = 7.4 years.

Verdict: Amortization in less than 7 and a half years for hardware guaranteed for 20 or 25 years is an unbeatable financial investment, far superior to any savings account.

Scenario 2: If I Had to Do It Again Today (The 4-Cent Trap)

This is where my article should serve as a warning. The rules of the game have changed. Recently, the surplus feed-in tariff dropped drastically to around €0.04/kWh (depending on the quarter). Let’s redo the calculation with this new parameter, keeping the same installation:

  1. Bill Savings: €978 (Unchanged).
  2. Surplus Sale (New Tariff): ~4,785 kWh × €0.04 = €191 (instead of €622!).

Total Annual Gain: ~€1,169

New ROI: €11,900 / €1,169 = 10.2 years.

This drop in the feed-in tariff upsets the strategy.

  • In 2023 (my case): Selling surplus was a pillar of profitability. I could afford to inject almost 60% of my production without too much pain.
  • Today: Selling at 4 cents covers almost nothing. The absolute priority is no longer to produce a lot, but to consume everything. This highlights the interest in virtual or physical batteries, which were economically unviable two years ago but, with such a low buyback rate, are becoming an option to seriously study to avoid “wasting” 60% of one’s production.
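Both scenarios condense into a few lines of Python (the figures are mine from above; the function and variable names are my own):

```python
# Both ROI scenarios recomputed from the figures above.
# Variable names are illustrative; tariffs are those discussed in the text.

NET_COST = 11_900        # € after the €2,000 self-consumption bonus
PRODUCTION = 8_700       # kWh/year (average of 2024 and 2025)
SELF_USE = 0.45          # share of production consumed on site
GRID_PRICE = 0.25        # €/kWh avoided on the bill

def annual_gain_and_payback(feed_in_tariff: float) -> tuple[float, float]:
    """Return (annual gain in €, payback period in years)."""
    savings = PRODUCTION * SELF_USE * GRID_PRICE            # avoided purchases
    income = PRODUCTION * (1 - SELF_USE) * feed_in_tariff   # surplus sold
    gain = savings + income
    return gain, NET_COST / gain

for label, tariff in [("2023 contract (€0.13)", 0.13),
                      ("current tariff (€0.04)", 0.04)]:
    gain, years = annual_gain_and_payback(tariff)
    print(f"{label}: ~€{gain:.0f}/year, payback ~{years:.1f} years")
```

Playing with `SELF_USE` in this sketch makes the new strategy obvious: at €0.04, pushing self-consumption from 45% to 70% does far more for the payback period than producing a single extra kWh.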

6. Conclusion

After more than two years, do I regret my €11,900? Absolutely not.

Producing 17.4 MWh of green energy from my roof is a daily satisfaction. Seeing my Linky meter display “0 VA” of consumption while the oven and washing machine are running is a pleasure one never tires of.

Technically, the DualSun / Enphase couple is of exemplary stability: no breakdowns, precise monitoring, production on target.

However, if you start today, do not blindly copy my economic model. Do your calculations with the current feed-in tariff. If you cannot shift your consumption to the daytime, the ROI risks drifting away. Solar remains profitable, but it now requires being even smarter about its management.

Read More

In the world of infrastructure, we know that every cluster must be monitored. We never launch a major update without checking the node status and ensuring redundancy. For our professional and personal lives, it should be the same.

2025 is coming to an end, and if I had to summarize this year, it wouldn’t be a simple hot migration, but a true architectural evolution, with a few production incidents. It’s time for a full Health Check. No filters, just data, infra, and feelings.

Here is my post-mortem audit of 2025 and my roadmap for 2026.

2025: Unfiltered Retrospective (The Health Check)

This year marked a critical turning point in my career: my first full year as a Team Lead, while remaining a Senior Consultant expert in hyperconverged infrastructures at a Nutanix Pure Player: Mikadolabs.

From Technical Expert to Team Lead

For the uninitiated, moving from “Senior Consultant” to “Team Lead” is a bit like moving from managing a single cluster to orchestrating an entire datacenter. The scale changes. We no longer just manage IOPS and latency, but humans and planning.

On paper, the blueprint was clear. In reality, execution requires constant vigilance.

Overall, the stack held up. The team delivered, and infrastructure projects were successfully completed. I learned to delegate operational tasks (sometimes painful for a purist) to focus on organization and process improvement. Seeing a team member skill up on complex subjects, not necessarily technical ones, thanks to my guidance brought me a different satisfaction, but just as powerful as resolving a critical outage.

Let’s be transparent: everything wasn’t smooth. The hardest part for an ultra-technical profile like mine is stepping away from the console.

I like getting my hands dirty, tuning performance, auditing clusters. Becoming a Team Lead meant accepting spending less time on Prism Element or the command line, and more time in meetings or planning. I sometimes felt like I was losing my direct “connection” with tech, that imposter syndrome that stalks those who move away from production.

It’s a precarious balance that I continue to adjust for 2026: remaining one of the team’s expert references without becoming a bottleneck.

2025 in Data: Log Analysis

A good architect doesn’t rely on guesswork; they look at the metrics. And this year, if I hadn’t opened my dashboards, I would have had a totally biased view of my own performance.

That’s where data becomes relevant: it doesn’t lie, unlike our brain which tends to erase successes to focus on shortcomings.

The Blog: “Scale-Out” Growth

The traffic figures are quite good for a personal tech blog.

The KPIs of the year:

  • Production: 60 articles published (an average of 5 articles/month). Swiss-watch regularity.
  • Traffic: 39.3k Views (+868%) and 23.8k Unique Visitors (+924%). Note: The growth figures compared to 2024 are a bit biased because the tool I used to track blog traffic changed in the last quarter of 2024.
  • Engagement: A community growing on LinkedIn that is starting to comment and interact, a sign that my content is finding its target.

We observe a direct correlation between publication density (especially the peaks in May and the regularity of the last quarter) and the explosion of organic traffic. It is proof by example that technical SEO, coupled with in-depth content (not simple ChatGPT articles), pays off over time. The blog has gone from “confidential” status to a true consulted resource. Many clients (not to say “all”) have already told me they read the blog regularly. Thank you, that’s what drives me to continue!

Sport: The Perception Bug

This is where the retrospective becomes surprising. If you had asked me yesterday: “Julien, were you athletic this year?”, I would have answered with frustration: “Yes, but not regular enough for my taste, I feel like I’ve stagnated”.

So I extracted the logs of my activities (Running and Cycling, thanks Strava) to see the extent of the damage. And there, surprise: the logs contradict my mental monitoring.

Activity   2024 (Baseline)   2025 (Prod)   Differential
Running    106 km            487 km        x 4.5
Cycling    444 km            1,987 km      x 4.5

It’s a textbook case of a “False Positive”. My brain focused on the “off” weeks (only 2 weeks out of 52 with 0 activity), forgetting the global volume.

In reality, I multiplied my activity volume by 4.5 compared to 2024. I covered nearly 2500 km all sports combined. That’s not too bad, but I intend to do better in 2026!

The lesson for 2026? Trust the data. Like in prod, when you think there is a latency problem, you look at the curves first before rebooting. I wasn’t “irregular”, I simply changed scale without realizing it.

2026 Goals: My Tech Radar & Roadmap

A review is useless if it doesn’t allow updating the roadmap. For 2026, I don’t foresee a revolution, but a targeted evolution of my technical and personal stack. The goal? Reduce technical debt and prepare for the future.

1. Tech Watch: K8s and AI (Pragmatic)

There are two major subjects on which I intend to skill up, not out of “Hype”, but out of operational necessity:

  • Kubernetes (K8s): It has become unavoidable. Even in a hyperconverged world, container orchestration is the standard upper layer. It’s a subject I’ve put off for a long time, due to lack of time. So I want to learn the basics, and go beyond to master architecture and advanced troubleshooting.
  • AI (User & Integrator): I’m not talking about playing with prompts to generate images of cats or parody songs. My goal is twofold: optimize my daily workflow (AI as an assistant) and above all understand how to technically integrate it into solutions (API, automation). AI will not replace the architect, but the architect who uses AI will replace the one who doesn’t.

2. Side Project: Automated Audit

This is the big “Dev” chunk of the year. As a consultant, I spend a lot of time auditing infrastructures. I am working on developing an automated audit application.

The idea is simple: script intelligence and recurring checks to save time on data collection and focus on high value-added analysis. It’s a project that mixes my infra skills and my desire to code. Stay tuned, I’ll surely talk about it here again.

3. Human Infrastructure: Preventive Maintenance and MCO

Finally, let’s talk about my Hardware: my body.

The 2025 logs showed me that the machine is capable of handling the load, but the configuration will have to be optimized. My 2026 goal is to invest a little more in my health just as one invests in critical infrastructure:

  • More sport: Continue the momentum of 2025 to aim for absolute regularity and increase volume with more structured training.
  • Less stress: Better partition professional and personal life, and learn to pick my battles.
  • Healthy Food: Pay a little more attention to my diet to boost the benefits of physical activity.

My wishes for you: Be curious, be resilient

To conclude this first publication of 2026, I won’t settle for the usual formulas. In 2026, I wish you two essential qualities: Curiosity and Resilience.

Don’t be intimidated by the mountain. Computer science, like any field of expertise, is tamed step by step. Be curious, dare to test, dare to make mistakes. It is the only way to learn.

I also wish you Resilience. In our projects as in our lives, everything never goes exactly as planned on paper. There will be unforeseen events, errors, moments of fatigue. It’s okay.

True strength is not never falling, but knowing how to bounce back. Be lenient with yourselves when it doesn’t work on the first try. Accept the downtime, learn, and restart. That is true sustainable performance.

To all, I wish you an excellent year 2026.

Read More

You might think that over time, you get used to it. That after two years, opening the email announcing the results becomes a mere administrative formality. Well, I must confess: not at all.

It is with immense pride – and undisguised relief – that I announce my nomination as a Nutanix Technology Champion (NTC) for the year 2026. This is the third consecutive year that I have the honor of joining this group of passionate experts.

To be completely transparent, I never take this distinction for granted. In the IT world, technologies evolve fast, and so do we. Staying relevant requires work, curiosity, and above all, the desire to share. Seeing my name once again on the official NTC 2026 list is a beautiful validation of the efforts put into the blog throughout the year.

What is an “NTC”? (Spoiler: It’s not just a LinkedIn badge)

I am often asked if it is an exam I passed, like an NCP-MCI certification. The answer is no, and that is precisely the beauty of this program.

The Nutanix Technology Champion program does not just reward passing a technical multiple-choice quiz. It is a distinction that recognizes community engagement. Basically, Nutanix spots those who spend their free time testing, breaking, fixing, and above all explaining their technologies to others. Whether through blog posts (like here), forum contributions, or talks at events.

For the purists, it is the equivalent of the vExpert at VMware or the MVP at Microsoft. It is the validation of what we call technical “Soft Skills”: the ability to evangelize a solution not because we are paid to do so, but because we master its intricacies and we love it. It is a recognition by peers and by the vendor, and that is what makes it so rewarding.

Under the Hood: Why this nomination matters for the blog

Beyond the shiny logo to put in a signature, being an NTC has a direct impact on the quality of what I can offer you on juliendumur.fr. It is not an honorary title devoid of meaning; it is a key that opens interesting doors.

Concretely, this status gives me privileged access behind the scenes. I have the opportunity to exchange directly with Product Managers and Nutanix engineering teams. This means that when I write a technical article, I can validate my hypotheses at the source, avoiding approximations.

Furthermore, we have access to roadmap briefings and Beta versions. Even if this information is often under NDA (I can’t reveal everything to you in advance!), it allows me to understand the direction the technology is taking. I can thus better anticipate topics to cover and offer you more relevant analyses as soon as features reach General Availability (GA). It is the assurance for you to read content that is not only technically accurate but also in phase with market reality.

Retrospective and 2026 Goals: Full Steam Ahead

This third nomination is the fruit of consistency. But above all, it marks the beginning of a new year of “lab”. The goal is not to collect stars, but to continue exploring the Nutanix Cloud Platform from every angle.

For 2026, I intend to keep offering practical tutorials and field feedback. While the AHV hypervisor remains the unavoidable foundation, I really want to move up the software stack a bit more this year. Expect to see topics covering container orchestration with NKP (Nutanix Kubernetes Platform), automation, and probably a stronger focus on security with Flow. The objective remains the same: dissecting the tech to make it accessible.

A huge thank you to the community for the daily exchanges, and of course to the NTC program team (shout out to Angelo Luciani) for their renewed trust. It is a pleasure to be part of this virtual family.

Now, the ball is also in your court: are there specific topics or features of the Nutanix ecosystem that you would like to see me cover this year? The comments are open!

Read More

I won’t lie to you: when you’ve had a taste of gold, bronze has a peculiar flavor. Last year, I had the immense pride of finishing first in the “Top Bloggers” ranking of the Nutanix Technology Champion (NTC) program.

This year, the verdict is in on the official community blog: I ranked 3rd.

Did I slow down? No. Did I share less? On the contrary. But in tech, just like in sports, staying at the top is often harder than getting there. This 3rd place is, above all, a signal that the competition has intensified. And honestly? It’s exactly what I needed to motivate me to get back in the fight for 2026.

The NTC Program is Not Just a Badge

For those new to the ecosystem, being a Nutanix Technology Champion (NTC) isn’t just about slapping a logo on your LinkedIn profile. It is a commitment. It means being part of a technical vanguard that tests, breaks, fixes, and—above all—documents Nutanix solutions. The “Top Blogger” ranking is the barometer of this activity.

1st in 2024, 3rd in 2025: Analyzing the Logs

So, what happened? I pulled my logs to compare. If my performance had dropped, I would have accepted this 3rd place with a shrug. But the data shows otherwise: my publication volume is equivalent to last year’s. Even better, my strategy was cleaner: instead of doing “bursts” (flurries of articles), I maintained a metronomic consistency, spread evenly over the 12 months.

The conclusion is simple and undeniable: the overall bar has been raised. My peers were absolute beasts this year. They produced more. This is excellent news for the Nutanix community: the ecosystem is alive, dense, and increasingly sharp. But for the competitor in me, it’s a wake-up call. Consistency is no longer enough; just like in cycling, I’m going to have to up the intensity.

Why Publish?

Beyond the rankings and the competition, why continue writing with such discipline? The answer is pragmatic. My blog is primarily my external memory. In our line of work, we don’t remember everything. We test, we configure, we hit a critical error, we resolve it… and six months later, we’ve forgotten how we did it. Blogging is about documenting my own “struggles” so I never have to look for the solution twice. It’s about transforming obscure troubleshooting into a clear tutorial. But make no mistake: every article is born from a real technical need, from a real infra that I built or fixed. No fluffy theory, just experience from the field. The icing on the cake: the feedback from our clients who stumble upon my blog and tell me, “We found a solution on your site.” That is the real reward.

Conclusion: See You at the Finish Line

Bravo to the two peers who finished ahead of me this year. You set the bar very high, and that is exactly what I like. The level of the NTC program is what makes it credible. But the message has been received. The consistency of 2025 was a good foundation, but for 2026, I’m shifting gears. I’m going to chase more specific topics, dig deeper into the guts of Nutanix AOS and AHV, and perhaps explore use cases that no one has documented yet.

The bronze medal is nice. But it will serve primarily as a reminder on my desk: next year, I’m aiming for the yellow jersey.

See you soon for the next technical article.

Read More

It’s one of those mornings where the coffee tastes a little different. The taste of major announcements that are bound to change our habits as administrators. Nutanix has just released a trio of major updates into the wild: AOS 7.5, AHV 11.0, and Prism Central 7.5.

Let’s be clear from the start: I’ve combed through the Release Notes for you, and this isn’t just a simple “Patch Tuesday.” It is a structural overhaul. Nutanix is no longer content with just improving its HCI; the vendor is breaking its own dogmas (hello external storage and compute-only nodes) and drastically tightening security, even if it shakes up our old reflexes.

While on paper, the promises of performance (AES everywhere) and flexibility (Elastic Storage) are enticing, my field experience dictates a certain prudence. When you mess with the storage engine and SSH access at the same time, you don’t rush into production without reading the fine print carefully. That is exactly what I’m proposing here: an unfiltered technical analysis of what awaits you.

AOS 7.5: Performance & Architecture

Let’s start with the core of the reactor: AOS 7.5. If you thought the Nutanix storage architecture was set in stone, think again. This version marks a turning point in hot data and disk space management.

The Key Concept: AES Becomes the Absolute Standard

Until now, the Autonomous Extent Store (AES) was often reserved for high-performance All-Flash environments. With 7.5, that’s over: AES becomes the default architecture for all deployments, whether All-Flash or Hybrid.

Why is this important? Because AES improves metadata locality and reduces CPU consumption for I/O. But be careful, the critical novelty here is the automatic migration. If you upgrade an existing hybrid cluster to 7.5, AOS will launch a background conversion task to switch to AES.

Do not underestimate the I/O impact of this “transparent” conversion. Even if Nutanix handles it in the background, metadata restructuring is never trivial on a loaded cluster. Furthermore, Nutanix introduces a revamped Garbage Collection (GC) (“Accelerated Data Reclamation”). It is now capable of cleaning multiple “holes” in an Erasure Coding stripe in a single pass and merging inefficient stripes. It’s brilliant for efficiency, but it confirms that the engine is working much more “intelligently” under the hood.

The Unexpected Opening: Pure Storage and Dense Nodes

This might be the strongest sign of this release: Nutanix is officially opening up to third-party storage. AOS 7.5 supports connecting to Pure Storage FlashArray systems via NVMe-oF/TCP for capacity storage. Nutanix handles the compute, Pure handles the data. For HCI purists like me, this is a paradigm shift, but one that meets a real need for disaggregation.

Finally, for those managing storage monsters, note that existing All-Flash nodes can be upgraded to support up to 185 TB per node, while maintaining aggressive RPOs (NearSync/Sync).

AHV 11.0 & Flexibility: The Era of “Compute-Only” and Elastic Storage

If AOS 7.5 boosts the engine, AHV 11.0 changes the bodywork. For a long time, Nutanix preached the dogma of strict hyperconvergence: “You buy identical nodes, you expand storage and compute at the same time.” With this version, I feel like Nutanix is finally listening to those who, like me, found themselves with too much CPU and not enough disk (or vice versa).

The Key Concept: Official Disaggregation

It’s a small revolution: Nutanix now allows the deployment of “Compute-Only” nodes much more flexibly. We are also seeing the arrival of a standalone AHV installer. Concretely, you can manually install AHV from an ISO on a server, without going through a full re-imaging via Foundation.

For labs or rapid compute power expansions, this is a phenomenal time-saver. But be careful, this requires increased rigor regarding hardware compatibility management, as Foundation will no longer be there to act as a safeguard during installation.

The Awaited Feature: Elastic VM Storage

This is undoubtedly the feature I was waiting for the most to break down silos. With Elastic VM Storage, available starting with AHV 11.0 and AOS 7.5, you can finally share a storage container from one AHV cluster to another AHV cluster within the same Prism Central.

Imagine: your Cluster A is bursting at the seams storage-wise, but your Cluster B is sleeping half-empty. Before, you had to move VMs. Now, you can mount the container from Cluster B onto Cluster A and deploy your VMs directly on it.

It’s great, but be careful: it’s not magic. You are introducing a critical network dependency between two clusters that were previously isolated. If your inter-cluster network fails, the VMs on Cluster A whose storage lives on Cluster B go down. Moreover, Nutanix clearly states that this allows “serving storage from a remote cluster,” which necessarily implies additional network latency compared to native data locality. Reserve this for workloads that are not sensitive to disk latency, or for temporary overflow.

Finally, note the arrival of Dual Stack IPv6. AHV can now talk to your DNS, NTP, and Syslog servers in IPv6. A necessary update to align with modern network standards.

Security and Governance: Locking Everything Down (SSH, vTPM, Profiles)

Let’s move on to the part that will make command-line regulars (myself included) grind their teeth. Nutanix has decided to tighten the screws on security, and they aren’t kidding around.

The Key Concept: The Digital Fortress

The goal is clear: reduce the attack surface, especially against ransomware that often attempts to propagate via lateral movements on management interfaces. Nutanix is therefore introducing mechanisms to limit direct human access to infrastructure components (CVM and Hosts).

The Critical Change: CVM Secure Access (The End of SSH is coming)

This is the number one vigilance point of this article. With AOS 7.5, you now have the option (and strong incentive) to totally disable SSH access to CVMs and AHV hosts.

On paper, this is excellent for security (a drastically reduced attack surface). In operational reality, it is a brutal cultural shift. No more quick ssh nutanix@cvm to check a log or run a quick diagnostic script. Everything must go through APIs or the console.

Danger Warning! Before checking that “Disable SSH” box, check your migration procedures. The Release Notes are unequivocal: disabling SSH breaks Cross-Cluster Live Migration (CCLM) workflows, whether in On-Demand mode (OD-CCLM) or Disaster Recovery (DR-CCLM). These operations still rely on SSH tunnels between source and destination hosts. If you cut SSH, your migrations will fail, and you will have to re-enable SSH to make them work. This is a major operational constraint to anticipate.

Governance: vTPM & Guest Profiles

For highly sensitive environments, AHV now supports storing vTPM encryption keys in an external KMS. This allows centralizing key management and aligning the vTPM security policy with the cluster’s “Data-at-Rest” encryption policy.

On the quality of life side, I welcome the arrival of reusable Guest Customization Profiles. No more tedious copy-pasting of Sysprep scripts with every VM clone. You create a profile (Windows + NGT 4.5 min required), store it, and apply it on the fly to clones or templates. It’s simple, efficient, and avoids input errors.

Prism Central 7.5: The Interface That Makes Life Easier (NIM & Policies)

We finish this overview with Prism Central 7.5 (pc.7.5). If AOS is the engine and AHV the chassis, PC is the dashboard. And believe me, it is fleshing out considerably to spare us thankless manual tasks.

The Key Concept: Intelligent Orchestration

The major addition is the arrival of VM Startup Policies. This is a feature I’ve been waiting for for years to replace my cobbled-together startup scripts. Concretely, you can now define the exact restart order of VMs during an HA event (node failure) or a cluster restart.

This allows managing application dependencies cleanly: “Start the Database, wait for it to be UP, then start the Application Server”. It’s native, integrated into the interface, and greatly secures recovery plans.
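A startup policy is essentially a tiered boot order with a readiness gate between tiers. As a mental model only (the names and the readiness check below are hypothetical, not the actual Prism Central API), the logic looks like this:

```python
# Mental model of a VM Startup Policy: power on each tier in order, and only
# move to the next tier once every VM in the current tier reports ready.
# power_on() and is_ready() are stand-ins, not real Prism Central calls.
startup_tiers = [
    ["db-01"],             # tier 1: the database first
    ["app-01", "app-02"],  # tier 2: the application servers
    ["web-01"],            # tier 3: the front end
]

booted = []

def power_on(vm):
    booted.append(vm)      # stand-in for the real power-on call

def is_ready(vm):
    return vm in booted    # stand-in for a health/NGT heartbeat check

for tier in startup_tiers:
    for vm in tier:
        power_on(vm)
    # the real policy would poll here until the whole tier is up
    assert all(is_ready(vm) for vm in tier)

print(booted)  # ['db-01', 'app-01', 'app-02', 'web-01']
```

The point of the feature is precisely that this ordering logic now lives in Prism Central instead of in everyone's home-made scripts.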

For large-scale environments, note the appearance of NIM (Nutanix Infrastructure Manager). It is a new orchestrator designed to provision, configure, and manage your datacenters in a standardized way, aligning with the famous “Nutanix Validated Designs” (NVD). It is clearly oriented for very large deployments that want to avoid configuration drift.

Enhanced Resilience: PC Backup & Restore

Until now, restoring a crashed Prism Central could be an adventure, especially if the original cluster was itself down. Nutanix has lifted a major technical constraint: you can now recover a Prism Central instance from a backup located on any Prism Element cluster.

This is a detail that changes everything in case of a total site disaster. Previously, recovery from a Prism Element backup was restricted to the specific cluster where PC was registered. This new flexibility, coupled with the ability to backup to a generic S3 Object Store, makes the management architecture much more robust. We are no longer putting all our eggs in one basket.

Conclusion & Recommendations: Maturity Has a Price

After dissecting these three release notes, my feeling is clear: Nutanix is reaching an impressive level of maturity. The generalization of AES and the opening to external storage show that the platform is ready for the most demanding workloads and the most complex architectures.

However, prudence obliges me to raise a final red flag before you click “Upgrade.”

⚠️ Watch out for prerequisites: Do not rush headlong into the Prism Central update. Version pc.7.5 requires your Prism Element clusters to run at least AOS 7.0.1.9. If you are on an earlier version, deployment will be blocked. You will have to plan your migration path rigorously.
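To make the prerequisite concrete: comparing dotted version strings lexically is a classic trap (“6.10” sorts after “7.0” as text). A minimal sketch, assuming plain numeric version strings, that checks a cluster against the pc.7.5 floor of AOS 7.0.1.9:

```python
# Check a cluster's reported AOS version against the pc.7.5 minimum (7.0.1.9).
# Compares numeric tuples, not strings, so "6.10.1.5" is handled correctly.
MIN_AOS_FOR_PC75 = (7, 0, 1, 9)

def aos_version_ok(version: str, minimum=MIN_AOS_FOR_PC75) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    # pad the shorter tuple with zeros so "7.5" compares as 7.5.0.0
    width = max(len(parts), len(minimum))
    pad = lambda t: t + (0,) * (width - len(t))
    return pad(parts) >= pad(minimum)

print(aos_version_ok("7.0.1.9"))   # True: exactly the floor
print(aos_version_ok("6.10.1.5"))  # False: upgrade Prism Element first
print(aos_version_ok("7.5"))       # True
```

Run this against the AOS versions of every registered Prism Element cluster before planning the PC upgrade window.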

This is an unavoidable update for the performance and security gains, but it is also a structural update. The AES conversion, the potential SSH deactivation, and the new network dependencies for elastic storage require validating these changes in a pre-production environment.

Take the time to test, check your compatibility matrices, and above all, do not cut SSH before verifying that you do not have any planned inter-cluster migration (CCLM)!

To your keyboards, and happy upgrading!

Read More

In this new blog post, we’ll cover the main Nutanix AHV CLI commands that let you run checks on your virtual machines from the command line.

All the commands in this article can be run via SSH from any CVM in the cluster.

Display the list of virtual machines

To display the list of virtual machines on the Nutanix cluster, simply run the following command:

acli vm.list

This will show you all the VMs present on the cluster, without the CVMs:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.list
VM name VM UUID
LINUX 88699c96-11a5-49ce-9d1d-ac6dfeff913d
NTNX-192-168-84-200-PCVM-1760699089 f659d248-9ece-4aa0-bb0c-22a3b3abbe12
vm_test 9439094a-7b6b-48ca-9821-a01310763886

As you can see, I have three virtual machines on my cluster:

  • My Prism Central
  • A newly deployed “LINUX” virtual machine
  • A test virtual machine

A handy command to quickly retrieve all virtual machines and their respective UUIDs. Now let’s see how to retrieve information about a specific virtual machine.
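Before moving on: if you need to reuse these UUIDs in a script (for example when chaining acli calls), the two-column output is easy to parse. A minimal Python sketch, assuming you captured the command output over SSH:

```python
# Parse the two-column output of `acli vm.list` into a name -> UUID mapping.
# Sample output captured from the cluster shown above.
raw = """VM name VM UUID
LINUX 88699c96-11a5-49ce-9d1d-ac6dfeff913d
NTNX-192-168-84-200-PCVM-1760699089 f659d248-9ece-4aa0-bb0c-22a3b3abbe12
vm_test 9439094a-7b6b-48ca-9821-a01310763886"""

def parse_vm_list(output: str) -> dict:
    vms = {}
    for line in output.splitlines()[1:]:   # skip the header line
        name, uuid = line.rsplit(None, 1)  # the UUID is always the last field
        vms[name] = uuid
    return vms

vms = parse_vm_list(raw)
print(vms["LINUX"])  # 88699c96-11a5-49ce-9d1d-ac6dfeff913d
```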

Retrieving Virtual Machine Information

To display detailed information about a virtual machine, use the following command:

acli vm.get VM_NAME

Using the example of my “LINUX” virtual machine, this returns the following information:

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.get LINUX
LINUX {
config {
agent_vm: False
allow_live_migrate: True
apc_config {
apc_enabled: False
}
bios_uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"
boot {
boot_device_order: "kCdrom"
boot_device_order: "kDisk"
boot_device_order: "kNetwork"
hardware_virtualization: False
secure_boot: False
uefi_boot: True
}
cpu_hotplug_enabled: True
cpu_passthrough: False
disable_branding: False
disk_list {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: "fae2ee55-8736-4f3a-9b2c-7d5f5770bf33"
empty: True
iso_type: "kOther"
}
disk_list {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: "f9a8a84c-6937-4d01-bfd2-080271c44916"
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: "215ba83c-44cb-4c41-bddc-1aa3a44d41c7"
vmdisk_size: 42949672960
vmdisk_uuid: "42a18a62-861a-497a-9d73-e959513ce709"
}
generation_uuid: "9c018794-a71a-45ae-aeca-d61c5dd6d11a"
gpu_console: False
hwclock_timezone: "UTC"
machine_type: "pc"
memory_mb: 8192
memory_overcommit: False
name: "LINUX"
ngt_enable_script_exec: False
ngt_fail_on_script_failure: False
nic_list {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}
num_cores_per_vcpu: 1
num_threads_per_core: 1
num_vcpus: 2
num_vnuma_nodes: 0
power_state_mechanism: "kHard"
scsi_controller_enabled: True
vcpu_hard_pin: False
vga_console: True
vm_type: "kGuestVM"
vtpm_config { is_enabled: False
}
} is_ngt_ipless_reserved_sp_ready: True
is_rf1_vm: False
logical_timestamp: 1
state: "kOff"
uuid: "88699c96-11a5-49ce-9d1d-ac6dfeff913d"

As you can see, this returns all the information about a virtual machine. It is possible to filter some of the information returned with certain commands. Here are the ones I use most often:

acli vm.disk_get VM_NAME: to retrieve detailed information about all the disks of a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.disk_get LINUX
ide.0 {
addr {
bus: "ide"
index: 0
}
cdrom: True
device_uuid: fae2ee55-8736-4f3a-9b2c-7d5f5770bf33
empty: True
iso_type: "kOther"
}
scsi.0 {
addr {
bus: "scsi"
index: 0
}
cdrom: False
container_id: 4
container_uuid: "2ead3997-e915-4ee2-b9a4-0334889e434b"
device_uuid: f9a8a84c-6937-4d01-bfd2-080271c44916
naa_id: "naa.6506b8def195dc769b32f3fe47100297"
storage_vdisk_uuid: 215ba83c-44cb-4c41-bddc-1aa3a44d41c7
vmdisk_size: 42949672960
vmdisk_uuid: 42a18a62-861a-497a-9d73-e959513ce709
}
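Note that vmdisk_size is reported in bytes. A quick conversion (plain Python, nothing Nutanix-specific) confirms the disk above is 40 GiB:

```python
# vmdisk_size in acli output is expressed in bytes.
vmdisk_size = 42949672960

size_gib = vmdisk_size / 2**30  # 1 GiB = 1024^3 bytes
print(f"{size_gib:.0f} GiB")    # prints "40 GiB"
```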

acli vm.nic_get VM_NAME: to retrieve the detailed list of network cards attached to a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.nic_get LINUX
50:6b:8d:fb:a1:4c {
connected: True
mac_addr: "50:6b:8d:fb:a1:4c"
network_name: "NUTANIX"
network_type: "kNativeNetwork"
network_uuid: "7d13d75c-5078-414f-a46a-90e3edc42907"
queues: 1
rx_queue_size: 256
type: "kNormalNic"
uuid: "c6f02560-b8e6-4eed-bc09-1675855dfc77"
vlan_mode: "kAccess"
}

acli vm.snapshot_list VM_NAME: to retrieve the list of snapshots associated with a virtual machine

nutanix@NTNX-S348084X9211699-B-CVM:192.168.84.22:~$ acli vm.snapshot_list LINUX
Snapshot name Snapshot UUID
SNAPSHOT_BEFORE_UPGRADE e7c1e84e-7087-42fd-9e9e-2b053f0d5714

You now know almost everything about verifying your virtual machines.

For the complete list of commands, I invite you to consult the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v7_3:man-ncli-c.html

In the next article, we’ll tackle a big task: creating virtual machines using CLI commands.

Read More

In a previous article, we covered how to deploy and perform the basic configuration of a Palo Alto gateway to replace the basic gateway supplied with your OVHcloud Nutanix cluster.

I will now show you how to connect this gateway to the RTvRack supplied with your cluster to connect it to the internet.

Connecting the Gateway to the RTvRack

In “Network > Zones”, we start by creating a new “Layer3” zone, which we’ll call “WAN” for simplicity:

You can also create one or more other zones to connect your other interfaces (e.g., an “INTERNAL” zone).

Next, in “Network > Interfaces,” edit the ethernet1/1 interface. If you’ve successfully created your VM on Nutanix, it will correspond to the WAN output interface. It will be a “Layer3” interface:

On the “Config” tab, select the “default” Virtual Router and select the “WAN” security zone.

On the “IPv4” tab, add the available public IP address in the range provided to you by OVHcloud with your cluster, making sure to include a /32 mask at the end:

You can find the network information for your public IP address on your OVHcloud account under “Hosted Private Cloud > Network > IP”: https://www.ovh.com/manager/#/dedicated/ip

Using the public IP address and its associated network mask, you can deduce:

The public IP address to assign to the WAN port of your gateway

The IP address of the WAN gateway

Example with the network 6.54.32.10/30:

  • Network address (not usable): 6.54.32.8
  • First address (public address of the PA-VM): 6.54.32.9
  • Last address (WAN gateway address): 6.54.32.10
  • Broadcast address (not usable): 6.54.32.11
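If you don't want to do this arithmetic by hand, Python's standard ipaddress module derives all four addresses from the block OVHcloud gives you (following the article's convention: first usable IP for the PA-VM, last usable IP for the WAN gateway):

```python
import ipaddress

# strict=False because 6.54.32.10 is a host address, not the network address.
net = ipaddress.ip_network("6.54.32.10/30", strict=False)
hosts = list(net.hosts())  # the usable addresses of a /30

print(net.network_address)    # 6.54.32.8  (not usable)
print(hosts[0])               # 6.54.32.9  (public address of the PA-VM)
print(hosts[-1])              # 6.54.32.10 (WAN gateway address)
print(net.broadcast_address)  # 6.54.32.11 (not usable)
```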

Repeat the operation with the interface corresponding to the subnet of your Nutanix cluster, using the IP address of the gateway you specified when deploying your cluster.

However, make sure to set the mask corresponding to that of the network in which the interface is located as indicated in the documentation: https://docs.paloaltonetworks.com/pan-os/11-0/pan-os-networking-admin/configure-interfaces/layer-3-interfaces/configure-layer-3-interfaces#iddc65fa08-60b8-47b2-a695-2e546b4615e9.

In “Network > Virtual Routers”, edit the default router. You should find your “ethernet1/1” interface at a minimum, as well as any other interfaces you may have already configured:

Then, in the “Static Routes” submenu, create a new route with a meaningful name and a destination of 0.0.0.0/0, select the “ethernet1/1” interface, and set as Next Hop the IP address of the public network gateway provided to you by OVHcloud:
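For those who prefer the PAN-OS CLI to the web UI, the same default route can be pushed with a set command of roughly this shape. Treat it as a sketch: the route name and next-hop IP are placeholders from the example above, and you should double-check the exact syntax against the PAN-OS documentation for your version:

```
configure
set network virtual-router default routing-table ip static-route DEFAULT-WAN destination 0.0.0.0/0 interface ethernet1/1 nexthop ip-address 6.54.32.10
commit
```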

Finally, go to the “Device > Setup > Services” tab and edit the “Service Route Configuration” option in “Services Features” to specify the output interface and the associated /32 IP address for some of the services:

The list of services to configure at a minimum is as follows:

  • DNS
  • External Dynamic Lists
  • NTP
  • Palo Alto Networks Services
  • URL Updates

You can validate and commit. Your PA-VM gateway is now communicating with the OVHcloud RTvRack. All that’s left is to finalize the configuration to secure the installation and create your firewall rules to allow your cluster to access the internet.

Read More

A quick blog post to share that registrations for the Nutanix Technology Champion (NTC) program are open!

From today, October 1st, until October 31st, you can fill out the form and apply to join the program.

Applications will be reviewed in November, and the list of NTC 2026 members will be published in December.

Do you have a blog and want to share Nutanix knowledge with other experts? Fill out the form on the official webpage: https://next.nutanix.com/community-blog-154/step-into-the-spotlight-nutanix-technology-champion-2026-applications-now-open-44876

My application is already submitted, and I hope to be part of this wonderful program for the third year in a row!

Read More

In the complete guide to HYCU that I published earlier this year, I mentioned the need to follow the upgrade paths to update your controller.

The problem? To find out this upgrade path, you had to read every release note, from the version directly above your controller’s, up to the latest version.

If your controller is lagging behind in its upgrade path, this can be a long and tedious process.

A problem? A solution

To overcome this recurring problem, I decided to develop a time-saving tool: HYCU Upgrade Path Wizard!

Those who know me a little know that I’m not much of a developer, so I enlisted the help of AI to build this tool! I gave it my idea, it developed the tool in record time, and I took care of adding what was missing and creating the design just the way I wanted it.

Please feel free to give me your feedback and/or suggestions for improvement!

Read More