Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

A question that comes up regularly: “what are the default passwords on my cluster?”

Here is an exhaustive list of passwords for a Nutanix cluster after installation, as well as those for some additional components:

AHV host passwords:

  • “root” account: nutanix/4u
  • “nutanix” account: nutanix/4u
  • “admin” account: nutanix/4u

CVM passwords:

  • “admin” account: nutanix/4u
  • “nutanix” account: nutanix/4u

Prism Central passwords:

  • “admin” account: Nutanix/4u
  • “nutanix” account: nutanix/4u

Nutanix Move password:

  • “admin” account: nutanix/4u

Nutanix Files password:

  • “nutanix” account: nutanix/4u

I invite you to read this article to change your cluster passwords: https://juliendumur.fr/nutanix-ahv-operations-post-install-partie-1/
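As a quick illustration (the linked article remains the reference for the full procedure), here is a minimal sketch of changing these defaults from a CVM. `allssh` and `hostssh` are the Nutanix helpers that run a command on every CVM and every AHV host respectively; the password shown is a placeholder:

```shell
# Minimal sketch, run as "nutanix" on any CVM. Replace the placeholder
# password with your own; see the linked article for the full procedure.

# Change the "nutanix" password on every CVM:
allssh 'echo "nutanix:MyN3wP@ssw0rd!" | sudo chpasswd'

# Change the "root" password on every AHV host (hostssh connects as root):
hostssh 'echo "root:MyN3wP@ssw0rd!" | chpasswd'
```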


After re-iping a Prism Central at a customer’s location, the menu to switch between application sections no longer worked at all and generated an error message when hovering over it:

Inspecting the "styx.log" file (/home/docker/domain-manager/log/styx.log), I found log lines of this type:

V3ApiError: ACCESS_DENIED | No permission to access the resource.;

In the “Athena.INFO” file I also found JWT errors that indicated certificate parsing problems:

ERROR 2024-11-18T17:49:01,154Z Thread-1 athena.authentication_connectors.CertificateAuthenticator.getX509Certificate:309 certificate parsing exception {}. Please ensure the certificate is valid
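A simple grep is enough to locate these lines. The snippet below is a self-contained illustration using a sample line copied from the article; on the real PCVM you would point grep at /home/docker/domain-manager/log/styx.log directly:

```shell
# Self-contained demo: grep a sample log for the access-denied error.
# On the PCVM, grep /home/docker/domain-manager/log/styx.log instead.
tmp=$(mktemp)
printf '%s\n' 'V3ApiError: ACCESS_DENIED | No permission to access the resource.;' > "$tmp"
grep -c 'ACCESS_DENIED' "$tmp"   # prints 1 when the error is present
rm -f "$tmp"
```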

To fix the error, I had to regenerate a new certificate using the following command:

nutanix@PCVM:~$ ncli ssl-certificate ssl-certificate-generate

Once the regeneration process was complete, my Prism Central menu started working perfectly fine again.
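If you want to confirm the new certificate is actually being served, you can inspect it with openssl. The hostname below is a placeholder for your Prism Central address; 9440 is the Prism port:

```shell
# Inspect the certificate Prism Central serves after regeneration
# (pc.example.lab is a hypothetical hostname; use your own).
echo | openssl s_client -connect pc.example.lab:9440 -servername pc.example.lab 2>/dev/null \
  | openssl x509 -noout -subject -dates
```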

Official Documentation: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V0000004UtNSAU


When deploying Prism Central on my Nutanix CE 2.1 cluster, I was faced with an error message that prevented registering my cluster on Prism Central…

Once all the parameters are filled in, validation fails with the error message "Cluster has dual stack enabled. Cannot register to a PC.":

This error message is related to the presence of IPv6 alongside IPv4 on the cluster, but a solution exists, and it resides in the following command:

manage_ipv6 unconfigure; manage_ipv6 disable

You must answer “Y” to the question “Proceed to remove above IPv6 configuration?” in order to validate the process:

nutanix@NTNX-436d2f97-A-CVM:192.168.84.200:~$ manage_ipv6 unconfigure; manage_ipv6 disable
[INFO] Initializing script… done
[INFO] Current IPv6 configuration on cluster: {
"svmips": {
"192.168.84.200": null
},
"hostips": {
"192.168.84.199": null
},
"prefixlen": null,
"gateway": null
}
[INFO] Note: This operation will restart the following services: ['CerebroService', 'StargateService']
Proceed to remove above IPv6 configuration? [Y/N]: Y
[+] CVM and Hypervisor IPv6 addresses unconfigured
[+] Cleared IPv6 configuration from Zeus
[+] CVM and hypervisor firewall rules updated
[+] Necessary services have been restarted
[INFO] Marked Ergon task 4673fda3-92da-4efe-59f5-1dd3fc51a6cd as kSucceeded
[INFO] Action unconfigure completed successfully
Script output logged to /home/nutanix/data/logs/manage_ipv6.out
[INFO] Initializing script… done
[+] IPv6 disabled on CVMs and Hypervisors
[INFO] Marked Ergon task 9cdfcb37-436a-479f-4d7a-08ef69e266b0 as kSucceeded
[INFO] Action disable completed successfully
Script output logged to /home/nutanix/data/logs/manage_ipv6.out
nutanix@NTNX-436d2f97-A-CVM:192.168.84.200:~$

Once the procedure is completed, the Prism Central registration goes smoothly:

If you still have issues, George brings us another way to disable dual stack:

Add all these lines to /etc/sysctl.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.disable_policy = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.default.disable_policy = 1
net.ipv6.conf.eth0.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_policy = 1
net.ipv6.conf.eth1.disable_ipv6 = 1
net.ipv6.conf.eth1.disable_policy = 1
net.ipv6.conf.eth2.disable_ipv6 = 1
net.ipv6.conf.eth2.disable_policy = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.lo.disable_policy = 1

Run:

sudo sysctl -p 
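You can then check that the settings took effect (a generic Linux check; the value should read 1 once IPv6 is disabled):

```shell
# Should print "net.ipv6.conf.all.disable_ipv6 = 1" once applied.
sysctl net.ipv6.conf.all.disable_ipv6
```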

Then perform a rolling reboot of all the CVMs.

Then run the manage_ipv6 commands again:

manage_ipv6 unconfigure; manage_ipv6 disable

We’ve been waiting for them for several days, and they’ve finally arrived! I’m obviously talking about AOS 7 and AHV 10, which were made available this Wednesday, December 4 by Nutanix.

What’s new in AOS 7

Among the new features in AOS 7 you will be able to find:

  • Disaster Recovery for Flow
  • Cloud KMS support (if you missed the OVHcloud announcements, it’s here)
  • Increased memory support per node, up to 8 TB
  • Centralized password management in Prism Central for AHV system accounts
  • Nutanix API v4

More than 150 bugs have been fixed, with 28 still being worked on.

For the full release notes, click here: https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-AOS-v7_0:Release-Notes-AOS-v7_0

What’s new in AHV 10

In AHV 10, here are some features that are coming:

  • New format for version naming
  • PCIe Passthrough for Guest VMs
  • Support for NVIDIA A2 Tensor Core and H100 NVL GPUs
  • Support for configuring the boot device order from the aCLI

At the same time, more than 40 known bugs have been fixed. Note, however, that 13 known issues remain.

The full release notes are here: https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-AHV-v10_0:Release-Notes-AHV-v10_0

To your updates!


Nutanix, through Nutanix University, has just launched the new versions of its flagship certifications: the Nutanix Certified Associate (NCA) 6.10 and the Nutanix Certified Professional – Multi-Cloud Infrastructure (NCP-MCI) 6.10.

If you are an IT professional, passionate about the cloud or if you want to validate your skills with one of the major players in the sector, this certification update is an opportunity not to be missed.

Why take these certifications?

Nutanix certifications are recognized worldwide to validate skills in the design, deployment, and management of cloud infrastructures. In particular:

The NCA 6.10 certification is aimed at beginners or intermediate professionals, those who wish to demonstrate their understanding of the fundamental principles of virtualization, storage, and cloud environments with Nutanix.

The NCP-MCI 6.10 certification is aimed at experts who want to validate their mastery of multi-cloud infrastructure concepts, resource management across multiple platforms, and how to optimize hybrid and multi-cloud environments with Nutanix.

These certifications can play a decisive role in your career path, proving your skills and opening up new opportunities in companies of all sizes.

Free Registration Until December 13!

To celebrate the launch of these new certifications, Nutanix is offering an exclusive opportunity: the NCA 6.10 and NCP-MCI 6.10 certification exams are free until December 13, 2024! This is the perfect opportunity to validate your skills for just 1 cent.

Registration is quick and easy. All you have to do is go to the official Nutanix University website, log in, and select the certification you want.

Once registered, you will have access to the online training platform and will be able to schedule your exam according to your availability. At the time of payment, use the discount coupon corresponding to the chosen certification:

  • NCA 6.10: NXFRNCA24
  • NCP-MCI 6.10: NXFRNCP24

Important: The free offer is valid until December 13, 2024. After this date, the discount coupons will no longer be valid. So don’t hesitate to register now to avoid missing this opportunity!

Official Nutanix post: https://next.nutanix.com/education-blog-153/announcing-nca-ncp-mci-v6-10-get-certified-for-free-with-limited-time-offer-43744?blaid=6791730


This is a question I am regularly asked: “with which cluster do you perform your tests for your articles?”. So here is what my Nutanix homelab looks like…

My network infrastructure

Before talking about my Nutanix cluster, I will present my home infrastructure that I installed 4 years ago when the house was built.

I based my network infrastructure on Ubiquiti equipment. The hardware is very good: silent, robust, easy to use… but the configuration is quite particular, and far from what we are used to practicing daily in a data center.

So I set up:

a Ubiquiti Dream Machine Pro for the entire network / filtering part with

  • 2 SFP+ 10Gb ports
  • 8 1Gb ports

a Ubiquiti USW Pro 24 PoE switch which has

  • 2 10Gb SFP+ ports
  • 24 1Gb PoE ports

Ubiquiti Flex switches in various parts of the home

My internet access is currently a connection from Free (the French ISP), based on the Freebox Delta, which offers me a theoretical speed of 10 Gb/s:

On the program:

  • a 10Gb fiber arrival
  • 4 RJ45 ports in 1Gb (B, C, D, E)
  • 1 SFP+ 10Gb port (F)

A well-stocked box to allow latency-free Internet access.

The network topology therefore looks like this:

As you can see, there is no professional-grade infrastructure at home. So if you only have consumer-grade equipment, don’t hesitate to get started; it will do the trick.

My Nutanix cluster

My Nutanix cluster is nothing exceptional, it is quite old hardware since it is based on an Intel S2600WTTR chassis that was launched by the manufacturer in 2016!

Link to the technical sheet: https://www.intel.fr/content/www/fr/fr/products/sku/88281/intel-server-board-s2600wttr/specifications.html

I recovered it from a previous job: the cluster had suffered a hardware failure that the organization did not want to repair, given the age of the hardware and the fact that the existing infrastructure was being replaced by brand-new Nutanix clusters.

I carried out the repair at my own expense so that the server would be operational again. In terms of the physical installation, the cluster is not allowed to stay in the house (due to noise), so it is in the garage, installed in an unconventional way:

The hardware configuration of my cluster is as follows:

  • 2 × Intel Xeon E5-2640 v4 @ 2.4 GHz processors
  • 384 GB of RAM
  • 1 × 120 GB SSD for the OS
  • 4 × 800 GB SAS SSDs
  • 6 × 1.6 TB SAS HDDs
  • 2 × 10 Gb RJ45 network ports

This hardware configuration allows me to have disk redundancy. This is not the ideal scenario, but it is already much better than no redundancy at all. The amount of CPU / RAM allows me to run a large number of virtual machines without degrading performance:

If I had to make a hardware change to my cluster, I think I would opt for a 10 Gb fiber network card in order to have a 10 Gb connection from one end of the chain to the other: Internet > Router > Firewall > Switch > Cluster.

The cluster currently runs Nutanix CE 2.1 with the latest available versions:

This allows me to test the latest features and run configuration experiments, and it also supports the writing of all the blog articles, each subject obviously being tested on the lab before publication.

Although the Lab is essential for writing my articles, I do not leave it on permanently because it consumes a significant amount of electricity as I already mentioned in one of my previous articles.

That’s my infrastructure, hoping that it makes you want to get started and set up your own Nutanix CE cluster.


About ten days ago, the latest version of Prism Central was released: pc.2024.2.

While trying to update my Prism Central this weekend, I noticed that this version is not offered via the LCM…

But that’s not what’s going to stop me…

If you haven’t installed Prism Central, I invite you to read my other dedicated article: Deploy Prism Central on Nutanix CE 2.1

Retrieving the update package

To retrieve the pc.2024.2 version of Prism Central, you need to connect to the Nutanix portal, in the “Prism Central” section of the “Downloads” menu: https://portal.nutanix.com/page/downloads?product=prism

Then, in the list of files, look for the “Prism Central LCM Bundle” file:

Wait for the download, which may take a while (nearly 10 GB to retrieve!).

Update file transfer

Once the update file has been retrieved, it must now be transferred to the cluster. To do this, connect to your Prism Central interface and go to the “LCM” menu of the Admin Center. Open the “Direct Upload” menu:

Then click on “+ Upload Bundle” and select the file you just downloaded:

Wait for the upload to complete. Once the operation is finished, the update package should appear in the list:

Prism Central Update

To install the newly transferred update, if you have followed the previous steps correctly, you should find it in the “Updates” menu:

Check the update, click on “View Upgrade Plan”:

Check the proposed update plan and click on “Apply 1 Updates”:

The update process is launched; all that remains is to wait a few tens of minutes for it to complete:

Installing the update on your cluster will have no impact on your production environment; however, you will not be able to access Prism Central for the duration of the update. You can follow the progress of the operations from the Prism Central VM console:

After about thirty minutes, the update installation should be complete and your Prism Central updated to pc.2024.2:

Allow another fifteen minutes before it is fully operational.
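Once it is back, you can also confirm the version from the PCVM command line; a minimal check, assuming ncli is available as usual on the PCVM:

```shell
# Display cluster details and filter for the version line,
# which should now report pc.2024.2.
ncli cluster info | grep -i version
```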


As I mentioned in one of the articles of my ultimate Nutanix Move guide, for Nutanix Move versions later than 5.3.0, integrating the VDDK files is now a mandatory step to be able to add a VMware ESXi cluster in Move.

What is VDDK?
VDDK (VMware Virtual Disk Development Kit) is a set of libraries and utilities provided by VMware that allows applications to perform operations on virtual disks.
These operations include creating, accessing, and managing virtual disks used by VMware environments.
Nutanix Move uses VDDK to perform migrations from VMware environments.
By using VDDK, Nutanix Move can:

  • Access VMware virtual disks
  • Create and manage snapshots
  • Transfer data efficiently


Why do I need to install VDDK manually (since Move 5.3)?
You need to install VDDK manually because it is required for migration and Nutanix Move cannot download it automatically.
This requirement is now common to other migration products that use VDDK.
The integration of VDDK with Nutanix Move has therefore been updated since version 5.3. This change is to align with how other vendors integrate VDDK into their migration tools.

What are the required VDDK versions for ESXi 5.x, 6.x, and 7.x?
For ESXi 5.1: VDDK 6.0.3 is required.
For other supported ESXi versions: VDDK 7.0.3.1 is required.
Note: In Move, if you add a vCenter instead of an ESXi host, VDDK 7.0.3.1 will be required.

Until now, an active account on the Broadcom website was necessary to download the VDDKs. The problem was that the website of VMware’s new owner has suffered a series of malfunctions in recent weeks, making it impossible to download these precious files.

Faced with the wave of discontent on social networks such as X or Reddit, an alternative download solution has been set up!

You can now retrieve the VDDK file you need at the following address: https://broadcom.ent.box.com/v/vddkdownloads
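After downloading, a quick sanity check of the archive never hurts. The filename below is only an example; adjust it to the bundle you actually retrieved:

```shell
# List the first entries of the VDDK tarball to confirm it is intact
# (example filename; adjust to your download).
tar -tzf VMware-vix-disklib-7.0.3.1-*.x86_64.tar.gz | head
```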

Great news for anyone who needs these files for their migration to Nutanix AHV!


It’s finally complete! After about fifty hours spent installing, configuring, testing a lab environment and writing each article, my ultimate Nutanix Move guide on migrating to Nutanix AHV is finally finished.

In total, this represents:

  • 6500+ words
  • 160+ screenshots
  • 50+ hours of work

This is clearly one of the most ambitious projects on my blog! To make it easier to find all of my current or future guides, I have created a dedicated link in the menu.

On the program:

Nutanix Move – Part 1: solution overview

Nutanix Move – Part 2: My migration environments

Nutanix Move – Part 3: Prerequisites

Nutanix Move – Part 4: Deployment

Nutanix Move – Part 5: Initial Setup

Nutanix Move – Part 6: Adding the VMware ESXi Cluster to Migrate

Nutanix Move – Part 7: Adding the Microsoft Hyper-V Cluster to Migrate

Nutanix Move – Part 8: Adding the Nutanix AHV Target Cluster

Nutanix Move – Part 9: ESXi to AHV Migration Plan

Nutanix Move – Part 10: Hyper-V to AHV Migration Plan

Nutanix Move – Part 11: Final Tips and Best Practices

Nutanix Move – Part 12: VM Migration

Nutanix Move – Part 13: Post-Migration Network Issues

Nutanix Move – Part 14: Post-Migration Boot Issues

The guide will probably evolve if I come across new interesting cases to share to expand my feedback.

Don’t forget that the success of your migration to Nutanix AHV will depend greatly on the preparation you do in advance.

You now have the keys to successfully migrating from Microsoft Hyper-V or VMware ESXi to Nutanix AHV.

And if you are still hesitant to take the plunge, don’t hesitate to share your questions and concerns, or to ask people who have already made the move for their feedback.

Other guides are coming soon… Stay tuned!


It may happen that you are hosting servers whose operating systems are not supported by Nutanix Move. In these rare cases, you will then have to proceed manually to migrate your virtual machines and you may be faced with post-migration boot issues.

For the demonstration, I chose a really old operating system: Ubuntu 12.04.

This version is not part of the list of systems supported by Nutanix Move for migration and the migration must therefore be done manually:

I set up a manual migration plan for this virtual machine and proceeded with its migration, which went smoothly:

Unfortunately, when starting the virtual machine on the Nutanix AHV side, I encounter a boot problem:

To fix this issue, I shut down the virtual machine, log into my Prism Central, and navigate to the “Compute and Storage > Images” menu:

I click on “Add Image”, select “VM Disk” as the image source and select my Ubuntu12 VM:

I name the disk with an explicit name and click on “Next”:

I leave the image placement option in its default configuration and save:

My disk image will appear in the list of images available on my Nutanix cluster:

I then go back to the “Compute and Storage > VMs” menu and open the control panel of my virtual machine:

The part that interests us concerns the machine’s two disk devices: a virtual disk and a CD-ROM drive:

The first step is to delete both disks by clicking on the trash icon:

Then, once the 2 disks are deleted, click on “Attach Disk”:

Configure the disk as follows:

  • 1 – Type: Disk
  • 2 – Operation: Clone From Image
  • 3 – Image: select the disk image you added earlier
  • 4 – Capacity: any size at least as large as the original disk
  • 5 – Bus Type: select PCI

Validate the new configuration of your virtual machine and boot it. The boot problem is now fixed, and the operating system boots perfectly fine:

Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)

 * Documentation: https://help.ubuntu.com/

  System information as of Thu Sep 26 15:58:58 CEST 2024

  System load:  0.0               Processes:           67
  Usage of /:   3.5% of 37.68GB   Users logged in:     1
  Memory usage: 2%                IP address for eth0: 192.168.2.137
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/

0 packages can be updated.
0 updates are security updates.

Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife

New release '14.04.6 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Thu Sep 26 15:58:05 2024
nutanix@ubuntu24hv:~$

The main problem encountered when migrating these obsolete systems is a boot issue. By following this method, you should be able to migrate all your virtual machines without any problems.
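If you want to double-check the result from inside the guest, generic Linux commands will show the disk and its bus:

```shell
# Run inside the migrated VM: list block devices, then the PCI
# storage controllers the disk is now attached to.
lsblk -d -o NAME,SIZE,TYPE
lspci | grep -i -E 'storage|scsi'
```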
