Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

With only .NEXT Barcelona to break the routine since the beginning of the year, 2024 seemed to be passing peacefully… but that was without counting on the last quarter, where events are coming thick and fast…

Nutanix Partner Tech Summit in Paris

We are therefore starting this final stretch of the year before 2025 with the Nutanix Partner Tech Summit which will take place on October 9 and 10, 2024 in Paris at HPE’s premises.

The PTS is an opportunity for all Nutanix partners to meet at an event dedicated to them, which will allow them to:

  • attend around ten mini-conferences and super-sessions on Nutanix’s flagship technologies
  • talk with Nutanix’s pre-sales teams
  • deepen their knowledge through the Labs made available to them
  • pass the certifications they are missing

The entire .mikadolabs team will be on deck to take advantage of the event and discover the latest technological innovations in the Nutanix ecosystem.

NTC Tech Summit in San José, California

Just returned from Paris, I will have to pack my suitcase again to participate in an exclusive event reserved for Nutanix Technology Champions which will take place for the very first time at Nutanix headquarters in San José, California, on October 20 and 21, 2024.

This event, initiated by Angelo Luciani, the NTC program manager at Nutanix, will be an exceptional opportunity to visit the vendor’s offices and talk with the teams during privileged exchanges.

It is an incredible opportunity to be invited to this event and I look forward to meeting my esteemed NTC colleagues again during these 2 days which promise to be memorable in every way.

.NEXT on Tour Paris 2024

Finally, on December 10, 2024, .NEXT on Tour Paris will take place in Paris. The event is intended as a slightly lighter version of the .NEXT held in Barcelona in May, a chance for the vendor to share its innovations with everyone who was unable to travel to Spain.

On the program: many partner and technical presentations. The .mikadolabs team will once again be on deck to meet customers and prospects and share our experience with Nutanix technology.

I hope to see you at one of these events; if you see me, don’t hesitate to come and say hello!

Read More

Since not everyone is lucky enough to have clusters at work on which to run their tests, building your own homelab for testing, experimenting, or even hosting services can be a serious alternative. But it is never easy, and it can often be expensive…

What is it for?

Having your own homelab running Nutanix CE is one thing; having a use for it is another. The first question to ask yourself is: a homelab to do what?

Is the purpose of your cluster to carry out tests with ephemeral virtual machines to improve your skills and discover new technologies? Or to host services for you, your family and other people? Will you need to back it up? Or to have redundancy? Do you have a lot or on the contrary little space to host it?

These are all questions to ask yourself in advance. It is obvious that depending on the answers to all these questions, the architecture and sizing of your homelab will not be the same. Another element to take into account: the electricity consumption which can represent a significant operating cost.
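
To give an idea of the stakes, here is a back-of-the-envelope estimate of that operating cost. The figures are purely illustrative assumptions (150 W average draw, 0.25 EUR/kWh); substitute your own measurements and tariff:

```shell
# Back-of-the-envelope electricity cost for a homelab node.
# Assumed values - substitute your own measurements and tariff.
watts=150              # average power draw
eur_per_kwh="0.25"     # electricity price
hours_per_month=720    # 24 h x 30 days

kwh_per_month=$(( watts * hours_per_month / 1000 ))
cost=$(awk -v k="$kwh_per_month" -v p="$eur_per_kwh" 'BEGIN{printf "%.2f", k*p}')

echo "~${kwh_per_month} kWh/month, ~${cost} EUR/month"
```

At those assumed figures, that is about 27 EUR per month, a good reason to think about sizing and uptime before buying anything.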

For my part, the main appeal of having my homelab on Nutanix is being able to test new things and improve my skills on a technology I particularly like.

Sizing your homelab

Depending on your answers to the various questions about the intended use of your homelab, you should start to have an idea of the architecture and sizing you need.

In order to give a little more substance to the sizing part, it is important to take a look at the prerequisites necessary for installing a 1-node cluster under Nutanix CE 2.1:

  • a processor with at least 4 cores and hardware virtualization support
  • 32 GB of RAM (64 GB recommended)
  • a 1 Gb network card
  • 1 disk of at least 32 GB for the hypervisor
  • 2 disks for the data
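
A quick way to sanity-check a candidate machine against these minimums is a small shell sketch, run on the target hardware booted into any Linux (the thresholds simply mirror the list above; the disk layout still needs to be checked by hand):

```shell
# Pre-flight check against the Nutanix CE 2.1 minimums listed above.
cores=$(nproc)
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
mem_gb=$(( mem_kb / 1024 / 1024 ))
# vmx = Intel VT-x, svm = AMD-V
virt_flags=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)

[ "$cores" -ge 4 ]      && echo "CPU cores: $cores (OK)"    || echo "CPU cores: $cores (too few)"
[ "$mem_gb" -ge 32 ]    && echo "RAM: ${mem_gb} GB (OK)"    || echo "RAM: ${mem_gb} GB (below minimum)"
[ "$virt_flags" -gt 0 ] && echo "Virtualization: supported" || echo "Virtualization: not detected"
```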

If you plan to set up a cluster with several nodes, be aware that each node in the same cluster will have to be similar in terms of configuration.

Obviously, the more virtual machines you want to host or features you want to activate on Nutanix, the more resources you will need, thus increasing costs.

Setting up your homelab

Option 1: recovery

An often forgotten option for setting up your homelab is the recovery of old hardware to create a new infrastructure under Nutanix CE. Indeed, it often happens that companies get rid of their old equipment by simply throwing it away or reselling it at a low price.

This is often an ideal opportunity to recover an old server capable of running Nutanix CE 2.1, even if it means combining several servers into a single, more capable one.

For my part, this is the option I chose for my Nutanix CE 2.1 cluster because I was lucky enough to be in this situation during one of my previous professional experiences.

Option 2: used servers

If a company decides to resell its old equipment to brokers, you can then find these servers on resale sites specializing in refurbished hardware.

There are plenty of them on the Internet, with quite varied stocks and prices.

These sites allow you to build some pretty beefy configurations for a homelab at often reasonable prices. For example, a SuperMicro chassis can be negotiated for less than 600 euros:

On the configuration side, we will find:

  • an Intel Xeon E5-2697A 16-core processor @ 2.60GHz
  • 128 GB of RAM
  • 4 × 512 GB SSDs
  • onboard RJ45 network ports
  • a dual power supply

The only constraint is adding a SATA disk of about 64 GB for the hypervisor (count about 40 euros) if you do not want to waste a 512 GB disk on it. Note that you can even upgrade the configuration later by adding disks or RAM.

The 2 big drawbacks of this type of server are:

  • the noise (good luck negotiating to keep it in the house)
  • the rack format, which is somewhat restrictive to install

Option 3: NUC type PCs

With their small footprint and the silence that often characterizes them, NUCs are ideal candidates for a homelab at home. While you can find NUCs with processors capable of running decent infrastructures, it is RAM and storage where things get tight.

Indeed, RAM is usually limited to 32 GB with no possibility of expansion, which can quickly prove insufficient depending on the use of your cluster. On the storage side, most NUCs only offer one additional port for a second disk on top of the embedded one, where Nutanix requires three. A workaround is to deploy the hypervisor on a fast USB key connected via USB 3.

The other disadvantage is the cost of this type of machine, often on par with a used server for a much lower hardware configuration, although with lower power consumption.

Option 4: Assembled PC

The last viable option in my eyes is assembling a more traditional PC from A to Z. This will allow you to select each component of your server and thus be able to have a truly personalized cluster.

From a full tower case to a mini-ITX case, you will also have a choice of format, which can be practical if you only have a small space for your equipment (for example, the hallway cupboard where the Internet box lives).

In terms of cost, depending on the configuration chosen, it should not be much higher than that of a refurbished server or a NUC with moderate power consumption.

Conclusion

I hope you now see more clearly the path that will lead you to starting your homelab. Be aware that unless you have substantial financial means, there is no miracle solution for setting up a lab; it is often a matter of opportunities presenting themselves. Take the time to think carefully and explore every avenue before you start.

Read More

This is a question I am regularly asked: “with which cluster do you perform your tests for your articles?”. So here is what my Nutanix homelab looks like…

My network infrastructure

Before talking about my Nutanix cluster, I will present my home infrastructure, which I installed 4 years ago when the house was built.

I based my network infrastructure on Ubiquiti equipment. The hardware is very good: silent, robust, easy to use… but the configuration is quite particular, far from what we are used to practicing daily in a data center.

So I set up:

a Ubiquiti Dream Machine Pro for the entire network / filtering part with

  • 2 SFP+ 10Gb ports
  • 8 1Gb ports

a Ubiquiti USW Pro 24 PoE switch which has

  • 2 10Gb SFP+ ports
  • 24 1Gb PoE ports

Ubiquiti Flex switches in various parts of the home

My internet access is currently a Free connection which relies on the Freebox Delta and which offers me a theoretical speed of 10Gb/s:

On the program:

  • a 10Gb fiber arrival
  • 4 RJ45 ports in 1Gb (B, C, D, E)
  • 1 SFP+ 10Gb port (F)

A well-equipped box that allows latency-free Internet access.

The network topology therefore looks like this:

As you can see, nothing professional-grade at home, so if you only have consumer equipment, don’t hesitate to get started: it will do the trick.

My Nutanix cluster

My Nutanix cluster is nothing exceptional, it is quite old hardware since it is based on an Intel S2600WTTR chassis that was launched by the manufacturer in 2016!

Link to the technical sheet: https://www.intel.fr/content/www/fr/fr/products/sku/88281/intel-server-board-s2600wttr/specifications.html

I recovered it during a previous professional experience: the cluster had suffered a hardware failure that the organization did not want to deal with, given the age of the hardware and the fact that the existing infrastructure was being replaced by brand-new Nutanix clusters.

I carried out the repair at my own expense to get the server operational again. As for the physical installation, the cluster is not allowed in the house (because of the noise), so it sits in the garage, installed in a rather unconventional way:

The hardware configuration of my cluster is as follows:

  • 2 Intel Xeon E5-2640 v4 processors @ 2.4GHz
  • 384 GB of RAM
  • 1 × 120 GB SSD for the OS
  • 4 × 800 GB SAS SSDs
  • 6 × 1.6 TB SAS HDDs
  • 2 × 10 Gb RJ45 network ports

This hardware configuration gives me disk redundancy. It is not the ideal scenario, but it is already much better than no redundancy at all. The amount of CPU / RAM allows me to run a large number of virtual machines without degrading performance:

If I had to make a hardware change to my cluster, I think I would opt for a 10Gb fiber network card in order to have a 10Gb connection from one end of the chain to the other: Internet > Router > Firewall > Switch > Cluster.

The cluster currently runs Nutanix CE 2.1 in the latest available versions:

This allows me to test the latest features, to perform configuration tests and it also serves as a support for writing all the blog articles, each subject covered being obviously tested on the Lab before publication.

Although the Lab is essential for writing my articles, I do not leave it on permanently because it consumes a significant amount of electricity as I already mentioned in one of my previous articles.

That’s my infrastructure, hoping that it makes you want to get started and set up your own Nutanix CE cluster.

Read More

About ten days ago, the latest version of Prism Central was released: pc.2024.2.

While trying to update my Prism Central this weekend, I noticed that this version is not offered via the LCM…

But that’s not what’s going to stop me…

If you haven’t installed Prism Central, I invite you to read my other dedicated article: Deploy Prism Central on Nutanix CE 2.1

Retrieving the update package

To retrieve the pc.2024.2 version of Prism Central, you need to connect to the Nutanix portal, in the “Prism Central” section of the “Downloads” menu: https://portal.nutanix.com/page/downloads?product=prism

Then, in the list of files, look for the “Prism Central LCM Bundle” file:

Wait for the download to complete; it may take a while (nearly 10 GB to retrieve!).
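
Before transferring such a large file, it is worth checking that the download is intact. A possible sketch; the file name and checksum below are placeholders to replace with the real values shown on the portal download page:

```shell
# Verify the downloaded bundle before uploading it to Prism Central.
# Both values are placeholders: use the actual file name and the SHA-256
# checksum published next to the download on the Nutanix portal.
bundle="pc.2024.2-lcm-bundle.tar.gz"
expected_sha256="PASTE_CHECKSUM_FROM_PORTAL_HERE"

actual_sha256=$(sha256sum "$bundle" 2>/dev/null | awk '{print $1}')
if [ "$actual_sha256" = "$expected_sha256" ]; then
    echo "Checksum OK - safe to upload"
else
    echo "Checksum mismatch or file missing - re-download the bundle"
fi
```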

Update file transfer

Once the update file has been retrieved, it must now be transferred to the cluster. To do this, connect to your Prism Central interface and go to the “LCM” menu of the Admin Center. Open the “Direct Upload” menu:

Then click on “+ Upload Bundle” and select the file you just downloaded:

Wait until the upload is complete. Once the operation finishes, the update package should appear in the list:

Prism Central Update

To install the newly transferred update, if you have followed the previous steps correctly, you should find it in the “Updates” menu:

Select the update and click on “View Upgrade Plan”:

Review the proposed update plan and click on “Apply 1 Updates”:

The update process is launched; you just have to wait a few dozen minutes for the update to complete:

Installing the update will have no impact on your production workloads; however, you will not be able to access Prism Central for the duration of the update. You can follow the progress of the operations from the Prism Central VM console:

After about thirty minutes, the update installation should be complete and your Prism Central updated to pc.2024.2:

Allow another fifteen minutes before it is fully operational.

Read More

As I mentioned in one of the articles of my ultimate guide on Nutanix Move: for Nutanix Move versions higher than 5.3.0, integrating the VDDK files is now a mandatory step to be able to add a VMware ESXi cluster in Move.

What is VDDK?
VDDK (VMware Virtual Disk Development Kit) is a set of libraries and utilities provided by VMware that allows applications to perform operations on virtual disks.
These operations include creating, accessing, and managing virtual disks used by VMware environments.
Nutanix Move uses VDDK to perform migrations from VMware environments.
By using VDDK, Nutanix Move can:

  • Access VMware virtual disks
  • Create and manage snapshots
  • Transfer data efficiently


Why do I need to install VDDK manually (since Move 5.3)?
You need to install VDDK manually because it is required for migration and Nutanix Move cannot download it automatically.
This requirement is now common to other migration products that use VDDK.
The integration of VDDK with Nutanix Move has therefore been updated since version 5.3. This change is to align with how other vendors integrate VDDK into their migration tools.

What are the required VDDK versions for ESXi 5.x, 6.x, and 7.x?
For ESXi 5.1: VDDK 6.0.3 is required.
For other supported ESXi versions: VDDK 7.0.3.1 is required.
Note: In Move, if you add a vCenter instead of an ESXi host, VDDK 7.0.3.1 will be required.
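
This mapping is small enough to capture in a helper. A sketch (the function name is mine, and only the cases stated above are handled):

```shell
# Return the VDDK version Nutanix Move expects for a given source,
# following the mapping described above.
vddk_for_source() {
    case "$1" in
        5.1)         echo "6.0.3"   ;;  # ESXi 5.1 is the exception
        5.*|6.*|7.*) echo "7.0.3.1" ;;  # other supported ESXi versions
        vcenter)     echo "7.0.3.1" ;;  # adding a vCenter always needs 7.0.3.1
        *)           echo "unknown" ;;
    esac
}

vddk_for_source 5.1       # ESXi 5.1
vddk_for_source 6.7       # a later ESXi release
vddk_for_source vcenter   # vCenter endpoint
```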

To retrieve these files, an active account on the Broadcom website was until now necessary. The problem: the website of VMware’s new owner has suffered a series of malfunctions in recent weeks, making it impossible to download these precious files.

Faced with the wave of discontent on social networks such as X or Reddit, an alternative download solution has been set up!

You can now retrieve the VDDK file you need at the following address: https://broadcom.ent.box.com/v/vddkdownloads

Great news for anyone who needs these files for their migration to Nutanix AHV!

Read More

It’s finally complete! After about fifty hours spent installing, configuring, and testing a lab environment, and writing each article, my ultimate Nutanix Move guide on migrating to Nutanix AHV is finished.

In total, this represents:

  • 6500+ words
  • 160+ screenshots
  • 50+ hours of work

This is clearly one of the most ambitious projects on my blog! To make it easier to find all of my current or future guides, I have created a dedicated link in the menu.

On the program:

Nutanix Move – Part 1: solution overview

Nutanix Move – Part 2: My migration environments

Nutanix Move – Part 3: Prerequisites

Nutanix Move – Part 4: Deployment

Nutanix Move – Part 5: Initial Setup

Nutanix Move – Part 6: Adding the VMware ESXi Cluster to Migrate

Nutanix Move – Part 7: Adding the cluster to migrate Microsoft Hyper-V

Nutanix Move – Part 8: Adding the Nutanix AHV Target Cluster

Nutanix Move – Part 9: ESXi to AHV Migration Plan

Nutanix Move – Part 10: Hyper-V to AHV Migration Plan

Nutanix Move – Part 11: Final Tips and Best Practices

Nutanix Move – Part 12: VMs Migration

Nutanix Move – Part 13: Post-Migration Network Issues

Nutanix Move – Part 14: Post-Migration Boot Issues

The guide will probably evolve as I come across new interesting cases to share and expand my feedback.

Don’t forget that the success of your migration to Nutanix AHV will depend greatly on the preparation you do in advance.

You now have the keys to successfully migrating from Microsoft Hyper-V or VMware ESXi to Nutanix AHV.

And if you are still hesitant to take the plunge, do not hesitate to come and share your questions, your fears or even ask other people who have already been there to give you their feedback.

Other guides are coming soon… Stay tuned!

Read More

It may happen that you are hosting servers whose operating systems are not supported by Nutanix Move. In these rare cases, you will then have to proceed manually to migrate your virtual machines and you may be faced with post-migration boot issues.

For the demonstration, I chose a really old operating system: Ubuntu 12.04.

This version is not part of the list of systems supported by Nutanix Move for migration and the migration must therefore be done manually:

I have set up a manual migration plan for this virtual machine and I am proceeding with its migration which is going smoothly:

Unfortunately, when starting the virtual machine on the Nutanix AHV side, I encounter a boot problem:

To fix this issue, I shut down the virtual machine, log into my Prism Central, and navigate to the “Compute and Storage > Images” menu:

I click on “Add Image”, select “VM Disk” as the image source and select my Ubuntu12 VM:

I name the disk with an explicit name and click on “Next”:

I leave the image placement option in its default configuration and save:

My disk image will appear in the list of images available on my Nutanix cluster:

I then go back to the “Compute and Storage > VMs” menu and open the control panel of my virtual machine:

The part that interests us concerns the machine’s 2 disks: a virtual disk and a CD-ROM drive:

The first step is to delete both disks by clicking on the trash icon:

Then, once the 2 disks are deleted, click on “Attach Disk”:

Configure the disk as follows:

  • 1 – Type: Disk
  • 2 – Operation: Clone From Image
  • 3 – Image: select the disk image you added initially
  • 4 – Capacity: you can make it any size you want
  • 5 – Bus Type: select PCI

Validate the new configuration of your virtual machine and boot it. The boot problem is now fixed, and the operating system boots perfectly fine:

Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Thu Sep 26 15:58:58 CEST 2024

  System load:  0.0               Processes:           67
  Usage of /:   3.5% of 37.68GB   Users logged in:     1
  Memory usage: 2%                IP address for eth0: 192.168.2.137
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/

0 packages can be updated.
0 updates are security updates.

Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife

New release '14.04.6 LTS' available.
Run 'do-release-upgrade' to upgrade to it.

Last login: Thu Sep 26 15:58:05 2024
nutanix@ubuntu24hv:~$

The main problem encountered when migrating these obsolete systems is a boot issue. By following this method, you should be able to migrate all your virtual machines without any problems.

Read More

It may happen that you host servers whose operating systems are not supported by Nutanix Move. In these rare cases, you will then have to proceed manually to migrate your virtual machines and you may encounter post-migration network issues.

In my example, I deployed on my ESXi an Ubuntu Server machine in version 24.04 LTS, an operating system released only a few months ago. It is not yet officially supported by Nutanix Move according to the documentation:

When creating my migration plan, I did not encounter any errors; I was able to validate the options without any problems in automatic mode, and I started it immediately.

Unfortunately, the automatic preparation of the virtual machine fails:

The message is clear, Nutanix Move cannot install the necessary drivers:

Drivers Installation Failed. Driver(s) virtio_scsi could not be installed for kernel 6.8.0-45-generic

So I delete my migration plan, create a new one manually and shut down the virtual machine without running the scripts:

I then start my migration plan to migrate the virtual machine to my AHV cluster and everything goes well:

Once the data transfer is complete, I launch the cutover immediately. The server is migrated:

On the Nutanix AHV side, I can clearly see my freshly migrated virtual machine:

I start the virtual machine to check that the migration went well. The VM starts correctly, but it does not obtain an IP address…

This problem is common with recent operating systems. As a general rule, they include the modules and components necessary for full Nutanix support. However, it is common for the network card configuration not to be carried over correctly after migration.

To correct the malfunction, you must open a console on your VM and perform the configuration manually.

We start by retrieving the name of the interface:

nutanix@ubuntu24e1:~$ sudo lshw -class network
[sudo] password for nutanix:
  *-network
       description: Ethernet controller
       product: Virtio network device
       vendor: Red Hat, Inc.
       physical id: 3
       bus info: pci@0000:00:03.0
       version: 00
       width: 64 bits
       clock: 33MHz
       capabilities: msix bus_master cap_list rom
       configuration: driver=virtio-pci latency=0
       resources: irq:11 ioport:c040(size=32) memory:febd1000-febd1fff memory:fe000000-fe003fff memory:feb80000-febbffff
     *-virtio0
       description: Ethernet interface
       physical id: 0
       bus info: virtio@0
       logical name: ens3
       serial: 50:6b:8d:c0:82:7b
       capabilities: ethernet physical

Here it is ens3. Now we will modify the network card configuration file:

nutanix@ubuntu24e1:~$ sudo vi /etc/netplan/50-cloud-init.yaml

# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  ethernets:
    ens160:
      dhcp4: true
  version: 2

Replace “ens160” with the correct name of your network card (here, ens3), save the file, and apply the configuration:

sudo netplan apply

That’s it: the machine obtains an IP address.
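
Note that, as the header of the generated file warns, this edit will not persist across a reboot on a cloud-init managed instance. To make it permanent, the same header suggests creating /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg containing:

```yaml
network: {config: disabled}
```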

You get the idea: my example is based on Ubuntu but is reproducible on other recent Linux distributions, as long as you adapt the commands to the operating system.

Read More

Now that everything is ready, it is time to migrate the virtual machines from the old Hyper-V and ESXi clusters to the new Nutanix AHV cluster using the migration plans previously created.

The different states

A migration plan can be in different distinct states depending on the stage of the migration:

  • Not Started: the migration plan has been created but not yet started.
  • In Progress: the migration plan is running; data is being replicated.
  • Ready to Cutover: the data migration is complete; Nutanix Move continues to synchronize changes while the virtual machines wait to be switched to the target cluster.
  • Paused: you have paused the process for some reason; data migration is suspended.
  • Failed: an error occurred, generally while preparing the machines; the anomaly must be corrected to resume operations.
  • Completed: the virtual machines have been successfully migrated.

Starting migrations

If, like me, you have scheduled one of your migration plans to start automatically, you should already have virtual machines in “Ready to Cutover” status.

This means that they are ready to complete their migration.

For other pending migration plans, you need to start them manually. To do this, check the box in front of the migration plans you want to start, click on the “Action” menu at the top of the list and click on “Start”:

The migration plan starts to execute:

If you followed all the steps correctly, the migration plan should go smoothly. You can track the progress of the plan on the corresponding line, and of the virtual machines in the boxes at the very top:

By clicking on “2 VMs” in the banner, you get a step-by-step view for each virtual machine:

You can get more details on the operations carried out in the “Events” menu at the top right of the interface:

Synchronizing the data with the new cluster is the longest step of the process and will depend on the volume of your virtual machines.

The cutover

The cutover is the operation that will allow you to finalize the migration from the old cluster to the new one.

You can only perform a Cutover on virtual machines that are in the “Ready to Cutover” state:

To check the status of the virtual machines and proceed with the cutover, click on the “2 VMs” in the “Ready to cutover” frame:

You will then have the list of VMs ready to switch to the new cluster. You can migrate them all at once or one by one; it’s up to you. I will migrate Ubuntu_4 by checking the box at the beginning of the line and clicking on “Cutover”:

Validation is required to start the process:

The cutover process only takes a few minutes, during which Nutanix Move will:

  • Power off the VM
  • Create a final snapshot
  • Synchronize it with the target cluster
  • Create the target VM
  • Clean up the source VM (disconnect network cards)
  • Delete all snapshots created by Move
  • Consolidate the virtual machine disks
  • Clean up the target VM

The migration status for this virtual machine then changes to “Completed” and I find it on my Nutanix AHV, started and functional:

The virtual machine is successfully migrated; I just have to do the same for all the others.

Read More

You thought it was over and that we would go straight to the migration? Well, not quite! I still have a few things to say and share with you before: my final tips and best practices based on my experience!

Careful preparation

The first and I think most important advice I can give you is to prepare your migration well. List your virtual machines, identify the installed operating systems, check that they are up to date and that the migration prerequisites are met…

It takes time, but it is the key to a successful migration every time.

Measured migration plans

My second piece of advice is to create migration plans of a reasonable size. While the software limit is up to 100 virtual machines per migration plan, I advise limiting yourself to a maximum of about twenty machines.

This allows you to better manage your migration, reduces the number of potential errors, lets you correct problems more quickly if you encounter them after the virtual machines have been switched over, and limits the risk of prolonged service interruption.
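
As an illustration of this batching advice, a flat VM inventory can be split into plans of at most twenty with standard tools (file names here are arbitrary; your inventory will come from your own preparation work):

```shell
# Build a demo inventory of 45 VMs, one name per line.
printf 'vm%02d\n' $(seq 1 45) > vm_inventory.txt

# Split it into migration batches of at most 20 VMs each.
split -l 20 -d vm_inventory.txt migration_plan_

wc -l migration_plan_*   # two full batches of 20, plus one of 5
```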

Homogeneous migration plans

As far as possible, try to create migration plans grouping virtual machines that are similar in terms of operating system. Again, in the event of a problem during the migration, it will always be easier to look for a common error on several servers with the same operating system than between disparate servers.

Take your time

Above all, do not rush! It is better to take your time and make a smooth migration, rather than rushing, encountering problems, being forced to backtrack, etc. Take the time to build your migration plans, to prepare them well, to configure them in their smallest details so that they go smoothly.

Some good practices to follow

Here are some good practices to follow that will allow you to avoid a certain number of inconveniences. There are probably others that I have not mentioned but this already constitutes a solid base:

  • Check the compatibility of your operating systems.
  • Check that all the prerequisites are respected upstream.
  • Check that your clusters can communicate with each other.
  • Stop the backup jobs before starting your migrations.
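
The “clusters can communicate” check can be partially automated with a simple TCP reachability sketch. The host name and ports below are placeholders; check the Move documentation for the exact port matrix of your source hypervisor:

```shell
# Probe a source host on a few TCP ports from the Move appliance's network.
# Host and ports are examples only - take the real list from the Move docs.
src_host="esxi01.lab.local"
for port in 443 902; do
    if timeout 3 bash -c "echo > /dev/tcp/$src_host/$port" 2>/dev/null; then
        echo "$src_host:$port reachable"
    else
        echo "$src_host:$port NOT reachable"
    fi
done
```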

If in doubt, refer to the official documentation: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_4:Nutanix-Move-v5_4

Now, you are ready to finalize the migration of your virtual machines…

Read More