Team Leader - Nutanix Technology Champion - Nutanix NTC Storyteller

Julien DUMUR
Infrastructure in a Nutshell

Updating a hyperconverged cluster can be time-consuming and carries a risk of production interruption if the process is poorly managed.

Nutanix has streamlined the update process for its clusters to make it as simple and automated as possible: the famous “1-click upgrade”.

Life Cycle Manager on Prism Element

LCM has slight differences between Prism Element and Prism Central. This is what the interface looks like on Prism Element:

LCM on Prism Element allows you to manage updates for some of the bricks in your cluster:

  • AHV
  • AOS
  • Cluster Maintenance Utilities
  • Files
  • Flow
  • Foundation
  • Licensing
  • NCC

These are the bricks that you can update through Prism Element.

Life Cycle Manager on Prism Central

LCM on Prism Central allows you to manage updates for the remaining bricks, which are mainly software components:

Life Cycle Manager: inventory

The LCM Inventory, whether on Prism Element or Prism Central, allows you to list all the software and hardware versions installed on your cluster, as well as any available updates or firmware:

The inventory process takes around ten minutes:

It then allows access to all installed and available versions:

LCM: the recommended update order

With the multitude of software bricks on top of the hardware layer, it is not always easy to know in which order to update the different modules.

The first step of updating your cluster takes place on Prism Central:

The actions to be carried out in order:

  • LCM inventory
  • NCC Check and Upgrade
  • Prism Central Upgrade

You must then switch to Prism Element for the second step:

The actions to be carried out in order:

  • LCM inventory
  • NCC Check and Upgrade
  • Foundation Upgrade
  • AOS Upgrade
  • Firmware Upgrade
  • AHV Upgrade

It is recommended to do another LCM inventory once the AHV update is complete to verify that there are no hardware updates remaining to be applied.

Finally comes the last step, again on Prism Central:

The actions to be carried out in order:

  • LCM inventory
  • All software updates (Nutanix Files, Self-Services (Calm), NKE (Karbon), NDB, Flow…)

To carry out the desired updates, simply tick them, then click on “View upgrade plan”:

Once LCM has built the upgrade plan, you must click one last time to start the process:

Each step of the process takes time, because the cluster runs numerous checks at every stage to verify that the installed updates are compliant:

Note that, with the exception of certain software bricks, the cluster update process does not cause a service outage as long as fault-tolerance best practices are respected.

Official Nutanix documentation

Official documentation: https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v6_5:upg-upgrade-recommended-order-t.html

Read More

Following the takeover of VMware by the giant Broadcom and the subsequent surge in prices, many customers are looking for alternative solutions. Unfortunately, it is not always easy to find your way around.

VMware vs Nutanix comparison

The most complicated thing, when you are used to a technical solution, is making a radical change.

Will you find all the features you use? What do the product names correspond to? What are the prospects for future development among competitors?

I took the time to compare the different VMware and Nutanix bricks:

I hope this will help you see things more clearly and shed light on a possible future choice.

Read More

Nutanix has a tool for automating the deployment and life cycle of applications: Nutanix Self-Service (formerly Calm).

I’ll show you how to deploy Nutanix Self-Service on your Nutanix cluster.

Nutanix Self-Service Overview

Self-Service (formerly Calm) streamlines application management, deployment, and scalability across hybrid clouds through self-service, automation, and centralized role-based governance.

Deploy Nutanix Self-Service

To deploy Nutanix Self-Service, you must have a functional Prism Central on your cluster. Indeed, almost all of Nutanix’s complementary building blocks are managed by Prism Central, so don’t look for it on Prism Element.

In the side menu, look for the “Services” section and click on “Calm” (the old name for Nutanix Self-Service):

Deployment is very simple: just click on “Enable App. Orchestration”:

The first box must be checked to deploy Self-Service; the second is optional but highly recommended, as it gives access to the online catalog and its plethora of ready-to-use blueprints.

Once you have made your choice, click on “Save” and wait around ten minutes while Self-Service deploys:

Once deployment is complete, a new Volume Group will be available on your Nutanix cluster:

That’s it, Nutanix Self-Service is deployed and ready to use:

Read More

The “admin” account of a Nutanix cluster can end up locked after too many authentication failures, leaving you unable to connect.

Most of the time, this happens after the admin password is changed on the cluster while other systems, such as Nutanix Move or HYCU, are still using the old one.

Here’s how to reset the password for the “admin” account of a cluster.

Remove the “admin” account from automated routines

To begin with, if you do not want the problem to recur, remove the cluster’s “admin” account from anything that can trigger the lockout: backup software, a Nutanix brick (Move, for example), or possibly a monitoring tool.

As a rule, never use the “admin” account of a cluster to connect a tool to the cluster.

Reset “admin” password

Connect via SSH, using the “root” account, to a CVM of the Nutanix cluster on which the account is locked.

Then enter the following command:

passwd admin

Enter the new password twice; the password is then reset.

Unlock the “admin” account

To unlock the “admin” account, enter the following command:

allssh sudo faillock --user admin --reset

The “admin” account is now unlocked.
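Put end to end, the whole procedure from a CVM looks like this. This is only a sketch: the IP address is illustrative, and the final faillock call (without --reset) simply lists any remaining failure records for the account.

```shell
# SSH to any CVM of the affected cluster (IP is illustrative)
ssh root@192.168.2.241

# Reset the "admin" password (you are prompted for it twice)
passwd admin

# Clear the failed-login counter for "admin" on every CVM
allssh sudo faillock --user admin --reset

# Optional check: without --reset, faillock lists recorded failures
sudo faillock --user admin
```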

Read More

As part of setting up labs on a Nutanix infrastructure, you may need to deploy a hypervisor (ESXi, Proxmox, Hyper-V, etc.) on top of the AHV hypervisor (Inception!).

You will then be confronted with this type of error message when installing ESXi for example (the form differs for other hypervisors, but the substance remains the same):

The processor will not be detected as having virtualization capabilities, so you will not be able to deploy a hypervisor… but it is possible to bypass this restriction.

Nutanix AHV: bypass processor restriction

I assume that the virtual machine on which you want to deploy a hypervisor is already created.

To bypass the processor restriction, we must connect to one of the CVMs in our cluster and modify our virtual machine with the acli vm.update command and the “cpu_passthrough” parameter:

acli vm.update VM_NAME cpu_passthrough=true

You will get the following message:

nutanix@NTNX-a64e778d-A-CVM:192.168.2.241:~$ acli vm.update VM_NAME cpu_passthrough=true
VM_NAME: pending
VM_NAME: complete

Please note: this command will only work if your virtual machine is powered off.
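If the VM is still running, you can power it off from the same CVM session before applying the change. A minimal sketch using acli's power commands (VM_NAME is a placeholder):

```shell
# Send an ACPI shutdown to the guest (acli vm.off forces power-off instead)
acli vm.shutdown VM_NAME

# Enable CPU passthrough so the guest sees the host CPU's
# virtualization features
acli vm.update VM_NAME cpu_passthrough=true

# Power the VM back on
acli vm.on VM_NAME
```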

Once the command has been applied, you can restart your installation… except for ESXi, which requires one more little trick!

Nutanix AHV: spoof the NIC type to install ESXi

To install a nested ESXi on Nutanix AHV and have it be fully functional, you also need its network adapters to present themselves as e1000 devices.

To do this, with the virtual machine still off, connect to one of the CVMs, and type the following command:

acli vm.nic_create VM_NAME network=NETWORK_NAME model=e1000

Be sure to replace VM_NAME with the name of the virtual machine concerned, and NETWORK_NAME with one of the networks previously created on your Nutanix cluster. You will get the following message:

nutanix@NTNX-a64e778d-A-CVM:192.168.2.241:~$ acli vm.nic_create VM_NAME network=NETWORK_NAME model=e1000
NicCreate: pending
NicCreate: complete

You can now restart the installation of your hypervisor.
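If the VM was created with a default virtio NIC, the ESXi installer will not see it. Assuming acli's usual vm.nic_* command family, you can list the existing NICs and remove the virtio one alongside adding the e1000 adapter; a sketch, with the MAC address as a placeholder:

```shell
# List the VM's current NICs (the default AHV model is virtio)
acli vm.nic_get VM_NAME

# Remove the virtio NIC, identified by its MAC address
acli vm.nic_delete VM_NAME 50:6b:8d:xx:xx:xx

# Add the e1000 NIC that the ESXi installer can recognize
acli vm.nic_create VM_NAME network=NETWORK_NAME model=e1000
```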

Read More

Nutanix Move: Introduction

With the surge in VMware license prices linked to the company’s acquisition by Broadcom, many customers are looking for alternatives to ESXi in order to avoid a hefty bill.

Among those alternatives there is obviously Nutanix, but customers are quite cautious about migrating from VMware ESXi to Nutanix AHV, a task which can be tedious if the VMs have to be migrated one by one.

This is where Nutanix Move comes in. Nutanix Move is a tool made available by Nutanix to facilitate the migration of virtual machines to any cloud.

What will interest us is mainly a migration from ESXi or Hyper-V to Nutanix AHV.

Nutanix Move: virtual machine download

To retrieve the virtual machine image, you must connect to the Nutanix Support portal: https://portal.nutanix.com/ then go to the “Downloads” section:

Click on “Move”, then click the “Move QCOW2 file for AHV” button:

Direct link to download page: https://portal.nutanix.com/page/downloads?product=move

Deploying the Nutanix Move VM on AHV

Once the image has been downloaded, you must upload it to your cluster. To do this, I invite you to follow one of my previous articles: https://juliendumur.fr/nutanix-ahv-telecharger-une-image-sur-son-cluster/

To deploy the Nutanix Move virtual machine, go to VMs, click on “Create VM” and complete the virtual machine creation form:

In the Disks section, add a disk of type “DISK”, in operation select “Clone from Image Service” and select the previously downloaded image:

Click on “Add”, then add a network interface; the virtual machine is now deployed.

Access to Nutanix Move

Nutanix Move is accessible over HTTP via the IP address assigned to it:

The first step is to change the access password:

Nutanix Move is now deployed, we will see in a future article how to add target environments.

Official documentation

Nutanix Move Official Documentation: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_1:Nutanix-Move-v5_1

Read More

In my previous tutorial, I showed you how to deploy the HYCU solution on your Nutanix cluster. It is now time to add your cluster to the software’s administration interface so that you can back it up.

Nutanix: Create a dedicated user

It is strongly recommended to create a dedicated user on your Nutanix cluster to manage the backup part with HYCU. To do this, go to “Settings” > “Local Users Management” and click on “New User”:

Complete the form with “Cluster Admin” and “Backup” rights:

The rights must include both “Cluster Admin” and “Backup”: the “Backup” right alone will not allow the HYCU solution to function correctly. Save, and we can now add the cluster to HYCU.

HYCU: Add a new source

To add a new source to HYCU, click the gear icon at the top right of the interface, then click on “Source”:

In the window that appears, there are 4 tabs available:

  • Hypervisor: this is where you add Nutanix or VMware hypervisors, which are to date the only hypervisors HYCU supports
  • Cloud: to back up your Google Cloud or Azure environments
  • File Servers: intended for file servers, including Nutanix Files, NetApp ONTAP, and Dell PowerScale
  • Physical Machines: to back up physical servers

The tab that interests us here is “Hypervisor”, click on “New” then fill in the different available fields:

Enter the address of your cluster, either as an IP address as in my screenshot or as a URL (https://ip-address:9440), along with the login and password of the account previously created on your cluster.

Click “Next” to access the optional addition of Prism Central login information, fill out the form with Prism Central “admin” authentication information if desired, then click “Next”:

If you have correctly filled in the information from the previous forms, a validation message appears and you can save the configuration:

Your cluster is now added to your HYCU solution and you can start backing up your virtual machines:

Read More

For a customer case, I had to configure the anti-affinity between 2 virtual machines.

Anti-affinity: what is it?

First, some context to lay the groundwork. For one of our clients, I am deploying 2 Palo Alto virtual machines to set up a cluster that will manage flows between their different networks.

In order to ensure maximum redundancy in the event of any failure, it is essential that the virtual machines be hosted on different hosts. Indeed, if they were hosted on a single host, in the event of a host failure, the Palo Alto cluster would be out of service.

This is where anti-affinity comes into play: it will allow me to ensure that the 2 virtual machines never end up on the same host.

Setting up anti-affinity

Anti-affinity is configured from the command line, directly on one of the cluster’s CVMs, and takes place in several stages:

  • Create a group: connect via SSH then type the following command:
nutanix@cvm$ acli vm_group.create group_name
  • Add the virtual machines to the group:
nutanix@cvm$ acli vm_group.add_vms group_name vm_list=vm_name1,vm_name2
  • Enable anti-affinity:
nutanix@cvm$ acli vm_group.antiaffinity_set group_name

After a short while, virtual machines that were previously on the same host will be spread across 2 different hosts.

In the event of failure of a host hosting one of the 2 virtual machines, the machine concerned will be restarted on one of the hosts in compliance with the anti-affinity rule.

Be careful, however: if you migrate a virtual machine manually, or when a host is put into maintenance, the anti-affinity rule may not be enforced.
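End to end, the sequence for my two firewall VMs would look like this (a sketch; the group, VM, and host names are illustrative). The final vm.migrate shows how to separate the VMs by hand if a maintenance window ever leaves them on the same host:

```shell
# Create the group and add both firewall VMs to it
acli vm_group.create pa-cluster
acli vm_group.add_vms pa-cluster vm_list=pa-fw-01,pa-fw-02

# Enable the anti-affinity policy on the group
acli vm_group.antiaffinity_set pa-cluster

# If both VMs end up on the same host after a maintenance,
# move one of them manually
acli vm.migrate pa-fw-02 host=AHV-HOST-02
```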

Official documentation

The official Nutanix documentation: https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_7:ahv-vm-anti-affinity-t.html

Read More

To be able to deploy a virtual machine on your Nutanix cluster and have it reachable on your network, you will need to start by configuring the network(s) on your cluster.

Creating a network using Prism Element

Under Prism Element, the “Settings > Network Configuration” menu lists all existing networks on the cluster; click on “Create Subnet”:

Then enter your network information, namely the name and VLAN ID:

If you do not have a DHCP server, you can let Nutanix manage the addressing of the network created using the “Enable IP address management” option:

You will then need to fill in all the options that a traditional DHCP server would normally have provided:

Click “Save” once the settings are correct. Repeat for each VLAN you need on your infrastructure.
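The same subnets can also be created from a CVM with acli. A sketch with two illustrative VLANs, one relying on an external DHCP server and one using Nutanix-managed IPAM (the names, VLAN IDs, and addresses are placeholders):

```shell
# Unmanaged network: addressing handled by an external DHCP server
acli net.create vlan100-prod vlan=100

# Managed network: Nutanix handles IPAM; ip_config is gateway/prefix
acli net.create vlan200-lab vlan=200 ip_config=10.20.0.1/24

# Add a DHCP pool of addresses to the managed network
acli net.add_dhcp_pool vlan200-lab start=10.20.0.50 end=10.20.0.200
```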

Creating a network using Prism Central

In Prism Central, network management is carried out in “Network & Security > Subnets”:

To add a new network, click “Create Subnet”:

A form similar to that of Prism Element must then be completed, enabling (or not) the “IP Address Management” option depending on whether you wish to leave addressing management to Nutanix.

Official Nutanix documentation

Link to official documentation: https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2071-AHV-Networking:bp-ahv-network-management.html

Read More

To deploy virtual machines on your cluster, you will need images available to launch your installations; here is the procedure to follow.

Before you start

An image uploaded via Prism Element can be imported to Prism Central. The reverse is not possible.

An image uploaded or imported via Prism Central is visible but not editable on Prism Element.

An image uploaded via Prism Element can only be used by the cluster to which it was uploaded.

An image uploaded or imported via Prism Central can be used by all clusters managed by that Prism Central.

Nutanix provides compatibility with images in the following formats:

  • RAW
  • VHD
  • VHDX
  • VMDK
  • VDI
  • OVA
  • ISO
  • QCOW2

Upload via Prism Element

To upload an image via Prism Element, connect to the web interface using your credentials, then navigate to the “Settings > Image Configuration” menu:

Click Upload Image:

Complete the “Name”, “Image Type”, “Storage Container” fields, select the image you wish to transfer then click on “Save”:

Wait while your image is transferred and then processed by the cluster. Its status must be “ACTIVE” for it to be operational:
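The upload can also be scripted from a CVM with acli image.create, for example pulling the file from an HTTP repository (the URL, image name, and container are illustrative):

```shell
# Create an ISO image in the "default" container from an HTTP source
acli image.create ubuntu-22.04-iso \
  source_url=http://repo.example.local/ubuntu-22.04.iso \
  container=default image_type=kIsoImage

# Disk images use image_type=kDiskImage; list images to check status
acli image.list
```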

Upload via Prism Central

The process for transferring an image through Prism Central is essentially identical to that of Prism Element.

Connect to the web interface then navigate to “Compute & Storage > Images” and click on “Add Image”:

Click on “Add File”, select the image you want to transfer, fill in the description then click on “Next”:

On the next screen, select the image placement mode based on your environment. In most cases, the default method “Place image directly on clusters” will do the trick, click “Next”:

Wait while the cluster transfers and processes the image.

Import images from Prism Element to Prism Central

On the image management page on Prism Central, click “Import Images”:

Then select the transfer method that suits you:

“All images” will import all the images from all the clusters managed by Prism Central.

“Images on a cluster” will allow you to select the source cluster(s) and image(s) on a case-by-case basis.

Official Nutanix documentation

Link to official documentation: https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-image-configure-acropolis-wc-t.html

Read More