
Now that the Nutanix cluster is added to the HYCU software, we need to add the backup target to which the files will be sent.

Adding an SMB backup target

To add a backup target on HYCU, go to the left side menu and click on “Target”:

Then click on “Add” at the top right to launch the target addition wizard:

Select the type of target you want to add, then click “Continue”. In my case, I kept it simple with an SMB share on the Synology, but you are spoiled for choice (NFS, SMB, Nutanix Objects, iSCSI, S3 storage…):

Then enter a name, a description if you wish, and the number of simultaneous backups, then click “Next”. In my case, I limited it to 2:

Then provide the account information to access the SMB share that will be used as a target. In my case, I need to provide the username, password, server IP address, and shared folder name:

Once the information is correctly entered, click on “Save”. Your backup target now appears in the list of targets, ready to receive backups:
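If the validation fails, a quick way to confirm the share is reachable is to query the NAS from any Linux machine with smbclient installed (the IP address and user below are placeholders from my lab, not values imposed by HYCU):

smbclient -L //192.168.84.60 -U backupuser

If your shared folder appears in the returned list, the problem is more likely on the credentials or permissions side.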

The backup target configuration is complete; my HYCU controller is ready to back up my data.


After deploying the HYCU controller and initializing it, you need to add your Nutanix cluster as a source of virtual machines to back up.

To add a Nutanix cluster on HYCU, you need to provide an account with “Cluster Admin” rights on Nutanix. If you want to do things properly, you should create a service account that is dedicated to this use rather than using the “admin” account of the Nutanix cluster.

I will show you how to create the service account on Prism Element.

Creating the service account on Prism Element

The first step is to connect to the Prism Element of your Nutanix cluster, go to “Settings > Local User Management” and click on “New User”:

Fill in the fields with the desired information and click “Save”:

It is imperative that the “Cluster Admin” and “Backup Admin” boxes are checked! The user is now created; you can now return to HYCU to continue adding the Nutanix cluster:
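As a side note, the same local user can also be created from the command line on a CVM with ncli. The sketch below is from memory and the exact sub-commands and options may vary with your AOS version, so check “ncli user help” before relying on it:

ncli user create user-name=svc-hycu user-password='MyStr0ngP@ss' first-name=HYCU last-name=Service email-id=svc-hycu@lab.local
ncli user grant-cluster-admin-role user-name=svc-hycu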

Creating the service account on Prism Central

The second step is optional and is only necessary if your cluster is registered with a Prism Central. Log in to the Prism Central of your Nutanix cluster, then go to “Admin Center > IAM > Identities” and click on “+ Add Local User”. Fill in the fields with the desired information and click on “Save”:

The user is now created on Prism Central; you now need to assign the correct access rights:

Go to “Admin Center > IAM > Authorization Policies”. Click on “Create Authorization Policy”:

On the page that appears, start by naming the policy, select “Cluster Admin” in the field provided (type “cluster” to search), then click on “Next” at the bottom right:

Leave the default selection and click “Next” at the bottom right of the page:

Add your previously created user to the policy and click “Save” at the bottom right of the page:

The user is now created on both Prism Central and Prism Element; you can now return to HYCU to continue adding the Nutanix cluster:

Adding the Nutanix cluster on HYCU

In the “Settings” menu, click on “Sources” to open the menu for adding a backup source:

Then click on “New” to start the cluster addition process:

In the window that appears, enter the URL of your Prism Element and the username and password of the service account created earlier, then click “Next”:

Then enter the URL of your Prism Central and the username and password of the account created earlier, then click on “Next”:

If you have completed all of the previous steps correctly, you should see a message like “Validation successful”:

Your Nutanix cluster is now added to your HYCU backup solution and appears in the list of sources:


In the previous blog post, I created and started the HYCU virtual machine. Now let’s move on to the initial configuration of the Backup Controller.

Initializing the HYCU backup system

To start the initialization of the HYCU backup controller, you need to connect to the virtual machine via the Nutanix console. Connect to your Prism Element interface, go to “VM”, right-click on the virtual machine you just deployed and click on “Launch Console”:

A new window will open and you should see the initialization startup window. Select “HYCU Backup Controller” from the list and validate:

On the next screen, you must enter the network configuration, namely:

  • Host name of the virtual machine
  • Its IPv4 address
  • The associated subnet mask
  • The default gateway
  • The DNS server
  • Optionally, the domain
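For reference, here is the kind of values this gives in my lab (purely illustrative; adapt them to your own network):

  • Host name: hycu01
  • IPv4 address: 192.168.84.50
  • Subnet mask: 255.255.255.0
  • Default gateway: 192.168.84.1
  • DNS server: 192.168.84.1
  • Domain: lab.local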

Once all the information is entered, validate to launch the initialization of the backup controller:

A little less than 10 minutes later, the initialization will be complete and you will be able to access the solution’s web administration console at the address https://HYCU-IP-ADDRESS:8443:
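Before even opening a browser, you can check from any machine that reaches the controller that the console is answering; this is a plain HTTP reachability test, nothing HYCU-specific:

curl -k -I https://HYCU-IP-ADDRESS:8443

Any HTTP response confirms that the service is listening; the -k flag simply ignores the self-signed certificate.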

The default credentials are “admin” for both the login and the password; change them quickly for obvious security reasons:

You are now connected to the HYCU web interface; we can move on to the basic configuration:

Basic configuration of my HYCU controller

After finishing deploying the solution, there are some basic configurations to be done to make our system fully operational. I detail some steps here and skip others, since this article is geared toward lab use!

Changing the password

We will start by changing the default password of the admin account which is currently “admin”. To do this, go to the top right of the interface, click on the connected username “admin” and in the menu that appears, click on “Change Password”:

Enter the old password, the new password twice and validate:

Checking the license level

Before continuing, let’s check the license level of our installation. To do this, click on the gear at the top right of the window to display the drop-down menu, then click on “Licensing”:

If everything is OK, you should now have a valid license of the “Free license” type:

Bandwidth Throttling Configuration

Working from home most of the time, I planned to schedule my backup slots between 8 p.m. and 7 a.m. But since I’m also a gamer, there’s no way I’m going to saturate my network bandwidth during my sessions… So I set up a bandwidth limit. In the settings (cogwheel), go to “Networks > Throttling”:

Then, in “Throttling Windows”, I was able to enter a limitation range named “Gaming Hours” which extends from 8 p.m. to 1 a.m.:

And then I added a throughput limit of 1 MiB/s:

Additional configurations

Among the additional configurations that are possible but that I will not detail here, since they are not in place in my lab, there is notably Active Directory authentication:

And email notifications:


Today, I will show you how to deploy the HYCU backup solution on my Nutanix AHV cluster.

In the previous article, I presented the prerequisites for installing the solution, including sending the HYCU installation files to the Nutanix AHV cluster. At this stage, you should therefore have the desired version of HYCU available on your cluster:

In my case, I have deliberately placed 3 different versions of the software in order to be able to show:

  • the deployment of the solution
  • the upgrade process, including the upgrade path

Sizing

Before proceeding with the installation, the VM must first be sized according to our needs. For this, HYCU provides guidance on VM sizes based on the number of virtual machines to be backed up:

In my case, having fewer than 50 virtual machines to back up, I will go for the smallest size:

  • 8 vCPUs
  • 8 GB RAM
  • 32 GB storage

Now that my VM is sized, let’s move on to its deployment.

Deploying the HYCU controller

To deploy the HYCU VM, you need to connect to the Prism Element, go to the “VM” menu and click on “Create VM”. In the window that appears, I give it a name and move on:

I enter the values chosen during sizing in the “Compute Details” section:

Then I add the first disk, of type “DISK”, using “Clone from Image Service”, and select the image I previously uploaded to my cluster, “hycu-4.9.0-5310”:

I then add a second disk, also of type “DISK” but this time using “Allocate on Storage Container”, since it is a blank 32 GB disk that I place on the default storage container:

Finally, I assign a network card to my virtual machine on the subnet of my choice:

My virtual machine is now deployed; I can start it:
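For those who prefer the command line, the same VM can be created from a CVM with acli. A minimal sketch matching the sizing above, assuming the image name used earlier and a network called “VLAN84” (adapt both to your environment):

acli vm.create hycu-backup num_vcpus=8 memory=8G
acli vm.disk_create hycu-backup clone_from_image=hycu-4.9.0-5310
acli vm.disk_create hycu-backup create_size=32G container=default
acli vm.nic_create hycu-backup network=VLAN84
acli vm.on hycu-backup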

My next blog post will detail the initial configuration of the HYCU virtual machine, which will finalize the system initialization and give access to the solution’s administration interface.


In my previous article, I introduced you to the HYCU backup solution. I will now describe my deployment environment and explain how to retrieve the solution’s installation sources.

Deployment environment

My deployment environment is as follows:

  • A Nutanix CE 2.1 cluster as the deployment target and the source for backups
  • A Synology NAS as the destination for backups

This is enough to test the solution and simulate a real production environment.

I have already set up a share on the Synology NAS to receive the backups and I have made sure that the Nutanix cluster, the HYCU VM and the Synology NAS could communicate without any problem.

I will see later whether I can add an S3-type location to simulate off-site data archiving.

Let’s now move on to retrieving the files needed for the installation.

Solution Compatibility

In terms of compatibility, here are the main elements to keep in mind for the scope that interests us:

  • Nutanix AOS 6.5, 6.8 and 6.10
  • Nutanix Files 4.4 and 5.0
  • Nutanix Objects, Amazon S3, Wasabi
  • Windows Server 2016, 2019, and 2022 VMs
  • Linux

I invite you to consult the compatibility matrix available on the HYCU Support site to learn about all the HYCU compatible platforms: https://download.hycu.com/ec/v5.1.0/help/en/HYCU_CompatibilityMatrix.pdf

Retrieving installation images

To retrieve the images, you must go to the publisher’s website and request a trial license: https://www.hycu.com/solutions/data-protection/nutanix#getForm

There are a few fields to fill in; a person from HYCU will then contact you by phone (so provide a number where you can be reached!) to ask about your motivations.

Once all the prerequisites are filled in, HYCU will provide you with a download link that will allow you to retrieve the software installation image in qcow2 format, ready to be imported into your Nutanix cluster.

Transferring images to the Nutanix cluster

Once you have downloaded the qcow2 image from HYCU, you now need to transfer it to your Nutanix cluster.

To do this, connect to the Prism Element of your Nutanix CE cluster and go to “Settings > Image Configuration”:

Click on “Upload image”, fill in the form and select the image you downloaded earlier:

As this will be a prerequisite for future operations on your HYCU, I invite you to get into good habits right away and name your image following the HYCU-VERSION-BUILD pattern.

Click on “Save” to start the transfer and wait until your image is processed by the cluster and is indicated as “ACTIVE”.
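Alternatively, the upload can be scripted from a CVM with acli; a sketch assuming the qcow2 file is served over HTTP from a machine on your network (the URL and container name are examples):

acli image.create hycu-4.9.0-5310 source_url=http://192.168.84.10/hycu-4.9.0-5310.qcow2 container=default image_type=kDiskImage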

HYCU official documentation

The official documentation from the publisher is freely available on the HYCU website in PDF format: https://support.hycu.com/hc/en-us/categories/15363454825500-Documentation

It is available only in English, so you’ll have to brush up on the language of Shakespeare!

There you have it: you now have everything you need to deploy the HYCU controller on your cluster. See you in the next article for the deployment itself.


In a world where data management is a major issue, companies are looking for efficient, simple and powerful backup and disaster recovery solutions. HYCU stands out as a backup solution specifically designed for hyperconverged environments, particularly Nutanix.

This article is the first of a complete guide on the HYCU backup solution, a guide that you can find with the other guides here: https://juliendumur.fr/en/guides/

Native integration with Nutanix

HYCU is the first backup solution developed exclusively for Nutanix. Unlike traditional solutions merely adapted to hyperconverged infrastructures, HYCU is designed to fully leverage the Nutanix ecosystem. It integrates directly with Nutanix AHV and Nutanix Files, ensuring optimal data protection without impacting system performance.

With its API-first integration, HYCU is able to align seamlessly with the Nutanix architecture. This allows IT administrators to manage backups and restores from the Nutanix Prism interface, providing centralized and intuitive management. This approach reduces the learning curve and limits operational efforts.

Key Features of HYCU for Nutanix

Unlike traditional solutions that require installing agents on each virtual machine, HYCU is based on an agentless approach, reducing resource consumption and simplifying maintenance. It is able to automatically detect workloads and adapt backup policies based on the defined configurations. This automation minimizes human intervention and reduces the risk of error.

One of HYCU’s strengths is its ability to perform granular (individual files, specific databases) or complete (entire virtual machines) restores in just a few clicks, ensuring rapid recovery in the event of a failure.

HYCU also integrates advanced data compression and deduplication technologies, reducing storage consumption and improving overall backup performance. Native integration of WORM locking also improves the immutability and security of backed up data.

However, HYCU is not limited to on-premises Nutanix infrastructures. It supports hybrid and multi-cloud environments, allowing companies to protect their data on AWS, Google Cloud, and Microsoft Azure. It is also capable of backing up VMware ESXi environments and SaaS applications.

Unlike complex solutions requiring advanced configurations, HYCU offers a “plug-and-play” approach where setup and operation are accessible even to IT teams not specialized in backup. The intuitive user interface allows administrators to manage the entire backup and restore lifecycle in just a few clicks.

Use cases and benefits

Companies using HYCU for Nutanix benefit from optimized data protection without compromising the performance of their infrastructure. Key benefits include:

  • Reduced operational costs through simplified management and reduced storage requirements.
  • Significant time savings through automated and rapid restores.
  • Improved business continuity with rapid disaster recovery.
  • Regulatory-compliant protection through encryption and data retention features.

HYCU combined with Nutanix environments is a modern and innovative backup solution that is perfectly aligned with the principles of hyperconvergence. In the next article, I will present the different prerequisites necessary for deploying the solution.


A few months ago, I bought myself a Steam Deck to pass the time during my convalescence after a foot operation that left me couch-locked. I had already shown you that you can manage a cluster from the Steam Deck, and I wanted to push the experience a little further…

Multi-boot on the Steam Deck

To run Foundation for Windows on the Steam Deck, I had to find a way to boot Windows 11 instead of the natively embedded SteamOS.

To carry out the operation, I had several options:

  • replace the embedded operating system, switching from SteamOS to Windows 11, but that would add a lot of constraints for playing my Steam games on the console
  • set up a multi-boot system with an external drive on which Windows 11 would be installed, but that would make the device potentially cumbersome to carry around…

Since the goal was to have an additional boot option for my Steam Deck, letting me experiment with Windows on the machine in various situations without removing the natively embedded operating system, I opted for the second option and started looking for an external drive that would do the trick.

While browsing the Internet, I finally came across a Kickstarter project “Genki SavePoint”: https://www.kickstarter.com/projects/humanthings/genki-savepoint?lang=fr

The Genki SavePoint is a mini SSD enclosure designed for portable use. On paper, here is what the case promises:

  • Compatible with M.2 2230 SSDs
  • Maximum capacity of 2 TB
  • Transfer speed of 10 Gb/s
  • 100 W charging
  • Integrated heat sink
  • Integrated protection capacitor

So I rushed to support the project by ordering 2 cases, which I finally received after a few weeks of waiting. I added a 1 TB M.2 2230 SSD to have enough space whatever I end up using it for…

Exit SteamOS, hello Windows 11!

Once the case and the SSD arrived, I mounted the SSD in the case: simply unscrew the heat sink to reveal the M.2 connector and insert the SSD. Once connected to a computer, the case is detected as an external hard drive.

I now had to prepare the SSD by installing Windows 11 on it using Rufus. I won’t detail the process here, since the case manufacturer has documented it: https://www.genkithings.com/blogs/blog/installing-windows-on-savepoint

Once Windows 11 was installed, I downloaded all the available drivers (https://help.steampowered.com/fr/faqs/view/6121-ECCD-D643-BAA8) and the various software I wanted to install next (including Foundation for Windows), and staged them on the disk. The serious stuff could begin…

Installing Nutanix Foundation

At the first boot on the case, I obviously had to go through the operating system setup and install all the drivers I had previously downloaded.

Deploying Foundation on the Steam Deck is then simple, since I just had to run the file downloaded from the official website (https://portal.nutanix.com/page/downloads?product=foundation).

Once the installation was complete, I opened the browser and navigated to http://localhost:8000/gui/index.html to access the Nutanix Foundation interface:

Unsurprisingly, Foundation for Windows runs flawlessly on the Steam Deck, but what about a deployment without an onboard RJ45 network port? To solve this problem, I just had to purchase a mini USB-C dock with:

  • 1 RJ45 port
  • 3 USB 2.0 ports
  • 1 USB-C port
  • 2 HDMI ports

At this stage, the Steam Deck is “Foundation Ready” and able to deploy clusters. The last question that remains is: does it actually work? In all honesty, I don’t know, because unfortunately I didn’t have a cluster available for a full-scale test, but as soon as the opportunity arises it will be done!


In order to secure intra-cluster flows in an environment where network segmentation is non-existent, it is sometimes necessary to configure the backplane network to isolate them from production flows.

Overview of the backplane network

The backplane network creates a dedicated interface in a separate VLAN on all CVM and AHV hosts in the cluster for the exchange of storage replication traffic. The backplane network shares the same physical adapters on the br0 bridge by default, but uses a different non-routable VLAN. This allows the cluster flows to be isolated from those of the production machines logically and/or physically.

Use case

In our case, the client network has no network segmentation and all its equipment is in the same subnet (servers, PCs, printers, phones, etc.).

The goal was therefore to set up the backplane network to isolate and secure intra-cluster flows on a dedicated VLAN independent of the rest of the network (flows in red on the diagram):

The first step is to modify the configuration of the Top-of-Rack switches to add the new VLAN. In our use case, we will do logical segmentation.

Top-of-rack switch configuration

Before activating the backplane network, it is necessary to prepare the ports of the top-of-rack switches for this operation. In our case, we are on Mellanox switches with an active-backup port configuration, an administration VLAN of 100, and an unrouted VLAN of 3000 dedicated to the backplane network:

interface ethernet 1/1
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
interface ethernet 1/2
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
interface ethernet 1/3
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
interface ethernet 1/4
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
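Depending on your switch OS release, the four identical blocks can often be factored with an interface range; a sketch to validate against your own Mellanox Onyx version:

interface ethernet 1/1-1/4
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit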

Of course, you must adapt the commands to your switch model and reproduce this configuration on the second Top-of-Rack switch.

BE CAREFUL not to make any mistakes when modifying your network configuration, at the risk of losing access to your cluster.

Once the configuration is complete, it is now possible to set up the backplane network on the cluster.

Configuring the backplane network

Before you can start, it is imperative to put all hosts in maintenance mode. To do this, you must connect to a CVM and type the following command:

acli host.enter_maintenance_mode HOST_IP

You must repeat the command with the IP address of each host in your cluster.
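To avoid typos on clusters with many nodes, the command can be wrapped in a small shell loop run from the CVM (the IP addresses are examples from my lab):

for host in 192.168.84.11 192.168.84.12 192.168.84.13; do
  acli host.enter_maintenance_mode $host
done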

Once all the hosts are in maintenance mode, you must connect to Prism Element and go to the “Settings > Network Configuration > Internal Interfaces” menu:

Next to “Backplane LAN”, click on “Configure”:

In the window that appears, enter:

  • the IP address of the network you want to use for the backplane network
  • the subnet mask associated with this subnet
  • the ID of the VLAN you have chosen
  • the virtual switch that will carry it

Tips and best practices for choosing your backplane network:

  • the network must not be routed
  • the subnet must not already be in use elsewhere on the network
  • it must be large enough to accommodate the existing nodes and a possible cluster expansion
  • the VLAN ID must be unique on the network
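For example, a dedicated /24 such as 192.168.254.0/24 on VLAN 3000 (both values are illustrative) easily covers the CVM and host backplane interfaces of the existing nodes while leaving plenty of room for future node additions, all without being routed anywhere.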

Once the configuration operation is complete on the cluster, you must take all the hosts out of maintenance mode with the following command:

acli host.exit_maintenance_mode HOST_IP

You have to enter the command on a CVM and repeat it with the IP address of each host in the cluster.

In the network configuration, you will see that the backplane network is now configured and active:

Your intra-cluster traffic is now isolated from the rest of the network.


Sometimes, for various reasons, it is necessary to configure the VLAN directly at the Nutanix cluster level, in particular to ensure network segmentation.

Use case

Having had a little time for myself during the Christmas holidays, I set about resuming the configuration of my local network in order to isolate my Nutanix lab from my internal network.

To do this, I had to reconfigure my Ubiquiti equipment in order to:

  • create VLAN 84 at the Dream Machine Pro level
  • propagate VLAN 84 on the 24-port switch then on the 5-port switch on which the cluster is connected

Changing the VLAN on AHV

Before starting the modifications, I start by checking the network configuration of my host:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl list port br0
_uuid : b76f885d-59b2-4153-99d3-27605a729ab8
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [17e8b0de-2ef5-4f6f-b253-94a766ec9603]
lacp : []
mac : []
name : br0
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 0
trunks : []
vlan_mode : []

The output of the command shows that no VLAN tag is set on my host. We will fix this with the following command:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl set port br0 tag=84

The “ovs-vsctl set port br0 tag=<VLAN-ID>” command tags my host interface with the VLAN ID I have dedicated to my Nutanix network. We then check that the configuration has been applied:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl show
Bridge br0
    Port vnet4
        tag: 0
        Interface vnet4
    Port br0-up
        Interface eth4
        Interface eth0
        Interface eth5
        Interface eth2
        Interface eth1
        Interface eth3
    Port br0.u
        Interface br0.u
            type: patch
            options: {peer=br.dmx.d.br0}
    Port br0
        tag: 84
        Interface br0
            type: internal
    Port br0-dhcp
        Interface br0-dhcp
            type: vxlan
            options: {key="1", remote_ip="192.168.84.200"}
    Port br0-arp
        Interface br0-arp
            type: vxlan
            options: {key="1", remote_ip="192.168.5.2"}
    Port vnet2
        Interface vnet2
ovs_version: "2.14.8"
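As a side note, if you only want the tag and not the whole topology, ovs-vsctl can query a single column; this is standard Open vSwitch syntax:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl get port br0 tag
84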

We can now see that the VLAN is configured on my host; we must now do the same on the CVM side…

Configuring the VLAN on the CVM

We start by checking the network configuration of our CVM:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl show
    Bridge br0
        Port br0-up
            Interface eth4
            Interface eth0
            Interface eth5
            Interface eth2
            Interface eth1
            Interface eth3
        Port br0-arp
            Interface br0-arp
                type: vxlan
                options: {key="1", remote_ip="192.168.5.2"}
        Port br0.u
            Interface br0.u
                type: patch
                options: {peer=br.dmx.d.br0}
        Port vnet5
            Interface vnet5
        Port br0
            tag: 84
            Interface br0
                type: internal
        Port br0-dhcp
            Interface br0-dhcp
                type: vxlan
                options: {key="1", remote_ip="192.168.84.200"}
        Port vnet2
            Interface vnet2
    ovs_version: "2.14.8"

Here we can see that my CVM’s network interface (vnet5) does not have any VLAN information. So I proceed to configure the VLAN ID by connecting to my CVM and typing the following command:

change_cvm_vlan VLANID
nutanix@NTNX-5e8f7308-A-CVM:192.168.84.200:~$ change_cvm_vlan 84
This operation will perform a network restart. Please enter [y/yes] to proceed or any other key to cancel: y
Changing vlan tag to 84
Replacing external NIC in CVM, old XML:
<interface type="bridge">
      <mac address="52:54:00:8e:69:bc" />
      <source bridge="br0" />
      <virtualport type="openvswitch">
        <parameters interfaceid="356e3bf3-5700-4131-b1b2-4fa65195a6e2" />
      </virtualport>
      <target dev="vnet0" />
      <model type="virtio" />
      <driver name="vhost" queues="4" />
      <alias name="ua-1decc31c-2764-416a-b509-d54ecd1a684f" />
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" />
    </interface>

        new XML:
<interface type="bridge">
      <mac address="52:54:00:8e:69:bc" />
      <model type="virtio" />
      <driver name="vhost" queues="4" />
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" />
    <source bridge="br0" /><virtualport type="openvswitch" /><vlan><tag id="84" /></vlan></interface>

CVM external NIC successfully updated.
Performing a network restart

We now check the CVM network configuration to verify that the tag has been configured correctly:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl show
Bridge br0
    Port br0-up
        Interface eth4
        Interface eth0
        Interface eth5
        Interface eth2
        Interface eth1
        Interface eth3
    Port br0-arp
        Interface br0-arp
            type: vxlan
            options: {key="1", remote_ip="192.168.5.2"}
    Port br0.u
        Interface br0.u
            type: patch
            options: {peer=br.dmx.d.br0}
    Port vnet5
        tag: 84
        Interface vnet5
    Port br0
        tag: 84
        Interface br0
            type: internal
    Port br0-dhcp
        Interface br0-dhcp
            type: vxlan
            options: {key="1", remote_ip="192.168.84.200"}
    Port vnet2
        Interface vnet2
ovs_version: "2.14.8"

My CVM is now on VLAN 84. All I have to do now is repeat these operations on all my nodes and then check that everything works properly.

WARNING: the change_cvm_vlan command has a known bug in AOS 6.8 with AHV 20230302.100173 that causes the VLAN ID not to be preserved when the host is rebooted: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0VO0000002uJ30AI


Easter eggs are hidden in various media, including movies, video games, software, and even websites, and Nutanix is no exception.

These little references are often intentionally placed by creators to entertain curious and observant users. They can take the form of cultural references, humorous winks, secret messages, or new features that can only be accessed through specific methods.

Easter eggs have become something of a tradition in the digital world, increasing fan engagement and creating a special connection between creators and their audience.

The “2048” game

The game 2048 is a popular puzzle game played on a 4×4 grid. The goal is to merge tiles of the same value to reach the 2048 tile. The arrow keys on the keyboard move the tiles in the four directions. Each move reveals a new tile of value 2 or 4. The game ends when no more moves are possible.

Nutanix integrates the game 2048 into its Prism Element and Prism Central consoles, offering you a fun break while using the platform. To access this game, follow these steps:

Log in to your Prism Element or Prism Central interface via your web browser and, in the upper right corner of the interface, click on your username to drop down the menu.

In the drop-down menu, choose the option titled “Nothing to do?”:

Once the option is selected, the 2048 game will open in a new window, ready to be played!

If you don’t see the “Nothing to do?” option in the menu, it’s possible that your administrator has disabled it. To check this setting, go to Console Settings, then Appearance, then User Interface Settings, and make sure the box enabling the game is checked.

NetHack

NetHack is an iconic game in geek culture, known for its text-based gameplay and its many easter eggs and cultural references. Dragons are powerful creatures in the game, and ASCII dragon drawings like this one are often a tribute to this type of game.

The NetHack reference is hidden in Nutanix Move. To view it, connect via SSH to the Move VM and type the following command:

cat /opt/xtract-vm/bin/dragon.txt

And this is what you will see on the screen!

Advanced customization options

You can access advanced customization options through a hidden shortcut that will allow you to customize your cluster’s login page!

To access it, connect to your cluster, go to the settings and then to the “UI Settings” section. Then, click on “UI Settings” while holding down the “Alt” key on your keyboard:

You will then have more options for customizing the login interface:

The Lord of the Rings

Among the services that keep your cluster running, there are 2 that directly reference Aragorn’s sword in Tolkien’s The Lord of the Rings. I am obviously talking about Anduril, Flame of the West, forged from the shards of Narsil.

Anduril and Narsil are 2 Nutanix services that you can find when you type the command “cluster status” on one of your CVMs:
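If you want to spot them without scrolling through the whole output, you can filter the command (plain grep, nothing Nutanix-specific):

cluster status | grep -Ei "anduril|narsil"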

Stargate

Still among the Nutanix services, we can find a reference to the cult film and series of the same name: Stargate.

To display it, always the same command: cluster status from a CVM:

Star Trek

Among the other easter eggs hidden in Nutanix, we can find a reference to the Star Trek universe with the Uhura service, named after Nyota Uhura, a lieutenant on the USS Enterprise!

To display it, always the same command: cluster status from a CVM:

X-Men

Still among the Nutanix services, we can find a reference to the X-Men universe with the Cerebro service!

Cerebro is a high-tech computer created by Professor Charles Xavier with the help of Magneto; it amplifies the brainwaves of the person using it.

To display it, always the same command: cluster status from a CVM:

Terminator

This one is very explicit, and still among your cluster’s services… Terminator, an essential pop culture villain!

To display it, always the same command: cluster status from a CVM:

Greek and Roman Mythology

The biggest batch of references, since there are nearly ten nods to the gods of Greek and Roman mythology…

Zeus, Hera, Minerva, Athena… almost all of them are there!

If you want to visit Mount Olympus, it is once again from the CLI, with a cluster status on a CVM:

And you? Have you found other hidden references in Nutanix?
