
Julien DUMUR
Infrastructure in a Nutshell

To secure intra-cluster traffic in an environment where network segmentation is non-existent, it is sometimes necessary to configure the backplane network to isolate it from production traffic.

Overview of the backplane network

The backplane network creates a dedicated interface in a separate VLAN on all CVM and AHV hosts in the cluster to carry storage replication traffic. By default, the backplane network shares the same physical adapters on the br0 bridge, but uses a different, non-routable VLAN. This allows cluster traffic to be isolated from production traffic, logically and/or physically.

Use case

In our case, the client's network has no segmentation and all equipment (servers, PCs, printers, phones, etc.) sits in the same subnet.

The goal was therefore to set up the backplane network to isolate and secure intra-cluster traffic on a dedicated VLAN, independent of the rest of the network (traffic shown in red on the diagram):

The first step is to modify the configuration of the top-of-rack switches to add the new VLAN. In our use case, we will do logical segmentation.

Top-of-rack switch configuration

Before activating the backplane network, it is necessary to prepare the ports of the top-of-rack switches. In our case, we are on Mellanox switches with an active-backup port configuration, a management VLAN of 100, and a non-routed VLAN of 3000 dedicated to the backplane network:

interface ethernet 1/1
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
interface ethernet 1/2
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
interface ethernet 1/3
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit
interface ethernet 1/4
switchport mode hybrid
switchport hybrid allowed-vlan add 3000
switchport access vlan 100
exit

Of course, you must adapt these commands to your switch model and reproduce the configuration on the second top-of-rack switch.
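For illustration, a rough equivalent on a Cisco IOS switch (hypothetical port numbers, to be adapted to your hardware) would be a trunk carrying VLAN 3000 tagged, with VLAN 100 as the untagged native VLAN:

interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk native vlan 100
 switchport trunk allowed vlan 100,3000
exit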

BE CAREFUL not to make any mistakes when modifying your network configuration: an error here can compromise access to your cluster.

Once the configuration is complete, it is now possible to set up the backplane network on the cluster.

Configuring the backplane network

Before you can start, it is imperative to put all hosts in maintenance mode. To do this, you must connect to a CVM and type the following command:

acli host.enter_maintenance_mode HOST_IP

You must repeat the command with the IP address of each host in your cluster.
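With more than a few nodes, a small loop saves some typing. A minimal sketch, assuming the hostips helper available on the CVM returns the IP address of every AHV host in the cluster:

# put every AHV host into maintenance mode, one after the other
for ip in $(hostips); do
  acli host.enter_maintenance_mode $ip
done

The same loop with host.exit_maintenance_mode will bring the hosts back at the end of the procedure.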

Once all the hosts are in maintenance mode, connect to Prism Element and go to the “Settings > Network Configuration > Internal Interfaces” menu:

Next to “Backplane LAN”, click “Configure”:

In the window that appears, enter:

  • the network address you want to use for the backplane network
  • the subnet mask associated with this subnet
  • the ID of the VLAN you have chosen
  • the virtual switch that will carry it

Tips and best practices for choosing your backplane network:

  • the network must not be routed
  • the subnet must not already be in use anywhere on the network
  • it must be large enough to accommodate the existing nodes and any future cluster expansion
  • the VLAN ID must be unique on the network
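As a worked example with hypothetical values: each node consumes two backplane IPs (one for the CVM, one for the AHV host), so a four-node cluster needs 8 addresses; a non-routed subnet such as 192.168.255.0/26 offers 62 usable addresses, which leaves plenty of room for future nodes. The form would then be filled in with network 192.168.255.0, mask 255.255.255.192, VLAN ID 3000, and virtual switch vs0.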

Once the configuration operation is complete on the cluster, you must take all the hosts out of maintenance mode with the following command:

acli host.exit_maintenance_mode HOST_IP

Run the command from a CVM and repeat it with the IP address of each host in the cluster.

In network configuration, you will see that the backplane network is now configured and active:

Your intra-cluster traffic is now isolated from the rest of the network.
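As an optional final check, and assuming the standard backplane layout where segmentation creates a dedicated eth2 interface on each CVM, you can verify from any CVM that the new interface carries an address in the backplane subnet:

nutanix@cvm$ ip addr show eth2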


Sometimes, and for various reasons, it is necessary to configure the VLAN directly on the Nutanix cluster itself, in particular to ensure network segmentation.

Use case

Having a little time to myself during the Christmas holidays, I set about reworking my local network configuration in order to isolate my Nutanix lab from my internal network.

To do this, I had to reconfigure my Ubiquiti equipment in order to:

  • create VLAN 84 on the Dream Machine Pro
  • propagate VLAN 84 to the 24-port switch, then to the 5-port switch the cluster is connected to

Changing the VLAN on AHV

Before making any changes, I check the network configuration of my host:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl list port br0
_uuid : b76f885d-59b2-4153-99d3-27605a729ab8
bond_active_slave : []
bond_downdelay : 0
bond_fake_iface : false
bond_mode : []
bond_updelay : 0
cvlans : []
external_ids : {}
fake_bridge : false
interfaces : [17e8b0de-2ef5-4f6f-b253-94a766ec9603]
lacp : []
mac : []
name : br0
other_config : {}
protected : false
qos : []
rstp_statistics : {}
rstp_status : {}
statistics : {}
status : {}
tag : 0
trunks : []
vlan_mode : []

The output of the command shows that there is no VLAN tag on my host's br0 port. We will fix this with the following command:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl set port br0 tag=84

The command “ovs-vsctl set port br0 tag=” allows me to tag my host interface with the VLAN ID that I have dedicated to my Nutanix network. We then check that the configuration is applied:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl show
Bridge br0
    Port vnet4
        tag: 0
        Interface vnet4
    Port br0-up
        Interface eth4
        Interface eth0
        Interface eth5
        Interface eth2
        Interface eth1
        Interface eth3
    Port br0.u
        Interface br0.u
            type: patch
            options: {peer=br.dmx.d.br0}
    Port br0
        tag: 84
        Interface br0
            type: internal
    Port br0-dhcp
        Interface br0-dhcp
            type: vxlan
            options: {key="1", remote_ip="192.168.84.200"}
    Port br0-arp
        Interface br0-arp
            type: vxlan
            options: {key="1", remote_ip="192.168.5.2"}
    Port vnet2
        Interface vnet2
ovs_version: "2.14.8"
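Alternatively, if you only want to read back the tag rather than the whole bridge layout, ovs-vsctl can query a single field (a quick optional check):

[root@NTNX-5e8f7308-A ~]# ovs-vsctl get port br0 tag
84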

We can now see that the VLAN is configured on my host; next, we need to do the same configuration on the CVM side…

Configuring the VLAN on the CVM

We start by checking the network configuration of our CVM:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl show
    Bridge br0
        Port br0-up
            Interface eth4
            Interface eth0
            Interface eth5
            Interface eth2
            Interface eth1
            Interface eth3
        Port br0-arp
            Interface br0-arp
                type: vxlan
                options: {key="1", remote_ip="192.168.5.2"}
        Port br0.u
            Interface br0.u
                type: patch
                options: {peer=br.dmx.d.br0}
        Port vnet5
            Interface vnet5
        Port br0
            tag: 84
            Interface br0
                type: internal
        Port br0-dhcp
            Interface br0-dhcp
                type: vxlan
                options: {key="1", remote_ip="192.168.84.200"}
        Port vnet2
            Interface vnet2
    ovs_version: "2.14.8"

Here we can see that the CVM's network interface does not carry any VLAN information. So I configure the VLAN ID by connecting to my CVM and typing the command:

change_cvm_vlan VLANID
nutanix@NTNX-5e8f7308-A-CVM:192.168.84.200:~$ change_cvm_vlan 84
This operation will perform a network restart. Please enter [y/yes] to proceed or any other key to cancel: y
Changing vlan tag to 84
Replacing external NIC in CVM, old XML:
<interface type="bridge">
      <mac address="52:54:00:8e:69:bc" />
      <source bridge="br0" />
      <virtualport type="openvswitch">
        <parameters interfaceid="356e3bf3-5700-4131-b1b2-4fa65195a6e2" />
      </virtualport>
      <target dev="vnet0" />
      <model type="virtio" />
      <driver name="vhost" queues="4" />
      <alias name="ua-1decc31c-2764-416a-b509-d54ecd1a684f" />
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" />
    </interface>

        new XML:
<interface type="bridge">
      <mac address="52:54:00:8e:69:bc" />
      <model type="virtio" />
      <driver name="vhost" queues="4" />
      <address type="pci" domain="0x0000" bus="0x00" slot="0x03" function="0x0" />
    <source bridge="br0" /><virtualport type="openvswitch" /><vlan><tag id="84" /></vlan></interface>

CVM external NIC successfully updated.
Performing a network restart

We now check the CVM network configuration to verify that the tag has been configured correctly:

[root@NTNX-5e8f7308-A ~]# ovs-vsctl show
Bridge br0
    Port br0-up
        Interface eth4
        Interface eth0
        Interface eth5
        Interface eth2
        Interface eth1
        Interface eth3
    Port br0-arp
        Interface br0-arp
            type: vxlan
            options: {key="1", remote_ip="192.168.5.2"}
    Port br0.u
        Interface br0.u
            type: patch
            options: {peer=br.dmx.d.br0}
    Port vnet5
        tag: 84
        Interface vnet5
    Port br0
        tag: 84
        Interface br0
            type: internal
    Port br0-dhcp
        Interface br0-dhcp
            type: vxlan
            options: {key="1", remote_ip="192.168.84.200"}
    Port vnet2
        Interface vnet2
ovs_version: "2.14.8"

My CVM is now on VLAN 84. All I have to do now is repeat these operations on all my nodes and then check that everything works properly.
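To verify that everything works, a simple first test is to confirm from any CVM that all cluster services are still up after the network restarts:

nutanix@cvm$ cluster status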

WARNING: the change_cvm_vlan command has a known bug in AOS 6.8 with AHV 20230302.100173 that causes the VLAN ID not to be preserved when the host is rebooted: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0VO0000002uJ30AI


To be able to deploy a virtual machine on your Nutanix cluster and have it reachable on your network, you will need to start by configuring the network(s) on your cluster.

Creating a network using Prism Element

In Prism Element, the “Settings > Network Configuration” menu lists all existing networks on the cluster. Click “Create Subnet”:

Then enter your network information, namely the name and VLAN ID:

If you do not have a DHCP server, you can let Nutanix manage addressing for the new network using the “Enable IP address management” option:

You will then need to fill in all the options that a traditional DHCP server would normally have provided:

Click “Save” once the settings are correct. Repeat for each VLAN you need on your infrastructure.
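If you prefer the command line, the same subnet can be created from a CVM with acli; a minimal sketch with illustrative names and addresses (adapt the VLAN ID, gateway/prefix, and DHCP pool to your own plan):

# create a managed network on VLAN 84 with Nutanix IPAM enabled
acli net.create vlan84 vlan=84 ip_config=192.168.84.1/24
# define the pool of addresses IPAM may hand out
acli net.add_dhcp_pool vlan84 start=192.168.84.100 end=192.168.84.199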

Creating a network using Prism Central

In Prism Central, network management is carried out in “Network & Security > Subnets”:

To add a new network, click “Create Subnet”:

The form is similar to the one in Prism Element; complete it, enabling the “IP Address Management” option (or not) depending on whether you wish to leave the management of your addressing to Nutanix.

Official Nutanix documentation

Link to official documentation: https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2071-AHV-Networking:bp-ahv-network-management.html
