For a customer project, we had to create more than 70 subnets on 2 clusters, and performing the operation through the graphical interface would have been far too long and tedious. Here is how to perform the operation from the CLI in just a few minutes.
Creating unmanaged subnets
To make my task much easier, I created an Excel file in .csv format in which I put 3 columns:
The idea is to fill the VLAN_NAME and VLAN_ID columns with the names of the VLANs and their associated IDs, provided by the customer in the Predelivery Questionnaire:
Which would give:
Save the file and then open it with Notepad++:
Replace the “;” character with a space:
Then replace “vlan= ” (with its trailing space) with “vlan=” so that the VLAN ID is attached directly to the parameter:
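If you prefer to skip the Notepad++ search-and-replace steps entirely, the same commands can be generated with a small shell loop. This is only a sketch: it assumes the file has been saved as vlans.csv with one VLAN_NAME;VLAN_ID pair per line.

while IFS=';' read -r name id; do echo "acli net.create ${name} vlan=${id}"; done < vlans.csv

The printed lines can then be pasted into the CVM exactly as described below.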
Then connect to a CVM in your cluster and copy and paste all the lines at once into the command line interface:
Nutanix Controller VM (CVM) is a virtual storage appliance.
Alteration of the CVM (unless advised by Nutanix Technical Support or
Support Portal Documentation) is unsupported and may result in loss
of User VMs or other data residing on the cluster.
Unsupported alterations may include (but are not limited to):
- Configuration changes / removal of files.
- Installation of third-party software/scripts not approved by Nutanix.
- Installation or upgrade of software packages from non-Nutanix
sources (using yum, rpm, or similar).
** SSH to CVM via 'nutanix' user will be restricted in coming releases. **
** Please consider using the 'admin' user for basic workflows. **
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_10 vlan=10
acli net.create VLAN_26 vlan=26
acli net.create VLAN_27 vlan=27
acli net.create VLAN_28 vlan=28
acli net.create VLAN_29 vlan=29
acli net.create VLAN_30 vlan=30
acli net.create VLAN_31 vlan=31
acli net.create VLAN_32 vlan=32
acli net.create VLAN_33 vlan=33
acli net.create VLAN_34 vlan=34
acli net.create VLAN_35 vlan=35
acli net.create VLAN_36 vlan=36
acli net.create VLAN_37 vlan=37
acli net.create VLAN_38 vlan=38
acli net.create VLAN_39 vlan=39
acli net.create VLAN_40 vlan=40
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_11 vlan=11
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_12 vlan=12
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_13 vlan=13
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_14 vlan=14
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_15 vlan=15
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_16 vlan=16
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_17 vlan=17
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_18 vlan=18
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_19 vlan=19
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_20 vlan=20
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_21 vlan=21
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_22 vlan=22
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_23 vlan=23
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_24 vlan=24
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_25 vlan=25
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_26 vlan=26
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_27 vlan=27
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_28 vlan=28
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_29 vlan=29
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_30 vlan=30
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_31 vlan=31
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_32 vlan=32
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_33 vlan=33
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_34 vlan=34
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_35 vlan=35
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_36 vlan=36
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_37 vlan=37
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_38 vlan=38
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_39 vlan=39
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$ acli net.create VLAN_40 vlan=40
nutanix@NTNX-e854cc31-A-CVM:192.168.2.200:~$
A quick check on the Prism Element interface confirms that your VLANs have been created correctly:
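If you prefer to stay in the CLI, the same check can be done directly from the CVM by listing the networks defined on the cluster:

acli net.list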
Then, of course, you must redo the cleanup of your file in Notepad++, replacing the “;” with a space or removing it, before pasting everything into your SSH terminal.
In our example, you will create the following network:
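The hardware inventory that follows is produced by NCC's show_hardware_info plugin; judging from the plugin reference printed at the end of the output, it is launched from a CVM with:

ncc hardware_info show_hardware_info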
The command first returns general information about the cluster (name, ID, UUID, Version, IPs, etc.):
####################################################
# TIMESTAMP : Fri Jul 26 09:07:43 2024 (UTC +0000) #
####################################################
Cluster Name: MiddleEarth
Cluster Id: 585086141872525258
Cluster UUID: 00061c09-6abd-7835-081e-a4bf0150cfca
Cluster Version: 6.5.2
NCC Version: 4.6.6.3-5e8b6399
CVM ID(Svmid) : 2
CVM external IP : 192.168.2.200
Hypervisor IP : 192.168.2.199
Hypervisor version : Nutanix 20220304.342
IPMI IP : 192.168.2.139
Node serial : BQWL80251503
Model : CommunityEdition
Node Position : A
Block S/N : e854cc31
The command then returns detailed hardware information for the node: the hypervisor, the manufacturer, all the node details (position, host name, model, part number, serial number, etc.), as well as the BIOS and management interface (BMC) information:
Running /hardware_info/show_hardware_info [ INFO ]
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Detailed information for show_hardware_info:
Node 192.168.2.200:
INFO:
--------------------------------------------------------------------------------
Updated: Thu, 25 Jul 2024 12:54:59 UTC
Host UUID: 22dd9131-373f-482a-818f-c60b06988a7d
CVM IP: 192.168.2.200
Hypervisor Version:el7.nutanix.20220304.342
NOS Version:el7.3-release-fraser-6.5.2-stable-f2ce4db7d67f495ebfd6208bef9ab0afec9c74af
NCC Version:4.6.6.3-5e8b6399
--------------------------------------------------------------------------------
Nutanix Product Info
+--------------------------------------------------------------------------------------------------+
| Manufacturer | Intel Corporation |
| Product name | CommunityEdition |
| Product part number | CommunityEdition |
| Configured Serial Number | e854cc31 |
+--------------------------------------------------------------------------------------------------+
Chassis
+--------------------------------------------------------------------------------------------------+
| Bootup state | Safe |
| Manufacturer | ............................... |
| Serial number | .................. |
| Thermal state | Safe |
| Version | .................. |
+--------------------------------------------------------------------------------------------------+
Node Module
+--------------------------------------------------------------------------------------------------+
| Node Position | A |
| Bootup state | Safe |
| Host name | NTNX-e854cc31-A |
| Hypervisor type | KVM |
| Manufacturer | Intel Corporation |
| Product name | S2600WTTR |
| Product part number | G92187-372 |
| Serial number | BQWL80251503 |
| Thermal state | Safe |
| Version | G92187-372 |
+--------------------------------------------------------------------------------------------------+
BIOS Information
+--------------------------------------------------------------------------------------------------+
| Release date | 09/02/2020 |
| Revision | 0.0 |
| Rom size | 16384 KB |
| Vendor | Intel Corporation |
| Version | SE5C610.86B.01.01.1029.090220201031 |
+--------------------------------------------------------------------------------------------------+
BMC
+--------------------------------------------------------------------------------------------------+
| Device id | 33 |
| Device available | True |
| Device revision | 1 |
| Firmware revision | 1.61.1 |
| Ipmi version | 2.0 |
| Manufacturer | Intel Corporation |
| Manufacturer id | 343 |
| Product id | 111 (0x006f) |
+--------------------------------------------------------------------------------------------------+
The storage controller is then listed:
Storage Controller
+--------------------------------------------------------------------------------------------------+
| Location | ioc0 |
| Driver name | virtio-pci |
| Manufacturer | Red Hat, Inc. |
| Status | running |
+--------------------------------------------------------------------------------------------------+
| Location | ioc0 |
| Driver name | virtio-pci |
| Manufacturer | Red Hat, Inc. |
| Status | running |
+--------------------------------------------------------------------------------------------------+
Next comes general information about the installed memory:
Physical Memory Array
+--------------------------------------------------------------------------------------------------+
| Bank | NODE 1 |
| Configured slots | 5 |
| Max size | 384 GB |
| Num slots | 12 |
| Total installed size | 160 GB |
+--------------------------------------------------------------------------------------------------+
| Bank | NODE 2 |
| Configured slots | 6 |
| Max size | 384 GB |
| Num slots | 12 |
| Total installed size | 192 GB |
+--------------------------------------------------------------------------------------------------+
Then the information concerning the power supplies is displayed:
System Power Supply
+--------------------------------------------------------------------------------------------------+
| Location | Pwr Supply 1 FRU (ID 2) |
| Manufacturer | None |
| Max power capacity | 0.0 W |
| Product part number | G84027-009 |
| Revision | None |
| Serial number | EXWD80400190 |
| Status | OK |
+--------------------------------------------------------------------------------------------------+
| Location | Pwr Supply 2 FRU (ID 3) |
| Manufacturer | None |
| Max power capacity | 0.0 W |
| Product part number | G84027-009 |
| Revision | None |
| Serial number | EXWD82000830 |
| Status | OK |
+--------------------------------------------------------------------------------------------------+
| Location | To Be Filled By O.E.M. |
| Manufacturer | To Be Filled By O.E.M. |
| Max power capacity | 0.0 W |
| Product part number | To Be Filled By O.E.M. |
| Revision | To Be Filled By O.E.M. |
| Serial number | To Be Filled By O.E.M. |
| Status | Unknown |
+--------------------------------------------------------------------------------------------------+
Then those concerning the processor(s):
Processor Information
+--------------------------------------------------------------------------------------------------+
| Socket designation | Socket 1 |
| Core count | 10 |
| Core enabled | 10 |
| Current speed | 2400 MHz |
| External clock | 100 MHz |
| Id | 0xbfebfbff000406f1L |
| L1 cache size | 640 KB |
| L2 cache size | 2560 KB |
| L3 cache size | 25600 KB |
| Max speed | 4000 MHz |
| Status | POPULATED |
| Thread count | 20 |
| Type | Central |
| Version | Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz |
| Voltage | 1.8 V |
+--------------------------------------------------------------------------------------------------+
| Socket designation | Socket 2 |
| Core count | 10 |
| Core enabled | 10 |
| Current speed | 2400 MHz |
| External clock | 100 MHz |
| Id | 0xbfebfbff000406f1L |
| L1 cache size | 640 KB |
| L2 cache size | 2560 KB |
| L3 cache size | 25600 KB |
| Max speed | 4000 MHz |
| Status | POPULATED |
| Thread count | 20 |
| Type | Central |
| Version | Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz |
| Voltage | 1.8 V |
+--------------------------------------------------------------------------------------------------+
Next comes the detailed information for each installed RAM unit:
Memory Module
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_A1 |
| Bank connection | NODE 1 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA4F9 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_A2 |
| Bank connection | NODE 1 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401E96DC |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_B1 |
| Bank connection | NODE 1 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401E97B9 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_B2 |
| Bank connection | NODE 1 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA624 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_C1 |
| Bank connection | NODE 1 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA4AB |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_E1 |
| Bank connection | NODE 2 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA625 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_E2 |
| Bank connection | NODE 2 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401E97AC |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_F1 |
| Bank connection | NODE 2 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA591 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_F2 |
| Bank connection | NODE 2 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA5D8 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_G1 |
| Bank connection | NODE 2 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA1E7 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
| Location | DIMM_H1 |
| Bank connection | NODE 2 |
| Capable speed | 2667.000000 MHz |
| Current speed | 2134 MHz |
| Installed size | 32768 MB |
| Manufacturer | Samsung |
| Product part number | M393A4K40CB2-CTD |
| Serial number | 401EA4A9 |
| Type | DDR4 |
+--------------------------------------------------------------------------------------------------+
Then the information concerning the installed network cards:
NIC
+--------------------------------------------------------------------------------------------------+
| Location | eth0 |
| Device name | eth0 |
| Driver name | ixgbe |
| Firmware version | 0x800004f8 |
| Mac address | a4:bf:01:50:cf:ca |
| Manufacturer | Intel Corporation(8086) |
| Product name | Ethernet Controller 10-Gigabit X540-AT2(1528) |
| Sub device | Subsystem device 35c5(35c5) |
| Sub vendor | Intel Corporation(8086) |
| Driver Version | 5.16.5 |
+--------------------------------------------------------------------------------------------------+
| Location | eth1 |
| Device name | eth1 |
| Driver name | ixgbe |
| Firmware version | 0x800004f8 |
| Mac address | a4:bf:01:50:cf:cb |
| Manufacturer | Intel Corporation(8086) |
| Product name | Ethernet Controller 10-Gigabit X540-AT2(1528) |
| Sub device | Subsystem device 35c5(35c5) |
| Sub vendor | Intel Corporation(8086) |
| Driver Version | 5.16.5 |
+--------------------------------------------------------------------------------------------------+
All installed storage disks, HDD and SSD, are then displayed:
SSD
+--------------------------------------------------------------------------------------------------+
| Capacity | 120.0 GB |
| Firmware version | 2.22 |
| Hypervisor disk | True |
| Power on hours | 0 |
| Product part number | OCZ-AGILITY3 |
| Secured boot disk | False |
| Serial number | OCZ-6F95UNJ029ACBIFD |
| Smartctl status | PASSED |
+--------------------------------------------------------------------------------------------------+
| Capacity | 800.0 GB |
| Firmware version | C925 |
| Hypervisor disk | True |
| Manufacturer | WDC |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Secured boot disk | False |
| Serial number | V6V2TU8A |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Location | 5 |
| Capacity | 500.0 GB |
| Firmware version | RVT04B6Q |
| Hypervisor disk | True |
| Power on hours | 3921 |
| Product part number | Samsung SSD 860 EVO 500GB |
| Secured boot disk | False |
| Serial number | S4CNNX0N803693N |
| Smartctl status | PASSED |
+--------------------------------------------------------------------------------------------------+
| Location | 6 |
| Capacity | 800.0 GB |
| Firmware version | C925 |
| Hypervisor disk | True |
| Manufacturer | WDC |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Secured boot disk | False |
| Serial number | V6V2PDUA |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Location | 8 |
| Capacity | 800.0 GB |
| Firmware version | C925 |
| Hypervisor disk | True |
| Manufacturer | WDC |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Secured boot disk | False |
| Serial number | V6V2U2TA |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Location | 9 |
| Capacity | 800.0 GB |
| Firmware version | C925 |
| Hypervisor disk | True |
| Manufacturer | WDC |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Secured boot disk | False |
| Serial number | V6V2SYZA |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Location | 10 |
| Capacity | 500.0 GB |
| Firmware version | RVT04B6Q |
| Hypervisor disk | True |
| Power on hours | 3926 |
| Product part number | Samsung SSD 860 EVO 500GB |
| Secured boot disk | False |
| Serial number | S4CNNX0N803688F |
| Smartctl status | PASSED |
+--------------------------------------------------------------------------------------------------+
| Location | 12 |
| Capacity | 800.0 GB |
| Firmware version | C925 |
| Hypervisor disk | True |
| Manufacturer | WDC |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Secured boot disk | False |
| Serial number | V6V2L6LA |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
HDD
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | C004 |
| Hypervisor disk | True |
| Manufacturer | SEAGATE |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 10500 rpm |
| Secured boot disk | False |
| Serial number | WBN3AW060000C036516V |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | C003 |
| Hypervisor disk | True |
| Manufacturer | SEAGATE |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 10500 rpm |
| Secured boot disk | False |
| Serial number | WBN1FVRH0000K9277TAC |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | C003 |
| Hypervisor disk | True |
| Manufacturer | SEAGATE |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 10500 rpm |
| Secured boot disk | False |
| Serial number | WBN1B7QZ0000K923B2PF |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Manufacturer | Seagate |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 500.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Power on hours | 0 |
| Product part number | Samsung SSD 860 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Manufacturer | Seagate |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Manufacturer | Seagate |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Manufacturer | Seagate |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Manufacturer | Seagate |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 500.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Power on hours | 0 |
| Product part number | Samsung SSD 860 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | C003 |
| Hypervisor disk | True |
| Manufacturer | SEAGATE |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 10500 rpm |
| Secured boot disk | False |
| Serial number | WBN1EH950000K93000FK |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Capacity | 800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Power on hours | 0 |
| Product part number | WUSTR6480ASS204 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | None |
| Hypervisor disk | False |
| Manufacturer | Seagate |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 0 rpm |
| Secured boot disk | False |
| Serial number | None |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | C001 |
| Hypervisor disk | True |
| Manufacturer | SEAGATE |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 10500 rpm |
| Secured boot disk | False |
| Serial number | WBN00K6G0000E802B5B0 |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
| Capacity | 1800.0 GB |
| Firmware version | C003 |
| Hypervisor disk | True |
| Manufacturer | SEAGATE |
| Power on hours | 0 |
| Product part number | ST1800MM0129 |
| Rotation rate | 10500 rpm |
| Secured boot disk | False |
| Serial number | WBN0DZLT0000E837B2FX |
| Smartctl status | OK |
+--------------------------------------------------------------------------------------------------+
And finally, information about any installed GPU(s), followed by details about where the generated logs are stored:
GPU
+--------------------------------------------------------------------------------------------------+
| Class | Display controller:VGA compatible controller:VGA controller |
| Device | MGA G200e [Pilot] ServerEngines (SEP1) |
| Revision | 5 |
| Slot | 0000:08:00.0 |
| Sub device | Subsystem device 0103 |
| Sub vendor | Intel Corporation |
| Vendor | Matrox Electronics Systems Ltd. |
+--------------------------------------------------------------------------------------------------+
INFO: Hardware Info log file can be found at : /home/nutanix/data/hardware_logs
INFO: NuCollector output written to: /home/nutanix/data/hardware_logs/192.168.2.200_output
INFO: The command to verify the output is: cd /home/nutanix/data/hardware_logs && sha224sum -c output.checksum
Refer to KB 7084 (http://portal.nutanix.com/kb/7084) for details on show_hardware_info or Recheck with: ncc hardware_info show_hardware_info
+-----------------------+
| State | Count |
+-----------------------+
| Info | 1 |
| Total Plugins | 1 |
+-----------------------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log
This command is really useful for quickly exporting an exhaustive list of your cluster's entire hardware configuration, whether to check compatibility or to pass the information on to support when needed.
When deploying a new cluster, the default storage container name is automatically generated and is not particularly aesthetically pleasing.
To rename it, there is only one solution: go through the Command Line Interface.
To carry out this operation, connect to a CVM in your cluster and list all the existing containers on the cluster:
nutanix@CVM: ncli container list
All the containers and their associated details will then be displayed. Find the container you want to rename in the list and type the following command:
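The command itself is not reproduced here; based on ncli's standard syntax for editing a storage container, it should look like the following sketch (verify against your AOS version):

nutanix@CVM: ncli container edit name=CURRENT_NAME new-name=NEW_NAME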
Replace “CURRENT_NAME” with the name automatically generated by the system when the container was created, and NEW_NAME with the name you wish to assign to it, using no spaces or special characters other than “-” and “_”.
Then check that your container has been correctly renamed with the command:
nutanix@CVM: ncli container list
On Prism Element, you will also see the new name you assigned to your storage container:
Nutanix has just announced the availability of AOS version 6.8 (eSTS), and with this new version comes a host of new features, including… Prism Central version pc2024.1!
I am not going to detail all the features added or updated in this new version of AOS; I will let you consult the Release Notes, which cover them in detail.
I decided to focus on a feature arriving with the new version of Prism Central that many customers with modest infrastructures have been waiting for: Prism Central X-Small.
Prism Central X-Small
Among all the new features made available by the new pc2024.1 version of Prism Central, one feature addition caught my attention: Prism Central X-Small.
Until now, Prism Central could only be deployed using 3 templates:

Template | VM configuration | Limitations
Small | 6 vCPU / 28 GB RAM / 500 GB storage | 2500 VMs / 10 Clusters / 200 Nodes
Large | 10 vCPU / 46 GB RAM / 2500 GB storage | 12500 VMs / 25 Clusters / 500 Nodes
X-Large | 14 vCPU / 62 GB RAM / 2500 GB storage | 12500 VMs / 25 Clusters / 500 Nodes
While the X-Large template offers an imposing Prism Central configuration, a minimum-size deployment was missing until now. Prism Central X-Small fills this gap:

VM configuration | Limitations
4 vCPU / 18 GB RAM / 100 GB storage | 500 VMs / 5 Clusters / 50 Nodes
As you can see, this Prism Central template has a lightweight hardware configuration, but that is not its only point of differentiation from the other deployment templates.
Indeed, due to its configuration, this Prism Central deployment does not let you use all the features usually offered. Here are the points of differentiation:
Supported | Unsupported
Multi-cluster management (up to 5) | Scale-out
VM management | Flow Virtual Networking
Host management | Flow Network Security
Infrastructure management, monitoring and health | Self-Service
Enterprise authentication and RBAC | Intelligent Operations
REST APIs | Nutanix Kubernetes Engine
Comprehensive search | Objects
Life Cycle Manager (LCM) | Files
Pulse Insights | Foundation Central
Prism Central Backup and Restore | Foundation
Categories | Quotas
Projects | Multi-site DR
Microservices infrastructure | Marketplace
Identity and access management | Reporting and Dashboards
Security dashboard | NearSync / Synchronous replication
If you want to benefit from a feature not supported by Prism Central X-Small, you will need to consider deploying a Small / Large / X-Large template.
Use cases
The main use case that immediately comes to mind is the following:
a simple infrastructure
1 to 3 modest-sized clusters
a hundred virtual machines
no need for additional services (Flow, Self-Service, NKE, etc.)
This is the type of installation we encounter in many SMEs or local authorities, for example, so the arrival of Prism Central X-Small is timely.
It's on Nutanix's roadmap! Password authentication is in the vendor's sights: Nutanix intends to put an end to it and is warning its users via an informational alert:
The objective is to gradually move customers to SSH key authentication before making it mandatory in a future version of the hypervisor.
Creating SSH keys
Supported SSH encryption algorithms are:
AES128-CTR
AES192-CTR
AES256-CTR
If you already have such a key pair, you can proceed directly to cluster integration.
To create an SSH key pair, we will need a tool like PuTTYgen.
Click “Generate” and move the mouse cursor over the window. Then enter a passphrase and save the public key and the private key.
WARNING: be sure to use a strong, non-predictable passphrase.
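If you prefer the command line to PuTTYgen, an equivalent key pair can be generated with OpenSSH; this is only a sketch, and the key type and file name are just examples:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/nutanix_cluster

The public key to paste into Prism is then found in ~/.ssh/nutanix_cluster.pub.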
We must now integrate the public key into the cluster.
Integration of the public key on the cluster
To integrate your public key into your cluster, connect to the Prism interface and go to “Settings > Cluster Lockdown”
Click on “New Public Key”, give it a name, paste the public key content and validate.
At this stage, classic password authentication and SSH key authentication are both active and functional; it is time to test.
Testing and activation of the cluster lockdown feature
First, we will test authentication via the SSH key. Don't panic: whatever happens, even if the SSH connection via keys does not work after activating cluster lockdown, you can always revert via the Prism interface.
Configure your favorite SSH client, add your private key, then open a connection to your Nutanix cluster. First, enter the login you want to use; here I chose “nutanix”:
Then enter the passphrase that you configured when creating your SSH key. Validate; you are now connected to your cluster via your SSH key, without having to use the password for the “nutanix” account.
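For reference, the equivalent connection with a command-line OpenSSH client would look like this (the key path comes from the earlier sketch and the address is only an example):

ssh -i ~/.ssh/nutanix_cluster nutanix@192.168.2.200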
Now let’s deactivate password authentication by returning to the “Settings > Cluster Lockdown” menu. Uncheck the “Enable Remote Login with Password” box:
Try logging in again using the “nutanix” account and the usual password and notice that you can no longer log in with this method:
Try with your private key and the associated passphrase:
Your cluster is now reachable over SSH only via SSH keys. If several administrators work on the cluster, don't forget to repeat the operation for each of them.
Important point: remember to keep your private keys in a safe place and use a strong passphrase.
Updating a hyperconverged cluster can sometimes be time-consuming and present certain risks of production interruption if the process is poorly managed.
Nutanix has optimized the process of updating its clusters so that it is as simple and automated as possible: the famous “1-click upgrade”.
Life Cycle Manager on Prism Element
LCM has slight differences between Prism Element and Prism Central. This is what the interface looks like on Prism Element:
LCM on Prism Element allows you to manage updates for some of the components in your cluster:
AHV
AOS
Cluster Maintenance Utilities
File Flow
Foundation
Licensing
NCC
These are the components that you can update through Prism Element.
Life Cycle Manager on Prism Central
LCM on Prism Central allows you to manage updates for the remaining components, which are mainly software:
Life Cycle Manager: inventory
The LCM inventory, whether on Prism Element or Prism Central, lists all the software and hardware versions installed on your cluster, as well as any available software updates or firmware:
The inventory process lasts around ten minutes:
It then allows access to all installed and available versions:
LCM: the recommended update order
With the multitude of software components and the hardware firmware, it is not always easy to know in what order to update the different modules.
The first step of updating your cluster takes place on Prism Central:
The actions to be carried out in order:
LCM inventory
NCC Check and Upgrade
Prism Central Upgrade
You must then switch to Prism Element for the second step:
The actions to be carried out in order:
LCM inventory
NCC Check and Upgrade
Foundation Upgrade
AOS Upgrade
Firmware Upgrade
AHV Upgrade
It is recommended to do another LCM inventory once the AHV update is complete to verify that there are no hardware updates remaining to be applied.
Finally comes the last step, again on Prism Central:
The actions to be carried out in order:
LCM inventory
All software updates (Nutanix Files, Self-Service (Calm), NKE (Karbon), NDB, Flow…)
To carry out the desired updates, simply check them then click on “View upgrade plan”:
Once the update plan has been developed by LCM, you must click one last time to start the process:
Each step of the process takes time because the cluster runs multiple checks at each stage to verify the conformity of the installed updates:
It is important to point out that the cluster update process, with the exception of certain software components, does not cause a service interruption as long as fault-tolerance best practices are respected.
Following the takeover of VMware by the giant Broadcom and the price surge that followed, many customers are looking for alternative solutions. Unfortunately, it is not always easy to find your way around.
VMware vs Nutanix comparison
The most complicated thing when we are used to a technical solution is to make a radical change.
Will we find all the features we use? What do the product names correspond to? What are the prospects for future development on each side?
I took the time to compare the different VMware and Nutanix components:
I hope this will help you see things more clearly and shed light on a possible future choice.
Nutanix has a tool for automating the deployment and life cycle of applications: Nutanix Self-Service (formerly Calm).
I’ll show you how to deploy Nutanix Self-Service on your Nutanix cluster.
Nutanix Self-Service Overview
Self-Service (formerly Calm) streamlines application management, deployment, and scalability across hybrid clouds through self-service, automation, and centralized role-based governance.
Deploy Nutanix Self-Service
To deploy Nutanix Self-Service, you must have a functional Prism Central on your cluster. Indeed, almost all of Nutanix’s complementary building blocks are managed by Prism Central, so don’t look for it on Prism Element.
In the side menu, look for the “Services” section and click on “Calm” (the old name for Nutanix Self-Service):
Deployment is very simple: just click on “Enable App. Orchestration”:
The first box must be checked to be able to deploy Self-Service; the second is optional but highly recommended because it gives access to the online catalog, which offers a plethora of ready-to-use blueprints.
Once you have made your choice, click on “Save” and wait around ten minutes while Self-Service deploys:
Once deployment is complete, a new Volume Group will be available on your Nutanix cluster:
That’s it, Nutanix Self-Service is deployed and ready to use:
It can happen that the admin account of a Nutanix cluster gets locked after too many authentication failures, leaving you unable to log in.
Most of the time, this happens when the admin account password is changed on the cluster while the old password is still used by other systems, such as Nutanix Move or HYCU.
Here's how to reset the password for the “admin” account of a cluster.
Remove the “admin” account from routines
To begin with, if you do not want the problem to recur, you must remove the cluster's “admin” account from the systems that can cause it: backup software, a Nutanix component (Move, for example), or possibly a monitoring tool.
It is important not to use the “admin” account of a cluster to connect a tool to the cluster.
Reset “admin” password
Using the “root” account, connect via SSH to a CVM of the Nutanix cluster on which the account is locked.
Then enter the following command:
passwd admin
Enter the new password twice and the password is reset.
Unlock the “admin” account
To unlock the “admin” account, enter the following command:
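The unlock command itself is not shown here. Depending on the AOS version, the lockout counter is typically cleared with one of the two standard Linux PAM tools; treat this as an assumption and check the Nutanix KB for your version:

faillock --user admin --reset

or, on older releases:

pam_tally2 --user=admin --reset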
As part of setting up labs on a Nutanix infrastructure, you may be required to deploy a hypervisor (ESXi, Proxmox, Hyper-V, etc.) on top of the AHV hypervisor (Inception!).
You will then be confronted with this type of error message when installing ESXi for example (the form differs for other hypervisors, but the substance remains the same):
The processor will not be detected as having virtualization capabilities and you will therefore not be able to deploy a hypervisor… But it is possible to bypass this restriction.
Nutanix AHV: bypass processor restriction
I assume that the virtual machine on which you want to deploy a hypervisor is already created.
To bypass the processor restriction, we must connect to one of the CVMs in our cluster and modify our virtual machine with the acli vm.update command and the “cpu_passthrough” parameter:
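The command is not reproduced here; based on the acli syntax mentioned above, it should look like this sketch (VM_NAME is a placeholder for the name of your virtual machine):

acli vm.update VM_NAME cpu_passthrough=true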
Please note, this command will only work if your virtual machine is turned off.
Once the command is applied, you can restart your installation… except for ESXi, which still requires one more trick!
Nutanix AHV: change the NIC type to install ESXi
To install a nested ESXi on Nutanix AHV and have it be fully functional, you also need to modify the network adapters so that ESXi sees them as e1000 adapters.
To do this, with the virtual machine still off, connect to one of the CVMs, and type the following command:
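The command is not shown here; given the placeholders described just below, it should look like this sketch, which adds a NIC that ESXi will recognize as an e1000 adapter:

acli vm.nic_create VM_NAME network=NETWORK_NAME model=e1000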
Be sure to replace VM_NAME with the name of the virtual machine concerned, and NETWORK_NAME with one of the networks previously created on your Nutanix cluster. You will get the following message: