While working on one of our clients’ clusters, I ran into a major problem: it was impossible to delete a storage container that had already been emptied of its contents…

A Little Background

The need to delete a storage container arose when we decided to migrate the virtual machines to other containers. Why did we have to do this? One of the prerequisites for an On-Demand Cross-Cluster Live Migration is that the storage containers have identical names on the source and destination clusters.

To do this, we migrated the vdisks using the CLI, a long and tedious process that I may detail in a future article.
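
For context, recent AOS versions on AHV can live-migrate a VM’s vdisks to another storage container directly from acli. Here is a minimal sketch, assuming a hypothetical VM named MyVM and a target container named target-container; this is one possible approach, not necessarily the exact procedure we followed, so check the documentation for your AOS version:

nutanix@CVM:~$ acli vm.update_container MyVM container=target-container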

Once all the vdisks were migrated, we still couldn’t delete the container.

Diagnosing the Problem

To begin, I listed all the containers on my cluster to identify the one I was concerned about:

nutanix@CVM:~$ ncli container list
Id : 00060f28-23f5-dbbb-7c17-7cc255a7f766::7
Uuid : ecc3d02d-57c0-4253-a5a4-a335aa3bdcd4
Name : default-container-15796345896744
Storage Pool Id : 00060f28-23f5-dbbb-7c17-7cc255a7f766::6
Storage Pool Uuid : 4fa56350-ece0-4932-8ec9-70669e241470
Free Space (Logical) : 31.16 TiB (34,265,616,109,568 bytes)
Used Space (Logical) : 90.79 GiB (97,483,378,688 bytes)
Allowed Max Capacity : 31.25 TiB (34,363,099,488,256 bytes)
Used by other Containers : 574.87 GiB (617,261,654,016 bytes)
Explicit Reservation : 0 bytes
Thick Provisioned : 0 bytes
Replication Factor : 2
Oplog Replication Factor : 2
NFS Whitelist Inherited : false
Container NFS Whitelist : 
VStore Name(s) : default-container-15796345896744
Random I/O Pri Order : SSD-MEM-NVMe, SSD-PCIe, SSD-SATA, DAS-SATA
Sequential I/O Pri Order : SSD-PCIe, SSD-SATA, SSD-MEM-NVMe, DAS-SATA
Compression : on
Compression Delay : 0 mins
Fingerprint On Write : off
On-Disk Dedup : off
Erasure Code : off
Software Encryption : off

Once my container was identified (default-container-15796345896744), I tried to delete it from the CLI with the following command:

nutanix@CVM:~$ ncli ctr rm name=default-container-15796345896744
Error: Storage container default-container-15796345896744 contains VDisk(s) not marked for removal.

The message is quite clear: there are still VDisks in my container, which is blocking the deletion entirely. As a reminder, I had migrated all the VDisks from my virtual machines, so there shouldn’t have been anything left in it…

I checked which vdisks remained in the container with the following command (replace the container_id with the one corresponding to your container; it is the number after the “::” in the container’s Id, 7 in my case):

nutanix@CVM:~$ vdisk_config_printer -container_id=7 -skip_to_remove_vdisks

vdisk_id: 1247912
vdisk_name: "$NTNX$-$CURATOR$-$CHAINVDISK$-$0630848d-af0a-440b-9bf2-14ed432105c2$7$"
vdisk_size: 0
container_id: 7
creation_time_usecs: 1724796346611475
mutability_state: kImmutable
snapshot_chain_id: 8053639
vdisk_creation_time_usecs: 1724796346611475
last_updated_pithos_version: kChainIdKey
vdisk_uuid: "faa173c5-be0a-4502-9977-02d9c125b02e"
chain_id: "0630848d-af0a-440b-9bf2-14ed432105c2"
last_modification_time_usecs: 1724800080940533

vdisk_id: 22562934
vdisk_name: "NFS:2:0:2513"
vdisk_size: 4398046511104
container_id: 7
creation_time_usecs: 1741004981139322
vdisk_creator_loc: 4
vdisk_creator_loc: 789428
vdisk_creator_loc: 9451518309
nfs_file_name: "nutanix_guest_tools.iso"
may_be_parent: true
never_hosted: false
snapshot_draining: false
snapshot_chain_id: 22562935
vdisk_creation_time_usecs: 1741004981139322
oplog_type: kVDiskOplog
vdisk_snapshot_time_usecs: 1741004983089382
last_updated_pithos_version: kChainIdKey
always_write_emap_extents: true
vdisk_uuid: "5b601b32-eaf5-4612-9bdf-f8ce3a24c96b"
chain_id: "ebd35e10-4917-476a-9cb7-8447478af99c"
last_modification_time_usecs: 1741004985848279

vdisk_id: 28693745
vdisk_name: "NFS:2:0:2703"
parent_vdisk_id: 9701970
vdisk_size: 10737418240
container_id: 7
creation_time_usecs: 1724235742966570
mutability_state: kImmutableSnapshot
closest_named_ancestor: "NFS:4611686018456079974"
avoid_vblock_copy_when_leaf: true
vdisk_creator_loc: 4
vdisk_creator_loc: 27341647
vdisk_creator_loc: 454849015
nfs_file_name: "a8893eb4-a9b8-4710-80c8-e69fb0093bb0"
may_be_parent: true
parent_nfs_file_name_hint: "a8893eb4-a9b8-4710-80c8-e69fb0093bb0"
scsi_name_identifier: "naa.6506b8dc3a35444155c2731c8d7a8b94"
never_hosted: false
snapshot_draining: false
parent_draining: false
clone_parent_draining: false
snapshot_chain_id: 8053639
has_complete_data: true
clone_source_vdisk_id: 7790073
vdisk_creation_time_usecs: 1745309380225031
originating_vdisk_snapshot_time_usecs: 1724235742966570
oplog_type: kVDiskOplog
vdisk_snapshot_time_usecs: 1745310092381733
last_updated_pithos_version: kChainIdKey
always_write_emap_extents: true
vdisk_uuid: "c1b33a68-dd3e-4afc-917e-67f8c257f40f"
chain_id: "0630848d-af0a-440b-9bf2-14ed432105c2"
parent_chain_id: "af907cc8-795b-4382-907c-4b318529bfcb"
vdisk_snapshot_uuid: "bdbf64db-51ad-4310-86cc-ce68107c20f0"
last_modification_time_usecs: 1745310094964928

vdisk_id: 7783219
vdisk_name: "NFS:2:0:472"
vdisk_size: 4398046511104
container_id: 7
creation_time_usecs: 1723639427516394
vdisk_creator_loc: 3
vdisk_creator_loc: 1247802
vdisk_creator_loc: 675287599
nfs_file_name: "pc.2022.6.0.10-pc-boot.qcow2"
never_hosted: false
snapshot_chain_id: 7783220
vdisk_creation_time_usecs: 1723639427516394
oplog_type: kVDiskOplog
last_updated_pithos_version: kChainIdKey
always_write_emap_extents: true
vdisk_uuid: "db1edf28-7241-451a-b74a-e2f6b68ef394"
chain_id: "874ae2f3-1f2e-4d90-87f3-a755b14c374c"
last_modification_time_usecs: 1723639427524005

vdisk_id: 7783222
vdisk_name: "NFS:2:0:473"
vdisk_size: 4398046511104
container_id: 7
creation_time_usecs: 1723639427831077
vdisk_creator_loc: 3
vdisk_creator_loc: 1247802
vdisk_creator_loc: 675288112
nfs_file_name: "pc.2022.6.0.10-pc-home.qcow2"
never_hosted: false
snapshot_chain_id: 7783223
vdisk_creation_time_usecs: 1723639427831077
oplog_type: kVDiskOplog
last_updated_pithos_version: kChainIdKey
always_write_emap_extents: true
vdisk_uuid: "1d4c82bc-01b2-44da-af1c-fbedda5d3dc2"
chain_id: "83600772-9e50-4050-9957-37da1dac220f"
last_modification_time_usecs: 1723639427839142

vdisk_id: 7783355
vdisk_name: "NFS:2:0:474"
vdisk_size: 4398046511104
container_id: 7
creation_time_usecs: 1723639463452092
vdisk_creator_loc: 3
vdisk_creator_loc: 1247802
vdisk_creator_loc: 675381384
nfs_file_name: "pc.2022.6.0.10-pc-boot.img"
may_be_parent: true
never_hosted: false
snapshot_chain_id: 7783356
vdisk_creation_time_usecs: 1723639463452092
oplog_type: kVDiskOplog
vdisk_snapshot_time_usecs: 1723639536592896
last_updated_pithos_version: kChainIdKey
always_write_emap_extents: true
vdisk_uuid: "cbec4f1c-06e5-400e-bb4b-c71df4cd7e38"
chain_id: "d42ab524-a749-4a51-a1a3-3bf225c0fc12"
last_modification_time_usecs: 1723639536600475

vdisk_id: 7783363
vdisk_name: "NFS:2:0:475"
vdisk_size: 4398046511104
container_id: 7
creation_time_usecs: 1723639463571411
vdisk_creator_loc: 3
vdisk_creator_loc: 1247802
vdisk_creator_loc: 675381960
nfs_file_name: "pc.2022.6.0.10-pc-home.img"
may_be_parent: true
never_hosted: false
snapshot_draining: false
snapshot_chain_id: 7783364
vdisk_creation_time_usecs: 1723639463571411
oplog_type: kVDiskOplog
vdisk_snapshot_time_usecs: 1723639536653959
last_updated_pithos_version: kChainIdKey
always_write_emap_extents: true
vdisk_uuid: "358783f7-b35d-42ae-82cc-24a8dc0bf084"
chain_id: "0805ab92-e816-4fe6-8851-7ffa9af75188"
last_modification_time_usecs: 1723639548309692

This command displays all the vdisks in the container that are not yet marked for removal. In my case, these were:

  • Old snapshots whose virtual machines no longer exist or are no longer linked
  • Nutanix Guest Tools installation ISOs
  • Old Prism Central update files
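
To keep track of your progress, you can count how many blocking vdisks remain; a minimal sketch using grep against the output format shown above:

nutanix@CVM:~$ vdisk_config_printer -container_id=7 -skip_to_remove_vdisks | grep -c '^vdisk_id:'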

Now that I know where the problem is coming from, it’s time to fix it.

Deleting Files

To mark these vdisks for removal, edit their configuration one by one with the following command:

edit_vdisk_config --vdisk_id=vdisk_id --editor=vim

The “vdisk_id” placeholder in the command must be replaced with the vdisk_id of each vdisk returned by the “vdisk_config_printer -container_id=7 -skip_to_remove_vdisks” command. For example, this entry:

vdisk_id: 1247912
vdisk_name: "$NTNX$-$CURATOR$-$CHAINVDISK$-$0630848d-af0a-440b-9bf2-14ed432105c2$7$"
vdisk_size: 0
container_id: 7
creation_time_usecs: 1724796346611475
mutability_state: kImmutable
snapshot_chain_id: 8053639
vdisk_creation_time_usecs: 1724796346611475
last_updated_pithos_version: kChainIdKey
vdisk_uuid: "faa173c5-be0a-4502-9977-02d9c125b02e"
chain_id: "0630848d-af0a-440b-9bf2-14ed432105c2"
last_modification_time_usecs: 1724800080940533

gives the following command:

edit_vdisk_config --vdisk_id=1247912 --editor=vim

This opens the vdisk’s configuration in vim:

vdisk_id: 1247912
vdisk_name: "$NTNX$-$CURATOR$-$CHAINVDISK$-$0630848d-af0a-440b-9bf2-14ed432105c2$7$"
vdisk_size: 0
container_id: 7
creation_time_usecs: 1724796346611475
mutability_state: kImmutable
snapshot_chain_id: 8053639
vdisk_creation_time_usecs: 1724796346611475
last_updated_pithos_version: kChainIdKey
vdisk_uuid: "faa173c5-be0a-4502-9977-02d9c125b02e"
chain_id: "0630848d-af0a-440b-9bf2-14ed432105c2"
last_modification_time_usecs: 1724800080940533

You then need to add “to_remove: true” at the very end of the file and save it:

vdisk_id: 1247912
vdisk_name: "$NTNX$-$CURATOR$-$CHAINVDISK$-$0630848d-af0a-440b-9bf2-14ed432105c2$7$"
vdisk_size: 0
container_id: 7
creation_time_usecs: 1724796346611475
mutability_state: kImmutable
snapshot_chain_id: 8053639
vdisk_creation_time_usecs: 1724796346611475
last_updated_pithos_version: kChainIdKey
vdisk_uuid: "faa173c5-be0a-4502-9977-02d9c125b02e"
chain_id: "0630848d-af0a-440b-9bf2-14ed432105c2"
last_modification_time_usecs: 1724800080940533
to_remove: true

Repeat the operation for every vdisk until the “vdisk_config_printer -container_id=7 -skip_to_remove_vdisks” command no longer returns anything.
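
If many vdisks remain, the editing can be scripted instead of done by hand; a minimal sketch, assuming you have already verified that every listed vdisk really should be marked for removal, which extracts each vdisk_id and opens it in vim one by one:

for id in $(vdisk_config_printer -container_id=7 -skip_to_remove_vdisks | awk '/^vdisk_id:/ {print $2}'); do
    edit_vdisk_config --vdisk_id=$id --editor=vim
done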

Then run the container removal command again:

nutanix@CVM:~$ ncli ctr rm name=default-container-15796345896744
Error: Storage container default-container-15796345896744 contains small NFS files

Another error! And yes, there are still small NFS files lying around, and their deletion needs to be forced:

nutanix@CVM:~$ ncli ctr rm id=7 ignore-small-files=true
Storage container deleted successfully

This operation is long and tedious to apply, but it will let you delete a stubborn storage container. If the container still cannot be deleted after these manipulations, it is because some files have not yet been moved or migrated.
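
As a final check, you can filter the container list for the old name; no output means the container is really gone:

nutanix@CVM:~$ ncli container list | grep default-container-15796345896744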

Renaming a Storage Container

When deploying a new cluster, the default storage container name is automatically generated and is not particularly aesthetically pleasing.

To rename it, there is only one solution: go through the Command Line Interface.

To carry out this operation, connect to a CVM and list all the existing containers on the cluster:

nutanix@CVM:~$ ncli container list

All the containers and their associated details will then be displayed. Find the container you want to rename in the list and type the following command:

nutanix@CVM:~$ ncli container edit name=CURRENT_NAME new-name=NEW_NAME

Replace “CURRENT_NAME” with the name automatically generated by the system when the container was created, and “NEW_NAME” with the name you wish to assign to it, using no spaces and no special characters other than - and _.
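
For example, using the auto-generated name from this article and a hypothetical new name of CTR-PROD-01:

nutanix@CVM:~$ ncli container edit name=default-container-15796345896744 new-name=CTR-PROD-01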

Then check that your container has been correctly renamed with the command:

nutanix@CVM:~$ ncli container list

In Prism Element, you will also see the new name you assigned to your storage container.
