Understanding VM-Storage-Policies

I highly encourage everyone to go through the blog article “understanding-vsan-objects-and-component”, which explains how components are created based on the policy you define. Understanding the fundamentals of objects and components will make it much easier to follow this article on understanding VM Storage Policies.

 

Let's pick a single virtual machine, apply different policies to it, and examine the object placement view: the number of components created, their placement, the object reservation, and the number of child components. We will look at this from the Web Client, from RVC, and from the ESXi perspective, and interpret the outputs. Knowing the RVC and ESXCLI methods matters because during an outage we might have lost vCenter, and the esxcli commands will still let us check the status of the objects.

 

View Component Placement Using the vSphere Web Client

The vSphere Web Client is the most user-friendly way to view object placement for all VMs and the health status of each object. Navigate to vSAN-cluster ⇒ Monitor ⇒ vSAN ⇒ Virtual Objects, or look at an individual VM's object placement via vSAN-cluster ⇒ VM ⇒ Monitor ⇒ Policies.

  

View Component Placement using RVC

Component placement for virtual machines can be easily retrieved with the RVC command “vsan.vm_object_info”. First log in to an RVC session on the vCenter Server (vCSA or Windows vCenter); see: http://virtuallysensei.com/how-to-login-into-rvc/

Once you are in RVC, change directory to the VMs directory from the root location with cd “/localhost/Datacenter-NAME/vms”, then run ls to list the VMs. Every VM is listed with a number prefixed to it, so it is easy to run the command against that number. To view the object placement information for a VM, simply run vsan.vm_object_info “prefix-number” — see the screenshots below.

 

View Component Placement Using ESXCLI (Version 6.5 and above)

Component placement for all objects on vSAN can be exported to a file and viewed offline later, using a command on any host that is part of the vSAN cluster in question:

esxcli vsan debug object list > /tmp/objects.txt — this exports the output of “esxcli vsan debug object list” to a file called “objects.txt” under the “/tmp/” directory. The file can later be copied to a local system with WinSCP, or viewed on the ESXi host with the cat or less command. Below is a sample output.

esxcli vsan debug object list | less

Object UUID: f205b559-3185-a000-db32-ecf4bbec65d8
 Version: 5
 Health: healthy
 Owner: is-tse-d155.isl.vmware.com
 Policy:
 cacheReservation: 0
 stripeWidth: 1
 spbmProfileGenerationNumber: 2
 forceProvisioning: 0
 spbmProfileName: vSAN Default Storage Policy
 hostFailuresToTolerate: 1
 proportionalCapacity: [0, 100]
 spbmProfileId: aa6d5a82-1c88-45da-85d3-3d74b91a5bad
 CSN: 99
 SCSN: 97

Configuration:

RAID_1
 Component: f205b559-3e35-1901-59e7-ecf4bbec65d8
 Component State: ACTIVE, Address Space(B): 273804165120 (255.00GB), Disk UUID: 529e9a5d-a5d8-6a18-3933-ed69eca58c36, Disk Name: naa.5002538c4044d6ae:2
 Votes: 1, Capacity Used(B): 452984832 (0.42GB), Physical Capacity Used(B): 444596224 (0.41GB), Host Name: is-tse-d157.isl.vmware.com
 Component: f205b559-1c8f-1a01-1fb3-ecf4bbec65d8
 Component State: ACTIVE, Address Space(B): 273804165120 (255.00GB), Disk UUID: 5270b13f-a6a0-50cf-0cb3-88d86b7d323e, Disk Name: naa.5002538c4044d6ab:2
 Votes: 1, Capacity Used(B): 448790528 (0.42GB), Physical Capacity Used(B): 440401920 (0.41GB), Host Name: is-tse-d155.isl.vmware.com
 Witness: f205b559-f362-1b01-d0e5-ecf4bbec65d8
 Component State: ACTIVE, Address Space(B): 0 (0.00GB), Disk UUID: 525b1f66-2d22-7897-d936-6eefd92c7019, Disk Name: naa.5002538c4044d6a3:2
 Votes: 1, Capacity Used(B): 12582912 (0.01GB), Physical Capacity Used(B): 8388608 (0.01GB), Host Name: is-tse-d156.isl.vmware.com

Type: vmnamespace
 Path: /vmfs/volumes/vsan:523d5e5605a4d751-0c3304ae7a42599b/
 Group UUID: 00000000-0000-0000-0000-000000000000
 Directory Name: ESXi-66-Stretch-7
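Once exported, the file can also be post-processed offline. Here is a minimal sketch (the helper and the second UUID below are my own, assuming the field layout shown in the sample output above) that counts objects per health state:

```python
from collections import Counter

def summarize_health(text: str) -> Counter:
    """Count objects per 'Health:' value in a debug-object-list dump."""
    counts = Counter()
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Health:"):
            counts[line.split(":", 1)[1].strip()] += 1
    return counts

# Sample text following the format above; the second object is hypothetical.
sample = """\
Object UUID: f205b559-3185-a000-db32-ecf4bbec65d8
 Version: 5
 Health: healthy
Object UUID: 00000000-0000-0000-0000-000000000001
 Version: 5
 Health: reduced-availability-with-no-rebuild
"""
print(summarize_health(sample)["healthy"])  # 1
```

The same approach works directly on /tmp/objects.txt by reading the file contents instead of the inline sample.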


View Objects Through ESXCLI (Version 6.0 and Below)

The “esxcli vsan debug object list” command works only on hosts running ESXi 6.5 and above; it is unavailable on the 6.0/5.5 releases. However, object placement can still be pulled with another command, run as a Python script: “python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc > /tmp/Object-health.txt”. We can then use the less command to look at individual objects by searching for either the vDisk UUID or the VM namespace UUID.

EX:
python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc > /tmp/health.txt
DOM Object reference:
=====================
Object 5f8fb459-7807-2d00-67d3-0cc47ac2b550 (v4, owner: esxi03, policy: {'spbmProfileGenerationNumber': 4, 'forceProvisioning': 0, 'cacheReservation': 0, 'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 'stripeWidth': 1, 'spbmProfileId': 'aa6d5a82-1c88-45da-85d3-3d74b91a5bad', 'proportionalCapacity': 0}):
 Configuration
 RAID_1
 Component: 5f8fb459-c99f-a500-afd9-0cc47ac2b550 (state = 5, addr space = 16106127360 (15.00GB), disk = 5217932d-cbac-3ba3-74ec-63f5b845b6f2,
 votes = 1, used = 29360128 (0.00GB), physUsed = 29360128 (0.00GB), hostname = esxi03)
 Component: 5f8fb459-1f9c-a700-a485-0cc47ac2b550 (state = 5, addr space = 16106127360 (15.00GB), disk = 522babe3-1d47-5801-06bb-e24e9d8ee174,
 votes = 1, used = 29360128 (0.00GB), physUsed = 29360128 (0.00GB), hostname = esxi01)
 Witness: 5f8fb459-2df8-a800-8209-0cc47ac2b550 (state = 5, addr space = 0 (0.00GB), disk = 52797061-4f37-8319-4713-63d073b5fbb4,
 votes = 1, used = 4194304 (0.00GB), physUsed = 4194304 (0.00GB), hostname = esxi02)
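Since this script prints each object's policy as a Python dict literal, the policy can also be parsed programmatically. A small sketch (the regex is my own; the sample line is abbreviated from the format shown above):

```python
import ast
import re

# A shortened object line in the vsan-health-status.pyc output format.
line = ("Object 5f8fb459-7807-2d00-67d3-0cc47ac2b550 (v4, owner: esxi03, "
        "policy: {'hostFailuresToTolerate': 1, 'stripeWidth': 1, "
        "'proportionalCapacity': 0}):")

# Extract the {...} policy literal and evaluate it safely.
m = re.search(r"policy: (\{.*\})\)", line)
policy = ast.literal_eval(m.group(1))
print(policy["hostFailuresToTolerate"], policy["stripeWidth"])  # 1 1
```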

 

Let's now look at the differences between the policies by applying them to the same VM and seeing how the components are created and placed, along with some key information in the output of the RVC and ESXCLI commands we run. We have chosen a VM called “Windows_10”; let's compare the different policies.

Virtual SAN Default Storage Policy

RVC:

In this example we see:

  • forceProvisioning = 0: any VM created with this policy must strictly adhere to the policy rules; if there are not enough hosts/disk groups/disks to create and place the object, it will not be forcefully created, and object creation is expected to fail.
  • cacheReservation = 0: VMs created with this policy get no read-cache reservation on the cache-tier drive of the disk group.
  • checksumDisabled = 0: all VMs created with this policy use vSAN's built-in checksum, so every read against these objects is validated against the checksum.
  • hostFailuresToTolerate = 1: the number of host/disk/disk-group failures the object can tolerate before it becomes inaccessible.
  • stripeWidth = 1: the number of disks the data is striped across within a disk group.
  • spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad: the unique UUID assigned to this policy within vSAN.
  • proportionalCapacity = 0 (see the vDisk): the object space reservation for any VM created with this policy is 0, so the objects are fully thin and grow as data is written to them.

Note**: On vSAN, provisioning virtual disks as thin/thick is not the recommended way to reserve space; reservation should be defined through the Object Space Reservation policy rule.
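With hostFailuresToTolerate = 1 and stripeWidth = 1, the outputs in this post show two mirrored data components plus a witness. As a rough rule of thumb, this can be sketched as follows (the helper is my own; the witness count is decided by vSAN's vote balancing, so only the data components are computed here):

```python
def data_components(ftt: int, stripe_width: int) -> int:
    """Data components for a RAID-1 object: one stripe set per mirror."""
    return (ftt + 1) * stripe_width

# Default policy in this post: FTT=1, SW=1 -> 2 mirrors, plus 1 witness = 3.
print(data_components(1, 1))  # 2
# Raising stripeWidth to 4 with FTT=1 yields 8 data components.
print(data_components(1, 4))  # 8
```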

Some important observations from the example below:

There is a slight difference in how the object placement is presented in the RVC output versus the ESXCLI output. RVC shows the physical location of each component as the “naa.xxxxxxxxx” LUN ID of the physical disk, while ESXCLI shows the disk UUID; RVC also shows the SSD (cache tier) backing the magnetic disk's disk group.

When we compare the vDisk output from RVC with the ESXCLI output, we see an additional section in the ESXCLI output, e.g. “used = 47571795968 (44.00GB), physUsed = 47571795968 (44.00GB)”. This tells us that the reserved and physical disk utilization of this component are the same. You will see a difference once object space reservation is applied: with 100% object space reservation, the used space equals the full VMDK size allocated at the time of vDisk creation.
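How to read the used/physUsed pair can be sketched with the byte values from the outputs in this post (the helper name is my own):

```python
def reservation_bytes(used: int, phys_used: int) -> int:
    """Bytes reserved for the component but not yet physically written."""
    return used - phys_used

# OSR 0% (thin): used == physUsed, nothing reserved beyond written data.
print(reservation_bytes(47571795968, 47571795968))   # 0
# OSR 100%: used covers the full 200GB address space, only 44GB written.
print(reservation_bytes(214752559104, 47571795968))  # 167180763136
```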

VM Windows_10:

 Namespace directory :
 
 DOM Object: 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452 (v3, owner: 10.109.9.102, policy: spbmProfileGenerationNumber = 4, forceProvisioning = 0,
cacheReservation = 0, checksumDisabled = 0, hostFailuresToTolerate = 1, stripeWidth = 1, 
spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad, proportionalCapacity = [0, 100])
 RAID_1
 Component: b398a759-303c-a6e3-c09d-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.103, md: naa.55cd2e404c211107, ssd: naa.55cd2e404c20d56d,
 votes: 1, usage: 0.4 GB)
 Component: 609eb859-c975-1b18-6be1-0cc47ac2b546 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c212a4c, ssd: naa.55cd2e404c20d56f,
 votes: 1, usage: 0.4 GB)
 Witness: 629eb859-8d15-b76e-c30b-0cc47ac2b546 (state: ACTIVE (5), host: 10.109.9.102, md: naa.55cd2e404c210ffd, ssd: naa.55cd2e404c20ce55,
 votes: 1, usage: 0.0 GB)

vDISK :
 
 Disk backing: [vsanDatastore] 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452/Windows_10.vmdk
 DOM Object: 1daa5659-ac55-70f6-bc12-0cc47ac2b452 (v3, owner: 10.109.9.102, policy: spbmProfileGenerationNumber = 4, forceProvisioning = 0, 
cacheReservation = 0, checksumDisabled = 0, hostFailuresToTolerate = 1, stripeWidth = 1, 
spbmProfileId = aa6d5a82-1c88-45da-85d3-3d74b91a5bad, proportionalCapacity = 0)
 RAID_1
 Component: b898a759-f8d6-0092-b302-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.103, md: naa.55cd2e404c212d55, ssd: naa.55cd2e404c20d56d,
 votes: 1, usage: 44.3 GB)
 Component: 7d00ae59-ffba-5b7d-c6cd-0cc47ac2b550 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c211ff5, ssd: naa.55cd2e404c20d56f,
 votes: 1, usage: 44.3 GB)
 Witness: e89eb859-8dac-b7c1-4081-0cc47ac2b550 (state: ACTIVE (5), host: 10.109.9.102, md: naa.55cd2e404c210ffd, ssd: naa.55cd2e404c20ce55,
 votes: 1, usage: 0.0 GB)

ESXi output from “python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc”

Namespace directory :
 
Object 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452 (v4, owner: esxi02, policy: {'spbmProfileGenerationNumber': 4, 'forceProvisioning': 0, 
'cacheReservation': 0, 'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 'stripeWidth': 1, 
'spbmProfileId': 'aa6d5a82-1c88-45da-85d3-3d74b91a5bad', 'proportionalCapacity': [0, 100]}):
 Configuration
 RAID_1
 Component: b398a759-303c-a6e3-c09d-0cc47ac2b452 (state = 5, addr space = 273804165120 (255.00GB), disk = 52ca462c-7d81-3e39-7bcc-858fe854fb81,
 votes = 1, used = 461373440 (0.00GB), physUsed = 461373440 (0.00GB), hostname = esxi03)
 Component: 609eb859-c975-1b18-6be1-0cc47ac2b546 (state = 5, addr space = 273804165120 (255.00GB), disk = 52f67161-bba6-9e7c-2a98-79bd160d14e6,
 votes = 1, used = 452984832 (0.00GB), physUsed = 452984832 (0.00GB), hostname = esxi01)
 Witness: 629eb859-8d15-b76e-c30b-0cc47ac2b546 (state = 5, addr space = 0 (0.00GB), disk = 52b7c9b9-01c1-1770-22af-7f9b78ee3d2f,
 votes = 1, used = 4194304 (0.00GB), physUsed = 4194304 (0.00GB), hostname = esxi02)

vDISK : 
 
Object 1daa5659-ac55-70f6-bc12-0cc47ac2b452 (v4, owner: esxi02, policy: {'spbmProfileGenerationNumber': 4, 'forceProvisioning': 0, 
'cacheReservation': 0, 'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 'stripeWidth': 1,
 'spbmProfileId': 'aa6d5a82-1c88-45da-85d3-3d74b91a5bad', 'proportionalCapacity': 0}):
 Configuration
 RAID_1
 Component: b898a759-f8d6-0092-b302-0cc47ac2b452 (state = 5, addr space = 214748364800 (200.00GB), disk = 5280ff9a-9880-8549-ad8a-5ee212ee21ee,
 votes = 1, used = 47571795968 (44.00GB), physUsed = 47571795968 (44.00GB), hostname = esxi03)
 Component: 7d00ae59-ffba-5b7d-c6cd-0cc47ac2b550 (state = 5, addr space = 214748364800 (200.00GB), disk = 52006064-ff9f-b525-e976-0142b74b4392,
 votes = 1, used = 47571795968 (44.00GB), physUsed = 47571795968 (44.00GB), hostname = esxi01)
 Witness: e89eb859-8dac-b7c1-4081-0cc47ac2b550 (state = 5, addr space = 0 (0.00GB), disk = 52b7c9b9-01c1-1770-22af-7f9b78ee3d2f,
 votes = 1, used = 4194304 (0.00GB), physUsed = 4194304 (0.00GB), hostname = esxi02)

Virtual SAN Default Storage Policy with 100% Object reservation

RVC

Here we see that all parameter values remain the same as in the default policy explained above. The only change is “proportionalCapacity = 100”, which tells us the object space reservation is 100% (similar to a thick vDisk on a VMFS volume).
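The reservation can be sanity-checked with a little arithmetic (a sketch; the helper name is mine): the bytes reserved up front are the object's address space multiplied by the object space reservation percentage.

```python
GIB = 1024 ** 3

def reserved_bytes(addr_space_bytes: int, proportional_capacity: int) -> int:
    """Up-front reservation = address space * OSR percentage."""
    return addr_space_bytes * proportional_capacity // 100

addr = 200 * GIB                  # the 200GB vDisk address space in this post
print(reserved_bytes(addr, 100))  # 214748364800
print(reserved_bytes(addr, 0))    # 0
```

The ESXCLI output below shows used = 214752559104, about 4MB above this raw figure; the small extra appears to be per-component overhead.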

/localhost/vSAN6U3-AF/vms> vsan.vm_object_info 21
VM Windows_10:
 Namespace directory
 DOM Object: 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452 (v3, owner: 10.109.9.102, policy: spbmProfileGenerationNumber = 0, forceProvisioning = 0, cacheReservation = 0, checksumDisabled = 0, hostFailuresT
oTolerate = 1, stripeWidth = 1, spbmProfileId = d4898518-158c-452f-907e-8a3febb0f1a1, proportionalCapacity = [0, 100])
 RAID_1
 Component: b398a759-303c-a6e3-c09d-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.103, md: naa.55cd2e404c211107, ssd: naa.55cd2e404c20d56d,
 votes: 1, usage: 0.4 GB)
 Component: 609eb859-c975-1b18-6be1-0cc47ac2b546 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c212a4c, ssd: naa.55cd2e404c20d56f,
 votes: 1, usage: 0.4 GB)
 Witness: 629eb859-8d15-b76e-c30b-0cc47ac2b546 (state: ACTIVE (5), host: 10.109.9.102, md: naa.55cd2e404c210ffd, ssd: naa.55cd2e404c20ce55,
 votes: 1, usage: 0.0 GB)
 
vDISK : 
 Disk backing: [vsanDatastore] 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452/Windows_10.vmdk
 DOM Object: 1daa5659-ac55-70f6-bc12-0cc47ac2b452 (v3, owner: 10.109.9.102, policy: spbmProfileGenerationNumber = 0, forceProvisioning = 0, cacheReservation = 0, checksumDisabled = 0, hostFailuresT
oTolerate = 1, stripeWidth = 1, spbmProfileId = d4898518-158c-452f-907e-8a3febb0f1a1, proportionalCapacity = 100)
 RAID_1
 Component: b898a759-f8d6-0092-b302-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.103, md: naa.55cd2e404c212d55, ssd: naa.55cd2e404c20d56d,
 votes: 1, usage: 200.0 GB)
 Component: 7d00ae59-ffba-5b7d-c6cd-0cc47ac2b550 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c211ff5, ssd: naa.55cd2e404c20d56f,
 votes: 1, usage: 200.0 GB)
 Witness: e5553f5a-c283-5d3d-e202-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.104, md: naa.55cd2e404c212d05, ssd: naa.55cd2e404c20d047,
 votes: 1, usage: 0.0 GB)

ESXi Output “python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc”

Namespace directory : 
Object 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452 (v4, owner: esxi02, policy: {'spbmProfileGenerationNumber': 0, 'forceProvisioning': 0, 'cacheReservation': 0, 'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 'stripe
Width': 1, 'spbmProfileId': 'd4898518-158c-452f-907e-8a3febb0f1a1', 'proportionalCapacity': [0, 100]}):
 Configuration
 RAID_1
 Component: b398a759-303c-a6e3-c09d-0cc47ac2b452 (state = 5, addr space = 273804165120 (255.00GB), disk = 52ca462c-7d81-3e39-7bcc-858fe854fb81,
 votes = 1, used = 461373440 (0.00GB), physUsed = 461373440 (0.00GB), hostname = esxi03)
 Component: 609eb859-c975-1b18-6be1-0cc47ac2b546 (state = 5, addr space = 273804165120 (255.00GB), disk = 52f67161-bba6-9e7c-2a98-79bd160d14e6,
 votes = 1, used = 452984832 (0.00GB), physUsed = 452984832 (0.00GB), hostname = esxi01)
 Witness: 629eb859-8d15-b76e-c30b-0cc47ac2b546 (state = 5, addr space = 0 (0.00GB), disk = 52b7c9b9-01c1-1770-22af-7f9b78ee3d2f,
 votes = 1, used = 4194304 (0.00GB), physUsed = 4194304 (0.00GB), hostname = esxi02)
 
vDISK :

Object 1daa5659-ac55-70f6-bc12-0cc47ac2b452 (v4, owner: esxi02, policy: {'spbmProfileGenerationNumber': 0, 'forceProvisioning': 0, 
'cacheReservation': 0, 'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 'stripeWidth': 1, 
'spbmProfileId': 'd4898518-158c-452f-907e-8a3febb0f1a1', 'proportionalCapacity': 100}):
 Configuration
 RAID_1
 Component: b898a759-f8d6-0092-b302-0cc47ac2b452 (state = 5, addr space = 214748364800 (200.00GB), disk = 5280ff9a-9880-8549-ad8a-5ee212ee21ee,
 votes = 1, used = 214752559104 (200.00GB), physUsed = 47571795968 (44.00GB), hostname = esxi03)
 Component: 7d00ae59-ffba-5b7d-c6cd-0cc47ac2b550 (state = 5, addr space = 214748364800 (200.00GB), disk = 52006064-ff9f-b525-e976-0142b74b4392,
 votes = 1, used = 214752559104 (200.00GB), physUsed = 47571795968 (44.00GB), hostname = esxi01)
 Witness: e5553f5a-c283-5d3d-e202-0cc47ac2b452 (state = 5, addr space = 0 (0.00GB), disk = 52eb0e0e-bab1-d12e-4631-df25d1bad218,
 votes = 1, used = 4194304 (0.00GB), physUsed = 4194304 (0.00GB), hostname = esxi04)

vSAN Default Policy with higher stripe width

RVC Output

Namespace directory

 DOM Object: 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452 (v3, owner: 10.109.9.102, proxy owner: None, policy: spbmProfileGenerationNumber = 1, 
forceProvisioning = 0, cacheReservation = 0, checksumDisabled = 0, hostFailuresToTolerate = 1, 
stripeWidth = 1, spbmProfileId = d4898518-158c-452f-907e-8a3febb0f1a1, proportionalCapacity = [0, 100])
 RAID_1
 Component: b398a759-303c-a6e3-c09d-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.103, md: naa.55cd2e404c211107, ssd: naa.55cd2e404c20d56d,
 votes: 1, usage: 0.4 GB, proxy component: false)
 Component: 609eb859-c975-1b18-6be1-0cc47ac2b546 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c212a4c, ssd: naa.55cd2e404c20d56f,
 votes: 1, usage: 0.4 GB, proxy component: false)
 Witness: 629eb859-8d15-b76e-c30b-0cc47ac2b546 (state: ACTIVE (5), host: 10.109.9.102, md: naa.55cd2e404c210ffd, ssd: naa.55cd2e404c20ce55,
 votes: 1, usage: 0.0 GB, proxy component: false)
 
vDISK :
Disk backing: [vsanDatastore] 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452/Windows_10.vmdk
DOM Object: 1daa5659-ac55-70f6-bc12-0cc47ac2b452 (v3, owner: 10.109.9.102, proxy owner: None, policy: spbmProfileGenerationNumber = 1,forceProvisioning = 0,cacheReservation = 0, 
checksumDisabled = 0, hostFailuresToTolerate = 1, stripeWidth = 4, spbmProfileId = d4898518-158c-452f-907e-8a3febb0f1a1, proportionalCapacity = 0)

 RAID_1
 RAID_0
 Component: b550405a-258f-8b6e-eda1-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c212bd1, ssd: naa.55cd2e404c20d56f,
 votes: 3, usage: 11.1 GB, proxy component: false)
 Component: b550405a-81c0-8e6e-ae04-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.103, md: naa.55cd2e404c211107, ssd: naa.55cd2e404c20d56d,
 votes: 4, usage: 11.1 GB, proxy component: false)
 Component: b550405a-0ec8-906e-314a-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.101, md: naa.55cd2e404c2110e4, ssd: naa.55cd2e404c20d56f,
 votes: 2, usage: 11.1 GB, proxy component: false)
 Component: b550405a-6add-926e-7200-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.102, md: naa.55cd2e404c2129ec, ssd: naa.55cd2e404c20ce55,
 votes: 4, usage: 11.1 GB, proxy component: false)
 RAID_0
 Component: b550405a-15fe-946e-7ed6-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.104, md: naa.55cd2e404c212d08, ssd: naa.55cd2e404c20d047,
 votes: 1, usage: 11.1 GB, proxy component: false)
 Component: b550405a-dbe6-966e-ebf3-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.104, md: naa.55cd2e404c2110eb, ssd: naa.55cd2e404c20d047,
 votes: 1, usage: 11.1 GB, proxy component: false)
 Component: b550405a-9575-986e-7fcd-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.104, md: naa.55cd2e404c212d05, ssd: naa.55cd2e404c20d047,
 votes: 1, usage: 11.1 GB, proxy component: false)
 Component: b550405a-922a-9a6e-504d-0cc47ac2b452 (state: ACTIVE (5), host: 10.109.9.104, md: naa.55cd2e404c212d4e, ssd: naa.55cd2e404c20d047,
 votes: 1, usage: 11.1 GB, proxy component: false)

 

ESXCLI Output

 

Namespace directory
 
Object 1aaa5659-6b1a-a7ba-05dd-0cc47ac2b452 (v4, owner: esxi02, policy: {'spbmProfileGenerationNumber': 1, 'forceProvisioning': 0, 'cacheReservation': 0, 'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 's
tripeWidth': 1, 'spbmProfileId': 'd4898518-158c-452f-907e-8a3febb0f1a1', 'proportionalCapacity': [0, 100]}):
 Configuration
 RAID_1
 Component: b398a759-303c-a6e3-c09d-0cc47ac2b452 (state = 5, addr space = 273804165120 (255.00GB), disk = 52ca462c-7d81-3e39-7bcc-858fe854fb81,
 votes = 1, used = 461373440 (0.00GB), physUsed = 461373440 (0.00GB), hostname = esxi03)
 Component: 609eb859-c975-1b18-6be1-0cc47ac2b546 (state = 5, addr space = 273804165120 (255.00GB), disk = 52f67161-bba6-9e7c-2a98-79bd160d14e6,
 votes = 1, used = 452984832 (0.00GB), physUsed = 452984832 (0.00GB), hostname = esxi01)
 Witness: 629eb859-8d15-b76e-c30b-0cc47ac2b546 (state = 5, addr space = 0 (0.00GB), disk = 52b7c9b9-01c1-1770-22af-7f9b78ee3d2f,
 votes = 1, used = 4194304 (0.00GB), physUsed = 4194304 (0.00GB), hostname = esxi02)

vDISK :

Object 1daa5659-ac55-70f6-bc12-0cc47ac2b452 (v4, owner: esxi02, policy: {'spbmProfileGenerationNumber': 1, 'forceProvisioning': 0, 'cacheReservation': 0, 
'checksumDisabled': 0, 'hostFailuresToTolerate': 1, 'stripeWidth': 4, 'spbmProfileId': 'd4898518-158c-452f-907e-8a3febb0f1a1', 'proportionalCapacity': 0}):
 Configuration
 RAID_1
 RAID_0
 Component: b550405a-258f-8b6e-eda1-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 52ad1443-71bd-84fc-20c4-024b8e260ee4,
 votes = 3, used = 11941183488 (11.00GB), physUsed = 11941183488 (11.00GB), hostname = esxi01)
 Component: b550405a-81c0-8e6e-ae04-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 52ca462c-7d81-3e39-7bcc-858fe854fb81,
 votes = 4, used = 11949572096 (11.00GB), physUsed = 11949572096 (11.00GB), hostname = esxi03)
 Component: b550405a-0ec8-906e-314a-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 5218da7e-cf6c-35d4-2dcf-215b93249581,
 votes = 2, used = 11936989184 (11.00GB), physUsed = 11936989184 (11.00GB), hostname = esxi01)
 Component: b550405a-6add-926e-7200-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 52aeebab-02c1-4425-ac85-399569d4fec9,
 votes = 4, used = 11941183488 (11.00GB), physUsed = 11941183488 (11.00GB), hostname = esxi02)
 RAID_0
 Component: b550405a-15fe-946e-7ed6-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 525ad3e9-40dc-aa54-9cc9-4bc6fa70aa15,
 votes = 1, used = 11941183488 (11.00GB), physUsed = 11941183488 (11.00GB), hostname = esxi04)
 Component: b550405a-dbe6-966e-ebf3-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 524392d2-4987-5264-fbea-358cc6013aef,
 votes = 1, used = 11949572096 (11.00GB), physUsed = 11949572096 (11.00GB), hostname = esxi04)
 Component: b550405a-9575-986e-7fcd-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 52eb0e0e-bab1-d12e-4631-df25d1bad218,
 votes = 1, used = 11936989184 (11.00GB), physUsed = 11936989184 (11.00GB), hostname = esxi04)
 Component: b550405a-922a-9a6e-504d-0cc47ac2b452 (state = 5, addr space = 53687091200 (50.00GB), disk = 5278dc1c-85d6-49ae-0779-9a67e20c3109,
 votes = 1, used = 11941183488 (11.00GB), physUsed = 11941183488 (11.00GB), hostname = esxi04)
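As a quick sanity check on the output above (a sketch using the byte values the command printed), each RAID_0 leg of the mirror splits the 200GB vDisk address space into stripeWidth stripes, and the total data-component count is mirrors times stripes:

```python
GIB = 1024 ** 3

# Each component above shows addr space = 53687091200 (50.00GB).
stripe_addr_space = 53687091200
stripes_per_mirror = 4  # stripeWidth = 4

# Four 50GB stripes reassemble the full 200GB vDisk address space.
print(stripes_per_mirror * stripe_addr_space == 200 * GIB)  # True

# (FTT+1) mirrors * stripeWidth stripes = 8 data components, as listed.
print((1 + 1) * stripes_per_mirror)  # 8
```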

admin

I have been a Technical Support Engineer with VMware Global Support since 2015. My current focus is VMware vSAN® and VxRail™; my overall expertise is in the storage and availability business unit (VMware vSAN®, VMware Site Recovery Manager®, and vSphere Data Protection®). I started my career in 2012 with EMC support for CLARiiON and VNX block storage.
