VVols can be really useful for block-based storage, but they can also be used with NFS storage to provide full per-VM granularity, policy-based management, and integration with specific storage services.
Note, however, that storage vendors can implement features in different ways, so don't just glance at the feature checklists: learn more about each solution.
The same applies to VVols: on some arrays, VVol objects are treated differently from the “native” storage objects (LUNs or files), and some array-level features may not be available on VVol objects (for example, storage replication).
Each storage vendor provides more information on its own VVols implementation and why or how it differs from the others, and there are also some good vendor-agnostic resources.
The biggest, and most obvious, is that VVols DO NOT make all storage equal, precisely because each storage vendor can provide different features, as explained before. You have to read carefully how your storage will and can support VVols, and what kind of benefits you can gain from them.
The VASA providers are not all the same either: in some storage arrays the provider runs inside the array itself, with reasonable availability. In others it is implemented as a virtual appliance, which you must also consider from an availability standpoint (is VMware HA enough? Would VMware FT be better? Does the storage vendor offer an HA configuration?) and from a dependency standpoint (host it on a management cluster, or at least on a datastore that is not “served” by that same VASA provider).
The maximum numbers can also be quite important: while it's true that you no longer have to worry about how many VMs you have in each datastore (a really important rule, especially for block-level storage), you may need to consider how many VVols your storage can handle (because you may have a lot of these objects!).
Those numbers are well explained in the “Comparing Virtual Volume (VVol) limits to VMFS/NFS limits” post, which also demonstrates how vendor-specific implementations can change those limits:
- While multiple VVol Storage Containers are supported, it’s up to each vendor to decide what they want to support. Today many vendors only support a single Storage Container which encompasses an entire storage array.
- While multiple VVol Protocol Endpoints are supported, it’s up to each vendor to decide what they want to support. Today most vendors only support a single Protocol Endpoint for the entire storage array.
- The maximum number of VVols supported by a storage array is up to each vendor to decide. The maximum number of VVols required by VMs in a cluster of ESXi hosts is the product of the maximum number of virtual disks per VM (60), the maximum number of snapshots per virtual disk (32), and the maximum number of VMs per vCenter cluster (10,000). This makes the theoretical maximum around 19 million total VVols.
- The minimum number of VVols a powered-on VM will have is 3 (config, swap, data); the swap VVol goes away when the VM is powered off. Each snapshot adds at least one additional VVol per virtual disk (plus one more if memory state is selected). The maximum number of VVols a powered-on VM could have is around 2,000: 1 config, 1 swap, 60 data, 1,920 snapshots (60×32), and 32 memory state.
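These limits are just products of the vSphere maximums quoted above, so a short Python sketch can make the arithmetic explicit (the constant names are mine, not official vSphere identifiers):

```python
# vSphere configuration maximums quoted in the list above
MAX_DISKS_PER_VM = 60         # virtual disks per VM
MAX_SNAPSHOTS_PER_DISK = 32   # snapshots per virtual disk
MAX_VMS_PER_CLUSTER = 10_000  # VMs per vCenter cluster

# Theoretical cluster-wide maximum: disks x snapshots x VMs
cluster_max_vvols = MAX_DISKS_PER_VM * MAX_SNAPSHOTS_PER_DISK * MAX_VMS_PER_CLUSTER
print(cluster_max_vvols)  # 19200000 -> around 19 million total VVols

# Per-VM maximum for a powered-on VM:
# 1 config + 1 swap + 60 data + 1,920 snapshot VVols (60x32) + 32 memory-state VVols
vm_max_vvols = (1 + 1 + MAX_DISKS_PER_VM
                + MAX_DISKS_PER_VM * MAX_SNAPSHOTS_PER_DISK
                + MAX_SNAPSHOTS_PER_DISK)
print(vm_max_vvols)  # 2014 -> around 2,000 VVols

# Per-VM minimum for a powered-on VM: config, swap, and one data VVol
vm_min_vvols = 3
```

This also shows why the per-array VVol limit matters far more than per-datastore VM counts did: even a modest number of VMs with many disks and snapshots multiplies into a large object count on the array.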
Virtual Volumes links:
- The 7 Big Myths About VVol
- Not all VVols are Created Equal: Hitachi Storage for VMware vSphere Virtual Volumes
- VMware Virtual Volumes – The Next Level of Storage Integration
- Why you don’t need Vvols in Coho Data