
One of the “issues” with VMDKs in thin format is that they start “small” and then grow as you add new data… but when you delete some data, the VMDK file size is not reduced.

To be honest, this issue is more related to guest file systems, which never actually delete the block data, only the metadata (or part of it). Of course at the guest OS level you will see the right disk usage, but this will probably not match the usage that you see at the VMware level (which will usually be bigger).

The documented solution to reclaim the free space also at the VMFS level is to fill the free blocks in the guest OS with zeroes (on Windows this can be done with sdelete, on Linux with dd) and then use a storage migration (cold, or hot with Storage vMotion) to move the VM files from one datastore to another.
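As a minimal sketch of the zeroing step (the drive letter and file path are just examples, not from the original post), on a Windows guest:

```
:: Zero the free space on C: with Sysinternals sdelete
:: (older sdelete releases used -c instead of -z for the zeroing behavior)
sdelete.exe -z C:
```

and on a Linux guest:

```
# Fill the free space with a zeroed file, then remove it
dd if=/dev/zero of=/zerofile bs=1M
rm -f /zerofile
```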

The problems with this technique are: you need at least two datastores (and enough free space to move the entire VM); you can do the operation live only with the Enterprise and Enterprise+ editions (which include Storage vMotion); and it requires time and resources (although VAAI can help to reduce some of the resource usage).

But there is one other big requirement, otherwise the reclaim will not work at all: in vSphere 4.x the two datastores must have different block sizes. The reason is well described by Duncan Epping in a dedicated post on his blog (Blocksize impact?).

And what about vSphere 5.0? I’ve made some tests and it seems that this requirement still exists (as confirmed by Duncan in the comments of that post), even with VMFS5 datastores!

And for new VMFS5 datastores this could be a big problem… because they use a unified block size of 1 MB. It doesn’t matter if you move between VMFS3 and VMFS5 datastores, if you try to force the thin conversion, or if you try to convert to thick and then back to thin… it seems that you really need different block sizes to reclaim the zeroed blocks of a thin vmdk.
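As a side note, you can check the block size of a datastore from the ESXi shell with vmkfstools (the datastore name below is just an example):

```
# Query the file system info, including the file block size
vmkfstools -Ph /vmfs/volumes/datastore1
```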

So which kind of solutions could exist? Actually, I think that the possible workarounds are:

  • Use William Lam’s “trick”: temporarily set the hidden advanced parameter /config/VMFS3/intOpts/EnableDataMovement to 0 (using SSH and the vsish command, as sketched after this list) in order to force the use of the old datamover.
  • Build at least one VMFS datastore with a different block size (you can simply create a VMFS3 datastore with a non-default block size and then upgrade it to VMFS5, since the upgrade preserves the original block size; see the second sketch after this list).
  • Don’t use thin disks, or simply don’t worry about the free space.
  • Use VMware Converter Standalone and make a V2V at the volume level (but this could mean downtime and other small issues, like MAC address changes).
  • Use NFS datastores instead of block-oriented datastores.
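A minimal sketch of the first workaround, run on the ESXi host via SSH (the migration step itself is only indicated as a comment):

```
# Disable the new datamover so the legacy one is used for the migration
vsish -e set /config/VMFS3/intOpts/EnableDataMovement 0

# ... perform the cold migration or Storage vMotion here ...

# Restore the default behavior afterwards
vsish -e set /config/VMFS3/intOpts/EnableDataMovement 1
```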
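For the second workaround, assuming a VMFS3 datastore named vmfs3-datastore (a hypothetical name), the in-place upgrade can also be done from the ESXi shell with vmkfstools:

```
# Upgrade a VMFS3 datastore to VMFS5 in place (the block size is preserved)
vmkfstools -T /vmfs/volumes/vmfs3-datastore
```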
