
Now that both Microsoft and VMware have officially announced the new releases of their virtualization products, it’s possible to make some kind of comparison between Hyper-V available on Windows Server 2019 and vSphere 6.7 (as I did some years ago with the Microsoft Hyper-V 2016 vs. VMware vSphere 6.5 article).

Comparing two different products is not easy, even if they are released close to each other (at least in the same year). You need to find some homogeneous aspects to compare, at least at the technical level (although, as written, this matters less now that the technology gap has really narrowed). Comparing raw numbers could be easy, but numbers are not enough: for example, memory management is still really different (VMware implements several techniques, Hyper-V only Dynamic Memory, and only for some guest OSes), so what you can do with the same amount of memory is not the same.

VMware vSphere 6.7 introduces several enhancements (especially on the security side) and slightly improves some scalability limits. Most of the real scalability improvements are in the management layer (the vCenter Server).

The latest version is VMware vSphere 6.7 Update 1, released on October 16th.

Microsoft Windows Server 2019 with Hyper-V is a new milestone, coming just two years after the Windows Server 2016 release!

But most of the feature gap was already filled by the 2016 version, and the new release does not change much in Hyper-V or in its scalability limits. On the security side, Shielded VM support is now extended to Linux VMs.

Hardware requirements are becoming very similar, considering that VMware also requires hardware-assisted virtualization in the processor (and Hyper-V now also makes hardware-assisted memory virtualization, i.e. SLAT, mandatory). Note that both drop compatibility with several older processors and hardware, so be sure to plan your upgrade or deployment carefully.

Hypervisor footprint requirements are completely different (ESXi can be installed on a 1 GB USB or SD card; Hyper-V still cannot be installed on an SD card, although a Nano Server-based installation finally requires less than 1 GB of disk space!) and minimum memory requirements also differ.

Scalability

As written, there isn’t much difference compared to the previous products’ scalability, and most of the maximum numbers remain the same.

| System Resource | Microsoft Hyper-V 2019 | vSphere 6.7 Free Hypervisor | vSphere 6.7 Essentials Plus | vSphere 6.7 Enterprise Plus |
| --- | --- | --- | --- | --- |
| Host Logical Processors | 512 | 768 | 768 | 768 |
| Physical Memory per Host | 24 TB | 4 TB? | 4 TB? | 16 TB |
| Virtual CPUs per Host | 2048 | 4096 | 4096 | 4096 |
| VMs per Host | 1024 | 1024 | 1024 | 1024 |
| Nested Hypervisor | Yes (only some OSes) | Yes | Yes | Yes |
| Virtual CPUs per VM | 240 (Generation 2) / 64 (Generation 1) | 8 | 128 | 128 |
| Memory per VM | 12 TB (Generation 2) / 1 TB (Generation 1) | 6128 GB | 6128 GB | 6128 GB |
| Maximum Virtual Disk | 64 TB (VHDX format) / 2040 GB (VHD format) | 62 TB | 62 TB | 62 TB |
| Number of Disks per VM | 256 (SCSI) | 256 (SCSI) | 256 (SCSI) | 256 (SCSI) |
| Cluster Maximum Nodes | 64 | N/A | 64 | 64 |
| Cluster Maximum VMs | 8000 | N/A | 8000 | 8000 |

As written, memory management is really different and is not easy to compare, because VMware ESXi has several optimization techniques.

But some features have disappeared or become less relevant. For example, the VMware Transparent Page Sharing feature has some limitations with newer guest OSes (and it works on page hashes rather than on a real page comparison), and starting with vSphere 6.0 inter-VM page sharing is disabled by default.
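As a minimal PowerCLI sketch (the vCenter and host names are hypothetical), you can check the salting setting that controls whether page sharing is restricted to pages within the same VM:

```powershell
# Connect to vCenter and read the TPS salting setting of one host.
# Mem.ShareForceSalting = 2 (the default) limits page sharing to intra-VM pages;
# a value of 0 restores the old inter-VM sharing behaviour.
Connect-VIServer -Server "vcenter.lab.local"   # hypothetical vCenter name
Get-AdvancedSetting -Entity (Get-VMHost -Name "esx01.lab.local") -Name "Mem.ShareForceSalting" |
    Select-Object Entity, Name, Value
```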

But both have some kind of dynamic memory management, and both also offer the possibility to hot-add memory to a running workload.

Is Microsoft Dynamic Memory better or worse compared to VMware memory management? For supported guest OSes, in my opinion, it’s an interesting approach (and VMware could implement something similar, considering that it already has the RAM hot-add feature), but of course having more physical memory is always the better option. And anyway, for business-critical applications the most common configuration is to pre-allocate the memory and, at most, use the hot-add feature.
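As a minimal sketch of the Hyper-V side (the VM name and the sizing values are hypothetical), enabling Dynamic Memory with the in-box PowerShell module looks like this:

```powershell
# Enable Dynamic Memory on a stopped VM: Hyper-V will then balloon the guest
# between the minimum and maximum values, starting from the startup amount.
Set-VMMemory -VMName "SQL01" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 4GB `
    -MinimumBytes 2GB `
    -MaximumBytes 16GB `
    -Buffer 20    # percentage of extra memory kept available as a reserve
```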


VM features

Other comparisons are now less interesting (because the features are quite similar, just with different names) or really complex (because the features are very different).

The main difference is that in Hyper-V all the features of the Standard edition are also available in the free edition (a big change from the previous version, where the free edition had the same features as the Datacenter edition), and there are some differences between the Standard and Datacenter editions (like Nano Server support, Storage Replica, and so on). In VMware each edition has a different feature set (see the editions comparison), and the free edition remains limited in its backup capabilities (no VADP support).

For example, Live Migration and Storage Live Migration are pretty much the same, with communication encryption (added in vSphere 6.5), multichannel support, and a dedicated network (for VM migration across hosts). Formally, Hyper-V does not have a geo-vMotion, only replication across clouds.

Also, Hyper-V has had some limitations in VM live migration across hosts with different versions, but starting with the 2016 version it is now possible (at least between 2012 R2 and 2016 hosts).
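As a minimal Hyper-V PowerShell sketch (the host and VM names are hypothetical), a shared-nothing live migration, which moves both the running state and the storage, looks like this:

```powershell
# One-time setup on both hosts: enable live migration traffic.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos -UseAnyNetworkForMigration $true

# Move the running VM, including its virtual disks, to another host.
Move-VM -Name "WEB01" -DestinationHost "HV02" `
    -IncludeStorage -DestinationStoragePath "D:\Hyper-V\WEB01"
```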

| Feature | Microsoft Hyper-V 2019 | vSphere 6.7 Free Hypervisor | vSphere 6.7 Essentials Plus | vSphere 6.7 Enterprise Plus |
| --- | --- | --- | --- | --- |
| VM host live migration | Yes | No | Yes | Yes |
| VM storage live migration | Yes | No | No | Yes |
| Storage/Network QoS | Yes | No (just disk shares) | No (just disk shares at host level) | Yes |
| Hardware passthrough | Discrete Device Assignment | PCI VMDirectPath, USB redirection | PCI VMDirectPath, USB redirection | PCI VMDirectPath, USB redirection |
| Hot-Add | Disks/vNIC/RAM | Disks/vNIC/USB | Disks/vNIC/USB | Disks/vNIC/USB/CPU/RAM |
| Hot-Remove | Disks/vNIC/RAM | Disks/vNIC/USB | Disks/vNIC/USB | Disks/vNIC/USB/CPU |
| Disk resize | Hot-grow and shrink | Hot-grow | Hot-grow | Hot-grow |
| VM encryption | Yes | No | No? | Yes |

VMware vSphere adds many more options to the VM configuration, like different types of controllers, including NVMe for disks and RDMA for networking (only for Linux VMs). And version 6.7 also adds support for the emerging Persistent Memory.
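For example, hot-adding a disk to a running VM (one of the rows in the table above) is a one-liner on both platforms; a minimal sketch, assuming hypothetical VM names, paths, and an already-connected PowerCLI session:

```powershell
# Hyper-V: create a new VHDX and attach it to the running VM's SCSI controller.
New-VHD -Path "D:\Hyper-V\WEB01\data01.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "WEB01" -ControllerType SCSI -Path "D:\Hyper-V\WEB01\data01.vhdx"

# vSphere (PowerCLI): add a new 100 GB virtual disk to a running VM.
New-HardDisk -VM (Get-VM -Name "web01") -CapacityGB 100
```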

Guest Clustering

A guest cluster is a cluster built between VMs, which usually means installing and configuring a Microsoft Failover Cluster across two or more VMs.

Both VMware vSphere and Microsoft Hyper-V support guest clustering, but with different configurations, requirements, and limitations.

Microsoft Hyper-V uses a specific type of virtual disk (a shared VHDX) to implement shared storage across the VM cluster nodes.

And starting with Windows Server 2016, it supports interesting features on shared disks (a minimal configuration sketch follows the list below):

  • Dynamic resize (resizing while VMs are running)
  • Host level backup
  • Hyper-V Replica support
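A minimal PowerShell sketch of the shared-disk part (the path and VM names are hypothetical; the shared disk has to sit on a Cluster Shared Volume or an SMB 3 share):

```powershell
# Create the virtual disk that will be shared by the guest cluster nodes.
New-VHD -Path "C:\ClusterStorage\Volume1\GuestCluster\data.vhdx" -SizeBytes 200GB -Fixed

# Attach it to both guest cluster nodes with persistent reservation support,
# which is what allows the in-guest Microsoft Failover Cluster to use it
# as shared storage.
Add-VMHardDiskDrive -VMName "NODE1" -Path "C:\ClusterStorage\Volume1\GuestCluster\data.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "NODE2" -Path "C:\ClusterStorage\Volume1\GuestCluster\data.vhdx" -SupportPersistentReservations
```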

In VMware vSphere, the required configuration varies depending on the guest cluster type (for example, whether all the cluster nodes run on the same host or across different hosts).

Management

Management capabilities are also difficult to compare, because Hyper-V does not require System Center VMM to implement most of the cluster features (VMM adds things like VM templates and better resource provisioning); on the VMware side, vCenter remains mandatory and is needed to implement several features, but the vCSA has finally been improved and has become the first-choice deployment option.

With Windows Server 2019 there is finally an updated Windows Admin Center (WAC), version 1809, formerly known as “Project Honolulu“, to provide web-oriented management, and both products now have a powerful HTML5 UI to manage them.

Both can be controlled from the command line (PowerShell is the first choice for Microsoft, and PowerCLI is gaining traction also on the VMware side).
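As a trivial sketch of the parallel (server names are hypothetical; the VMware side assumes the PowerCLI module is installed from the PowerShell Gallery):

```powershell
# Hyper-V: the cmdlets ship with the Hyper-V role / RSAT and talk to a host directly.
Get-VM -ComputerName "HV01" | Select-Object Name, State, CPUUsage, MemoryAssigned

# vSphere: PowerCLI connects to vCenter (or a single ESXi host) first.
Connect-VIServer -Server "vcenter.lab.local"
Get-VM | Select-Object Name, PowerState, NumCpu, MemoryGB
```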

Most of the changes are in the vSphere 6.7 vCSA, which delivers great performance improvements (all metrics compared at cluster scale limits, versus vSphere 6.5):

  • 2X faster performance in vCenter operations per second
  • 3X reduction in memory usage
  • 3X faster DRS-related operations (e.g. power-on virtual machine)

But comparing the vCSA and System Center VMM is quite difficult, also because they have different purposes (with VMM you can really build a private cloud). It would be more correct to compare VMM with vCenter plus vRA, but even that comparison would not be homogeneous. The best comparison would be between Azure Stack and the entire vCloud Suite.

About cluster resiliency: Microsoft has finally done major work on Failover Clustering in Windows Server 2019, making it much more resilient and less dependent on the AD structure, and also adding a new capability to easily move an entire failover cluster from one domain to another.
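A rough sketch of that domain-move workflow (assuming the Windows Server 2019 FailoverClusters cmdlets New-ClusterNameAccount and Remove-ClusterNameAccount, and a hypothetical cluster named CL01; the nodes are moved to a workgroup and then to the new domain between the two steps):

```powershell
# In the old domain: remove the cluster's AD objects while keeping the cluster running.
Remove-ClusterNameAccount -ClusterName "CL01" -DeleteComputerObjects

# ...move the nodes to a workgroup, then join them to the new domain...

# In the new domain: recreate the cluster name account.
New-ClusterNameAccount -Name "CL01" -Domain "newcorp.local"
```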

Also, Windows Server 2019 brings a new capability called “True Two-Node” clusters: a greatly simplified architecture for the Windows Server Failover Clustering topology.

Cost

The cost comparison is now complicated: VMware ESXi remains licensed per socket (physical CPU), while Microsoft, starting with Windows Server 2016, licenses per core, and there are again feature differences between the Standard and Datacenter editions.
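A purely illustrative example of the math: with Windows Server per-core licensing (a minimum of 8 cores per processor and 16 per server, sold in 2-core packs), a two-socket host with 20 cores per socket needs 40 core licenses (20 packs), while the same host needs just two vSphere per-socket licenses; a small two-socket host with 6 cores per socket still has to license the 16-core minimum.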

VMware has different editions (Essentials, Essentials Plus, Standard, Enterprise Plus and now also Platinum) with different feature sets, and the free edition is very limited.

For Hyper-V, a zero-license-cost option is Hyper-V Server (the free version of Hyper-V, which is fully featured). Of course, you still need the guest OS licenses (but the same applies to ESXi) and the licenses for the rest of the physical infrastructure (for Hyper-V, a physical Domain Controller could be useful).

For the management part, VMware vCenter is mandatory (if you want the cluster features), but it no longer requires a Windows license (not even for VUM). For Hyper-V, SCVMM could be useful (like the rest of the System Center suite), but it is not mandatory.


HCI

Both also have a hyper-converged infrastructure (HCI) solution integrated at the kernel level: vSAN for vSphere and Storage Spaces Direct (S2D) for Microsoft (S2D is a feature of Windows Server, so it can also be used outside of hyper-converged deployments).

Both types of solutions can build a 2-node cluster: vSAN requires an external (virtual) ESXi host for the quorum, while S2D is simply based on Windows Failover Clustering, so an external witness (file share or cloud) is fine.
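A minimal sketch of the 2-node S2D side (the node names, storage account, and key are hypothetical):

```powershell
# Build the failover cluster without any shared storage, then enable S2D,
# which pools the local disks of both nodes.
New-Cluster -Name "S2D-CL" -Node "NODE1","NODE2" -NoStorage
Enable-ClusterStorageSpacesDirect

# Use an Azure storage account as the external witness instead of a third node.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"
```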

With Windows Server 2019 there are a lot of improvements in the S2D part: it now supports a maximum of 4 petabytes of storage per cluster. Performance also seems to have improved: during the last Ignite event, Microsoft demonstrated an 8-node S2D cluster reaching 13 million IOPS (although this kind of demo does not mean much without a detailed workload specification).


Virtualization, Cloud and Storage Architect. Tech Field delegate. VMUG IT Co-Founder and board member. VMware VMTN Moderator and vExpert 2010-24. Dell TechCenter Rockstar 2014-15. Microsoft MVP 2014-16. Veeam Vanguard 2015-23. Nutanix NTC 2014-20. Several certifications including: VCDX-DCV, VCP-DCV/DT/Cloud, VCAP-DCA/DCD/CIA/CID/DTA/DTD, MCSA, MCSE, MCITP, CCA, NPP.