

VMware best practices for virtual networking, starting with vSphere 5, usually recommend the vmxnet3 virtual NIC adapter for all VMs with a “recent” operating system: Windows starting from NT 6.0 (Vista and Windows Server 2008), Linux distributions that include this driver in the kernel, and virtual machine hardware version 7 or later.

For those operating systems the choice is normally between the e1000 and the vmxnet3 adapter: the new virtual machine wizard suggests the e1000 for recent Windows systems, but only because its driver is already included in the OS.

Historically there have been some issues with both e1000 and vmxnet3, but now, with vSphere 6.0, it seems that choosing vmxnet3 can bring some big problems.

The biggest one is related to a specific PSOD bug affecting vmxnet3 on VMs with virtual hardware 11 and, it seems, ESXi 6.0 Update 2 hosts. VMware KB 2144968 (ESXi 6.0 Update 2 host fails with a purple diagnostic screen containing the error: Vmxnet3VMKDevRxWithLock and Vmxnet3VMKDevRx) gives more detail about this issue, which can cause a purple screen on the host (so much for VM isolation).

This is a known issue affecting ESXi 6.0 Update 2, but I’m not sure that previous 6.0 versions are completely bug free. For sure it does not affect the 5.x versions.

Currently there is no resolution, and the only way to work around this issue is to disable hardware LRO for VMXNET3 on the ESXi host:

  • To disable hardware LRO on the ESXi host, change the advanced configuration option /Net/Vmxnet3HwLRO to 0. For more information, see Configuring advanced options for ESXi/ESX (KB 1038578).
    Note:
    • Software LRO remains enabled for VMXNET3 when hardware LRO is disabled.
    • This can also be set with an esxcli command: esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0 (see the sketch after this list).
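
As a minimal sketch, assuming SSH or ESXi Shell access to the affected host, the host-level workaround could look like this (setting the option back to 1 restores the default behavior):

  # Check the current value of the hardware LRO option for VMXNET3
  esxcli system settings advanced list -o /Net/Vmxnet3HwLRO

  # Disable hardware LRO (software LRO will take over for VMXNET3)
  esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0

  # To revert to the default later
  esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 1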

Rolling back to a previous ESXi 6.0 version does not seem really applicable or smart: other issues affect the early versions of vSphere 6.0.

Update of May 12, 2016: VMware has fixed this issue with a patch described in KB 2144685 (VMware ESXi 6.0, Patch ESXi600-201605401-BG: Updates esx-base, vsanhealth, vsan VIBs).

There are also several vmxnet3 performance issues, on both Linux and Windows OSes, that you have to consider.

For Linux VMs you can find more information in VMware KB 1027511 (Poor TCP performance might occur in Linux virtual machines with LRO enabled) and VMware KB 2077393 (Poor network performance when using VMXNET3 adapter for routing in a Linux guest operating system). Funny how the second one was an old issue affecting the e1000 adapter that now also affects vmxnet3.
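
If you hit the Linux-side issues and prefer to act inside the guest rather than on the host, a possible sketch is to disable LRO on the vmxnet3 interface with ethtool; eth0 is just an assumed interface name and the change does not persist across reboots:

  # Show the current LRO setting of the vmxnet3 interface
  ethtool -k eth0 | grep large-receive-offload

  # Disable LRO on that interface (not persistent across reboots)
  ethtool -K eth0 lro off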

For Windows, the main issue is limited to Windows Server 2012, Windows 8 or later running on a VM with virtual hardware 11 and, of course, using the vmxnet3 adapter: VMware KB 2129176 (After upgrading a virtual machine to hardware version 11 network dependent workloads experience performance degradation) describes this issue and, funnily enough, says that it is resolved in ESXi 6.0 Update 2 (but does not mention the PSOD problem!).
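
In the Windows guest the counterpart of LRO is Receive Segment Coalescing (RSC), so a possible guest-side mitigation, sketched here for Windows Server 2012 or later and to be tested in your own environment, is to turn RSC off; “Ethernet0” is just an assumed adapter name:

  # Disable RSC globally for the TCP stack (classic netsh way)
  netsh int tcp set global rsc=disabled

  # Or check and disable it per adapter with PowerShell
  Get-NetAdapterRsc
  Disable-NetAdapterRsc -Name "Ethernet0"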

In all these cases, working on the Large Receive Offload (LRO) support for VMXNET3 adapters with Windows VMs on vSphere 6 seems a way to solve or minimize these problems: by disabling it at the VM level or at the host level. Let’s see when there will be a real fix for all vmxnet3 issues.

For the virtual network adapter choice see also:

