The recent VMware security advisory (VMSA-2017-0006) covers one of the worst possible security issues in a virtualization environment: a "guest escape" vulnerability that allows arbitrary code execution on the host system from a guest system.
It's not the first time a similar risk has appeared (see, for example, how Microsoft Edge was used to escape VMware Workstation at Pwn2Own 2017), but this kind of issue carries a different risk level depending on whether it affects Workstation (so "just" a client environment) or ESXi (potentially a datacenter environment).
One of the pillars of virtualization is the guest isolation concept: a VM cannot directly interact with the host or with other VMs.
Of course, virtual networking exposes the guest to all kinds of network attacks, but there are several solutions (for example, NSX) that try to minimize these risks. And the network is needed, because a truly isolated system is quite useless in a typical client-server approach.
Things are quite different if a VM can break host isolation: VMware (and the other enterprise hypervisor vendors) have worked hard to avoid these types of issues, or at least to minimize them.
But there are potential security risks because some parts are shared across the virtual environment. One good example was the theoretical risk related to VMware TPS (see Bye bye Transparent Page Sharing): some academic papers demonstrated that, by forcing a flush and reload of cache memory, it is possible to measure memory access timings and try to determine an AES encryption key in use on another virtual machine running on the same physical processor of the host server, if Transparent Page Sharing is enabled between the two virtual machines. This is a privacy-related aspect, no real exploit exists that uses it, and TPS is now turned off by default.
The virtual switch can potentially also be a privacy risk, because if you can capture all of its traffic from a VM, you can analyze any unencrypted traffic. Promiscuous mode is denied by default, but it would be interesting to see if, in the future, somebody finds a way to break that. In this case NSX may not help, because VXLAN encapsulation is normally not encrypted.
But more interesting are the bugs related to virtual hardware, because the real physical drivers are implemented at the virtualization layer (at least in VMware ESXi, which is a monolithic hypervisor): breaking a guest driver may give access to the corresponding one in ESXi and a way to run code directly on the host (and with high privileges).
And these kinds of risks have been explored for several years: a good example is CVE-2009-1244 (aka "Cloudburst"), which describes how to use the SVGA guest driver to access host memory (and the latest VMware vulnerability is also related to the virtual SVGA device).
Another driver-related issue is the one in the USB controller: the ESXi, Workstation, and Fusion XHCI controller has an uninitialized memory usage issue that may allow a guest to execute code on the host. On ESXi 5.5 the issue is reduced to a denial of service of the guest.
Microkernel-based hypervisors can also be affected by this kind of driver-related risk: it doesn't matter that the "real" drivers live in a "parent" VM; breaking them can still open up possible security risks.
And there are, of course, all kinds of possible DoS attacks from a VM against the host level; in this case, good resource management technologies can help mitigate those risks. But what about deadlocks and race conditions? Historically, OSes have largely ignored those risks, and I think we will see more examples of attacks trying to use them in the future.
With this article I don't want to say that VMs are insecure, but the real VM security level probably still has to be proven by future security attacks focused on virtual environments. It's like the security of Linux systems: at the beginning there were a lot of attacks at the application level, but now there are also many attacks aimed specifically at the kernel level.
And hypervisor security can also affect container security, because some vendors (like VMware and Microsoft) are pushing containers inside VMs.
The main justification for this approach is the security aspect, in order to have better container isolation. Of course, container isolation and security also have to be further explored and proven, but, as written above, the Linux kernel (at least for Linux containers) has already been stressed for several years in all security aspects.