Browsing Posts in vSphere

One month ago, VMware released a new branch of VMware Tools, version 10.2.0, with an interesting new feature: the offline VIB bundle. With this package, you can simply upgrade the VMware Tools components embedded in your VMware ESXi hosts and then continue to update your VMware Tools as usual!
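
Just as a quick sketch (not part of the original post), assuming SSH is enabled on the host and the offline bundle has already been copied to a datastore, the installation boils down to a single esxcli call; host name, credentials and the bundle path below are placeholders:

```python
# Minimal sketch: pushing the VMware Tools offline bundle to an ESXi host over
# SSH and installing it with esxcli. Host name, credentials and the bundle
# path/filename are placeholders, not taken from the original post.
import paramiko

HOST = 'esxi01.lab.local'          # hypothetical ESXi host
BUNDLE = '/vmfs/volumes/datastore1/VMware-Tools-10.2.0-offline_bundle.zip'

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab only
ssh.connect(HOST, username='root', password='***')

# esxcli installs the VIBs contained in the offline depot (zip) bundle
stdin, stdout, stderr = ssh.exec_command(
    'esxcli software vib install -d {}'.format(BUNDLE))
print(stdout.read().decode())
print(stderr.read().decode())
ssh.close()
```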

Now that the PSOD issue with vSphere 6.5 and 10 Gbps NICs is finally solved, it seems that the critical vSphere 6.5 bugs are closed, but that’s not entirely true. During an upgrade from vSphere 6.0, I found a really strange iSCSI storage issue where all the VMs on the iSCSI datastore became so slow as to be unusable. At first I suspected drivers or firmware, on the hosts and on the NICs (1 Gbps), or the firmware on the storage.

In October 2017, I wrote a post about a possible issue with vSphere 6.5 and 10 Gbps NICs (now mostly standard in new deployments). The final result was a PSOD (Purple Screen Of Death), and no solution was available (yet). VMware KB 2151749 describes this issue as related to upgrades to vSphere 6.5, but other customers have reported it on new deployments as well. Veeam, one of the first vendors to discover this issue (through their customers), reports that it is triggered randomly by network-intensive activities such as backup over NBD or vMotion. […]

As written in a previous post, some months ago I started a huge personal project that consumed all my spare time: a new book on VMware vSphere 6.5, really ambitious considering that it would be a “Mastering” book, though the title and part of the content were not negotiable with the editor. Finally, the book Mastering VMware vSphere 6.5 is done and it’s now available on the Packt site (and in the future also on Amazon).

This is an article written for the StarWind blog, focused on the design and implementation of stretched clusters. A stretched cluster, sometimes called a metro cluster, is a deployment model in which two or more host servers are part of the same logical cluster but are located in separate geographical locations, usually two sites. To be in the same cluster, the shared storage must be reachable from both sites. Stretched clusters are usually used to provide high availability (HA) and load balancing features and capabilities, and to build active-active sites.

VMware vSAN should manage VM snapshots better than traditional storage and VMFS datastores. The reason is the new (v2) on-disk format in vSAN 6.0 and the new filesystem it uses: VirstoFS. VirstoFS is the first implementation of the technology acquired when VMware bought a company called Virsto a number of years ago. There is also a new sparse snapshot format called vsanSparse. These replace the traditional vmfsSparse format (redo logs).
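
For context, nothing changes on the API side: a snapshot taken on a VM that lives on a vSAN datastore is requested exactly as on VMFS, and the datastore decides the on-disk format. A minimal pyVmomi sketch (connection details and the VM name are hypothetical):

```python
# Minimal sketch: taking a snapshot of a VM through the vSphere API (pyVmomi).
# On a vSAN 6.x datastore the resulting delta objects use the vsanSparse
# format instead of vmfsSparse; names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()                     # lab only
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
vm = content.searchIndex.FindByDnsName(dnsName='vm01.lab.local', vmSearch=True)

# Same call as on VMFS; the on-disk snapshot format depends on the datastore
task = vm.CreateSnapshot_Task(name='pre-change', description='before patching',
                              memory=False, quiesce=True)
Disconnect(si)
```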

In the VMware ESXi 6.x partition layout, there is usually a partition called “scratch” that hosts the logs, updates and other temporary files. Scratch space is configured automatically during installation or the first boot of an ESXi host, and does not need to be configured manually. If you install ESXi on a local disk (or on a remote LUN in “boot from SAN” mode), this partition is built during the installation phase (it’s a 4 GB FAT16 partition created on the target device during installation, if there is sufficient space). If […]
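
If you do need to point the scratch location somewhere persistent (for example on hosts booted from SD cards), one way is to change the ScratchConfig.ConfiguredScratchLocation advanced option; a rough pyVmomi sketch, with host, datastore and credentials as placeholders (a reboot is required for the change to take effect):

```python
# Sketch: point the ESXi scratch location at a persistent datastore folder
# via the host's advanced options (pyVmomi). All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                     # lab only
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName='esxi01.lab.local',
                                          vmSearch=False)

# Update the advanced option; the host applies it at the next reboot
opt_mgr = host.configManager.advancedOption
opt_mgr.UpdateOptions(changedValue=[
    vim.option.OptionValue(key='ScratchConfig.ConfiguredScratchLocation',
                           value='/vmfs/volumes/datastore1/.locker-esxi01')
])
Disconnect(si)
```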

One of the best virtualization-related books of 2017, in my opinion, is VMware vSphere 6.5 Host Resources Deep Dive, written by Frank Denneman and Niels Hagoort. Its main target audience is administrators, architects, consultants, aspiring VCDXes and people eager to learn more about the elements that control the behavior of CPU, memory, storage and network resources. But the most valuable part is that it is not only updated to vSphere 6.5, but also covers new technologies, like the new Xeon family, new types of disks, NVMe, …

It seems that there are still some issues with vSphere 6.5, with a possible PSOD (Purple Screen Of Death) after the upgrade to 6.5 U1 on ESXi hosts using 10 Gbps NICs. VMware KB 2151749 describes this issue and explains that it occurs because the NetQueue commit phase stops abruptly due to the failure of hardware activation of an Rx queue. As a result, the internal data structures of the NetQueue layer can go out of sync with the device and cause a PSOD.

One of the reasons why my blog has been starving in recent months is that I’ve started a huge personal project that consumes all my spare time. This project is a book on VMware vSphere 6.5, really ambitious considering that it will be a “Mastering” book, but the title and part of the content were not negotiable.

One of the big advantages of the virtual appliance version of VMware vCenter (vCSA) is the ability to update both the OS components and the VMware parts from a simple menu. Just use the administrative UI available at https://vCSA_IP:5480 and log in with the user root and the password that you chose during the deployment.
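
As an alternative to the VAMI, the same updates can also be driven from the appliance shell; the sketch below is just an illustration over SSH, and the software-packages syntax should be double-checked against the documentation for your vCSA build (hostname and password are placeholders):

```python
# Rough sketch: triggering the vCSA update from the appliance shell over SSH
# instead of the VAMI. Hostname and password are placeholders and the
# 'software-packages' command line is an assumption to verify for your build.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab only
ssh.connect('vcsa.lab.local', username='root', password='***')

# Download and install the latest patches from the default VMware repository
stdin, stdout, stderr = ssh.exec_command(
    'software-packages install --url --acceptEulas')
print(stdout.read().decode())
print(stderr.read().decode())
ssh.close()
```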

The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices for advertising their identity, capabilities, and neighbors on an IEEE 802 local area network, usually over standard Ethernet. Compared to the Cisco Discovery Protocol (CDP), it’s not proprietary and can be used across different vendors. VMware vSphere adds LLDP capability in the Distributed Virtual Switches (DVS). CDP is available both in DVS and in standard virtual switches (by default it’s enabled in listen mode).
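
Besides the vSphere Web Client, the discovery protocol of a DVS can also be changed through the API; a minimal pyVmomi sketch that switches an existing Distributed Switch to LLDP in both listen and advertise mode (switch name and connection details are placeholders):

```python
# Sketch: enable LLDP (listen + advertise) on an existing Distributed Switch
# with pyVmomi. Switch name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                     # lab only
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local', pwd='***', sslContext=ctx)
content = si.RetrieveContent()

# Find the DVS by name with a container view
dvs = None
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for sw in view.view:
    if sw.name == 'DSwitch01':
        dvs = sw

# Reconfigure the switch to use LLDP instead of CDP
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion              # required field
spec.linkDiscoveryProtocolConfig = vim.host.LinkDiscoveryProtocolConfig(
    protocol='lldp', operation='both')
dvs.ReconfigureDvs_Task(spec)
Disconnect(si)
```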

If you choose to install vCSA 6.5 as two different components, you may get an error during the PSC customization (it also happens on the latest 6.5 U1): “An error occurred while starting service ‘pschealth’”. This is related to a failure of the identity management service on first boot, during phase 2 of the deployment, where the appliance has already been uploaded but must still be configured.

In a VMware infrastructure, when you build a new VM, the default compatibility level can depend on your vSphere version, on which client you are using (the legacy vSphere Client does not ask for the VM virtual hardware version in the default wizard), but also on your cluster settings. The VM virtual hardware version defines exactly the compatibility level, but you can define the default level using the vSphere Web Client or the new HTML5 client.
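
For a single VM, the current compatibility level can also be checked and raised through the API; a short pyVmomi sketch (VM name, connection details and the ‘vmx-13’ target, which corresponds to the vSphere 6.5 level, are placeholders for illustration):

```python
# Sketch: read a VM's virtual hardware (compatibility) version and upgrade it
# to a target level with pyVmomi. Names and credentials are placeholders; the
# VM must be powered off for the upgrade task to succeed.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()                     # lab only
si = SmartConnect(host='vcenter.lab.local',
                  user='administrator@vsphere.local', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
vm = content.searchIndex.FindByDnsName(dnsName='testvm01.lab.local',
                                        vmSearch=True)

print(vm.config.version)           # e.g. 'vmx-11' for a vSphere 6.0 level VM
vm.UpgradeVM_Task(version='vmx-13')  # 'vmx-13' = vSphere 6.5 compatibility
Disconnect(si)
```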
