
VMware by Broadcom has just released vSphere 8.0 Update 3.

Note that the ESXi 8.0 Update 3 release carries an Initial Availability (IA) designation, while the vCenter Server 8.0 Update 3 release carries a General Availability (GA) designation.

For more information on the vSphere 8.0 IA/GA Release Model of vSphere Update releases, see The vSphere 8 Release Model Evolves.

For internationalization, compatibility, installation, upgrade, open source components and product support notices, see the VMware vSphere 8.0 Release Notes.

For more information on vCenter Server supported upgrade and migration paths, please refer to VMware knowledge base article 67077.

The list of new features and enhancements in vSphere 8.0 Update 3 is quite impressive.

Hardware support

  • DPU/SmartNIC
    • High Availability with VMware vSphere Distributed Services Engine: Starting with ESXi 8.0 Update 3, vSphere Distributed Services Engine adds support for 2 data processing units (DPUs) to provide high availability or increase offload capacity per ESXi host. Dual-DPU systems can use NVIDIA or Pensando devices. In ESXi 8.0 Update 3, dual-DPU systems are supported by Lenovo server designs. For more information, see High Availability with VMware vSphere Distributed Services Engine.
  • CPU
    • PCIe hot plug is updated for server platforms utilizing newer generation AMD Genoa and Intel Sapphire Rapids CPUs: Starting with vSphere 8.0 Update 3, kernel hot plug is supported for newer generation CPUs such as AMD Genoa and Intel Sapphire Rapids.
    • Support for Intel Xeon Max Series processors with integrated High Bandwidth Memory (HBM): vSphere 8.0 Update 3 adds support for Intel Xeon Max Series processors (formerly code-named Sapphire Rapids HBM) with 64 GB of integrated HBM, aimed at enhancing performance for workloads such as high performance computing, artificial intelligence (AI), and machine learning (ML).
    • CPU C-State Power virtualization: With vSphere 8.0 Update 3, for use cases such as Virtualized Radio Access Network (vRAN) workloads, you can configure and control the C-State power of the physical CPUs dedicated to vRAN VMs from the vSphere Client.
    • Cluster-wide option to retain virtual NUMA topology: vSphere 8.0 Update 3 adds a setting to retain a preconfigured vNUMA topology even if the VM moves, allowing for better NUMA topology tuning for VMs across all the hosts in the cluster. The new advanced vCenter Server setting, VPXD_PersistVnuma, keeps the virtual NUMA topology at the cluster level and is available under Configure > Advanced Settings in the vSphere Client.
  • GPU
    • Support for switching between Time Sliced and Multi-Instance GPU (MIG) modes for NVIDIA virtual GPUs: Starting with vSphere 8.0 Update 3, you do not need to reboot an ESXi host to switch between time sliced and MIG modes for NVIDIA virtual GPUs. vGPU VMs can automatically set the correct device mode according to their vGPU type.
    • Zero-copy support for vGPUs to enhance vSphere vMotion and vSphere DRS tasks: vSphere 8.0 Update 3 adds zero-copy support for vGPUs to enhance vSphere vMotion and vSphere DRS tasks by utilizing throughput of up to 100 Gbps.
    • Support for heterogeneous vGPU profiles on physical GPUs: Starting with vSphere 8.0 Update 3, you can set vGPU profiles with different types or sizes on a single physical GPU to achieve greater flexibility with vGPU workloads and better utilization of GPU devices.
  • ESXi 8.0 Update 3 adds support for vSphere Quick Boot to multiple servers, including:
    • Cisco
      • UCSC-C220-M7N
      • UCSC-C240-M7SN
      • UCSC-C240-M7SX
      • UCSX-210C-M7
      • UCSX-410C-M7
    • Dell
      • PowerEdge HS5610
      • PowerEdge HS5620
    • HPE
      • Alletra Storage Server 4120
    • Supermicro
      • SYS-221BT-HNC8R

Cluster

  • vSphere Cluster Service
    • Introducing Embedded vSphere Cluster Service (vCLS): vSphere 8.0 Update 3 introduces a redesign of vCLS to Embedded vCLS, which utilizes vSphere Pod technology. Deployment and lifecycle of these VMs are managed within ESXi and are no longer managed by the vSphere ESX Agent Manager (EAM). Earlier versions of vCLS are termed External vCLS. Issues previously encountered with External vCLS are resolved in this release of Embedded vCLS. While existing API compatibility is preserved, minor modifications to customer scripts or automation tools might be necessary.

VM

  • Virtual Machine Management
    • New Virtual Machine Compute Policy for Best Effort Virtual Machine Evacuation: vSphere 8.0 Update 3 adds a compute policy for best effort evacuation of virtual machines on ESXi hosts that are entering maintenance mode. When the host enters maintenance mode, all VMs are shut down. If shutdown fails, the VMs are powered off. If power off fails, you need to evacuate the VMs. With the new capability, when the VMs are in a powered-off state, vCenter attempts to power them on every few minutes on the best available ESXi host at the time. This policy overrules any DRS overrides set at the VM level, and the hosts on which the VMs are powered on might be different from the original host.
  • GuestOS
    • Guest customization supports RHEL NetworkManager keyfile format: In vSphere 8.0 Update 3, guest customization adds support for the RHEL NetworkManager keyfile format, and you can store network configuration in both keyfile and ifcfg formats.

Core storage

  • Storage/Memory
    • Memory Tiering: vSphere 8.0 Update 3 introduces the Memory Tiering capability as a tech preview, which allows you to use NVMe devices that you add locally to an ESXi host as tiered memory. Memory tiering over NVMe optimizes performance by directing VM memory allocations to either NVMe devices or faster dynamic random access memory (DRAM) in the host. This allows you to increase your memory footprint and workload capacity, while reducing the total cost of ownership (TCO). For more details on the tech preview, see KB 95944.
    • Fabric Notification support for SAN clusters: ESXi 8.0 Update 3 introduces support for Fabric Performance Impact Notifications Link Integrity (FPIN-LI). With FPIN-LI, the vSphere infrastructure layer can manage notifications from SAN switches or targets, identifying degraded SAN links and ensuring only healthy paths are used for storage devices. FPIN can also notify ESXi hosts for storage link congestion and errors.
    • Support for space reclamation requests from guest operating systems on NVMe-backed vSphere Virtual Volumes datastores and Config-vVol: ESXi 8.0 Update 3 adds support for automatic space reclamation requests from guest operating systems on NVMe-backed vSphere Virtual Volumes datastores. ESXi 8.0 Update 3 also adds support for both command line-based and automatic unmap for vSphere Virtual Volumes objects of type Config-vVol, formatted with VMFS-6. For more information, see Reclaim Space on the vSphere Virtual Volumes Datastores.
    • Manage the UNMAP load from ESXi hosts at a VMFS datastore level: Starting with ESXi 8.0 Update 3, you can control the unmap load at datastore level to avoid time delays from space reclamation and reduce overall unmap load on the arrays in your environment. For more information, see Space Reclamation on vSphere VMFS Datastores.
    • Windows Server Failover Clustering (WSFC) enhancements on vSphere Virtual Volumes: vSphere 8.0 Update 3 adds support for a WSFC solution for NVMe-backed disks on vSphere Virtual Volumes. This capability allows NVMe reservations support in NVMe/TCP environments apart from Fibre Channel support for WSFC on vSphere Virtual Volumes. Virtual NVMe (vNVME) controllers are supported as the frontend for WSFC with NVMe-backed disks, not with SCSI-backed disks. For more information, see VMware vSphere® Virtual Volumes Support for WSFC.
    • Support for active-active vSphere Metro Storage Cluster (vMSC) with vSphere Virtual Volumes: vSphere 8.0 Update 3 introduces a new version of the VMware vSphere Storage APIs for Array Integration (VASA) to add support to active-active stretched storage clusters with vSphere Virtual Volumes, with active-active deployment topologies for SCSI block access between two sites. VASA version 6 includes new architecture and design for VASA Provider High Availability support for both stretched and non-stretched storage clusters. For more information, see Using Stretched Storage Clustering with Virtual Volumes.
    • VMkernel port binding for NFS v4.1 datastores: With vSphere 8.0 Update 3, you can bind an NFS 4.1 connection to a specific VM kernel adapter. If you use multipathing, you can provide multiple vmknics for each connection to ensure path isolation and security by directing NFS traffic across a specified subnet/VLAN, so that the NFS traffic does not use other vmknics. For more information, see Configure VMkernel Binding for NFS 4.1 Datastores.
    • Support for nConnect for NFS v4.1 datastores: ESXi 8.0 Update 3 adds support for multiple TCP connections for an NFS v4.1 volume, also referred to as nConnect. For NFS v4.1, multiple TCP connections are created for a single session, which many datastores can share in parallel. You can configure this by using either the vSphere API or ESXCLI directly on an ESXi host. For more information, see Configure Multiple TCP Connections for NFS.
    • Reduced time to inflate VMFS disks: With vSphere 8.0 Update 3, a new VMFS API allows you to inflate a thin-provisioned disk to eagerzeroedthick (EZT) while the disk is in use, up to 10 times faster than previous methods. During the inflation, all blocks are fully allocated and zeroed upfront to allow faster run-time performance. For more information, see Virtual Disk Options.
    • Improved resiliency against memory corruption on RAM-heavy ESXi hosts: vSphere 8.0 Update 3 adds proactive measures to prevent memory errors on hosts running VMs with more than 1 TB of memory, which might otherwise bring down an entire ESXi host and increase application downtime.
    • Advanced setting to block deletion and removal of disks for VMs with snapshots: ESXi 8.0 Update 3 adds a per-host advanced option, blockDiskRemoveIfSnapshot, to prevent the removal of disks from a VM that has snapshots, even if you choose to delete files, which might lead to orphaned disks. For more information, see VMware knowledge base article 94545.
    • Hardware accelerated move (clone operation) support on NVMe devices: vSphere 8.0 Update 3 supports NVMe copy command for hardware accelerated move (also called clone blocks or copy offload) across or within NVMe namespaces that belong to the same NVMe subsystem.
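The Memory Tiering tech preview above is enabled per host from the command line. The sketch below is based on the steps documented in KB 95944; the NVMe device path and the tier percentage are placeholders, and the exact commands may change while the feature remains in tech preview:

```shell
# Enable the Memory Tiering kernel setting (tech preview; see KB 95944)
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Dedicate a local NVMe device as a tier device (device path is a placeholder)
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE

# Size the NVMe tier as a percentage of DRAM, for example 25%
esxcli system settings advanced set -o /Mem/TierNvmePct -i 25

# A host reboot is required for the configuration to take effect
```

Because this is a tech preview, verify each step against the current version of KB 95944 before using it outside a lab.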
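The datastore-level UNMAP control is exposed through the existing ESXCLI reclaim namespace. A minimal sketch, assuming a datastore labeled Datastore01 (a hypothetical name):

```shell
# Inspect the current space-reclamation settings for the datastore
esxcli storage vmfs reclaim config get --volume-label=Datastore01

# Reduce the unmap priority to lower the reclamation load on the array
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low
```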
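The nConnect support for NFS v4.1 can be set when mounting the datastore with ESXCLI. A sketch, in which the server address, export path, and volume name are placeholders, and the --connections flag name is an assumption (check esxcli storage nfs41 add --help on your build):

```shell
# Mount an NFS 4.1 datastore using 4 parallel TCP connections (nConnect)
esxcli storage nfs41 add --hosts=192.168.1.50 --share=/export/ds1 \
  --volume-name=nfs41-ds1 --connections=4
```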
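The blockDiskRemoveIfSnapshot advanced option is set per host. Since option paths vary, a safe approach is to discover the full path on the host first and then enable it; the placeholder in the second command must be replaced with the path the first command reports:

```shell
# Find the full path of the new advanced option on this host
esxcli system settings advanced list | grep -i -B 1 -A 3 blockdiskremoveifsnapshot

# Enable it, substituting the exact option path reported above
esxcli system settings advanced set -o /<path-from-output>/BlockDiskRemoveIfSnapshot -i 1
```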

Security

  • TLS
    • TLS 1.3 and 1.2 support by using TLS profiles: Starting with vSphere 8.0 Update 3, you can use TLS profiles to simplify the configuration of TLS parameters and improve supportability in your vSphere system. vSphere 8.0 Update 3 provides a default TLS profile, COMPATIBLE, on ESXi hosts and vCenter Server, which supports TLS 1.3 and some TLS 1.2 connections. For more information, see vSphere TLS Configuration.
  • Identity sources
    • Removal of Integrated Windows Authentication (IWA): vSphere 8.0 Update 3 is the final release to support Integrated Windows Authentication. IWA was deprecated in vSphere 7.0 and will be removed in the next major release. To ensure continued secure access, migrate from IWA to Active Directory over LDAPS or to Identity Federation with Multi-Factor Authentication. For more information, see vSphere Authentication with vCenter Single Sign-On and Deprecation of Integrated Windows Authentication.
    • New identity sources have been added
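The TLS profiles above can be inspected and switched per ESXi host from the command line. A sketch, assuming the esxcli system tls server namespace that 8.0 Update 3 introduces (verify the exact syntax with esxcli system tls server --help):

```shell
# Show the TLS profile currently applied to the host's services
esxcli system tls server get

# Apply the default COMPATIBLE profile (TLS 1.3 plus selected TLS 1.2 connections)
esxcli system tls server set --profile COMPATIBLE
```

Changing the profile may restart host management services, so apply it during a maintenance window.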