RDM (Raw Device Mapping) disks are a feature of VMware vSphere (already present in Virtual Infrastructure) that maps a LUN (or logical disk) directly to a VM (similar to a disk pass-through). This feature can be used in different cases: for example, to support disks larger than 2 TB (only in vSphere 5 with physical RDM) or to implement guest clustering with shared storage (again, only with physical RDM).

But there is an issue (or a feature :) ): the GUI does not allow adding an RDM disk for a local disk, and this is still the case with vSphere 5. The “silly” thing is that shared storage based on a SAS connection is treated in the same way (for example, the Dell PowerVault MD3x00 series).
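As a (not officially supported) workaround, the RDM mapping file can be created from the ESXi command line with vmkfstools and then added to the VM as an existing disk. The device identifier, datastore and VM folder below are placeholders, so adapt them to your host:

```shell
# List the local/SAS devices to find the identifier of the target LUN
ls -l /vmfs/devices/disks/

# Create a physical-mode (pass-through) RDM pointer file on an existing datastore
# (replace naa.<device-id>, datastore1 and MyVM with your own values)
vmkfstools -z /vmfs/devices/disks/naa.<device-id> \
  /vmfs/volumes/datastore1/MyVM/MyVM-rdm.vmdk

# Use -r instead of -z for a virtual-mode RDM
```

The resulting pointer .vmdk can then be attached from the GUI as an existing virtual disk.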


With vSphere 5, VMware HA has been completely changed in its implementation, but the nice aspect is that it still seems the same on the user side (a good example of how to improve a product painlessly).

In vSphere 5.0 the new HA agent is “FDM” (Fault Domain Manager), which replaces the old AAM agent (from EMC Legato). But the agent is not the only thing that has changed:

  • The old primary/secondary node concept has been replaced by a new and simpler master/slave node concept
  • A new Datastore Heartbeat function has been added (HA Architecture Series – Datastore Heartbeating)
  • No dependency on DNS
  • Syslog functionality

For more information see Duncan Epping’s well-explained HA Deep Dive.

Storage failure
With the new Datastore Heartbeat it may seem possible to also handle a datastore failure. But this is not true: this function is not used to detect a storage failure, only to improve isolation detection. So currently the only way to handle a storage failure is with a strong design for maximum availability (and/or with specific storage solutions).

Mixed clusters
With vSphere 5 and FDM, a good question is whether a mixed cluster (with 5.0 and older hosts) is supported or not. In other words, what happens if there is an ESX 4 host in the cluster that has not been upgraded yet, and how will it be handled?
There is an excellent blog post on this by Frank Denneman. The short answer is that it is supported, and the old host will get the FDM agent, but you should upgrade the host as soon as reasonable.

HA & DRS appear disabled when a Storage Profile is enabled / disabled on a cluster
This came up recently, and it turns out that while HA and DRS appear disabled, that is not correct: they are in fact still working. You need to re-enable HA and DRS for their state to be displayed correctly. See http://kb.vmware.com/kb/2008203

Advanced options
See the previous post.

In the previous post I’ve considered the cases and scenarios of “non-supported” configurations. But what happens with “supported” configurations? Do they always work, and always in the best way? A supported configuration means that it can work well, but only in specific situations, cases, and scenarios.

Usually a good rule is to make a good analysis and a good virtual design before choosing the single pieces. A supported configuration does not mean that it also meets requirements like availability, scalability, and performance. For example, there are a lot of entry-level storage arrays that are VMware certified, but they do not meet any of the previous requirements. This could be a simple example of a wrong design that is “too low”, but the funny thing is that there are also examples of wrong designs that are “too high”. In some cases I’ve seen an FC-based storage solution used in a small environment only because the customer (or the consultant) was not confident with an iSCSI or NFS (but still enterprise) solution.


In previous posts related to the vSphere 5 upgrade, I’ve talked several times about the HCL and its relevance. For a production environment, having a completely supported configuration in every part (hardware, software, firmware, …) is IMHO mandatory. But “not supported” does not always mean “not working”.

There are different scenarios with an “unsupported configuration”:


In a vSphere upgrade process, there are two different approaches for the host upgrade: a fresh re-install or an in-place upgrade. On the VMware site there is an interesting post about this choice.

The differences between an upgraded host and a freshly installed host are:


The new major release of Citrix’s hypervisor was released on September 30, 2011 (XenServer 6.0 is here!). For more info see also the Release Notes for Citrix XenServer 6.0.

Architectural Changes:

  • The Boston release is based on the open-source Xen 4.1 hypervisor.   XenServer is another commercial product to ship with the Xen 4 hypervisor.  For those of you who like to follow the open source world, Oracle VM 3 launched a few weeks ago, and is based on the Xen 4 hypervisor.  Ubuntu Server 11.10 will soon follow with support for Xen 4, and will be the first Linux distro to benefit from Xen being included in Linux 3.0.
  • The Open vSwitch (OVS) is now the default network stack for the product.  OVS was first introduced in XS 5.6 FP1 as a post-install configuration option, and is the basis for the distributed virtual networking (DVS) features like NetFlow, RSPAN and security ACLs, as well as NIC bonding improvements and Jumbo frames support.  Improvements to DVS include improved availability through the “fail-safe” option, as well various improvements based on customer feedback from XenServer 5.6 FP1.  Note that the legacy Linux bridging stack is still available via a post-install configuration option, for those that need it.
  • Support for hardware-assisted (SR-IOV) network performance optimization has been improved, particularly for use with the NetScaler VPX and SDX products.  A future version of NetScaler SDX will ship with XenServer 6.0 onboard.

Product simplification:

  • Citrix has simplified the management infrastructure requirements for features such as Workload Balancing, StorageLink, and Site Recovery.  In fact, for StorageLink and Site Recovery, no additional management infrastructure is required at all.
  • Workload Balancing (and the historical reporting features which rely on its database) is available as a Linux-based virtual appliance for easy installation and management.

Virtual Appliance and V2V improvements:

  • Virtual Appliance support.  Within XenCenter you can create multi-VM virtual appliances (vApps), with relationships between the VMs for use with the boot sequence during Site Recovery.  vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard.
  • VMDK and VHD import functionality is integrated into XenCenter for interoperability with VMware VMDK and Microsoft VHD disk images.  Reliability of the “transfer VM” used for appliance import/export has been improved.

Guest OS support updates:

  • Formal guest support for Ubuntu 10.04
  • Updates for support of RHEL 5.6, Oracle Enterprise Linux 5.6, Oracle Enterprise Linux 6.0, CentOS 5.6, and SLES 10 SP4
  • Experimental VM templates for Ubuntu 10.10, CentOS 6.0, and Solaris
  • RHEL 6.0 and Debian Squeeze are fully supported (these are also supported with XS 5.6 SP2)

Performance Improvements:

  • XenServer 6.0 will ship on-board a future version of NetScaler SDX, which uses XenServer’s SR-IOV enhancements to drive line-speed (up to 10 Gbps) performance to virtual machines
  • Host (dom0) network throughput has been increased by 70-100%
  • Improved XenMotion performance, especially as compared to XenServer 5.6.0.

Other enhancements and improvements:

  • High Availability (HA) permits configuration of a boot sequence for recovery, as well as storage of the heartbeat disk on NFS.
  • NIC bonding improvements, including more formal support for active/passive mode

Comparison with vSphere 5.0:

                                    XenServer 6.0    vSphere 5.0
Max logical processors per host     64               160
Max RAM per host                    1 TB             2 TB
Max vCPU per VM                     16               32
Max vRAM per VM                     128 GB           1 TB

A few days ago Red Hat announced the availability of its Red Hat Enterprise Virtualization (RHEV) 3.0 public beta. The first beta of RHEV 3.0 was announced in August, but was not available to the general public: you needed an active RHEV subscription at that time. Now the evaluation is immediately available to anyone with a Red Hat Network account.

About the new features and the improvements, there is a specific page on the Red Hat site. Red Hat Enterprise Virtualization 3.0 includes updates such as:

  • Red Hat Enterprise Virtualization Manager is now a Java application running on JBoss Enterprise Application Platform on Red Hat Enterprise Linux. It was silly that the first versions of this product required a Windows Server to run.
  • A power user portal that provides end users with a self-service interface to provision virtual machines, define templates and administer their own environments.
  • Extended multi-level administrative capabilities, allowing fine-grained resource control, Role Based Access control, delegation and hierarchical management.
  • An integrated and embedded reporting engine allowing for analysis of historic usage trends and utilization reports.


A few days ago, VKernel released a post (Hyper-V 3.0: Closing the Gap With vSphere 5) that compares the new Hyper-V 3.0 with the existing vSphere 5.0.

I don’t know if the post was written before or after the Quest acquisition, but it doesn’t matter: it’s a comparison of two non-homogeneous products, because one will probably be released next year and the other was released in August of this year.

But the data can still be used to see how Microsoft is working to reduce and close the gap with VMware, at least on the hypervisor part:

Here are some other awesome additions in the upcoming Hyper-V 3.0 release:

  • Simultaneously live migrate both the VM and the VM’s disk to a new location.
  • Live migrate VMs without shared storage
  • NIC teaming without special third-party hardware – something VMware has already been doing; see also Microsoft Partners Already Implementing Hyper-V 3 Virtual Switch
  • Drag-and-drop files from one virtual machine to another to directly transfer without having to pass through the host or your workstation.
  • The ability to host virtual disks on file servers – CIFS, SMB, NFS

Since Hyper-V will also be packaged with the Windows 8 desktop OS, some kind of migration (probably cold) between desktop and server could be implemented. But Workstation 8 can already do this today.

It will be interesting to see how it matches up against vSphere in performance. In August, VMware released a third-party performance study, showing that vSphere 5.0 outperformed the current release of Hyper-V by 20%. If Hyper-V wants to compete on all levels, this is something Microsoft will most likely be addressing.

If you’re curious to dig into the upcoming Hyper-V 3.0 a bit more, here’s a couple of the better posts and articles:

To close the series of posts on the vSphere upgrade path to version 5, here are a few final considerations:

  • As written several times, check the hardware and software HCL before starting the migration. The HCL may change from the beta release (where, for example, there wasn’t any SQL Express 2005 support) but also from one week to the next (for example, for some weeks the Dell PowerVault MD3x00/MD3x00i were not yet included).
  • Currently most additional software is already compliant with vSphere 5, either with a new version (like View 5) or with a simple patch.
  • Remember that vSphere 5 brings new features, but also a new license model that requires a minimum of initial analysis. For more info see: vSphere 5 vRAM Entitlement.
  • The vCenter Server is also available in a Linux version (but only as a virtual appliance). And remember that it has some limits.
  • The new Web Client could be interesting, but it is not complete. If you plan to install the server part on the same machine as vCenter Server, remember to increase the RAM (or vRAM). Also note that the license report requires this component.
  • The new virtual hardware 8 has some interesting benefits, but it also implies less VM portability (for example with View and local mode). Consider whether you need it before upgrading the VMs.
  • As written several times, vSphere 5 has several new features but also some dropped features. See: News in vSphere 5 – Who is in? Who is out?
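As a rough sketch of the license-model analysis mentioned above (all the numbers are hypothetical; the 96 GB per-license entitlement is the revised Enterprise Plus value, to be verified against your own license edition):

```shell
# Hypothetical check: pooled vRAM entitlement vs. configured vRAM of powered-on VMs
licenses=4              # CPU licenses in the pool (hypothetical)
per_license_gb=96       # vRAM entitlement per license (Enterprise Plus, revised model)
configured_vram_gb=300  # total vRAM configured on powered-on VMs (hypothetical)

pool_gb=$((licenses * per_license_gb))
echo "vRAM pool: ${pool_gb} GB, configured: ${configured_vram_gb} GB"
if [ "$configured_vram_gb" -le "$pool_gb" ]; then
  echo "within entitlement"
else
  echo "additional licenses needed"
fi
```

The same arithmetic, run against real numbers from vCenter Server’s licensing report, tells you whether the upgrade requires buying additional licenses.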


When the vSphere infrastructure is upgraded, the backup part also needs a check (and maybe a refresh). But, in order to avoid big issues, it is better to check it before the entire upgrade as well, just to be sure that everything remains supported. I also suggest using this as an opportunity to review the entire backup policy, to find how to improve it and/or whether to use different solutions (not necessarily different products).

About the backup programs: most of them could still work with vSphere 5, because the vStorage APIs for Data Protection (VADP) are quite compatible across versions. But on the storage side there are some changes, like the new VMFS5 and datastore clusters, that can create some issues, especially when SAN transport is used. For this reason the backup programs need some fixes to support the new features.

Some considerations:

  • VCB: officially, with vSphere 5 the Consolidated Backup scripts are no longer supported (and note that with vSphere 4.1 there was only the VCB version from vSphere 4.0). But I was just curious and I’ve tried VCB 1.5U2 (NBD transport), and it seems that it still works with vSphere 5, both in fullvm and file-level mode!
  • VDR: the new VDR has some improvements (some are described in my previous post), but I’ve found some issues related to the upgrade path of this component (in some cases the destination upgrade failed after many hours and the destination was unusable). I suggest avoiding the upgrade: start with a new appliance and new destinations, and after some backups you can choose to remove the old appliance and its destinations. Note that VDR 1.2 also seems to work with vSphere 5 and the new VDR plugin (I’ve only tried a simple backup and restore).
  • Veeam Backup: the current version (5.0.2) works with vSphere 5, but you need a patch from the support team to fix some bugs (for example, to support the SAN transport mode). The new version (6) will be fully vSphere 5 compliant.
  • Quest vRanger: the new version (5.2) is certified for vSphere 5.
  • PHD Virtual Backup: the new version (5.3) should be compatible, as written on the website.
  • Symantec Backup Exec: it seems that both 2010 R2 and R3 work with vSphere 5 (but I’ve only tested R3). For more info see: Backup Exec™ 2010, 2010 R2 and 2010 R3 Software Compatibility List (SCL)

The storage part of a vSphere 5 upgrade path has two different steps: a firmware upgrade, which may be needed before starting the vSphere upgrade (see the hardware part), and a modules/plugins/utilities upgrade, which is applied after the vSphere upgrade.

In my case, for an EqualLogic array, the minimum firmware version compatible with ESXi 5 is the 4.3 release (but I’ve also tried with firmware 4.0.6 and it works), but you will need at least the 5.0 series in order to support VAAI and VASA (see vStorage API). The new firmware also works with vSphere 4.1, so I suggest applying the firmware upgrade before the vSphere upgrade, even if you already have a supported firmware.
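After the firmware (and host) upgrade, the VAAI support reported by ESXi 5 can be verified from the shell; the device identifier below is a placeholder:

```shell
# Show the VAAI primitive support reported for a single device
esxcli storage core device vaai status get -d naa.<device-id>

# Or list all devices and filter for the hardware acceleration status
esxcli storage core device list | grep -i "VAAI Status"
```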

About the version, there are currently the 5.0.8 and 5.1.2 releases. Both could be fine, but the VMware HCL suggests the 5.1.x series, and if you have a new EqualLogic model you will have it by default. Also, if you add a new model to an existing group, you must plan to upgrade all members to the 5.1.x version.

If you are using the MEM multipathing module (provided by Dell), remember to disable it before the upgrade, because the 1.0.x version is not compatible with ESXi 5. A few days ago, Dell released version 1.1 Early Production Access (EPA) of MEM, which is now compatible with ESXi 5, so after the upgrade you may choose to use this version (of course, third-party multipathing modules are only available with the Enterprise and Enterprise Plus editions). About the recommended multipathing policy: the HCL suggests the fixed policy, the Dell documentation round robin… IMHO I suggest using fixed, which is more conservative, and building several EqualLogic volumes (to have different paths on different volumes). It also seems that with firmware 5.0.x some people have had performance issues with the round robin policy.
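For reference, the path selection policy can be checked and changed per device with esxcli (a minimal sketch, not the official Dell procedure; the device identifier is a placeholder):

```shell
# Show the current multipathing configuration of a device
esxcli storage nmp device list -d naa.<device-id>

# Set the fixed path selection policy on that device
esxcli storage nmp device set -d naa.<device-id> --psp VMW_PSP_FIXED

# Or round robin, if you prefer to follow the Dell documentation
esxcli storage nmp device set -d naa.<device-id> --psp VMW_PSP_RR
```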

The VASA integration is provided by the new version of the Dell Host Integration Tools for VMware® (as described in a previous post), now updated to version 3.1.1. This virtual appliance also provides a nice integration into vCenter Server to handle EqualLogic volumes, storage snapshots, replicas, …

Finally, I suggest also upgrading the Dell SAN HeadQuarters utility (SAN HQ) to the latest version (2.2.0) in order to monitor the group and get useful information (this new version also has a simple capacity estimator feature that could be really handy).

As written in the previous post, the VM upgrade (after a vSphere upgrade) requires first the upgrade of the VMware Tools and then (if needed) the upgrade of the virtual hardware. But there are more considerations to make.

A few days ago, I was involved in a slightly unusual upgrade path: from VI 3.5 (and quite an old build, only U2) to vSphere 5. For the hosts and vCenter Server it was simple: just a re-installation and re-configuration. But there were more issues in the VM upgrade.

Here are some of my considerations:

  • It seems obvious, but a VM check before starting the upgrade can be really important: verify that no service fails to start, that there is enough free space on the system disk, …
  • Both the VMware Tools and the virtual hardware upgrade can be orchestrated by VUM, but I suggest a manual upgrade for the first VMs (better with different types of VMs and guest OSes).
  • From the GUI, the virtual hardware upgrade goes only to version 8, so an upgrade from VI 3.x means going from v4 (which can also run without issues on vSphere 5) directly to v8.
  • The virtual hardware upgrade from v4 causes a virtual PCI slot reorder (because from v7 onward hot-add is supported)… so, if you have more than one storage controller and/or vNIC, you may find them in a different order (this can be a problem with dual-homed VMs).
  • The new VMware Tools are compatible with vSphere 4.x, but with VI 3.x? It seems so, but I’ve found a random issue (on some VMs) where the flexible vNIC was not working… the fastest solution was to replace it with a new vNIC.
  • For a big upgrade (from VI 3.x) I suggest using (if possible) an interactive upgrade of the VMware Tools.
  • KB1012259 (VMware KB: msvcp71.dll is removed after uninstalling ESX/ESXi 3.5) also applies to vSphere 5! Check if you have the file, save it before the upgrade, and restore it afterwards. For example, I’ve found the issue on VMs with SQL 2000, but not with SQL 2005 and later.
  • For Linux VMs I use a different order: first the virtual hardware (which requires a shutdown) and then (when the VM is powered on) the VMware Tools (this operation, on Linux, does not require any reboot).
  • For View, see the previous post.
  • Remember to check and test the VMs also after the upgrade!

In a vSphere 5 upgrade path, the vSphere part of the upgrade process is the simplest part, and the order is the same as in previous upgrades: first the vCenter Server (which can handle both new and old hosts), then VUM (if you want to use it to upgrade the hosts), then the hosts, then the VMware Tools of the VMs and finally, if needed, the VMFS5 upgrade of the datastores and the virtual hardware upgrade of the VMs.

vCenter Server

The upgrade of the vCenter Server is really simple, and if you start from version 4.1 you can use an in-place upgrade (the requirements are almost the same, but note that if you use the SQL Express database, it will remain the 2005 version). The in-place upgrade procedure can be started in a simple way: just run the new installation and choose the existing DB.

Of course you can plan to deploy a new vCenter Server (or maybe the vCSA); in this case you can choose to migrate the data (in some cases it may not be possible) or simply start from scratch: build a new vCenter Server and connect the hosts to it. Remember to add the new licenses (and keep the old ones as well during the host migration). About “where” to deploy the vCenter Server, see the page on the question “physical or virtual“.

If you use an in-place upgrade, I suggest removing all 3rd-party plugins before starting the upgrade (especially remember to remove Guided Consolidation and Converter Enterprise). IMHO, for a simple environment I prefer to also remove VUM and reinstall it from scratch.

ESX/ESXi Hosts

The host upgrade can be handled with a reinstallation or, in some cases, an in-place upgrade (see also the VCP5 study notes). For an ESX host the in-place upgrade is not possible if it was upgraded from a previous 3.x version (in this case the boot partition is too small to fit the ESXi 5 installation)… this was exactly my case, so a full reinstallation was the best choice.

If you use an in-place upgrade, I suggest removing all 3rd-party drivers and modules, especially multipathing modules that may not be compatible with vSphere 5. The simplest way to handle the in-place upgrade is to use VUM.

Note that you can have a VMware cluster with mixed hosts (old ESX/ESXi and new ESXi 5) and use vMotion between them. During this phase the VMFS of the shared storage must be kept at version 3, as must the virtual hardware of the VMs, which must stay at v7 (or v4). The new VMware Tools are also compatible with old 4.x hosts, so you can start to plan this upgrade in this phase as well.

Datastores and VMs

Finally, when all the hosts are ESXi 5, you can consider upgrading to VMFS5 (but note that the block size remains the original VMFS3 one), which is a “live” procedure (see also the VCP5 study notes). Another solution could be to build new datastores, use Storage vMotion to free the old ones, and then re-format them with VMFS5.
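The live VMFS3-to-VMFS5 upgrade can also be done from the ESXi shell; the datastore label below is a placeholder:

```shell
# Check the current VMFS version of a mounted volume
vmkfstools -P /vmfs/volumes/datastore1

# Upgrade the volume in place (non-disruptive, but one-way: there is no downgrade)
esxcli storage vmfs upgrade -l datastore1
```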

The virtual hardware upgrade can be planned or not (but it must come after the upgrade of the VMware Tools). If you do not need the new features (like more than 8 vCPUs, 3D support, …) you can simply keep the VMs at v7. Note that VUM can orchestrate both the VMware Tools and the virtual hardware upgrades.

More information:

To install the hardware monitoring and management tools, see the specific hardware vendor notes. For Dell servers, see how to install OMSA.

Note that with vSphere 5 there are also some new features that can be deployed; for example, you can configure a syslog server to handle the ESXi logs.
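A minimal sketch of the syslog configuration on an ESXi 5 host; the loghost address is a placeholder:

```shell
# Point the host logs to a remote syslog server and reload the service
esxcli system syslog config set --loghost='udp://192.168.0.10:514'
esxcli system syslog reload

# Open the outgoing syslog port in the ESXi 5 firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
esxcli network firewall refresh
```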

In the previous post I’ve written about the upgrade of vSphere and View in the vSphere 5 scenario, but there are also other scenarios to consider in a View environment.

For example, recently a new version of vSphere 4.1 has been released (Update 2), and probably most vSphere 4.1 administrators will apply this set of patches. But what will happen with an existing View infrastructure? Both View 4.6 and 5.0 were released before this patch. We have seen that a major upgrade may break the View functionality, but what about a minor upgrade? Usually release patches can be applied without issues, but what about an entire “update pack”?

Currently the View 4.6 documentation refers to a generic “vSphere 4.1 or later”, but the download area hasn’t changed yet (I checked it yesterday) and vSphere 4.1 U1 is still suggested as the vSphere download for View 4.6. So to be sure we have to wait for a documentation update, a refresh of the download area, or simply a KB article that defines this compatibility.

Anyway, I’ve made a test with a vSphere 4.1 U1 + View 4.6 environment, and the upgrade to vSphere 4.1 U2 (an in-place upgrade, of course) was done without special issues in the View part (including the Composer).

The only issue that I’ve found so far was after the upgrade of the VMware Tools in the virtual desktops: PCoIP was not working (black screen when you connect to the desktop and session closed by timeout), while RDP was not affected. To fix this issue, reinstall the View Agent 4.6 in the desktop (or in the pool “template”) with the “Repair” option.

© 2008-2011 vInfrastructure Blog | Hosted by Assyrus