Reading Time: 2 minutes

With vSphere 5, VMware HA has been completely changed in its implementation, but the nice aspect is that it still seems the same on the user side (a good example of how to improve a product in a painless way).

In vSphere 5.0 the new HA agent is “FDM” (Fault Domain Manager), which replaces the old AAM agent (from EMC Legato). But the agent is not the only thing that has changed:

  • The old Primary/Secondary node concept has been replaced by a new and simpler Master/Slave node concept
  • A new Datastore Heartbeat function has been added (HA Architecture Series – Datastore Heartbeating)
  • No dependency on DNS
  • Syslog functionality

For more information see Duncan’s well-explained HA Deep Dive.
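
As a quick way to see the new Master/Slave roles from the command line, here is a minimal PowerCLI sketch (assuming PowerCLI 5.0 connected to a vCenter 5.0 server and a cluster named "Cluster01"; the DasHostState runtime property was added in the vSphere 5.0 API):

    # Show the FDM role of every host in the cluster:
    # "master" for the master, "connectedToMaster" for the slaves
    Connect-VIServer -Server vcenter.example.com
    Get-Cluster -Name "Cluster01" | Get-VMHost |
        Select-Object Name,
            @{N="HAState"; E={ $_.ExtensionData.Runtime.DasHostState.State }}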

Storage failure
With the new Datastore Heartbeat it may seem possible to handle also a datastore failure. But that is not true: this function is not used to detect a storage failure, but only to improve isolation detection. So at the moment the only way to handle a storage failure is a strong design for maximum availability (and/or specific storage solutions).
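
Datastore heartbeating can still be tuned (not to detect storage failures, as noted above, but to choose how many heartbeat datastores each host uses). A minimal PowerCLI sketch, assuming a cluster named "Cluster01" and using the documented das.heartbeatDsPerHost advanced option (default 2, maximum 5):

    # Raise the number of heartbeat datastores per host from the default of 2
    $cluster = Get-Cluster -Name "Cluster01"
    New-AdvancedSetting -Entity $cluster -Type ClusterHA `
        -Name "das.heartbeatDsPerHost" -Value 4 -Confirm:$false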

Mixed clusters
With vSphere 5 and FDM, a good question is whether a mixed cluster (with 5.0 and older hosts) is supported or not: what happens if there is an ESX 4 host in the cluster that has not been upgraded yet, and how will it be handled?
There is an excellent blog post on this by Frank Denneman. The short answer is that it is supported, and the host will get the FDM agent, but you should upgrade it as soon as reasonable.

HA & DRS appear disabled when a Storage Profile is enabled / disabled on a cluster
This came up recently, and it turns out that while HA and DRS appear disabled, that is not the real state: HA and DRS are in fact still working. You need to re-enable HA and DRS to see things displayed correctly. See http://kb.vmware.com/kb/2008203
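
One way to refresh the displayed state, following the KB, is to toggle both features from PowerCLI; a minimal sketch, assuming a cluster named "Cluster01":

    # HA and DRS keep working even while shown as disabled (KB 2008203);
    # disabling and re-enabling them makes the UI reflect the real state
    Get-Cluster -Name "Cluster01" |
        Set-Cluster -HAEnabled:$false -DrsEnabled:$false -Confirm:$false
    Get-Cluster -Name "Cluster01" |
        Set-Cluster -HAEnabled:$true -DrsEnabled:$true -Confirm:$false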

Advanced options
See the previous post.

Reading Time: 2 minutes

In the previous post I’ve considered the cases and scenarios of “non supported” configurations. But what happens with “supported” configurations? Are they always working, and always in the best way? A supported configuration means that it can work well, but only in specific situations, cases and scenarios.

Usually a good rule is to make a good analysis and a good virtual design before choosing the single pieces. A supported configuration does not mean that it also meets requirements like availability, scalability and performance. For example, there are a lot of entry-level storage arrays that are VMware certified, but they do not meet any of the previous requirements. This is a simple example of a wrong design that is “too low”, but funnily enough there are also examples of wrong designs that are “too high”: in some cases I’ve seen an FC storage based solution used in a small environment only because the customer (or the consultant) was not confident with an iSCSI or NFS (but still enterprise) solution.

continue reading…

Reading Time: 2 minutes

In previous posts related to the vSphere 5 upgrade, I’ve talked several times about the HCL and its relevance. For a production environment, having a completely supported configuration in every part (hardware, software, firmware, …) is IMHO mandatory. But “not supported” does not always mean “not working”.

There are different scenarios with an “unsupported configuration”:

continue reading…

Reading Time: 3 minutes

In a vSphere upgrade process, there are two different approaches for the host upgrade: a fresh re-install or an in-place upgrade. On the VMware site there is an interesting post about this choice.

The differences between an upgraded host and a freshly installed host are:

continue reading…

Reading Time: 3 minutes

The new major release of Citrix’s hypervisor was released on September 30, 2011 (XenServer 6.0 is here!). For more info see also the Release Notes for Citrix XenServer 6.0.

Architectural Changes:

  • The Boston release is based on the open-source Xen 4.1 hypervisor. XenServer is another commercial product to ship with the Xen 4 hypervisor. For those of you who like to follow the open-source world, Oracle VM 3 launched a few weeks ago and is based on the Xen 4 hypervisor. Ubuntu Server 11.10 will soon follow with support for Xen 4, and will be the first Linux distro to benefit from Xen being included in Linux 3.0.
  • The Open vSwitch (OVS) is now the default network stack for the product. OVS was first introduced in XS 5.6 FP1 as a post-install configuration option, and is the basis for the distributed virtual networking (DVS) features like NetFlow, RSPAN and security ACLs, as well as NIC bonding improvements and Jumbo frames support. Improvements to DVS include improved availability through the “fail-safe” option, as well as various improvements based on customer feedback from XenServer 5.6 FP1. Note that the legacy Linux bridging stack is still available via a post-install configuration option for those that need it (see the sketch after this list).
  • Support for hardware-assisted (SR-IOV) network performance optimization has been improved, particularly for use with the NetScaler VPX and SDX products.  A future version of NetScaler SDX will ship with XenServer 6.0 onboard.
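
Switching between the two network stacks is a one-line operation in dom0; a minimal sketch, assuming a XenServer 6.0 host (the change takes effect only after a host reboot):

    # Check which network backend is currently active (openvswitch or bridge)
    cat /etc/xensource/network.conf

    # Revert to the legacy Linux bridge stack (reboot the host afterwards)
    xe-switch-network-backend bridge

    # Go back to the default Open vSwitch stack
    xe-switch-network-backend openvswitch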

Product simplification:

  • Citrix has simplified the management infrastructure requirements for features such as Workload Balancing, StorageLink, and Site Recovery.  In fact, for StorageLink and Site Recovery, no additional management infrastructure is required at all.
  • Workload Balancing (and the historical reporting features which rely on its database) is available as a Linux-based virtual appliance for easy installation and management.

Virtual Appliance and V2V improvements:

  • Virtual Appliance support.  Within XenCenter you can create multi-VM virtual appliances (vApps), with relationships between the VMs for use with the boot sequence during Site Recovery.  vApps can be easily imported and exported using the Open Virtualization Format (OVF) standard.
  • VMDK and VHD import functionality is integrated into XenCenter for interoperability with VMware VMDK and Microsoft VHD disk images.  Reliability of the “transfer VM” used for appliance import/export has been improved.

Guest OS support updates:

  • Formal guest support for Ubuntu 10.04
  • Updates for support of RHEL 5.6, Oracle Enterprise Linux 5.6, Oracle Enterprise Linux 6.0, CentOS 5.6, and SLES 10 SP4
  • Experimental VM templates for Ubuntu 10.10, CentOS 6.0, and Solaris
  • RHEL 6.0 and Debian Squeeze are fully supported (these are also supported with XS 5.6 SP2)

Performance Improvements:

  • XenServer 6.0 will ship on-board a future version of NetScaler SDX, which uses XenServer’s SR-IOV enhancements to drive line-speed (up to 10 Gbps) performance to virtual machines
  • Host (dom0) network throughput has been increased by 70-100%
  • Improved XenMotion performance, especially as compared to XenServer 5.6.0.

Other enhancements and improvements:

  • High Availability (HA) permits configuration of a boot sequence for recovery, as well as storage of the heartbeat disk on NFS (see the sketch after this list).
  • NIC bonding improvements, including more formal support for active/passive mode
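
A minimal xe CLI sketch of the two HA enhancements above, assuming a XenServer 6.0 pool, an existing NFS SR, and placeholder UUIDs:

    # Enable HA using an NFS storage repository for the heartbeat disk
    # (take the UUID from "xe sr-list type=nfs")
    xe pool-ha-enable heartbeat-sr-uuids=<nfs-sr-uuid>

    # Protect a VM and give it a position in the recovery boot sequence
    xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart
    xe vm-param-set uuid=<vm-uuid> order=1 start-delay=30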

Comparison with vSphere 5.0:

                                               XenServer 6.0   vSphere 5.0
    Max logical physical processors per host   64              160
    Max RAM per host                           1 TB            2 TB
    Max vCPU per VM                            16 vCPU         32 vCPU
    Max vRAM per VM                            128 GB          1 TB

Reading Time: 2 minutes

A few days ago Red Hat announced the availability of its Red Hat Enterprise Virtualization (RHEV) 3.0 public beta. The first beta of RHEV 3.0 was announced in August, but was not available to the general public: you needed an active RHEV subscription at that time. This evaluation is immediately available to anyone with a Red Hat Network account.

About the new features and improvements there is a specific page on the Red Hat site. Red Hat Enterprise Virtualization 3.0 includes updates such as:

  • Red Hat Enterprise Virtualization Manager is now a Java application running on JBoss Enterprise Application Platform on Red Hat Enterprise Linux. It was silly that the first versions of this product required a Windows Server to run.
  • A power user portal that provides end users with a self-service interface to provision virtual machines, define templates and administer their own environments.
  • Extended multi-level administrative capabilities, allowing fine-grained resource control, Role Based Access control, delegation and hierarchical management.
  • An integrated and embedded reporting engine allowing for analysis of historic usage trends and utilization reports.

continue reading…

Reading Time: 2 minutes

Some days ago, VKernel released a post (Hyper-V 3.0: Closing the Gap With vSphere 5) that compares the new Hyper-V 3.0 with the existing vSphere 5.0.

I don’t know if the post was written before or after the Quest acquisition, but it doesn’t matter: it is a comparison of two non-homogeneous products, because one will probably be released next year and the other was released in August of this year.

But the data can still be used to see how Microsoft is working to reduce and close the gap with VMware, at least on the hypervisor part:

Here are some other awesome additions in the upcoming Hyper-V 3.0 release:

  • Simultaneously live migrate both the VM and the VM’s disk to a new location.
  • Live migrate VMs without shared storage (“shared-nothing” live migration; see the sketch after this list)
  • NIC teaming without special third-party hardware – something VMware has already been doing – and also Microsoft Partners Already Implementing Hyper-V 3 Virtual Switch
  • Drag-and-drop files from one virtual machine to another to directly transfer without having to pass through the host or your workstation.
  • The ability to host virtual disks on file servers – CIFS, SMB, NFS
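
As an illustration of the first two items, this is what a combined VM-and-storage (“shared-nothing”) live migration looks like with the PowerShell cmdlets that later shipped in the Windows Server 2012 Hyper-V module; a sketch only, with hypothetical VM, host and path names (at the time of writing Hyper-V 3.0 was still pre-release, so the final syntax could differ):

    # Move a running VM and its virtual disks to another host,
    # with no shared storage between the two hosts
    Move-VM -Name "WebServer01" -DestinationHost "HyperVHost02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\WebServer01"

    # Storage-only move: relocate the VM's disks while it keeps running
    Move-VMStorage -VMName "WebServer01" -DestinationStoragePath "E:\VMs\WebServer01"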

Since Hyper-V will also be packaged with the Windows 8 desktop OS, some kind of migration (probably cold) between desktop and server could be implemented. But Workstation 8 can already do this today.

It will be interesting to see how it matches up against vSphere in performance. In August, VMware released a third-party performance study, showing that vSphere 5.0 outperformed the current release of Hyper-V by 20%. If Hyper-V wants to compete on all levels, this is something Microsoft will most likely be addressing.

If you’re curious to dig into the upcoming Hyper-V 3.0 a bit more, here’s a couple of the better posts and articles:
