Reading Time: 2 minutes

One possible issue after an in-place upgrade of vCenter Server to vSphere 5 can appear when you forget to remove the Converter Enterprise plugin (and/or the Guided Consolidation plugin) first. As you know, some products have been removed from vSphere 5, and their plugins may remain in an “orphaned” state.

The result of this issue is a “broken” plugin list (with some plugins that are no longer available) and also a wrong vCenter health status, due to services that no longer exist.
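If the old plugin is still registered, a possible cleanup is to unregister its extension from vCenter Server (the same operation usually done through the MOB at https://<vcenter>/mob). Below is a minimal pyVmomi sketch of that call, not an official procedure: the vCenter address, the credentials and the plugin key (com.vmware.converter is commonly reported as the Converter Enterprise key) are assumptions to verify in your own environment.

```python
# Minimal sketch: list vCenter extensions and unregister an orphaned one.
# Host, credentials and the plugin key below are placeholders/assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()          # lab only: skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator",
                  pwd="password",
                  sslContext=ctx)
try:
    ext_mgr = si.RetrieveContent().extensionManager

    # Print what is registered, so the orphaned plugins can be spotted.
    for ext in ext_mgr.extensionList:
        print(ext.key, "-", ext.description.label)

    # Verify the key in the list above before removing it.
    ext_mgr.UnregisterExtension("com.vmware.converter")
finally:
    Disconnect(si)
```

The same call works for any other orphaned plugin (Guided Consolidation included) once you know its key from the list.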

continue reading…

Reading Time: 3 minutes

Finally, several weeks after the official announcement, Veeam Backup & Replication 6 is available (as of yesterday). For the new features see:

One of the biggest pieces of news in Veeam v6 is multi-hypervisor support: as of version 6, Veeam supports both VMware (including, of course, vSphere 5) and Microsoft Hyper-V from the same interface.

Other interesting features are:

  • Scalability – Veeam has completely redesigned its backup architecture in v6, adding backup proxy servers that offload backup and replication processing from the Veeam backup server. This allows a load-balancing algorithm to split the load among proxy servers, which in turn allows more concurrent jobs and faster backup speeds. In terms of replication, the on-site backup proxy can now send data directly to the target backup proxy, completely bypassing the Veeam server itself.
  • 1-Click Happiness – Veeam has simplified many of its processes by adding 1-click functionality: Instant File-Level Recovery, Failover, Failback, VM Restore, and Automated Upgrades all get 1-click events. Perhaps the most useful one is Instant File-Level Recovery: prior to v6 I found this to be a monotonous task involving many steps and approvals just to restore a file to a VM, so it will be nice to be able to do it with one click.
  • Traffic Throttling – In essence this allows you to do just that: throttle the Veeam backup traffic. Throttling can also be scheduled by time or day, so you can guarantee how much bandwidth Veeam traffic will use and ensure it does not interfere with other traffic that might be flowing across a WAN link.
  • WAN Optimizations – A lot of work has been done on WAN optimization; it shows up in quite a few of the enhancements in the What’s New document. Essentially Veeam has improved and optimized the protocol used to transfer data across a WAN and allowed multiple TCP/IP connections per job; together with traffic throttling, the proxy architecture and Windows targets, you have no excuse not to get your data off-site.
  • Active Rollbacks – The restore points of a replica are now stored as native VMware snapshots, leaving you with the ability to revert to a previous point in time without using the Veeam Backup & Replication server.
  • Improved Seeding – You can now use previous VMs located at your target site, or backups at your target site as the seed for your replica job, allowing you to start replication in an incremental fashion, rather than waiting for the first big initial seed of your VM.
  • Re-IP – one of the biggest and coolest enhancements in my book.  The ability to automatically reconfigure your VM’s IP at a DR site to align with your networks there.
Reading Time: 2 minutes

RDM disks are a feature of VMware vSphere (but were also present in Virtual Infrastructure) that creates a “mapping” between a LUN (or logical disk) and a VM (similar to a disk pass-through). This feature can be used in different cases, for example: to support disks larger than 2 TB (only in vSphere 5 and only with physical RDM) and to implement guest clustering with shared storage (again only with physical RDM).

But there is an issue (or a feature :) ) that does not allow you to add an RDM disk from the GUI for local disks, and this applies also to vSphere 5. The “silly” thing is that shared storage based on a SAS connection is treated in the same way (for example, a Dell PowerVault MD3x00 series array).
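A common workaround is to create the RDM mapping file manually from the ESXi shell with vmkfstools and then attach the resulting .vmdk to the VM as an existing disk. A sketch (the naa identifier, datastore and VM paths are placeholders to adapt):

```
# Identify the local/SAS device and note its naa.* identifier
ls -l /vmfs/devices/disks/

# Create the mapping file on a VMFS datastore:
#   -z = physical (pass-through) RDM, -r = virtual compatibility mode RDM
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXX \
           /vmfs/volumes/datastore1/MyVM/MyVM_rdm.vmdk

# Then edit the VM settings and add MyVM_rdm.vmdk as an existing disk
```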

continue reading…

Reading Time: 2 minutes

With vSphere 5, VMware HA has been completely changed on the implementation side, but the nice aspect is that it seems almost unchanged on the user side (this is a good example of how to improve in a painless way).

In vSphere 5.0 the new HA agent is “FDM” (Fault Domain Manager), which replaces the old AAM agent (from EMC Legato). But the agent is not the only thing that has changed:

  • The old Primary / Secondary node concept has been replaced by a new and simpler Master/Slave node concept
  • A new Datastore Heartbeat function has been added (HA Architecture Series – Datastore Heartbeating)
  • No dependency on DNS
  • Syslog functionality

For more information see Duncan’s well-explained HA Deep Dive.

Storage failure
With the new Datastore Heartbeat it may seem possible to handle a datastore failure as well. But this is not true: this function is not used to detect a storage failure, only to improve isolation detection. So today the only way to handle a storage failure is a strong design for maximum availability (and/or specific storage solutions).

Mixed clusters
With vSphere 5 and FDM, a good question is whether a mixed cluster (with 5.0 and earlier hosts) is supported or not: what happens if there is an ESX 4 host in the cluster that has not been upgraded yet, and how is it handled?
There is an excellent blog post on this by Frank Denneman. The short answer is that it is supported and the older host will get the FDM agent, but you should upgrade it as soon as reasonable.
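To quickly check which hosts in a cluster are still at an old release, the version of each host can also be read through the vSphere API. A minimal pyVmomi sketch (the vCenter address and credentials are placeholders):

```python
# Minimal sketch: list the ESX/ESXi version of every host in every cluster,
# to spot the ones that still need to be upgraded.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        for host in cluster.host:
            print(cluster.name, host.name, host.summary.config.product.fullName)
    view.DestroyView()
finally:
    Disconnect(si)
```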

HA & DRS appear disabled when a Storage Profile is enabled / disabled on a cluster
This came up recently, and it turns out that while HA and DRS appear disabled, that is not actually the case. You need to re-enable HA and DRS for the UI to show things correctly, but it is important to understand that HA and DRS are in fact still working. See http://kb.vmware.com/kb/2008203

Advanced options
See the previous post.
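As a side note, HA advanced options (such as das.heartbeatDsPerHost, which controls the number of heartbeat datastores per host) can also be pushed to the cluster through the vSphere API instead of the GUI. A minimal pyVmomi sketch, where the vCenter address, credentials, cluster name and option value are only illustrative assumptions:

```python
# Minimal sketch: set an HA (das.*) advanced option on a cluster.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    view.DestroyView()

    # Example: use 3 heartbeat datastores per host instead of the default 2.
    das = vim.cluster.DasConfigInfo()
    das.option = [vim.option.OptionValue(key="das.heartbeatDsPerHost", value="3")]
    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    # modify=True merges this setting with the existing cluster configuration
    cluster.ReconfigureComputeResource_Task(spec, True)
finally:
    Disconnect(si)
```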

Reading Time: 2 minutes

In the previous post I’ve considered the cases and scenarios of “non-supported” configurations. But what happens with “supported” configurations? Do they always work, and always in the best way? A supported configuration only means that it can work well in specific situations, cases and scenarios.

Usually a good rule is to make a proper analysis and a good virtual design before choosing the individual pieces. A supported configuration does not mean that it also meets requirements like availability, scalability and performance. For example, there are a lot of entry-level storage arrays that are VMware certified, but they do not meet any of the previous requirements. This is a simple example of a wrong design that is “too low”, but the funny thing is that there are also examples of wrong designs that are “too high”: in some cases I’ve seen an FC-based storage solution used in a small environment only because the customer (or the consultant) was not confident with an iSCSI or NFS (but still enterprise-class) solution.

continue reading…

Reading Time: 2 minutes

In previous posts related to the vSphere 5 upgrade, I’ve talked several times about the HCL and its relevance. For a production environment, having a completely supported configuration in every part (hardware, software, firmware, …) is IMHO mandatory. But “not supported” does not always mean “not working”.

There are different scenarios with an “unsupported configuration”:

continue reading…

Reading Time: 3 minutes

In a vSphere upgrade process, there are two different approaches for the host upgrade: a fresh re-install or an in-place upgrade. On the VMware site there is an interesting post about this choice.

The differences between an upgraded host and a freshly installed host are:

continue reading…
