Reading Time: 5 minutes

For a list of all objectives see the VCP5 page.

Objective 5.4 – Migrate Virtual Machines

See also these similar posts: Objective 5.4 – Migrate Virtual Machines and Objective 5.4 – Migrate Virtual Machines.

Identify ESXi host and virtual machine requirements for vMotion and Storage vMotion (same as vSphere 4.x)

See the vSphere Virtual Machine Administration guide (page 219) and vCenter Server and Host Management guide (page 119 and 122).

Some basic requirements: for vMotion a specific vmkernel interface (enabled for vMotion) is required, as well as CPU compatibility between the hosts (or EVC). For Storage vMotion the host must see both the source and the destination datastores.
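These checks can also be scripted: below is a minimal pyvmomi (vSphere Python SDK) sketch, with a placeholder vCenter name and credentials, that simply lists whether each host has a vmkernel interface selected for vMotion.

```python
# Minimal pyvmomi sketch (placeholder vCenter name/credentials): list whether
# each host has a vmkernel interface selected for vMotion.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Newer pyvmomi versions may require an ssl context or disabling cert validation.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    net_cfg = host.configManager.vmotionSystem.netConfig
    has_vmotion_vnic = bool(net_cfg and net_cfg.selectedVnic)
    print("%s: vMotion vmknic selected: %s" % (host.name, has_vmotion_vnic))
view.DestroyView()
Disconnect(si)
```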

Identify Enhanced vMotion Compatibility CPU requirements (similar to vSphere 4.x)

See the vCenter Server and Host Management guide (page 123) and VMware KB: Enhanced VMotion Compatibility (EVC) processor support. Note that more EVC baselines are available compared to vSphere 4.x.

Identify snapshot requirements for vMotion/Storage vMotion migration (similar to vSphere 4.x)

See the vCenter Server and Host Management guide (page 121). Some restrictions apply when migrating virtual machines with snapshots:

  • Migrating a virtual machine with snapshots is now permitted, regardless of the virtual machine power state, as long as the virtual machine is being migrated to a new host without moving its configuration file or disks. (The virtual machine must reside on shared storage accessible to both hosts.)
  • Reverting to a snapshot after migration with vMotion might cause the virtual machine to fail, because the migration wizard cannot verify the compatibility of the virtual machine state in the snapshot with the destination host. Failure occurs only if the configuration in the snapshot uses devices or virtual disks that are not accessible on the current host, or if the snapshot contains an active virtual machine state that was running on hardware that is incompatible with the current host CPU.
  • If the migration involves moving the configuration file or virtual disks, the following additional restrictions apply:
      • The starting and destination hosts must be running ESX 3.5 or ESXi 3.5 or later.
      • All of the virtual machine files and disks must reside in a single directory, and the migrate operation must move all the virtual machine files and disks to a single destination directory.

Migrate virtual machines using vMotion/Storage vMotion (similar to vSphere 4.x)

See the vSphere Virtual Machine Administration guide (page 223) and vCenter Server and Host Management guide (page 132). Can also be done from the vSphere Web Client.

About vMotion with high priority: on hosts running ESX/ESXi version 4.1 or later, vCenter Server attempts to reserve resources on both the source and destination hosts to be shared among all concurrent migrations with vMotion. vCenter Server grants a larger share of host CPU resources to high priority migrations than to standard priority migrations. Migrations always proceed regardless of the resources that have been reserved. On hosts running ESX/ESXi version 4.0 or earlier, vCenter Server attempts to reserve a fixed amount of resources on both the source and destination hosts for each individual migration. High priority migrations do not proceed if resources are unavailable.
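The high/standard priority choice is also exposed in the API. The following pyvmomi sketch starts a vMotion with high priority; it assumes an existing ServiceInstance connection ("si") as in the earlier sketch, and the VM and host names are placeholders.

```python
# Minimal pyvmomi sketch: start a vMotion with high priority.
# "si" is a connected ServiceInstance (see the earlier sketch); names are placeholders.
from pyVmomi import vim

search = si.content.searchIndex
vm = search.FindByInventoryPath("Datacenter/vm/test-vm01")           # placeholder inventory path
dst_host = search.FindByDnsName(None, "esxi02.example.com", False)   # False = search hosts, not VMs

task = vm.MigrateVM_Task(host=dst_host,
                         priority=vim.VirtualMachine.MovePriority.highPriority)
# task is a vim.Task: poll task.info.state to wait for completion.
```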

Remember that Storage vMotion of a virtual machine during VMware Tools installation is not supported.

Configure virtual machine swap file location (same as vSphere 4.x)

See the vCenter Server and Host Management guide (page 121).

You can configure ESX 3.5 or ESXi 3.5 and later hosts to store virtual machine swapfiles in one of two locations: with the virtual machine configuration file, or on a local swapfile datastore specified for that host. You can also set individual virtual machines to have a different swapfile location from the default set for their current host. If all hosts are version 3.5 or later, during a migration with vMotion the swapfile is copied to the new location when the swapfile location specified on the destination host differs from the one specified on the source host; this can result in slower migrations with vMotion. If the destination host cannot access the specified swapfile location, it stores the swapfile with the virtual machine configuration file.
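The per-VM override can also be set via the API: below is a minimal pyvmomi sketch (placeholder VM name, assuming the same kind of connection as in the first sketch) that stores the swapfile on the host's swapfile datastore.

```python
# Minimal pyvmomi sketch: override the swapfile placement of a single VM.
# Valid values: "inherit" (host/cluster default), "vmDirectory", "hostLocal".
from pyVmomi import vim

vm = si.content.searchIndex.FindByInventoryPath("Datacenter/vm/test-vm01")  # placeholder
spec = vim.vm.ConfigSpec(swapPlacement="hostLocal")
task = vm.ReconfigVM_Task(spec=spec)
```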

Migrate a powered-off or suspended virtual machine (similar to vSphere 4.x)

See the vSphere Virtual Machine Administration guide (page 220). Can also be done from the vSphere Web Client.

When you migrate a suspended virtual machine, the new host for the virtual machine must meet CPU compatibility requirements, because the virtual machine must be able to resume executing instructions on the new host.

Utilize Storage vMotion techniques (changing virtual disk type, renaming virtual machines, etc.) (same as vSphere 4.x)

See the vSphere Virtual Machine Administration guide (page 226).

During a Storage vMotion (and also during a cold storage migration) all the virtual machine files are renamed to match the current VM name. The vmdk can also be changed from thin to flat format or vice versa.

For virtual compatibility mode RDMs, you can migrate the mapping file or convert to thick-provisioned or thin-provisioned disks during migration as long as the destination is not an NFS datastore. If you convert the mapping file, a new virtual disk is created and the contents of the mapped LUN are copied to this disk. For physical compatibility mode RDMs, you can migrate the mapping file only.
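A Storage vMotion with a disk format change can also be expressed via the API. The sketch below uses the vSphere 5-era "transform" field of the relocate spec; the VM and datastore names are placeholders and it assumes the same connection as in the first sketch.

```python
# Minimal pyvmomi sketch: Storage vMotion to another datastore, converting the
# virtual disks to thin format with the relocate spec "transform" field.
from pyVmomi import vim

search = si.content.searchIndex
vm = search.FindByInventoryPath("Datacenter/vm/test-vm01")                   # placeholder
dst_ds = search.FindByInventoryPath("Datacenter/datastore/datastore2")       # placeholder

spec = vim.vm.RelocateSpec(datastore=dst_ds,
                           transform="sparse")   # "sparse" = thin, "flat" = thick
task = vm.RelocateVM_Task(spec=spec)
```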

Reading Time: < 1 minute

Recently the VCP5 blueprint 1.2 has been released, covering the official exam (note that v1.4 was related to the beta exam).

The new blueprint does not include changes to the objectives and is the same as the 1.1 version (just a few changes in the graphics).

Reading Time: 2 minutes

For a list of all objectives see the VCP5 page.

Objective 5.3 – Create and Administer Resource Pools

See also the similar posts: Objective 5.3 – Create and Administer Resource Pools and Objective 5.3 – Create and Administer Resource Pools.

Describe the Resource Pool hierarchy (same as vSphere 4.x)

See the vSphere Resource Management Guide (page 43). Note that a DRS license is required, in order to use resource pools in a VMware cluster.

Define the Expandable Reservation parameter (same as vSphere 4.x)

With this option, if you power on a virtual machine in this resource pool, and the combined reservations of the virtual machines are larger than the reservation of the resource pool, the resource pool can use resources from its parent or ancestors.

Create/Remove a Resource Pool (same as vSphere 4.x)

See the vSphere Resource Management Guide (page 45).

Configure Resource Pool attributes (same as vSphere 4.x)

See the vSphere Resource Management Guide (page 45). Note that shares, reservations and limits concepts are the same both in resource pools and VM properties.
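As a sketch of how these attributes map to the API, the following pyvmomi snippet creates a resource pool under a cluster's root pool with example shares, reservation, limit and expandable reservation values. The cluster path and numbers are placeholders, DRS must be enabled on the cluster, and "si" is the same connection as in the earlier sketches.

```python
# Minimal pyvmomi sketch: create a resource pool with explicit shares,
# reservation, limit and expandable reservation (values are examples).
from pyVmomi import vim

cluster = si.content.searchIndex.FindByInventoryPath("Datacenter/host/Cluster01")  # placeholder

cpu_alloc = vim.ResourceAllocationInfo(
    reservation=1000,            # MHz
    limit=-1,                    # -1 = unlimited
    expandableReservation=True,
    shares=vim.SharesInfo(level="high", shares=0))    # shares value only used with level="custom"
mem_alloc = vim.ResourceAllocationInfo(
    reservation=1024,            # MB
    limit=-1,
    expandableReservation=True,
    shares=vim.SharesInfo(level="normal", shares=0))

spec = vim.ResourceConfigSpec(cpuAllocation=cpu_alloc, memoryAllocation=mem_alloc)
pool = cluster.resourcePool.CreateResourcePool(name="Production", spec=spec)
```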

Add/Remove virtual machines from a Resource Pool (same as vSphere 4.x)

See the vSphere Resource Management Guide (page 46).

Determine Resource Pool requirements for a given vSphere implementation (same as vSphere 4.x)

For a VMware cluster, the requirements are the same as for DRS (and of course for vMotion). Note that DRS works fine combined with VMware HA and also with FT.

Evaluate appropriate shares, reservations and limits for a Resource Pool based on virtual machine workloads (similar to vSphere 4.x)

See the DRS Performance and Best Practices document.

Clone a vApp (same as vSphere 4.x)

See the vSphere Virtual Machine Administration guide (page 195). For other vApp related questions see: VCP5 Exam Prep – Part 4.4

Reading Time: 2 minutes

For a list of all objectives see the VCP5 page.

Objective 5.2 – Plan and Implement VMware Fault Tolerance

See also this similar post: VCP 5 – Objective 5.2 – Plan and Implement VMware Fault Tolerance

Note that VMware FT is still basically at the 1.0 version, with the same constraints as in vSphere 4.1.

Identify VMware Fault Tolerance requirements (same as vSphere 4.x)

See the vSphere Availability guide (page 38) and VMware KB: Processors and guest operating systems that support VMware FT. To check the requirements, there is also a specific tool: VMware SiteSurvey utility.

Configure VMware Fault Tolerance networking (same as vSphere 4.x)

See the vSphere Availability guide (page 41). A good practice is to use a dedicated vmkernel interface enabled for FT logging on a dedicated pNIC. Network bandwidth is important to determine how many VMs can be protected.
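To verify this configuration across the hosts, a minimal pyvmomi sketch (same "si" connection assumption as in the earlier sketches) can query which vmkernel interfaces are selected for FT logging:

```python
# Minimal pyvmomi sketch: list the vmkernel interfaces selected for
# Fault Tolerance logging on each host.
from pyVmomi import vim

content = si.content
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    ft_cfg = host.configManager.virtualNicManager.QueryNetConfig("faultToleranceLogging")
    selected = list(ft_cfg.selectedVnic) if ft_cfg and ft_cfg.selectedVnic else []
    print("%s: FT logging vmknics: %s" % (host.name, selected))
view.DestroyView()
```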

Enable/Disable VMware Fault Tolerance on a virtual machine (same as vSphere 4.x)

See the vSphere Availability guide (page 45). Study also the different reasons why a VM can be in a Not Protected status (page 46).
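In the API, enabling FT corresponds to creating the secondary VM. A minimal pyvmomi sketch (placeholder VM path, same connection assumption) could look like this:

```python
# Minimal pyvmomi sketch: enable FT on a VM by creating its secondary VM,
# and later turn FT off again. The VM must meet all FT requirements
# (including the single-vCPU limit).
vm = si.content.searchIndex.FindByInventoryPath("Datacenter/vm/test-vm01")  # placeholder

enable_task = vm.CreateSecondaryVM_Task()           # vCenter picks the host for the secondary
# ... later, to remove FT protection completely:
disable_task = vm.TurnOffFaultToleranceForVM_Task()
```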

Test an FT configuration (same as vSphere 4.x)

See VMware KB: Testing a VMware Fault Tolerance Configuration.

Determine use case for enabling VMware Fault Tolerance on a virtual machine (same as vSphere 4.x)

See the vSphere Availability guide (page 37). Remember the limit of one vCPU.

Reading Time: 3 minutes

For a list of all objectives see the VCP5 page.

Objective 5.1 – Create and Configure VMware Clusters

See also these similar posts: Objective 5.1 – Create and Configure VMware Clusters and Objective 5.1 – Create and Configure VMware Clusters.

Describe DRS virtual machine entitlement (similar to vSphere 4.x)

The word entitlement now usually refers to the vRAM entitlement, but in this context it seems to be related to how DRS works (how it computes the resources each VM is entitled to); for more info see the vSphere Resource Management Guide.

Create/Delete a DRS/HA Cluster (similar to vSphere 4.x)

See the vCenter Server and Host Management Guide (page 57), vSphere Resource Management Guide (page 51), vSphere Availability Guide (page 11).

Add/Remove ESXi Hosts from a DRS/HA Cluster (same as vSphere 4.x)

See the vCenter Server and Host Management Guide (page 56 and 113). Note that in order to add a host to an EVC cluster, you must put it in maintenance mode.
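A minimal pyvmomi sketch of this operation (placeholder host and cluster names, same "si" connection assumption; any running VMs must be evacuated or powered off first):

```python
# Minimal pyvmomi sketch: put a host into maintenance mode, then move it
# into an (EVC-enabled) cluster in the same datacenter.
import time

search = si.content.searchIndex
host = search.FindByDnsName(None, "esxi03.example.com", False)        # False = search hosts
cluster = search.FindByInventoryPath("Datacenter/host/Cluster01")     # placeholder

mm_task = host.EnterMaintenanceMode_Task(timeout=0)                   # 0 = no timeout
while mm_task.info.state not in ("success", "error"):                 # crude wait for the task
    time.sleep(2)

cluster.MoveInto_Task(host=[host])
```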

Add/Remove virtual machines from a DRS/HA Cluster (similar to vSphere 4.x)

Same as the usual VM management. Just be sure that the VMs are compliant with the cluster requirements, for example being on shared storage.

Configure Storage DRS (new in vSphere 5)

See the vSphere Resource Management Guide (page 77) and the VMware Storage DRS official page.

Configure Enhanced vMotion Compatibility (similar to vSphere 4.x)

See the vCenter Server and Host Management Guide (page 123) and VMware KB 1003212: Enhanced VMotion Compatibility (EVC) and VMware KB 1005764: EVC and CPU Compatibility FAQ.

Monitor a DRS/HA Cluster (similar to vSphere 4.x)

Understand the vSphere Client (and also the vSphere Web Client) interface.

Configure migration thresholds for DRS and virtual machines (similar to vSphere 4.x)

See the vSphere Resource Management Guide (page 54).

Configure automation levels for DRS and virtual machines (same as vSphere 4.x)

See the vSphere Resource Management Guide (page 57):

  • Manual -> Initial placement: Recommended host(s) is displayed, Migration: Recommendation is displayed.
  • Partially Automated -> Initial placement: Automatic, Migration: Recommendation is displayed.
  • Fully Automated -> Initial placement: Automatic, Migration: Recommendation is executed automatically.
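The cluster-wide default can also be set via the API. A minimal pyvmomi sketch (placeholder cluster path, same "si" connection assumption) that switches DRS to fully automated:

```python
# Minimal pyvmomi sketch: set the cluster-wide DRS default automation level.
from pyVmomi import vim

cluster = si.content.searchIndex.FindByInventoryPath("Datacenter/host/Cluster01")  # placeholder

drs = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior="fullyAutomated")   # "manual" | "partiallyAutomated" | "fullyAutomated"
spec = vim.cluster.ConfigSpecEx(drsConfig=drs)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)  # modify=True merges with current config
```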

Create VM-Host and VM-VM affinity rules (similar to vSphere 4.1)

See the vSphere Resource Management Guide (page 71).

Enable/Disable Host Monitoring (similar to vSphere 4.x)

See the vSphere Availability Guide (page 24).

Enable/Configure/Disable virtual machine and application monitoring (similar to vSphere 4.1)

See the vSphere Availability Guide (page 26). Note that application monitoring requires specific APIs to be used inside the applications.

Configure admission control for HA and virtual machines (similar to vSphere 4.x)

See the vSphere Availability Guide (page 16) and vSphere 5.0 HA: Changes in admission control.
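As a sketch, the "host failures the cluster tolerates" admission control policy can be configured via the API like this (placeholder cluster path, example value of 1 host failure, same "si" connection assumption):

```python
# Minimal pyvmomi sketch: enable HA admission control with the
# "host failures the cluster tolerates" policy.
from pyVmomi import vim

cluster = si.content.searchIndex.FindByInventoryPath("Datacenter/host/Cluster01")  # placeholder

das = vim.cluster.DasConfigInfo(
    enabled=True,
    admissionControlEnabled=True,
    admissionControlPolicy=vim.cluster.FailoverLevelAdmissionControlPolicy(failoverLevel=1))
spec = vim.cluster.ConfigSpecEx(dasConfig=das)
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```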

Determine appropriate failover methodology and required resources for an HA implementation (similar to vSphere 4.x)

See the vSphere Availability Guide (page 30).

Reading Time: < 1 minute

Read the full story on: http://www.vladan.fr/vmware-vcenter-converter-standalone-5-0-final-available/

The VMware vCenter Converter Standalone 5.0 includes the following new functionality:

  • Full support of vSphere 5.
  • Optimized disk and partition alignment and cluster size change.
  • Preserving the LVM configuration on the source machine during Linux conversions.
  • Enhanced synchronization including options for scheduling synchronization tasks and performing multiple synchronization tasks in a conversion job.
  • Conversion data is encrypted between the source and the server.
  • Restoring VCB images.

Reading Time: < 1 minute

After the recent change in the certification path for VCP4-DT I’ve got some doubts about the future of the VCA-DT certification (if it is no longer required for the other *-DT certifications, why take it?).

But it seems that this kind of certification will also remain in the new 5 release (based on View 5 and vSphere 5), because some rumors give possible release dates for the new DT certifications:

  • VCA5-DT (estimated availability late 2011)
  • VCP5-DT (estimated availability early 2012)

About the VCAP-DT certification, there still isn’t any official news, either for the v4 or, of course, the v5.
