Still a work in progress on the VCP5 Blueprint study notes, but the first two objectives are (almost) complete. Stay tuned for the next objectives!
For the blueprint see: http://vinfrastructure.it/certifications-on-virtualization/vcp/vcp5/
See also these similar posts: Objective 2.3 – Configure vSS and VDS Policies and Objective 2.3 – Configure vSS and vDS Policies
See the vSphere Networking Guide (page 43). Policies set at the standard switch or distributed port group level apply to all of the port groups on the standard switch or to ports in the distributed port group. The exceptions are the configuration options that are overridden at the standard port group or distributed port level.
The blueprint does not cover the NIOC and Monitor policies, but I suggest studying them as well.
For network security policy see: http://vinfrastructure.it/2011/08/vcp5-exam-prep-part-1-4/.
For port blocking policies see the vSphere Networking Guide (page 59).
See the vSphere Networking Guide (page 43). Load balancing and failover policies allow you to determine how network traffic is distributed between adapters and how to re-route traffic in the event of adapter failure. The Failover and Load Balancing policies include the following parameters: Load Balancing, Network Failover Detection, Notify Switches, Failback, and Failover Order (a configuration sketch follows).
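The blueprint only requires the vSphere Client procedure, but the same settings are exposed through the vSphere API. Below is a minimal pyVmomi sketch (pyVmomi is not part of the exam; the vCenter address, credentials, host name, port group and NIC names are placeholder assumptions) that sets the teaming and failover policy of a standard port group:

```python
# Hypothetical pyVmomi sketch: change the load balancing / failover policy of a
# standard port group. All names and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")  # SSL options omitted
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

# Reuse the current spec of the port group and replace only the teaming policy.
pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == "VM Network")
spec = pg.spec
teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
teaming.policy = "loadbalance_srcid"       # route based on the originating virtual port ID
teaming.notifySwitches = True
teaming.rollingOrder = False               # i.e. failback enabled
teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=["vmnic0"], standbyNic=["vmnic1"])
spec.policy.nicTeaming = teaming
netsys.UpdatePortGroup(pgName="VM Network", portgrp=spec)
Disconnect(si)
```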
See the vSphere Networking Guide (page 50). In the VLAN ID field of a port group (or distributed port group) set a value between 1 and 4094 (0 disables VLAN tagging and 4095 is the same as trunking all the VLANs).
For a vDS two other options are available: VLAN Trunking (to specify a range of VLANs) and Private VLAN (for more info see http://kb.vmware.com/kb/1010691).
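If you prefer to script the VLAN assignment, here is a minimal pyVmomi sketch (same placeholder names as in the previous example) that tags a standard port group with a VLAN ID:

```python
# Hypothetical pyVmomi sketch: set the VLAN ID of a standard port group.
# 0 = no VLAN tagging, 1-4094 = single VLAN, 4095 = trunk all VLANs (VGT).
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == "VM Network")
spec = pg.spec
spec.vlanId = 100
netsys.UpdatePortGroup(pgName="VM Network", portgrp=spec)
Disconnect(si)
```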
See the vSphere Networking Guide (page 54). A traffic shaping policy is defined by average bandwidth, peak bandwidth, and burst size. You can establish a traffic shaping policy for each port group and each distributed port or distributed port group.
ESXi shapes outbound network traffic on standard switches and inbound and outbound traffic on distributed switches. Traffic shaping restricts the network bandwidth available on a port, but can also be configured to allow bursts of traffic to flow through at higher speeds.
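As a quick illustration of those three parameters, this minimal pyVmomi sketch (placeholder names again, not taken from the guide) enables outbound shaping on a standard port group:

```python
# Hypothetical pyVmomi sketch: enable traffic shaping on a standard port group
# (average 10 Mbit/s, peak 20 Mbit/s, 100 MB burst). Names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == "VM Network")
spec = pg.spec
shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
shaping.enabled = True
shaping.averageBandwidth = 10 * 1000 * 1000   # bits per second
shaping.peakBandwidth = 20 * 1000 * 1000      # bits per second
shaping.burstSize = 100 * 1024 * 1024         # bytes
spec.policy.shapingPolicy = shaping
netsys.UpdatePortGroup(pgName="VM Network", portgrp=spec)
Disconnect(si)
```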
See the vSphere Networking Guide (page 38). TCP Segmentation Offload (TSO) is enabled on the VMkernel interface by default, but must be enabled at the virtual machine level.
See the vSphere Networking Guide (page 39). It is now possible to configure the MTU from the GUI also on standard vSwitches and on VMkernel interfaces.
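The same MTU change can also be scripted; a minimal pyVmomi sketch for a standard vSwitch (placeholder names, jumbo-frame value assumed):

```python
# Hypothetical pyVmomi sketch: set the MTU of a standard vSwitch to 9000 (jumbo frames).
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

vss = next(s for s in netsys.networkInfo.vswitch if s.name == "vSwitch0")
spec = vss.spec
spec.mtu = 9000
netsys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
Disconnect(si)
```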
See the vSphere Networking Guide (page 66). VLAN tagging is possible at different levels: External Switch Tagging (EST), Virtual Switch Tagging (VST), and Virtual Guest Tagging (VGT).
The blueprint does not cover Port Mirroring and NetFlow (new in vSphere 5)… but I suggest studying them as well. Note that a new (and standard) discovery protocol (LLDP) has also been added (but only to the vDS).
See also these similar posts: Objective 2.2 – Configure vNetwork Distributed Switches and Objective 2.2 – Configure vNetwork Distributed Switches
Official page with features of vDS:
See also: vSphere 5 new Networking features.
For this and the next points see the vSphere Networking Guide and http://thevirtualheadline.com/2011/07/11/vsphere-vnetwork-distributed-switches/.
See the vSphere Networking Guide (page 21).
See the vSphere Networking Guide (page 25).
See the vSphere Networking Guide (page 29).
See the vSphere Networking Guide (page 30).
See the vSphere Networking Guide (page 31) and VMware vNetwork Distributed Switch: Migration and Configuration
See the vSphere Networking Guide (page 33) and VMware vNetwork Distributed Switch: Migration and Configuration
See also these similar posts: Objective 2.1 – Configure vNetwork Standard Switches and Objective 2.1 – Configure vNetwork Standard Switches and Objective 2.1 – Configure vNetwork Standard Switches.
See: VMware Virtual Networking Concepts and VMware Virtual Networking.
See the vSphere Networking Guide. No need to know the CLI…
Note that the vSwitch MTU can now also be changed from the GUI.
See the vSphere Networking Guide (page 16 and 17). Each physical NIC acts as a vSwitch uplink.
See the vSphere Networking Guide (page 14). The VMkernel port groups can be used for: management traffic, vMotion, IP storage (iSCSI and NFS), and Fault Tolerance logging.
Note that the MTU can now also be changed from the GUI (the sketch below sets it through the API).
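For those uses, a VMkernel interface can also be created through the API; a minimal pyVmomi sketch (port group name, IP address and MTU are placeholder assumptions, and the port group must already exist):

```python
# Hypothetical pyVmomi sketch: add a VMkernel interface (e.g. for vMotion) on an
# existing standard port group and set its MTU. Addresses and names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

nic = vim.host.VirtualNic.Specification()
nic.ip = vim.host.IpConfig(dhcp=False, ipAddress="192.168.10.11", subnetMask="255.255.255.0")
nic.mtu = 9000                            # optional: jumbo frames
device = netsys.AddVirtualNic(portgroup="vMotion", nic=nic)
print("Created VMkernel interface:", device)
# Enabling vMotion/FT/management traffic on the new vmknic is a separate step
# (HostVirtualNicManager), not shown here.
Disconnect(si)
```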
See the vSphere Networking Guide (page 12).
A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional. For a port group to reach port groups located on other VLANs, the VLAN ID must be set to 4095. If you use VLAN IDs, you must change the port group labels and VLAN IDs together so that the labels properly represent connectivity.
Both standard (vSS) and distributed (vDS) switches can exist at the same time. Note that a vDS provides more functions than a vSS, but the vDS requires the Enterprise Plus edition, so for all other editions the only option is to use a vSS.
For a very small environment a vSS could be the simplest choice: everything is controlled on the local host. Basically, you go to Host->Configuration->Networking and start everything from there.
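If you want to script what the GUI does, here is a minimal pyVmomi sketch (vSwitch name, uplink NIC and port group names are placeholders) that creates a standard vSwitch with one uplink and a VM port group:

```python
# Hypothetical pyVmomi sketch: create a standard vSwitch and a VM port group on a host.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"])   # uplink NIC
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "VM Network 2"
pg_spec.vswitchName = "vSwitch1"
pg_spec.vlanId = 0                         # no VLAN tagging
pg_spec.policy = vim.host.NetworkPolicy()  # inherit the vSwitch policies
netsys.AddPortGroup(portgrp=pg_spec)
Disconnect(si)
```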
Introduced for the first time in vSphere, the vStorage APIs are specific functions that permit a tighter integration with external storage functions. Note that there are also other integration modules, which are covered by the PSA framework. For more information see:
This is a new set of APIs introduced in vSphere 5.0 that will enable VMware vCenter Server to detect the capabilities of the storage array LUNs/datastores, making it much easier to select the appropriate disk for virtual machine placement or the creation of datastore clusters. vSphere Storage APIs – Storage Awareness can also provide help in facilitating, for example, the troubleshooting process or conversations between vSphere and storage administrators. Storage capabilities, such as RAID level, thin or thick provisioned, replication state and much more, can now be made visible within VMware vCenter Server either via “system-defined capabilities,” which are per-datastore descriptors, or via attributes that are exposed via Storage Views and SMS API. vSphere Storage APIs – Storage Awareness aims to eliminate the need for maintaining massive spreadsheets detailing the storage capabilities of each LUN that are needed to guarantee the correct SLA to virtual machines.
For other info see: Overview vSphere Storage API for Storage Awareness (VASA) and A Deeper Look at VASA.
vCenter Server availability could be necessary in most cases and is critical for large environments (or where a lot of services depend on it). In small environments this is usually not so critical, because most VMware functions still work without vCenter Server (for more info see vCenter Server Design: Physical vs Virtual).
There are at least four solutions to increase the availability for vCenter Server:
See also: http://kb.vmware.com/kb/1024051 – Supported vCenter Server high availability options. Also see this post on Yellow Bricks (Protecting vCenter Server – HA or Heartbeat?) updated to Sep 2012.
For more information see the official VMware HA web page. This solution is only usable when vCenter Server is a VM!
VMware HA requires vCenter Server only for the initial setup and configuration. After that it works across the hosts in a distributed mode, and vCenter Server is not needed for VMware HA to keep working correctly. For this reason VMware HA can also handle the restart of vCenter Server itself (in this case the downtime could be a few minutes…).
IMHO, for small/medium environments I prefer the VMware HA solution… very simple and cheap (it is included in all editions except the Essentials). To keep it simple, be sure to also have the vCenter DB on the same VM, or you will have to find an HA solution for your VCDB as well.
http://www.vmware.com/products/vcenter-server-heartbeat/
In this case vCenter Server (the primary instance) could be a VM or a physical system. The secondary must be a VM!
Optimize availability and resiliency for VMware vCenter Server in any situation. VMware vCenter Server Heartbeat delivers the maximum uptime for your virtual datacenter infrastructure, ensuring consistent operation, even when VMware vCenter Server is threatened by unplanned or planned downtime.
See also: VMware to tackle vCenter availability with new Server Heartbeat.
This solution is now unsupported!
See: http://www.vmware.com/pdf/VC_MSCS.pdf and Reference Implementation: Clustering VirtualCenter 2.5 Using Microsoft Cluster Services.
In this case vCenter Server (both the primary and the secondary node) could be a VM or a physical system. To implement a Microsoft cluster solution you also need at least Windows Server Enterprise Edition (required to build a Microsoft cluster).
Note that vCenter Server 2.5 is supported only on Windows Server 2003 or Windows Server 2003 R2, while vCenter Server 4.0 is also supported on Windows Server 2008.
A similar approach is the use of other cluster solutions, for example Veritas Cluster: http://searchservervirtualization.techtarget.com/news/article/0,289142,sid94_gci1341780,00.html (but this is also not supported).
Note: vCenter Server 4.x has not been qualified with third party clustering products such as Microsoft Clustering Service and Veritas Cluster Services. VMware does not support third party clustering products.
This solution is now unsupported!
Why not use VMware FT for virtual vCenter Server availability? Simply because vCenter Server 4.0 requires a minimum of 2 vCPUs (especially if you also have a local DB).
VMware FT currently works only with single-vCPU VMs. So either you keep an unsupported vCenter Server (because for a small environment vCenter Server may “work” also with a single vCPU), or you cannot use VMware FT.
For more information on FT see: VMware FT
For a list of the solutions to increase the availability for vCenter Server see this dedicated page.
Since version VI 3.x the vCenter Server can be deployed either on a physical server or on a virtual machine (both solutions are supported by VMware).
On this page the pros and cons are analyzed.
Note: VMware has removed the vRAM entitlement concept and restriction, so this post is no longer relevant!
Introduced during the official announcement of vSphere 5, this change in the vSphere licensing (especially for the hosts) has now been revised (the first rumor of this was mentioned here by Gabrie van Zanten). Not in the principle (the vRAM is still the allocated vRAM of powered-on VMs, not the consumed vRAM) but in the values and in how they are counted (for example, a maximum of 96 GB of vRAM is now counted for each VM):
vSphere edition | Previous vRAM entitlement | New vRAM entitlement |
---|---|---|
vSphere Desktop | Unlimited | Unlimited |
vSphere Enterprise+ | 48 GB | 96 GB |
vSphere Enterprise | 32 GB | 64 GB |
vSphere Standard | 24 GB | 32 GB |
vSphere Essentials+ | 24 GB | 32 GB |
vSphere Essentials | 24 GB | 32 GB |
Free vSphere Hypervisor | 8 GB | 32 GB[*] |
[*] physical RAM limit
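Just to make the (now obsolete) counting rule concrete, here is a small Python sketch of the arithmetic, with example figures of my own (not from VMware):

```python
# Sketch of the vSphere 5 vRAM pooling arithmetic (now obsolete): licenses are
# per CPU socket, and the pooled entitlement must cover the vRAM allocated to
# powered-on VMs, with each VM counted at most at 96 GB. Example figures only.
ENTITLEMENT_GB = {"enterprise_plus": 96, "enterprise": 64, "standard": 32}
VM_VRAM_CAP_GB = 96

def licenses_needed(edition, cpu_sockets, vm_vram_gb):
    """Licenses required: at least one per socket, and enough pooled vRAM."""
    pooled = sum(min(v, VM_VRAM_CAP_GB) for v in vm_vram_gb)
    by_vram = -(-pooled // ENTITLEMENT_GB[edition])   # ceiling division
    return max(cpu_sockets, by_vram)

# Example: two 2-socket hosts (4 licenses minimum) running VMs with
# 32 + 64 + 96 + 108 GB of allocated vRAM (the last VM is capped at 96 GB).
print(licenses_needed("enterprise", 4, [32, 64, 96, 108]))   # -> 5
```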
See also:
See also these similar posts: Objective 1.5 – Identify vSphere Architecture and Solutions and Objective 1.5 – Identify vSphere Architecture and Solutions.
See the Compare vSphere 5.0 Kits and Compare vSphere 5.0 Editions.
See the VMware vSphere Basics guide, the Objective 0.1 – VMware Products and also the VMware Web Site.
See the Objective 0.2 – Cloud Concepts.
Of course this depends on several factors: not only the requirements but also the constraints and assumptions. Price and vRAM entitlement could be a factor; for this see the vSphere 5.0 Licensing, Pricing and Packaging Whitepaper.
Usually the Essentials bundles could be fine for an SMB, the Standard edition could be a way to upgrade and scale an Essentials+ bundle, the Enterprise edition could be fine for most cases (note that the Advanced edition is no longer available) and Enterprise+ is for those who need specific features (like DVS, Auto Deploy, SDRS, SIOC, NIOC, …).
Most of the references are from the vSphere Security Guide, but also the old (from VI 3.x) Managing VMware VirtualCenter Roles and Permissions is still a good reference.
See also: Objective 1.4 – Secure vCenter Server and ESXi and Objective 1.4 – Secure vCenter Server and ESXi.
See: vSphere Security Guide (page 59). Some are available both in ESXi and vCenter Server:
See: vSphere Security Guide (page 48 and also page 51 for some examples).
When you assign a permission to an object, you can choose whether the permission propagates down the object hierarchy. You set propagation for each permission. Propagation is not universally applied. Permissions defined for a child object always override the permissions that are propagated from parent objects.
Note that in previous releases of vCenter Server, datastores and networks inherited access permissions from the datacenter. In vCenter Server 5.0, they have their own set of privileges that control access to them. This might require you to manually assign privileges, depending on the access level you require. For more info see the vSphere Upgrade Guide (page 61).
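To see the propagation flag in practice, here is a minimal pyVmomi sketch (datacenter name, AD group and role name are placeholder assumptions) that assigns an existing role on a datacenter and lets it propagate:

```python
# Hypothetical pyVmomi sketch: assign an existing role to an AD group on a
# datacenter object, with propagation down the inventory. Names are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
content = si.RetrieveContent()
authmgr = content.authorizationManager

dc = content.searchIndex.FindByInventoryPath("Datacenter01")
role = next(r for r in authmgr.roleList if r.name == "VirtualMachinePowerUser")

perm = vim.Permission()
perm.principal = "LAB\\vm-operators"     # an AD group
perm.group = True
perm.roleId = role.roleId
perm.propagate = True                    # push the permission down the hierarchy
authmgr.SetEntityPermissions(entity=dc, permission=[perm])
Disconnect(si)
```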
See: What’s new in vSphere 5: ESXi firewall.
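The new firewall can also be queried and changed through the API; a minimal pyVmomi sketch (the ruleset key "sshServer" is just an example, host name and credentials are placeholders):

```python
# Hypothetical pyVmomi sketch: list the ESXi 5 firewall rulesets and enable one of them.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
fwsys = host.configManager.firewallSystem

for rs in fwsys.firewallInfo.ruleset:
    print(rs.key, "enabled" if rs.enabled else "disabled")

fwsys.EnableRuleset(id="sshServer")       # DisableRuleset(id=...) turns it back off
Disconnect(si)
```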
See: The New Lockdown Mode in ESXi 4.1 and the vSphere Security Guide (page 81).
Note that lockdown mode does not apply to root users who log in using authorized keys. When you use an authorized key file for root user authentication, root users are not prevented from accessing a host with SSH when the host is in lockdown mode. Also the root user is still authorized to log in to the direct console user interface when lockdown mode is enabled.
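Lockdown mode itself can be toggled through vCenter with a single API call; a minimal pyVmomi sketch (host name is a placeholder):

```python
# Hypothetical pyVmomi sketch: enable lockdown mode on a host through vCenter.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)

host.EnterLockdownMode()                  # host.ExitLockdownMode() reverts it
Disconnect(si)
```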
See: VMware Virtual Networking Concepts and the vSphere Security Guide (page 25).
The virtual switch (but also a port group) has the ability to enforce L2 security policies to prevent virtual machines from impersonating other nodes on the network. There are three components to this feature: Promiscuous Mode, MAC Address Changes, and Forged Transmits (a configuration sketch follows).
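A minimal pyVmomi sketch (placeholder names; rejecting all three options is just the usual hardening suggestion, adjust it to your needs):

```python
# Hypothetical pyVmomi sketch: set the three L2 security options of a standard
# port group to Reject. Names and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)
netsys = host.configManager.networkSystem

pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == "VM Network")
spec = pg.spec
spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
    allowPromiscuous=False,   # Promiscuous Mode: Reject
    macChanges=False,         # MAC Address Changes: Reject
    forgedTransmits=False)    # Forged Transmits: Reject
netsys.UpdatePortGroup(pgName="VM Network", portgrp=spec)
Disconnect(si)
```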
For VLAN security see vSphere Security Guide (page 20).
See: vSphere Security Guide (page 45). Note that there are local users/groups (both ESXi and vCenter Server local users) and centralized users/groups (from a directory service).
See: vSphere Security Guide (page 53) and http://www.vmwarehub.com/Permissions.html.
See: vSphere Security Guide (page 61). When you remove a role that is assigned to a user or group, you can remove assignments or replace them with an assignment to another role.
There are two different ways to use an Active Directory solution with ESXi 5: join the host directly to the AD domain, or use the vSphere Authentication Proxy (so that the AD credentials do not have to be stored in the host configuration).
See Use Host Profiles to Apply Permissions to Hosts (for hosts added to AD) and the vSphere Security Guide (page 67 for the use of the vSphere Authentication Proxy).
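The first option (joining the host directly to the domain) can also be scripted; a minimal pyVmomi sketch (domain name and join account are placeholder assumptions):

```python
# Hypothetical pyVmomi sketch: join an ESXi 5 host to an Active Directory domain.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)

ad_auth = next(s for s in host.configManager.authenticationManager.supportedStore
               if isinstance(s, vim.host.ActiveDirectoryAuthentication))
task = ad_auth.JoinDomain_Task(domainName="lab.local",
                               userName="LAB\\joinaccount",
                               password="secret")
# The task can be followed in the vSphere Client Recent Tasks pane.
Disconnect(si)
```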
See the vSphere Security Guide; also note that in the other guides the privilege requirements for each task are always specified.
In vSphere 5, for the first time, ESXi has an integrated firewall. In this way another feature gap between ESXi and ESX has been filled. But this firewall is quite new and different compared to the one in ESX, although the management (from the GUI) remains similar to the old one.
For more info see: http://vinfrastructure.it/vdesign/esxi-5-firewall/
On the VMware site there are new versions of the VCP blueprints, both for VCP4 and VCP5:
Regarding the VCP5 document, it still covers the beta exam and it does not make any changes to the covered objectives. My version with study notes is still a work in progress.
See also these similar posts: Objective 1.3 – Plan and Perform Upgrades of vCenter Server and VMware ESXi and Objective 1.3 – Plan and Perform Upgrades of vCenter Server and VMware ESXi.
See: vSphere Upgrade Guide (page 11) and vSphere Upgrade Guide (page 69).
ESXi 5 system requirements are the same as for a clean installation: 64-bit CPU, one or more supported NICs, 2098 MB of RAM (note that this is slightly more than 2 GB), supported storage, … There are other requirements based on the type of source.
Note that upgrade and migration are used in the guide in the same way, but ESXi 4.x to ESXi 5 is an upgrade while ESX 4.x to ESXi 5 is a migration. The upgrade/migration can be performed in an automated mode (with VUM or by scripting) or in an interactive mode (you can boot the ESXi installer from a CD, DVD, or USB flash drive to upgrade ESX/ESXi 4.x hosts to ESXi 5.0).
Also study which files are migrated from ESX 4.x to ESXi 5: some are converted, while others make no sense on ESXi.
See the entire vSphere Upgrade Guide. Basically the steps are (after the requirements check):
Note that VUM can orchestrate hosts, VMware Tools and virtual hardware upgrade.
A vSphere distributed switch version 4.0 or 4.1 can be upgraded to a later version (5.0), enabling the distributed switch to take advantage of features that are only available in the later version.
For the DVS upgrade see the vSphere Networking Guide (page 24). Log in to the vSphere Client and select the Networking inventory view. Select the vSphere distributed switch in the inventory pane. On the Summary tab, next to Version, select Upgrade.
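The same upgrade can also be started through the API; a minimal pyVmomi sketch (the switch name and the target version string are assumptions, adjust them to your environment):

```python
# Hypothetical pyVmomi sketch: upgrade a vNetwork Distributed Switch to version 5.0.0.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch01")
view.DestroyView()

spec = vim.dvs.ProductSpec(version="5.0.0")
task = dvs.PerformDvsProductSpecOperation_Task(operation="upgrade", productSpec=spec)
# Monitor the task in the vSphere Client; the switch upgrade cannot be reversed.
Disconnect(si)
```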
vSphere 5 offers a pain-free upgrade path from VMFS-3 to VMFS-5. The upgrade is an online and non-disruptive operation which allows the resident virtual machines to continue to run on the datastore. But upgraded VMFS datastores may have an impact on SDRS operations, specifically virtual machine migrations.
When upgrading a VMFS datastore from VMFS-3 to VMFS-5, the current VMFS-3 block size will be maintained, and this block size may be larger than the VMFS-5 block size since VMFS-5 uses a unified 1 MB block size.
In upgraded hosts, the VMFS partition is not upgraded from VMFS3 to VMFS5. ESXi 5.0 is compatible with VMFS3 partitions. You can upgrade the partition to VMFS5 after the host is upgraded to ESXi 5.0. See the information on upgrading datastores from command line to VMFS5 in the vSphere Storage Guide (page 206).
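The command line is one way; the same in-place upgrade is also exposed through the API. A minimal pyVmomi sketch (host and datastore names are placeholders; the host must already run ESXi 5):

```python
# Hypothetical pyVmomi sketch: upgrade a mounted VMFS-3 datastore in place to VMFS-5.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
host = si.RetrieveContent().searchIndex.FindByDnsName(None, "esxi01.lab.local", False)

ds = next(d for d in host.datastore if d.name == "datastore1")
vmfs_path = "/vmfs/volumes/" + ds.info.vmfs.uuid     # path of the VMFS volume
host.configManager.storageSystem.UpgradeVmfs(vmfsPath=vmfs_path)
Disconnect(si)
```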
Also note that the new ESXi uses a different partition scheme (GPT instead of MBR) to handle disks and LUNs larger than 2 TB. For new installations the GPT partition table is used.
For more info about VMFS see also: http://www.boche.net/blog/index.php/2011/07/21/vmfs-5-vmfs-3-whats-the-deal/
See the vSphere Upgrade Guide (page 138). This task can be done manually (from vSphere Client) or with VUM. Note that:
See the vSphere Upgrade Guide (page 154). This can be done only with the VM powered off. Some new features (like more than 8 vCPUs, for example) require the new virtual hardware (v8). ESXi 5 can create, edit and run v8 and v7 VMs, and can edit and run v4 VMs.
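Both upgrades are also available as API calls; a minimal pyVmomi sketch (the inventory path and the hardware version string are placeholder assumptions):

```python
# Hypothetical pyVmomi sketch: upgrade VMware Tools and then the virtual hardware
# of a VM to version 8 (vmx-08). The inventory path is a placeholder.
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="secret")
vm = si.RetrieveContent().searchIndex.FindByInventoryPath("Datacenter01/vm/vm01")

# VMware Tools should be upgraded first, with the VM powered on:
# vm.UpgradeTools_Task()
# The virtual hardware upgrade requires the VM to be powered off:
task = vm.UpgradeVM_Task(version="vmx-08")
Disconnect(si)
```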
Paravirtualization (VMI) is not supported on ESXi 5.0. Hence, you cannot move VMI-enabled virtual machines from an ESX 3.x or ESX 4.x/ESXi 4.x host to an ESXi 5.0 host when the virtual machines are powered on.
See the vSphere Upgrade Guide (page 92). You can use Update Manager to perform orchestrated upgrades of the ESX/ESXi hosts in your vSphere inventory by using a single upgrade baseline. You can create upgrade baselines for ESX/ESXi hosts with ESXi 5.x images that you import to the Update Manager repository. You can use ESXi .iso images to upgrade ESXi 4.x hosts to ESXi 5.x or migrate ESX 4.x hosts to ESXi 5.x.
Some upgrade/migration paths are not supported, like: