Reading Time: < 1 minute

The VCP5 Blueprint study notes are still a work in progress, but the first two objectives are (almost) complete. Stay tuned for the next objectives!

For the blueprint see: http://vinfrastructure.it/certifications-on-virtualization/vcp/vcp5/


Objective 2.3 – Configure vSS and vDS Policies

See also these similar posts: Objective 2.3 – Configure vSS and vDS Policies and Objective 2.3 – Configure vSS and vDS Policies.

Identify common vSS and vDS policies (similar to vSphere 4.x)

See the vSphere Networking Guide (page 43). Policies set at the standard switch or distributed port group level apply to all of the port groups on the standard switch or to ports in the distributed port group. The exceptions are the configuration options that are overridden at the standard port group or distributed port level.

The blueprint does not cover the NIOC and Monitor policies, but I suggest studying them as well.

Configure dvPort group blocking policies (similar to vSphere 4.x)

For network security policy see: http://vinfrastructure.it/2011/08/vcp5-exam-prep-part-1-4/.

For port blocking policies see the vSphere Networking Guide (page 59).

Configure load balancing and failover policies (similar to vSphere 4.1)

See the vSphere Networking Guide (page 43). Load balancing and failover policies allow you to determine how network traffic is distributed between adapters and how to re-route traffic in the event of adapter failure. The Failover and Load Balancing policies include the following parameters:

  • Load Balancing policy: Route based on the originating virtual port ID, Route based on IP hash (requires EtherChannel at the physical switch level), Route based on source MAC hash, Use explicit failover order, or Route based on physical NIC load (vDS only).
  • Failover Detection: Link Status or Beacon Probing (beacon probing makes sense with at least three NICs).
  • Failback: Yes or No.
  • Network Adapter Order: Active, Standby, or Unused.
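
As an illustration of how the two most common load-balancing policies map traffic to uplinks, here is a minimal sketch (the function names and hash details are assumptions for illustration only, not VMware's actual implementation):

```python
# Illustrative sketch of uplink selection under two load-balancing
# policies. NOT VMware's code; names and hashes are assumptions.

def uplink_by_port_id(port_id: int, uplinks: list) -> str:
    # "Route based on the originating virtual port ID": the virtual port
    # number maps to one uplink, and the VM sticks to it.
    return uplinks[port_id % len(uplinks)]

def uplink_by_ip_hash(src_ip: str, dst_ip: str, uplinks: list) -> str:
    # "Route based on IP hash": source and destination IP are hashed
    # together, so one VM can use different uplinks for different
    # destinations (this is why EtherChannel is required on the
    # physical switch).
    def ip_to_int(ip: str) -> int:
        a, b, c, d = (int(x) for x in ip.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d
    return uplinks[(ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
print(uplink_by_port_id(5, uplinks))          # deterministic per port
print(uplink_by_ip_hash("10.0.0.10", "10.0.0.20", uplinks))
```

The key difference: per-port-ID mapping pins each VM to one uplink, while IP hash can spread a single VM's traffic over several uplinks depending on the destination.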

Configure VLAN settings (same as vSphere 4.x)

See the vSphere Networking Guide (page 50). In the VLAN ID field of a port group (or distributed port group) set a value between 1 and 4094 (0 disables VLAN tagging and 4095 is the same as trunking all the VLANs).

For vDS, two other options are available: VLAN Trunking (to specify a range) and Private VLAN (for more info see http://kb.vmware.com/kb/1010691).
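
A minimal sketch of how these VLAN ID values are interpreted (the special values 0 and 4095 come from the guide; the function itself is just a hypothetical illustration):

```python
def vlan_mode(vlan_id: int) -> str:
    # Interpretation of the VLAN ID field of a (distributed) port group:
    #   0      -> no VLAN tagging (tagging, if any, done by the physical switch)
    #   1-4094 -> traffic tagged with this VLAN by the virtual switch
    #   4095   -> trunk all VLANs through to the guest
    if vlan_id == 0:
        return "none"
    if 1 <= vlan_id <= 4094:
        return f"VLAN {vlan_id}"
    if vlan_id == 4095:
        return "trunk all"
    raise ValueError("VLAN ID must be between 0 and 4095")

print(vlan_mode(100))   # VLAN 100
print(vlan_mode(4095))  # trunk all
```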

Configure traffic shaping policies (same as vSphere 4.x)

See the vSphere Networking Guide (page 54). A traffic shaping policy is defined by average bandwidth, peak bandwidth, and burst size. You can establish a traffic shaping policy for each port group and each distributed port or distributed port group.

ESXi shapes outbound network traffic on standard switches and inbound and outbound traffic on distributed switches. Traffic shaping restricts the network bandwidth available on a port, but can also be configured to allow bursts of traffic to flow through at higher speeds.
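
The three parameters interact roughly like a token bucket. The following sketch is an illustration of the concept (not ESXi's implementation; class and parameter names are assumptions), with one call representing one second of traffic:

```python
class TrafficShaper:
    # Token-bucket sketch of a traffic-shaping policy; one send() = one second.
    def __init__(self, average_bps: int, peak_bps: int, burst_bytes: int):
        self.avg = average_bps // 8      # bytes/s allowed long-term
        self.peak = peak_bps // 8        # bytes/s ceiling while bursting
        self.max_burst = burst_bytes     # bucket capacity (burst size)
        self.bucket = burst_bytes        # bucket starts full

    def send(self, demand_bytes: int) -> int:
        # Grant at most the peak rate, and at most average + saved tokens.
        grant = min(demand_bytes, self.peak, self.avg + self.bucket)
        if grant > self.avg:
            self.bucket -= grant - self.avg          # bursting spends tokens
        else:
            self.bucket = min(self.max_burst,
                              self.bucket + self.avg - grant)  # idle refills
        return grant

# Example: 1 Mbit/s average, 2 Mbit/s peak, 100 KB burst size.
shaper = TrafficShaper(1_000_000, 2_000_000, 100_000)
print(shaper.send(500_000))  # first second: bursts above the average rate
print(shaper.send(500_000))  # burst spent: grant falls back to the average
```

This shows why all three values matter: average bandwidth bounds the sustained rate, burst size bounds how many extra bytes can be saved up, and peak bandwidth caps the speed at which those saved bytes may be spent.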

Enable TCP Segmentation Offload support for a virtual machine (same as vSphere 4.x)

See the vSphere Networking Guide (page 38). TCP Segmentation Offload (TSO) is enabled on the VMkernel interface by default, but must be enabled at the virtual machine level (the VM needs an enhanced vmxnet or vmxnet3 network adapter).

Enable Jumbo Frames support on appropriate components (new in vSphere 5)

See the vSphere Networking Guide (page 39). It is now possible to configure the MTU from the GUI also on standard vSwitches and on VMkernel interfaces.

Determine appropriate VLAN configuration for a vSphere implementation (similar to vSphere 4.x)

See the vSphere Networking Guide (page 66). VLAN tagging is possible at different levels: External Switch Tagging (EST), Virtual Switch Tagging (VST), and Virtual Guest Tagging (VGT).

Missing advanced features

The blueprint does not cover Port Mirroring and NetFlow (new in vSphere 5)… but I suggest studying them as well. Note also that a new (and standard) discovery protocol (LLDP) has been added (but only to vDS).


Objective 2.2 – Configure vNetwork Distributed Switches

See also these similar posts: Objective 2.2 – Configure vNetwork Distributed Switches and Objective 2.2 – Configure vNetwork Distributed Switches.

Identify vNetwork Distributed Switch capabilities (similar to vSphere 4.1)

Official page with features of vDS:

  • Improves visibility into virtual machine traffic through Netflow (New in vDS 5)
  • Enhances monitoring and troubleshooting using SPAN and LLDP (New in vDS 5)
  • Enables the new Network I/O Control (NIOC) feature (now utilizing per VM controls) (New in vDS 5)
  • Simplified provisioning and administration of virtual networking across many hosts and clusters through a centralized interface.
  • Simplified end-to-end physical and virtual network management through third-party virtual switch extensions for the Cisco Nexus 1000V virtual switch.
  • Enhanced provisioning and traffic management capabilities through private VLAN support and bi-directional virtual machine rate-limiting.
  • Enhanced security and monitoring for virtual machines migrated via VMware vMotion through maintenance and migration of port runtime state.
  • Prioritized controls between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic.
  • Load-based teaming for dynamic adjustment of the teaming algorithm so that the load is always balanced across a team of physical adapters on the distributed switch (New in vDS 4.1).
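
Load-based teaming (the last point above) can be sketched roughly as follows. The 75% utilization threshold matches the behavior VMware documents for this policy, while the function itself is a hypothetical illustration, not VMware's implementation:

```python
# Illustrative sketch of load-based teaming: when a physical uplink's
# utilization crosses a threshold, one virtual port is remapped to the
# least-loaded uplink. Names and structure are assumptions.

def rebalance(port_load, port_to_uplink, capacity, threshold=0.75):
    # port_load: bytes/s per virtual port; port_to_uplink: current mapping.
    load = {u: 0 for u in set(port_to_uplink.values())}
    for port, uplink in port_to_uplink.items():
        load[uplink] += port_load[port]
    for uplink, used in load.items():
        if used > threshold * capacity:
            # Move the busiest port on the congested uplink to the
            # least-loaded uplink.
            busiest = max((p for p, u in port_to_uplink.items() if u == uplink),
                          key=lambda p: port_load[p])
            target = min(load, key=load.get)
            if target != uplink:
                port_to_uplink[busiest] = target
            break
    return port_to_uplink

mapping = rebalance(
    {"vm1": 900, "vm2": 200, "vm3": 100},
    {"vm1": "vmnic0", "vm2": "vmnic0", "vm3": "vmnic1"},
    capacity=1000,
)
print(mapping)  # vm1 is moved off the overloaded vmnic0
```

Unlike IP hash, this policy needs no special physical switch configuration, because each virtual port still uses only one uplink at a time; the mapping is simply revised when an uplink becomes congested.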

See also: vSphere 5 new Networking features.

Create/Delete a vNetwork Distributed Switch (same as vSphere 4.x)

For this and next points see the vSphere Networking Guide and http://thevirtualheadline.com/2011/07/11/vsphere-vnetwork-distributed-switches/.

Add/Remove ESXi hosts from a vNetwork Distributed Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 21).

Add/Configure/Remove dvPort groups (same as vSphere 4.x)

See the vSphere Networking Guide (page 25).

Add/Remove uplink adapters to dvUplink groups (same as vSphere 4.x)

See the vSphere Networking Guide (page 29).

Create/Configure/Remove virtual adapters (same as vSphere 4.x)

See the vSphere Networking Guide (page 30).

Migrate virtual adapters to/from a vNetwork Standard Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 31) and VMware vNetwork Distributed Switch: Migration and Configuration

Migrate virtual machines to/from a vNetwork Distributed Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 33) and VMware vNetwork Distributed Switch: Migration and Configuration

Determine use case for a vNetwork Distributed Switch (similar to vSphere 4.1)


Objective 2.1 – Configure vNetwork Standard Switches

See also these similar posts: Objective 2.1 – Configure vNetwork Standard Switches and Objective 2.1 – Configure vNetwork Standard Switches and Objective 2.1 – Configure vNetwork Standard Switches.

Identify vNetwork Standard Switch capabilities (same as vSphere 4.x)

See: VMware Virtual Networking Concepts and VMware Virtual Networking.

Create/Delete a vNetwork Standard Switch (similar to vSphere 4.x)

See the vSphere Networking Guide. No need to know the CLI…

Note that the vSwitch MTU can now also be changed from the GUI.

Add/Configure/Remove vmnics on a vNetwork Standard Switch (same as vSphere 4.x)

See the vSphere Networking Guide (pages 16 and 17). Each physical NIC acts as a vSwitch uplink.

Configure vmkernel ports for network services (similar to vSphere 4.x)

See the vSphere Networking Guide (page 14). The VMkernel port groups can be used for:

  • Management (the port group must be flagged with the “Management” option).
  • vMotion (the port group must be flagged with the “vMotion” option).
  • Fault Tolerance logging (the port group must be flagged with the “Fault Tolerance logging” option).
  • IP storage, like iSCSI (with the software initiator) or NFS.

Note that the MTU can now also be changed from the GUI.

Add/Edit/Remove port groups on a vNetwork Standard Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 12).

A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional. For a port group to reach port groups located on other VLANs, the VLAN ID must be set to 4095. If you use VLAN IDs, you must change the port group labels and VLAN IDs together so that the labels properly represent connectivity.

Determine use case for a vNetwork Standard Switch (same as vSphere 4.x)

Both standard (vSS) and distributed (vDS) switches can exist at the same time. Note that a vDS provides more functions than a vSS, but the vDS license requires the Enterprise Plus edition, so with all other editions the only option is to use vSS.

For a very small environment a vSS could be the simplest choice: everything is controlled on the local host. Basically, you go to Host -> Configuration -> Networking and start everything from there.


vStorage API

First introduced in vSphere, the vStorage APIs are specific functions that permit a tighter integration with external storage features. Note that there are also other integration modules, which are covered in the PSA framework. For more information see:

vSphere Storage API – Storage Awareness (VASA)

This is a new set of APIs introduced in vSphere 5.0 that will enable VMware vCenter Server to detect the capabilities of the storage array LUNs/datastores, making it much easier to select the appropriate disk for virtual machine placement or the creation of datastore clusters. vSphere Storage APIs – Storage Awareness can also provide help in facilitating, for example, the troubleshooting process or conversations between vSphere and storage administrators. Storage capabilities, such as RAID level, thin or thick provisioned, replication state and much more, can now be made visible within VMware vCenter Server either via “system-defined capabilities,” which are per-datastore descriptors, or via attributes that are exposed via Storage Views and SMS API. vSphere Storage APIs – Storage Awareness aims to eliminate the need for maintaining massive spreadsheets detailing the storage capabilities of each LUN that are needed to guarantee the correct SLA to virtual machines.

For other info see: Overview vSphere Storage API for Storage Awareness (VASA) and A Deeper Look at VASA.



vCenter Server availability may be necessary in most cases and is critical for large environments (or where a lot of services depend on it). In small environments this is usually not so critical, because most VMware functions still work without vCenter Server (for more info see vCenter Server Design: Physical vs Virtual).

There are at least four solutions to increase the availability for vCenter Server:

  • use a VM for vCenter Server and use VMware HA
  • use vCenter Server Heartbeat product
  • use an MSCS solution for vCenter (unsupported from this vSphere version)
  • use VMware FT (can work, but it’s unsupported)

See also: http://kb.vmware.com/kb/1024051 – Supported vCenter Server high availability options. Also see this post on Yellow Bricks (Protecting vCenter Server – HA or Heartbeat?), updated in Sep 2012.

VMware HA

For more information see the official VMware HA web page. This solution is only usable when vCenter Server is a VM!

VMware HA requires vCenter Server only for the initial setup and configuration. Then it works on the hosts in a distributed mode, and vCenter Server is not needed to ensure that VMware HA keeps working correctly. For this reason VMware HA can also handle a vCenter Server restart (in this case the downtime could be a few minutes…).

IMHO, for a small/medium environment I prefer the VMware HA solution: very simple and cheap (it’s included in all editions except the Essentials). To keep it simple, be sure to also have the vCenter DB on the same VM, or you will have to find an HA solution for your VCDB as well.

vCenter Server Heartbeat

http://www.vmware.com/products/vcenter-server-heartbeat/

In this case vCenter Server (the primary instance) could be a VM or a physical system. The secondary must be a VM!

Optimize availability and resiliency for VMware vCenter Server in any situation. VMware vCenter Server Heartbeat delivers the maximum uptime for your virtual datacenter infrastructure, ensuring consistent operation, even when VMware vCenter Server is threatened by unplanned or planned downtime.

See also: VMware to tackle vCenter availability with new Server Heartbeat.

MSCS or Failover Cluster

This solution is now unsupported!

See: http://www.vmware.com/pdf/VC_MSCS.pdf and Reference Implementation: Clustering VirtualCenter 2.5 Using Microsoft Cluster Services.

In this case vCenter Server (either the primary or the secondary node) could be a VM or a physical system. To implement a Microsoft cluster solution you also need at least the Windows Server Enterprise Edition (required to build a Microsoft Cluster).

Note that vCenter Server 2.5 is supported only on Windows Server 2003 or Windows Server 2003 R2. Instead vCenter Server 4.0 is supported also on Windows Server 2008.

A similar approach is the use of other cluster solutions, for example Veritas Cluster: http://searchservervirtualization.techtarget.com/news/article/0,289142,sid94_gci1341780,00.html (but this is also not supported).

Note: vCenter Server 4.x has not been qualified with third party clustering products such as Microsoft Clustering Service and Veritas Cluster Services. VMware does not support third party clustering products.

VMware FT?

This solution is now unsupported!

Why not use VMware FT for virtual vCenter Server availability? Simply because vCenter Server 4.0 requires a minimum of 2 vCPUs (especially if you also have a local DB).

VMware FT currently works only with single-vCPU VMs. So either you keep an unsupported vCenter Server configuration (because for a small environment vCenter Server may “work” also with a single vCPU), or you cannot use VMware FT.

For more information on FT see: VMware FT

For a list of the solutions to increase the availability for vCenter Server see this dedicated page.


Since VI 3.x, vCenter Server can be deployed either on a physical server or in a virtual machine (both solutions are supported by VMware).

This page analyzes the pros and cons of each approach.
