In a physical environment, the term CPU usually refers to the physical package (or socket). The real processing units inside this package are called cores (note that each core can contain multiple ALUs and can appear as multiple logical cores when the hyper-threading feature is enabled). Multiple CPUs usually define an SMP system, multiple cores define a multi-core CPU, and multiple CPUs each with multiple cores define a more complex system (which usually uses the NUMA architecture).

In a virtual environment the term vCPU is used to refer to a core assigned to a VM. Multiple vCPUs define a vSMP system, similar to a physical system with multiple single-core CPUs.

The number of vCPUs that can be assigned is determined by the license edition (for more info see http://www.vmware.com/products/vsphere/buy/editions_comparison.html): for all editions except Enterprise+, the limit was 4 vCPUs in vSphere 4.x and is now 8 vCPUs in vSphere 5; for the Enterprise+ edition it was 8 and is now 32.

But the number of vCPUs can have some impact on guest OS CPU licensing. For example, Windows XP and Windows 7 are limited to 2 “physical” CPUs and cannot use more than this limit… but they can use more cores.

To get around this limit, it is possible to expose to a VM a more complex structure where each vCPU (virtual socket) has more than one core. This can be set with an advanced setting in the vmx file, as sketched below. Note that starting with vSphere 5 this can also be configured from the graphical interface.
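
For example, a minimal sketch of the relevant .vmx entries, assuming a VM that should present a single virtual socket with 4 cores to the guest (the values are only illustrative):

  numvcpus = "4"
  cpuid.coresPerSocket = "4"

The guest sees numvcpus / cpuid.coresPerSocket virtual sockets (here 4/4 = 1 socket with 4 cores), so a 2-socket-limited OS like Windows 7 can still use all 4 vCPUs.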

The list of the vExpert 2011 recipients is now available online. But note that this list may not cover all the vExperts (the list is not automatic: each vExpert must add his own entry) and, of course, it does not include the vExperts of 2009 and 2010.

Objective 3.3 – Create and Configure VMFS and NFS Datastores

See also these similar posts: Objective 3.3 – Create and Configure VMFS and NFS Datastores and Objective 3.3 – Create and Configure VMFS and NFS Datastores.

Identify VMFS and NFS Datastore properties (similar to vSphere 4.x)

See the vSphere Storage Guide (page 21). NFS has almost the same functions (including the hardware acceleration introduced in vSphere 5), but it still cannot implement an RDM disk and, for this reason, cannot implement a guest cluster solution (like Microsoft MSCS).

Identify VMFS5 capabilities (new in vSphere 5)

See the vSphere Storage Guide (page 114) and vSphere 5.0 Storage Features Part 1 – VMFS-5.

VMFS5 provides many improvements in scalability and performance over the previous version. The following improvements can be found in VMFS5:

  • GPT partition table (see also vSphere 5.0 Storage Features Part 7 – VMFS-5 & GPT) is used (for newly formatted VMFS5 datastores)
  • Unified block size (1MB for newly formatted VMFS5 datastores); previous versions of VMFS used 1, 2, 4 or 8MB file blocks.
  • Larger single Extent Volumes: in previous versions of VMFS, the largest single extent was 2TB. With VMFS-5, this limit has been increased to ~ 60TB.
  • Smoother upgrade path with on-line in-place upgrade.
  • Scalability improvements using VAAI and ATS Enhancement.
  • Increased resource limits such as file descriptors, smaller sub-blocks, small file support and an increased file count.
  • Mount and Unmount workflow in the vSphere Client.

Differences between newly created and upgraded VMFS-5 datastores:

  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size, which may be larger than the unified 1MB file block size (a quick check is sketched after this list).
  • VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not the new 8K sub-blocks.
  • VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30720 rather than the new file limit of > 100000 for newly created VMFS-5.
  • VMFS-5 upgraded from VMFS-3 continues to use the MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically and seamlessly switches from MBR to GPT (GUID Partition Table) with no impact on the running VMs.
  • VMFS-5 upgraded from VMFS-3 continues to have its partition starting on sector 128; newly created VMFS-5 partitions have their partition starting at sector 2048.
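
A quick way to check the layout of an existing datastore (assuming ESXi shell access; the datastore name is a placeholder) is vmkfstools:

  ~ # vmkfstools -Ph /vmfs/volumes/my_datastore

The output reports the VMFS version and the file block size, which makes it easy to spot an upgraded VMFS-5 datastore that kept its old (larger) block size.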

For RDM – Raw Device Mappings:

  • There is now support for passthru RDMs to be ~ 60TB in size.
  • Both upgraded VMFS-5 & newly created VMFS-5 support the larger passthru RDM.

Note that:

  • The maximum size of a VMDK on VMFS-5 is still 2TB -512 bytes.
  • The maximum size of a non-passthru (virtual) RDM on VMFS-5 is still 2TB -512 bytes.
  • The maximum number of LUNs that are supported on an ESXi 5.0 host is still 256.

Create/Rename/Delete/Unmount a VMFS Datastore (same as vSphere 4.x)

See the vSphere Storage Guide (pages 115, 129, 123 and 131).
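
From the CLI, creating a VMFS5 datastore can be sketched as follows (the device name, partition number and label are only placeholders):

  ~ # vmkfstools -C vmfs5 -S my_datastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1

The same operations (create, rename, delete, unmount) are normally done from the vSphere Client under Configuration > Storage.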

Mount/Unmount an NFS Datastore (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).
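
From the ESXi shell, mounting and unmounting an NFS datastore can be sketched with esxcli (server, export path and volume name are only placeholders):

  ~ # esxcli storage nfs add --host=nfs-server.local --share=/export/vmstore --volume-name=nfs_datastore01
  ~ # esxcli storage nfs list
  ~ # esxcli storage nfs remove --volume-name=nfs_datastore01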

Extend/Expand VMFS Datastores (same as vSphere 4.x)

See the vSphere Storage Guide (page 119). Note that two methods are available (as in vSphere 4.x):

  • Add a new extent (same as VI 3.x): an extent is a partition on a storage device, or LUN. You can add up to 32 new extents of the same storage type to an existing VMFS datastore. The spanned VMFS datastore can use any or all of its extents at any time; it does not need to fill up a particular extent before using the next one.
  • Grow an extent in an existing VMFS datastore (the better and cleaner solution), so that it fills the available adjacent capacity. Only extents with free space immediately after them are expandable (see the command sketched after this list).
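
After the underlying LUN (and its partition) has been extended, the grow operation can also be done from the CLI; a minimal sketch (the device name is a placeholder and the exact syntax should be checked against the vSphere 5 documentation):

  ~ # vmkfstools --growfs /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1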

Upgrade a VMFS3 Datastore to VMFS5 (new in vSphere 5)

See the vSphere Storage Guide (page 121). For VMFS3 the upgrade can be done with VMs running on it (unlike the VMFS2 to VMFS3 upgrade). Of course, all hosts accessing the datastore must support VMFS5.
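
From the CLI, the upgrade can be sketched with esxcli (the datastore label is a placeholder; this assumes the ESXi 5 esxcli namespace):

  ~ # esxcli storage vmfs upgrade --volume-label=my_datastore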

Place a VMFS Datastore in Maintenance Mode (new in vSphere 5)

See the vSphere Resource Management Guide (page 83). Maintenance mode is available to datastores within a Storage DRS-enabled datastore cluster. Standalone datastores cannot be placed in maintenance mode.

Select the Preferred Path for a VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 158).

Disable a path to a VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 163).
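
From the ESXi shell, a path can also be disabled with esxcli; the path identifier below is a placeholder (use the list command to find the real one):

  ~ # esxcli storage core path list
  ~ # esxcli storage core path set --state=off --path=<path_UID>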

Determine use case for multiple VMFS/NFS Datastores (similar to vSphere 4.x)

Understand the limits of NFS (quite few now), but also its benefits. See also: http://technodrone.blogspot.com/2010/01/vmfs-or-nfs.html.

Determine appropriate Path Selection Policy for a given VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 159).
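
From the CLI, the policy of a single device can be changed with esxcli (the device identifier is a placeholder):

  ~ # esxcli storage nmp device list
  ~ # esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

The available policies are VMW_PSP_FIXED, VMW_PSP_MRU and VMW_PSP_RR; the appropriate one depends on the storage array type and on the vendor recommendation.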

Objective 3.2 – Configure the Storage Virtual Appliance for vSphere

This objective is related to a new product: the VMware Storage Virtual Appliance 1.0 (SVA, usually also called VSA) for vSphere 5. Note that this product is not included in the vSphere editions, but is a separate product with its own specific license (see the VMware site for more details).

Note: to build a virtual environment to test VSA, see the blog post Getting the VMware VSA running in a nested ESXi environment.

See also this post: Objective 3.2 – Configure the Storage Virtual Appliance for vSphere.

Define SVA architecture

See the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 9) and the VMware vSphere Storage Appliance Technical Whitepaper (page 3) and vSphere Storage Appliance (VSA) – Introduction.

VSA can be deployed in a two-node or three-node configuration. Collectively, the two or three nodes in the VSA implementation are known as a VSA storage cluster. Each VMware ESXi server has a VSA instance deployed to it as a virtual machine. The VSA instance will then use the available space on the local disk(s) of the VMware ESXi servers to present one mirrored NFS volume per VMware ESXi. The NFS volume is presented to all VMware ESXi servers in the datacenter.

Configure ESXi hosts as SVA hosts

The VSA can be deployed in two configurations:

  • 3 x VMware ESXi 5.0 server configuration
  • 2 x VMware ESXi 5.0 server configuration

The two different configurations are identical in the way in which they present storage. The only difference is in the way that they handle VSA storage cluster membership. The two-node VSA configuration uses a special VSA cluster service, which runs on the VMware vCenter Server. This behaves as a cluster member and is used to make sure that there is still a majority of members in the cluster, should one VMware ESXi server VSA member fail. There is no storage associated with the VSA cluster service.

Configure the storage network for the SVA

See the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 21). It requires two different networks (a front-end and a back-end network).

Deploy/Configure the SVA Manager

A VSA installation is started by installing the VSA manager software on an existing VMware vCenter Server. Once completed, a vSphere client is opened and pointed to the VMware vCenter Server, and the VSA manager plug-in is enabled. This creates a VSA manager tab in the vSphere client.

NOTE: In this release of VSA, an instance of VMware vCenter Server 5.0 can only manage a single VSA storage cluster.

Administer SVA storage resources

See the vSphere Storage Appliance 1.0 Administration.

Determine use case for deploying the SVA

VSA enables users to get the full range of vSphere features, including vSphere HA, vMotion, and vSphere DRS, without having to purchase a physical storage array to provide shared storage, making VSA a very cost-effective solution (IMHO the price is not so cost-effective, even in the bundle with Essential+).

VSA is very easy to deploy. Many of the configuration tasks, such as network setup and vSphere HA deployment, are done by the installer. The benefit here is that this product can be deployed by customers who might not be well versed in vSphere. It gives them a good first-time user experience.

VSA is very resilient. If a VMware ESXi server that is hosting one of the VSAs goes down, or one VSA member suffers an outage, with the redundancy built into the VSA, the NFS share presented from that VSA will be automatically and seamlessly failed over to another VSA in the cluster.

Determine appropriate ESXi host resources for the SVA

About calculating the VSA Cluster Capacity, see the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 15).

About the requirements for the hosts, see the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 17). Note that the current HCL is very small and that RAID10 is the only supported RAID configuration for the VSA cluster.

Objective 3.1 – Configure Shared Storage for vSphere

See also these similar posts: Objective 3.1 – Configure Shared Storage for vSphere and Objective 3.1 – Configure Shared Storage for vSphere.

Identify storage adapters and devices (similar to vSphere 4.x)

See the vSphere Storage Guide (page 10). Storage adapters provide connectivity for your ESXi host to a specific storage unit (block oriented) or network.
ESXi supports different classes of adapters, including SCSI, SATA, SAS, iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE). ESXi accesses the adapters directly through device drivers in the VMkernel.

Notice that (block oriented) storage can be local (SCSI, SAS or SATA) or SAN oriented (iSCSI, FC, FCoE). Local storage can also be shared with special types of enclosure (like those LSI based), whereas SAN oriented storage is (usually) shared. In order to use several vSphere features (like vMotion, HA, DRS, DPM, …) shared storage is needed.

Note also that RAID functions must be implemented in hardware (VMware does not support software RAID or “fake” RAID at all): either in the host storage controller/adapter (for local, non-shared storage) or in the storage array.

Identify storage naming conventions (similar to vSphere 4.x)

See the vSphere Storage Guide (page 17). Each storage device, or LUN, is identified by several names:

  • Device Identifiers: SCSI INQUIRY identifier (naa.number, t10.number, eui.number) or Path-based identifier (mpx.path)
  • Legacy identifier (vml.number)
  • Runtime Name
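
From the ESXi shell, a minimal sketch to see these identifiers (the output obviously depends on the environment):

  ~ # esxcli storage core device list
  ~ # esxcli storage core path list

The first command shows the device identifiers (naa., t10., eui. or mpx.) and the display name; the second shows the runtime names of the paths (such as vmhba2:C0:T0:L1).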

Identify hardware/dependent hardware/software iSCSI initiator requirements (similar to vSphere 4.1)

See the vSphere Storage Guide (page 63) and also http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Compare and contrast array thin provisioning and virtual disk thin provisioning (similar to vSphere 4.x)

See the vSphere Storage Guide (page 183). With ESXi, you can use two models of thin provisioning, array-level (depends on storage functions) and virtual disk-level (using the thin format).

One issue with the array-level approach is that vSphere was previously not aware of the real nature of the LUN, and the storage was not able to track the real usage (and reclaim the free space). With the new VAAI primitives for Thin Provisioning, and also with the new VASA APIs, this problem is solved.
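
For the virtual disk-level model, creating a thin provisioned VMDK from the CLI can be sketched as follows (path and size are placeholders):

  ~ # vmkfstools -c 20g -d thin /vmfs/volumes/my_datastore/myvm/myvm_data.vmdk

The same choice (thin, zeroedthick or eagerzeroedthick) is normally made in the vSphere Client when the virtual disk is created.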

Describe zoning and LUN masking practices (same as vSphere 4.x)

See the vSphere Storage Guide (page 32). Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which storage interfaces.
When you configure a SAN (usually FC) by using zoning, the devices outside a zone are not visible to the devices inside the zone. Zoning has the following effects:

  • Controls and isolates paths in a fabric.
  • Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying VMFS data.
  • Can be used to separate different environments, for example, a test from a production environment.

LUN masking is instead a way to hide some LUNs from a host and can be applied either on the storage side (usually the preferred solution) or on the host side. The effect is that it can reduce the number of targets and LUNs presented to a host and, for example, make a rescan faster.
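
On the host side, LUN masking is implemented with claim rules and the MASK_PATH plugin; a minimal sketch (rule number, adapter, channel, target and LUN are only placeholders, to be adapted to the paths reported by esxcli storage core path list):

  ~ # esxcli storage core claimrule add --rule 500 --type location --adapter vmhba2 --channel 0 --target 0 --lun 10 --plugin MASK_PATH
  ~ # esxcli storage core claimrule load
  ~ # esxcli storage core claimrule run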

Scan/Rescan storage (same as vSphere 4.x)

See the vSphere Storage Guide (page 124). It can be done per host, but if you want to rescan storage available to all hosts managed by your vCenter Server system, you can do so by right-clicking a datacenter, cluster, or folder that contains the hosts and selecting Rescan for Datastores.

Note that, by default, the VMkernel scans for LUN 0 to LUN 255 for every target (a total of 256 LUNs). You can modify the Disk.MaxLUN parameter to improve LUN discovery speed.
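
From the ESXi shell, a rescan and the Disk.MaxLUN change can be sketched as follows (the value 64 is only an example):

  ~ # esxcli storage core adapter rescan --all
  ~ # esxcli system settings advanced set --option=/Disk/MaxLUN --int-value=64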

Identify use cases for FCoE (new in vSphere 5)

See the vSphere Storage Guide (pages 21 and 37). The Fibre Channel over Ethernet (FCoE) protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10Gbit lossless Ethernet to deliver Fibre Channel traffic.

In vSphere 5 two types of adapters can be used: a Converged Network Adapter (hardware FCoE, like the hardware independent iSCSI adapter) or a NIC with FCoE support (software FCoE, similar to the hardware dependent iSCSI adapter).

Create an NFS share for use with vSphere (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Connect to a NAS device (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Note: in vSphere 5 there is Hardware Acceleration on NAS Devices (but it must be provided by the storage vendor). NFS datastores with Hardware Acceleration and VMFS datastores support the following disk provisioning policies: Flat disk (the old zeroedthick format), Thick Provision (the old eagerzeroedthick) and Thin Provisioning. On NFS datastores that do not support Hardware Acceleration, only the thin format is available.

Enable/Configure/Disable vCenter Server storage filters (same as vSphere 4.x)

See the vSphere Storage Guide (page 126). The filters are controlled by vCenter Server advanced settings such as config.vpxd.filter.vmfsFilter, config.vpxd.filter.rdmFilter, …
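
A minimal sketch of these keys as they appear in the vCenter Server advanced settings (setting a key to false disables the corresponding filter; by default all filters are enabled):

  config.vpxd.filter.vmfsFilter = false
  config.vpxd.filter.rdmFilter = false
  config.vpxd.filter.SameHostAndTransportsFilter = false
  config.vpxd.filter.hostRescanFilter = false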

Configure/Edit hardware/dependent hardware initiators (similar to vSphere 4.1)

See the vSphere Storage Guide (page 70).

Enable/Disable software iSCSI initiator (same as vSphere 4.x)

See the vSphere Storage Guide (page 72).
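
From the ESXi shell, enabling (and checking) the software iSCSI initiator can be sketched as follows:

  ~ # esxcli iscsi software set --enabled=true
  ~ # esxcli iscsi software get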

Configure/Edit software iSCSI initiator settings (same as vSphere 4.x)

See the vSphere Storage Guide (page 72).
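
For example, adding a dynamic discovery (Send Targets) address from the CLI can be sketched as follows (the adapter name and the target IP are placeholders):

  ~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.10
  ~ # esxcli storage core adapter rescan --adapter=vmhba33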

Configure iSCSI port binding (new in vSphere 5)

See the vSphere Storage Guide (page 78). This is required for some types of storage and was previously possible only from the CLI. Now it can simply be set from the vSphere Client.
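
For reference, the CLI equivalent can be sketched as follows (vmk1 and vmhba33 are placeholders for the VMkernel port and the software iSCSI adapter):

  ~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  ~ # esxcli iscsi networkportal list --adapter=vmhba33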

Enable/Configure/Disable iSCSI CHAP (same as vSphere 4.x)

See the vSphere Storage Guide (page 83).

Determine use case for hardware/dependent hardware/software iSCSI initiator (similar to vSphere 4.1)

See the vSphere Storage Guide (page 63) and also http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Determine use case for and configure array thin provisioning (similar to vSphere 4.x)

See the previous point about thin provisioning.

This page describes the different types of iSCSI initiators supported in vSphere and how to choose and use them:
http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Although there isn't yet any news about the VCDX5 certification (and there is only little information about VCAP5-DCA and VCAP5-DCD), I would like to bring to your attention some interesting threads about the VCDX program:

There is also a great podcast about the VCDX Program, with some great questions: Podcast #134 on http://www.talkshoe.com/talkshoe/web/talkCast.jsp?masterId=19367
