Objective 3.3 – Create and Configure VMFS and NFS Datastores

See also these similar posts: Objective 3.3 – Create and Configure VMFS and NFS Datastores and Objective 3.3 – Create and Configure VMFS and NFS Datastores.

Identify VMFS and NFS Datastore properties (similar to vSphere 4.x)

See the vSphere Storage Guide (page 21). NFS has mostly the same functions (including the hardware acceleration introduced in vSphere 5), but it still cannot provide an RDM disk, and for this reason cannot be used for a guest cluster solution (like Microsoft MSCS).

Identify VMFS5 capabilities (new in vSphere 5)

See the vSphere Storage Guide (page 114) and vSphere 5.0 Storage Features Part 1 – VMFS-5.

VMFS5 provides many improvements in scalability and performance over the previous version. The following improvements can be found in VMFS5:

  • GPT partition table (see also vSphere 5.0 Storage Features Part 7 – VMFS-5 & GPT) is used for newly formatted VMFS5 datastores.
  • Unified block size (1MB for newly formatted VMFS5 datastores); previous versions of VMFS used 1, 2, 4 or 8MB file blocks.
  • Larger single extent volumes: in previous versions of VMFS, the largest single extent was 2TB. With VMFS-5, this limit has been increased to ~ 60TB.
  • Smoother upgrade path with on-line, in-place upgrade.
  • Scalability improvements using VAAI and the ATS enhancement.
  • Increased resource limits, such as file descriptors; smaller sub-blocks; small file support; increased file count.
  • Mount and unmount workflow in the vSphere Client.

Differences between newly created and upgraded VMFS-5 datastores:

  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size which may be larger than the unified 1MB file block size.
  • VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not the new 8K sub-blocks.
  • VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30720 rather than the new file limit of > 100000 for newly created VMFS-5.
  • VMFS-5 upgraded from VMFS-3 continues to use the MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically and seamlessly switches from MBR to GPT (GUID Partition Table) with no impact to the running VMs.
  • VMFS-5 upgraded from VMFS-3 continues to have its partition starting on sector 128; newly created VMFS5 partitions will have their partition starting at sector 2048.

For RDM – Raw Device Mappings:

  • There is now support for passthru RDMs to be ~ 60TB in size.
  • Both upgraded VMFS-5 & newly created VMFS-5 support the larger passthru RDM.

Note that:

  • The maximum size of a VMDK on VMFS-5 is still 2TB - 512 bytes.
  • The maximum size of a non-passthru (virtual) RDM on VMFS-5 is still 2TB - 512 bytes.
  • The maximum number of LUNs that are supported on an ESXi 5.0 host is still 256.
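
To check these properties in an existing environment, the VMFS version and block size of each datastore can be read through the vSphere API. Below is a minimal sketch using pyVmomi (the Python SDK for the vSphere API, not mentioned in the blueprint); the vCenter hostname and credentials are placeholders.

```python
# Minimal pyVmomi sketch: list VMFS datastores with their version, block size and extent count.
# Connection details are placeholders; depending on the pyVmomi version an sslContext
# argument may be needed for self-signed certificates.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != 'VMFS':
            continue
        vmfs = ds.info.vmfs  # HostVmfsVolume: version, blockSizeMb, extent, ...
        print('%-20s VMFS %-6s block size %d MB, %d extent(s)'
              % (ds.name, vmfs.version, vmfs.blockSizeMb, len(vmfs.extent)))
finally:
    Disconnect(si)
```

A datastore upgraded from VMFS-3 will typically still report its old block size (2, 4 or 8MB), while a newly created VMFS-5 datastore reports 1MB.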

Create/Rename/Delete/Unmount a VMFS Datastore (same as vSphere 4.x)

See the vSphere Storage Guide (page 115, 129, 123 and 131).

Mount/Unmount an NFS Datastore (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Extend/Expand VMFS Datastores (same as vSphere 4.x)

See the vSphere Storage Guide (page 119). Note that two methods are available (as in vSphere 4.x):

  • Add a new extent (same as VI 3.x): an extent is a partition on a storage device, or LUN. You can add up to 32 new extents of the same storage type to an existing VMFS datastore. The spanned VMFS datastore can use any of all its extents at any time; it does not need to fill up a particular extent before using the next one.
  • Grow an extent in an existing VMFS datastore (the better and cleaner solution), so that it fills the available adjacent capacity. Only extents with free space immediately after them are expandable (see the sketch after this list).
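
As an illustration of the "grow" method, the host's datastore system exposes QueryVmfsDatastoreExpandOptions and ExpandVmfsDatastore in the vSphere API. The sketch below is only a hedged example (host selection and the datastore name are placeholders) and assumes the LUN behind the datastore has already been enlarged on the storage side.

```python
# pyVmomi sketch: grow a VMFS datastore into the adjacent free space of its extent.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = hosts.view[0]                                   # first host, for illustration only
    ds = next(d for d in host.datastore if d.name == 'datastore1')
    ds_system = host.configManager.datastoreSystem
    options = ds_system.QueryVmfsDatastoreExpandOptions(datastore=ds)
    if options:
        # Each option carries a ready-to-use expand spec for an extent with adjacent free space.
        ds_system.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)
        ds.RefreshDatastore()
        print('New capacity: %.1f GB' % (ds.summary.capacity / 1024.0 ** 3))
    else:
        print('No adjacent free space available for this datastore.')
finally:
    Disconnect(si)
```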

Upgrade a VMFS3 Datastore to VMFS5 (new in vSphere 5)

See the vSphere Storage Guide (page 121). For VMFS3 the upgrade can be done with VMs running on the datastore (different from the VMFS2 to VMFS3 upgrade). Of course all hosts accessing the datastore must support VMFS5.

Place a VMFS Datastore in Maintenance Mode (new in vSphere 5)

See the vSphere Resource Management Guide (page 83). Maintenance mode is available to datastores within a Storage DRS-enabled datastore cluster. Standalone datastores cannot be placed in maintenance mode.

Select the Preferred Path for a VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 158).

Disable a path to a VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 163).

Determine use case for multiple VMFS/NFS Datastores (similar to vSphere 4.x)

Understand the limits of NFS (quite few remain now), but also its benefits. See also: http://technodrone.blogspot.com/2010/01/vmfs-or-nfs.html.

Determine appropriate Path Selection Policy for a given VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 159).

Objective 3.2 – Configure the Storage Virtual Appliance for vSphere

This objective is related to a new product: the VMware Storage Virtual Appliance 1.0 (SVA, usually also called VSA) for vSphere 5. Note that this product is not included in the vSphere editions, but is a separate product with its own specific license (see the VMware site for more details).

Note: to build a virtual environment to test the VSA, see the blog post Getting the VMware VSA running in a nested ESXi environment.

See also this post: Objective 3.2 – Configure the Storage Virtual Appliance for vSphere.

Define SVA architecture

See the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 9) and the VMware vSphere Storage Appliance Technical Whitepaper (page 3) and vSphere Storage Appliance (VSA) – Introduction.

VSA can be deployed in a two-node or three-node configuration. Collectively, the two or three nodes in the VSA implementation are known as a VSA storage cluster. Each VMware ESXi server has a VSA instance deployed to it as a virtual machine. The VSA instance will then use the available space on the local disk(s) of the VMware ESXi servers to present one mirrored NFS volume per VMware ESXi. The NFS volume is presented to all VMware ESXi servers in the datacenter.

Configure ESXi hosts as SVA hosts

The VSA can be deployed in two configurations:

  • 3 x VMware ESXi 5.0 server configuration
  • 2 x VMware ESXi 5.0 server configuration

The two different configurations are identical in the way in which they present storage. The only difference is in the way that they handle VSA storage cluster membership. The two-node VSA configuration uses a special VSA cluster service, which runs on the VMware vCenter Server. This behaves as a cluster member and is used to make sure that there is still a majority of members in the cluster, should one VMware ESXi server VSA member fail. There is no storage associated with the VSA cluster service.

Configure the storage network for the SVA

See the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 21). It requires two different networks (a front-end and a back-end network).

Deploy/Configure the SVA Manager

A VSA installation is started by installing the VSA manager software on an existing VMware vCenter Server. Once completed, a vSphere client is opened and pointed to the VMware vCenter Server, and the VSA manager plug-in is enabled. This creates a VSA manager tab in the vSphere client.

NOTE: In this release of VSA, an instance of VMware vCenter Server 5.0 can only manage a single VSA storage cluster.

Administer SVA storage resources

See the vSphere Storage Appliance 1.0 Administration.

Determine use case for deploying the SVA

VSA enables users to get the full range of vSphere features, including vSphere HA, vMotion, and vSphere DRS, without having to purchase a physical storage array to provide shared storage, making VSA a very cost-effective solution (IMHO the price is not so cost-effective, even in the bundle with Essentials Plus).

VSA is very easy to deploy. Many of the configuration tasks, such as network setup and vSphere HA deployment, are done by the installer. The benefit here is that this product can be deployed by customers who might not be well versed in vSphere. It gives them a good first-time user experience.

VSA is very resilient. If a VMware ESXi server that is hosting one of the VSAs goes down, or one VSA member suffers an outage, then thanks to the redundancy built into the VSA, the NFS share presented from that VSA is automatically and seamlessly failed over to another VSA in the cluster.

Determine appropriate ESXi host resources for the SVA

For calculating the VSA cluster capacity, see the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 15).

For the host requirements, see the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 17). Note that the current HCL is very small and that RAID10 is the only supported RAID configuration for the VSA cluster.
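
As a back-of-the-envelope sizing model (my own simplification, ignoring the VSA formatting and management overhead): local RAID10 halves the raw capacity of each host, and the network mirroring between VSA nodes halves it again, so the usable NFS capacity is roughly a quarter of the total raw disk capacity of the cluster.

```python
# Rough VSA cluster capacity estimate (simplified model, overhead not included).
def vsa_usable_capacity_gb(hosts, disks_per_host, disk_size_gb):
    raw = hosts * disks_per_host * disk_size_gb
    after_raid10 = raw / 2.0            # local RAID10 on each host
    after_mirror = after_raid10 / 2.0   # each NFS volume is mirrored on another VSA node
    return after_mirror

# Example: 3 hosts with 8 x 600 GB disks each -> about 3600 GB usable (before overhead)
print(vsa_usable_capacity_gb(hosts=3, disks_per_host=8, disk_size_gb=600))
```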

Objective 3.1 – Configure Shared Storage for vSphere

See also these similar posts: Objective 3.1 – Configure Shared Storage for vSphere and Objective 3.1 – Configure Shared Storage for vSphere.

Identify storage adapters and devices (similar to vSphere 4.x)

See the vSphere Storage Guide (page 10). Storage adapters provide connectivity for your ESXi host to a specific storage unit (block oriented) or network.
ESXi supports different classes of adapters, including SCSI, SATA, SAS, iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE). ESXi accesses the adapters directly through device drivers in the VMkernel.

Notice that (block oriented) storage can be local (SCSI, SAS or SATA) or SAN oriented (iSCSI, FC, FCoE). Local storage can also be shared with special types of enclosures (like those LSI based), whereas SAN oriented storage is (usually) shared. In order to use several vSphere features (like vMotion, HA, DRS, DPM, …) shared storage is needed.

Note also that RAID functions must be implemented in hardware (VMware does not support software RAID or “fake” RAID at all): either in the host storage controller/adapter (for local, non-shared storage) or in the storage array.

Identify storage naming conventions (similar to vSphere 4.x)

See the vSphere Storage Guide (page 17). Each storage device, or LUN, is identified by several names (a small classification sketch follows the list):

  • Device Identifiers: SCSI INQUIRY identifier (naa.number, t10.number, eui.number) or Path-based identifier (mpx.path)
  • Legacy identifier (vml.number)
  • Runtime Name
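
The following is a small, purely illustrative helper that classifies a device name by its prefix (the identifiers in the example are made up):

```python
# Classify a vSphere storage device identifier by its prefix (illustrative only).
PREFIXES = {
    'naa.': 'SCSI INQUIRY identifier (Network Address Authority)',
    't10.': 'SCSI INQUIRY identifier (T10 vendor specific)',
    'eui.': 'SCSI INQUIRY identifier (Extended Unique Identifier)',
    'mpx.': 'Path-based identifier (device without a unique SCSI identifier)',
    'vml.': 'Legacy VMware identifier',
}

def classify(device_id):
    for prefix, meaning in PREFIXES.items():
        if device_id.startswith(prefix):
            return meaning
    return 'Runtime name or unknown format (e.g. vmhba1:C0:T0:L1)'

print(classify('naa.600508b4000156d700012000000b0000'))
print(classify('mpx.vmhba1:C0:T0:L0'))
print(classify('vmhba1:C0:T3:L1'))
```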

Identify hardware/dependent hardware/software iSCSI initiator requirements (similar to vSphere 4.1)

See the vSphere Storage Guide (page 63) and also http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Compare and contrast array thin provisioning and virtual disk thin provisioning (similar to vSphere 4.x)

See the vSphere Storage Guide (page 183). With ESXi, you can use two models of thin provisioning, array-level (depends on storage functions) and virtual disk-level (using the thin format).

One issue with the array-level approach is that vSphere was previously not aware of the thin provisioning of the LUN, and the storage was not able to track the real usage (and reclaim the free space). With the new VAAI primitives for Thin Provisioning and also with the new VASA APIs this problem is solved.
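
At the datastore level, the effect of virtual disk thin provisioning can be seen by comparing used space with provisioned space (the datastore summary exposes capacity, free space and uncommitted space). A minimal pyVmomi sketch, with placeholder connection details:

```python
# pyVmomi sketch: show used vs. provisioned space per datastore (connection details are placeholders).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

GB = 1024.0 ** 3
si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        used = s.capacity - s.freeSpace
        provisioned = used + (s.uncommitted or 0)   # uncommitted = thin space not yet written
        print('%-20s capacity %7.1f GB  used %7.1f GB  provisioned %7.1f GB'
              % (ds.name, s.capacity / GB, used / GB, provisioned / GB))
finally:
    Disconnect(si)
```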

Describe zoning and LUN masking practices (same as vSphere 4.x)

See the vSphere Storage Guide (page 32). Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which storage interfaces.
When you configure a SAN (usually FC) by using zoning, the devices outside a zone are not visible to the devices inside the zone. Zoning has the following effects:

  • Controls and isolates paths in a fabric.
  • Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying VMFS data.
  • Can be used to separate different environments, for example, a test from a production environment.

LUN masking is instead a way to hide some LUNs from a host, and it can be applied either on the storage side (usually the preferred solution) or on the host side. The effect is that it can reduce the number of targets and LUNs presented to a host and, for example, make a rescan faster.

Scan/Rescan storage (same as vSphere 4.x)

See the vSphere Storage Guide (page 124). A rescan can be done on each host, but if you want to rescan storage available to all hosts managed by your vCenter Server system, you can do so by right-clicking a datacenter, cluster, or folder that contains the hosts and selecting Rescan for Datastores.

Note that, by default, the VMkernel scans for LUN 0 to LUN 255 for every target (a total of 256 LUNs). You can modify the Disk.MaxLUN parameter to improve LUN discovery speed.
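
Disk.MaxLUN can be changed from the host Advanced Settings in the vSphere Client, or through the host's OptionManager in the API. A hedged pyVmomi sketch (host selection and the new value are placeholders):

```python
# pyVmomi sketch: read and lower Disk.MaxLUN on a host to speed up rescans.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = hosts.view[0]                            # first host, for illustration only
    adv = host.configManager.advancedOption         # OptionManager for the host
    print('Disk.MaxLUN is currently', adv.QueryOptions('Disk.MaxLUN')[0].value)
    # Scan only LUN 0-63 instead of 0-255 (the value type must match the option's expected type).
    adv.UpdateOptions(changedValue=[vim.option.OptionValue(key='Disk.MaxLUN', value=64)])
finally:
    Disconnect(si)
```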

Identify use cases for FCoE (new in vSphere 5)

See the vSphere Storage Guide (page 21 and 37). The Fibre Channel over Ethernet (FCoE) protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10Gbit lossless Ethernet to deliver Fibre Channel traffic.

In vSphere 5 two types of adapters can be used: a Converged Network Adapter (hardware FCoE, like the hardware-independent iSCSI adapter) or a NIC with FCoE support (software FCoE, similar to the hardware-dependent iSCSI adapter).

Create an NFS share for use with vSphere (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Connect to a NAS device (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Note: in vSphere 5 there is Hardware Acceleration on NAS Devices (but it must be provided by the storage vendor). NFS datastores with Hardware Acceleration and VMFS datastores support the following disk provisioning policies: Flat disk (the old zeroedthick format), Thick Provision (the old eagerzeroedthick) and Thin Provision. On NFS datastores that do not support Hardware Acceleration, only the thin format is available.

Enable/Configure/Disable vCenter Server storage filters (same as vSphere 4.x)

See the vSphere Storage Guide (page 126). This is done by using the vCenter Server advanced settings: config.vpxd.filter.vmfsFilter, config.vpxd.filter.rdmFilter, …
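
These keys are not defined by default: they can be added from Administration > vCenter Server Settings > Advanced Settings, or through the vCenter Server OptionManager. A hedged pyVmomi sketch showing the RDM filter (connection details are placeholders):

```python
# pyVmomi sketch: disable the vCenter RDM storage filter (connection details are placeholders).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
try:
    option_manager = si.RetrieveContent().setting   # vCenter Server advanced settings
    option_manager.UpdateOptions(changedValue=[
        vim.option.OptionValue(key='config.vpxd.filter.rdmFilter', value='false')
    ])
finally:
    Disconnect(si)
```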

Configure/Edit hardware/dependent hardware initiators (similar to vSphere 4.1)

See the vSphere Storage Guide (page 70).

Enable/Disable software iSCSI initiator (same as vSphere 4.x)

See the vSphere Storage Guide (page 72).

Configure/Edit software iSCSI initiator settings (same as vSphere 4.x)

See the vSphere Storage Guide (page 72).

Configure iSCSI port binding (new in vSphere 5)

See the vSphere Storage Guide (page 78). This is required for some types of storage and previously was only possible from the CLI. Now it can simply be set from the vSphere Client.

Enable/Configure/Disable iSCSI CHAP (same as vSphere 4.x)

See the vSphere Storage Guide (page 83).

Determine use case for hardware/dependent hardware/software iSCSI initiator (similar to vSphere 4.1)

See the vSphere Storage Guide (page 63) and also http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Determine use case for and configure array thin provisioning (similar to vSphere 4.x)

See the previous point about thin provisioning.

This page describes the different types of iSCSI initiators supported in vSphere and how to choose and use them:
http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Objective 2.3 – Configure vSS and vDS Policies

See also these similar posts: Objective 2.3 – Configure vSS and VDS Policies and Objective 2.3 – Configure vSS and vDS Policies.

Identify common vSS and vDS policies (similar to vSphere 4.x)

See the vSphere Networking Guide (page 43). Policies set at the standard switch or distributed port group level apply to all of the port groups on the standard switch or to ports in the distributed port group. The exceptions are the configuration options that are overridden at the standard port group or distributed port level.

The blueprint does not cover the NIOC and Monitor policies, but I suggest studying them as well.

Configure dvPort group blocking policies (similar to vSphere 4.x)

For network security policy see: http://vinfrastructure.it/2011/08/vcp5-exam-prep-part-1-4/.

For port blocking policies see the vSphere Networking Guide (page 59).

Configure load balancing and failover policies (similar to vSphere 4.1)

See the vSphere Networking Guide (page 43). Load balancing and failover policies allow you to determine how network traffic is distributed between adapters and how to re-route traffic in the event of adapter failure. The Failover and Load Balancing policies include the following parameters:

  • Load Balancing policy: Route based on the originating virtual port ID, Route based on IP hash (requires EtherChannel at the switch level), Route based on source MAC hash, Use explicit failover order, or Route based on physical NIC load (only for vDS); see the mapping after this list.
  • Failover Detection: Link Status or Beacon Probing (the latter makes sense with at least 3 NICs).
  • Failback: Yes or No.
  • Network Adapter Order: Active, Standby or Unused.
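
For reference when scripting or reading host profiles, this is how (to my understanding) the load balancing labels in the vSphere Client map to the policy identifiers used by the vSphere API:

```python
# Mapping between vSphere Client load balancing labels and the API policy identifiers.
LOAD_BALANCING_POLICIES = {
    'Route based on the originating virtual port ID': 'loadbalance_srcid',
    'Route based on IP hash':                         'loadbalance_ip',        # requires EtherChannel
    'Route based on source MAC hash':                 'loadbalance_srcmac',
    'Use explicit failover order':                    'failover_explicit',
    'Route based on physical NIC load (vDS only)':    'loadbalance_loadbased',
}

for label, api_name in LOAD_BALANCING_POLICIES.items():
    print('%-50s -> %s' % (label, api_name))
```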

Configure VLAN settings (same as vSphere 4.x)

See the vSphere Networking Guide (page 50). In the VLAN ID field of a port group (or distributed port group) set a value between 1 and 4094 (0 disables VLAN tagging and 4095 is the same as trunking all the VLANs).

For a vDS two other options are available: VLAN Trunking (to specify a range) and Private VLAN (for more info see http://kb.vmware.com/kb/1010691).
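
A tiny helper summarizing how a port group interprets the VLAN ID field (illustrative only):

```python
# Interpret the VLAN ID of a (distributed) port group.
def vlan_mode(vlan_id):
    if vlan_id == 0:
        return 'No VLAN tagging (EST: tagging is done by the physical switch)'
    if 1 <= vlan_id <= 4094:
        return 'VST: the vSwitch tags/untags frames with VLAN %d' % vlan_id
    if vlan_id == 4095:
        return 'VGT: all VLANs are passed to the guest, which does its own tagging'
    raise ValueError('Invalid VLAN ID: %d' % vlan_id)

for vid in (0, 105, 4095):
    print(vid, '->', vlan_mode(vid))
```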

Configure traffic shaping policies (same as vSphere 4.x)

See the vSphere Networking Guide (page 54). A traffic shaping policy is defined by average bandwidth, peak bandwidth, and burst size. You can establish a traffic shaping policy for each port group and each distributed port or distributed port group.

ESXi shapes outbound network traffic on standard switches and inbound and outbound traffic on distributed switches. Traffic shaping restricts the network bandwidth available on a port, but can also be configured to allow bursts of traffic to flow through at higher speeds.
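
A simplified way to reason about the three values (my own rough model, not an official formula): traffic above the average rate consumes the burst bonus, so the time a port can stay at peak bandwidth is approximately the burst size divided by the difference between peak and average bandwidth.

```python
# Rough estimate of how long a port can transmit at peak bandwidth (simplified model).
def burst_duration_seconds(average_kbps, peak_kbps, burst_size_kb):
    if peak_kbps <= average_kbps:
        return float('inf')                     # never limited below the configured average
    extra_kbits = burst_size_kb * 8             # burst size is configured in KB
    return extra_kbits / float(peak_kbps - average_kbps)

# Example: average 100000 kbps, peak 200000 kbps, burst 102400 KB -> about 8.2 seconds at peak
print(round(burst_duration_seconds(100000, 200000, 102400), 1))
```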

Enable TCP Segmentation Offload support for a virtual machine (same as vSphere 4.x)

See the vSphere Networking Guide (page 38). TCP Segmentation Offload (TSO) is enabled on the VMkernel interface by default, but must be enabled at the virtual machine level.

Enable Jumbo Frames support on appropriate components (new in vSphere 5)

See the vSphere Networking Guide (page 39). It is now possible to configure the MTU from the GUI also on standard vSwitches and on VMkernel interfaces.

Determine appropriate VLAN configuration for a vSphere implementation (similar to vSphere 4.x)

See the vSphere Networking Guide (page 66). VLAN tagging is possible at different levels: External Switch Tagging (EST), Virtual Switch Tagging (VST), and Virtual Guest Tagging (VGT).

Missing advanced features

The blueprint does not cover Port Mirroring and NetFlow (new in vSphere 5), but I suggest studying them as well. Note that a new (and standard) discovery protocol (LLDP) has also been added (but only to the vDS).

Objective 2.2 – Configure vNetwork Distributed Switches

See also these similar posts: Objective 2.2 – Configure vNetwork Distributed Switches and Objective 2.2 – Configure vNetwork Distributed Switches.

Identify vNetwork Distributed Switch capabilities (similar to vSphere 4.1)

Official page with features of vDS:

  • Improves visibility into virtual machine traffic through Netflow (New in vDS 5)
  • Enhances monitoring and troubleshooting using SPAN and LLDP (New in vDS 5)
  • Enables the new Network I/O Control (NIOC) feature (now utilizing per VM controls) (New in vDS 5)
  • Simplified provisioning and administration of virtual networking across many hosts and clusters through a centralized interface.
  • Simplified end-to-end physical and virtual network management through third-party virtual switch extensions for the Cisco Nexus 1000V virtual switch.
  • Enhanced provisioning and traffic management capabilities through private VLAN support and bi-directional virtual machine rate-limiting.
  • Enhanced security and monitoring for virtual machines migrated via VMware vMotion through maintenance and migration of port runtime state.
  • Prioritized controls between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic.
  • Load-based teaming for dynamic adjustment of the teaming algorithm so that the load is always balanced across a team of physical adapters on the distributed switch (New in vDS 4.1).

See also: vSphere 5 new Networking features.

Create/Delete a vNetwork Distributed Switch (same as vSphere 4.x)

For this and next points see the vSphere Networking Guide and http://thevirtualheadline.com/2011/07/11/vsphere-vnetwork-distributed-switches/.

Add/Remove ESXi hosts from a vNetwork Distributed Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 21).

Add/Configure/Remove dvPort groups (same as vSphere 4.x)

See the vSphere Networking Guide (page 25).

Add/Remove uplink adapters to dvUplink groups (same as vSphere 4.x)

See the vSphere Networking Guide (page 29).

Create/Configure/Remove virtual adapters (same as vSphere 4.x)

See the vSphere Networking Guide (page 30).

Migrate virtual adapters to/from a vNetwork Standard Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 31) and VMware vNetwork Distributed Switch: Migration and Configuration

Migrate virtual machines to/from a vNetwork Distributed Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 33) and VMware vNetwork Distributed Switch: Migration and Configuration

Determine use case for a vNetwork Distributed Switch (similar to vSphere 4.1)

Objective 2.1 – Configure vNetwork Standard Switches

See also these similar posts: Objective 2.1 – Configure vNetwork Standard Switches and Objective 2.1 – Configure vNetwork Standard Switches and Objective 2.1 – Configure vNetwork Standard Switches.

Identify vNetwork Standard Switch capabilities (same as vSphere 4.x)

See: VMware Virtual Networking Concepts and VMware Virtual Networking.

Create/Delete a vNetwork Standard Switch (similar to vSphere 4.x)

See the vSphere Networking Guide. No need to know the CLI…

Note that the vSwitch MTU can now also be changed from the GUI.

Add/Configure/Remove vmnics on a vNetwork Standard Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 16 and 17). Each physical NIC acts as a vSwitch uplink.

Configure vmkernel ports for network services (similar to vSphere 4.x)

See the vSphere Networking Guide (page 14). The VMkernel port groups can be used for:

  • Management traffic (the port group must be flagged with the “Management” option).
  • vMotion (the port group must be flagged with the “vMotion” option).
  • Fault Tolerance logging (the port group must be flagged with the “Fault Tolerance logging” option).
  • IP storage, like iSCSI (with the software initiator) or NFS; see the sketch after this list for flagging a VMkernel interface for one of these services.
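
To flag an existing VMkernel interface for one of these services, the host exposes a virtual NIC manager in the vSphere API. A hedged pyVmomi sketch (host selection and the vmk device name are placeholders):

```python
# pyVmomi sketch: flag vmk1 for vMotion traffic (host and device names are placeholders).
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = hosts.view[0]                               # first host, for illustration only
    vnic_mgr = host.configManager.virtualNicManager
    # Other nicType values include 'management' and 'faultToleranceLogging'.
    vnic_mgr.SelectVnicForNicType(nicType='vmotion', device='vmk1')
finally:
    Disconnect(si)
```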

Note that the MTU can now also be changed from the GUI.

Add/Edit/Remove port groups on a vNetwork Standard Switch (same as vSphere 4.x)

See the vSphere Networking Guide (page 12).

A VLAN ID, which restricts port group traffic to a logical Ethernet segment within the physical network, is optional. For a port group to reach port groups located on other VLANs, the VLAN ID must be set to 4095. If you use VLAN IDs, you must change the port group labels and VLAN IDs together so that the labels properly represent connectivity.

Determine use case for a vNetwork Standard Switch (same as vSphere 4.x)

Both standard (vSS) and distributed (vDS) switches can exist at the same time. Note that a vDS provides more functions than a vSS, but the vDS requires the Enterprise Plus edition, so for all other editions the only option is to use the vSS.

For a very small environment the vSS could be the simplest choice: everything is controlled on the local host. Basically, you go to the host's Configuration > Networking page and start everything from there.
