
Objective 3.1 – Configure Shared Storage for vSphere

Identify storage adapters and devices (similar to vSphere 4.x)

See the vSphere Storage Guide (page 10). Storage adapters provide connectivity for your ESXi host to a specific storage unit (block oriented) or network.
ESXi supports different classes of adapters, including SCSI, SATA, SAS, iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE). ESXi accesses the adapters directly through device drivers in the VMkernel.

Notice that block-oriented storage can be local (SCSI, SAS or SATA) or SAN-based (iSCSI, FC, FCoE). Local storage can also be shared with a special type of enclosure (like those LSI based), while SAN-based storage is (usually) shared. In order to use several vSphere features (like vMotion, HA, DRS, DPM, …) shared storage is needed.
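
As a quick check from the ESXi Shell (a minimal sketch; the output will obviously depend on your hardware), you can list the storage adapters seen by the VMkernel and the devices (local disks and LUNs) attached to them:

    # List the storage adapters (HBAs) recognized by the VMkernel
    esxcli storage core adapter list

    # List the storage devices (local disks and LUNs) with their identifiers
    esxcli storage core device list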

Note also that RAID functions must be implemented in hardware (VMware supports neither software RAID nor “fake” RAID): either in the host storage controller/adapter (for local, non-shared storage) or in the storage array.

Identify storage naming conventions (similar to vSphere 4.x)

See the vSphere Storage Guide (page 17). Each storage device, or LUN, is identified by several names:

  • Device Identifiers: SCSI INQUIRY identifier (naa.number, t10.number, eui.number) or Path-based identifier (mpx.path)
  • Legacy identifier (vml.number)
  • Runtime Name
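
For example (a sketch from the ESXi Shell; identifiers and runtime names will differ on your host), the device identifiers and the runtime names can be displayed with:

    # Device identifiers (naa.*, t10.*, eui.* or mpx.*) and display names
    esxcli storage core device list

    # Runtime names in the vmhbaAdapter:CChannel:TTarget:LLUN form, one per path
    esxcli storage core path list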

Identify hardware/dependent hardware/software iSCSI initiator requirements (similar to vSphere 4.1)

See the vSphere Storage Guide (page 63) and also http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Compare and contrast array thin provisioning and virtual disk thin provisioning (similar to vSphere 4.x)

See the vSphere Storage Guide (page 183). With ESXi, you can use two models of thin provisioning: array-level (which depends on the storage array functions) and virtual disk-level (using the thin format).

One issue with the array-level approach is that vSphere was previously not aware that the LUN was thin provisioned, and the storage array was not able to track the real usage (and reclaim the freed space). With the new VAAI primitives for Thin Provisioning, and also with the new VASA APIs, this problem is solved.
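
At the virtual disk level, for example, a thin provisioned VMDK can be created from the ESXi Shell with vmkfstools (a minimal sketch; the size and the datastore path are only placeholders and the VM folder must already exist):

    # Create a 10 GB thin provisioned virtual disk on a VMFS datastore
    vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk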

Describe zoning and LUN masking practices (same as vSphere 4.x)

See the vSphere Storage Guide (page 32). Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which storage interfaces.
When you configure a SAN (usually FC) by using zoning, the devices outside a zone are not visible to the devices inside the zone. Zoning has the following effects:

  • Controls and isolates paths in a fabric.
  • Can prevent non-ESXi systems from accessing a particular storage system, and from possibly destroying VMFS data.
  • Can be used to separate different environments, for example, a test from a production environment.

LUN masking is instead a way to hide some LUNs from a host and can be applied either on the storage side (usually the preferred solution) or on the host side. The effect is that it reduces the number of targets and LUNs presented to a host and, for example, makes a rescan faster.
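
On the host side, for example, LUN masking is done with claim rules and the MASK_PATH plugin (a hedged sketch; the rule number and the adapter/channel/target/LUN values are only placeholders):

    # Add a claim rule that assigns a specific LUN to the MASK_PATH plugin
    esxcli storage core claimrule add -r 200 -t location -A vmhba2 -C 0 -T 1 -L 5 -P MASK_PATH

    # Load the new rule set and verify it
    esxcli storage core claimrule load
    esxcli storage core claimrule list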

Scan/Rescan storage (same as vSphere 4.x)

See the vSphere Storage Guide (page 124). A rescan can be done for each host, but if you want to rescan storage available to all hosts managed by your vCenter Server system, you can do so by right-clicking a datacenter, cluster, or folder that contains the hosts and selecting Rescan for Datastores.

Note that, by default, the VMkernel scans from LUN 0 to LUN 255 for every target (a total of 256 LUNs). You can modify the Disk.MaxLUN parameter to improve LUN discovery speed.
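
The same operations can also be done from the ESXi Shell (a minimal sketch; the value 64 for Disk.MaxLUN is only an example):

    # Rescan all the storage adapters of the host
    esxcli storage core adapter rescan --all

    # Reduce the number of LUNs scanned for each target to speed up the rescan
    esxcli system settings advanced set -o /Disk/MaxLUN -i 64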

Identify use cases for FCoE (new in vSphere 5)

See the vSphere Storage Guide (page 21 and 37). The Fibre Channel over Ethernet (FCoE) protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage, but can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.

In vSphere 5 two types of adapters can be used: Converged Network Adapter (hardware FCoE, like the hardware independent iSCSI adapter) or NIC with FCoE support (software FCoE, similar to the hardware dependent iSCSI adapter).
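
As a quick check from the ESXi Shell (a minimal sketch; the activation of a software FCoE adapter is normally done from the vSphere Client on a supported NIC), you can list the FCoE capable NICs and the FCoE adapters already created:

    # List the NICs with FCoE support and the activated FCoE adapters
    esxcli fcoe nic list
    esxcli fcoe adapter list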

Create an NFS share for use with vSphere (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Connect to a NAS device (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Note: in vSphere 5 there is Hardware Acceleration on NAS Devices (but it must be provided by the storage vendor). NFS datastores with Hardware Acceleration and VMFS datastores support the following disk provisioning policies: Flat disk (the old zeroedthick format), Thick Provision (the old eagerzeroedthick) and Thin Provision. On NFS datastores that do not support Hardware Acceleration, only the thin format is available.
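
For example, an NFS export can also be mounted as a datastore from the ESXi Shell (a minimal sketch; the server name, the export path and the datastore name are only placeholders):

    # Mount an NFS export as a datastore and list the NFS mounts of the host
    esxcli storage nfs add -H nfs01.example.local -s /vol/vmware_ds1 -v NFS_DS1
    esxcli storage nfs list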

Enable/Configure/Disable vCenter Server storage filters (same as vSphere 4.x)

See the vSphere Storage Guide (page 126). By using the vCenter Server advanced settings listed below: config.vpxd.filter.vmfsFilter, config.vpxd.filter.rdmFilter, …
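
For reference, these are the filter keys described in the vSphere Storage Guide (all filters are enabled by default; setting a key to false disables the corresponding filter):

    config.vpxd.filter.vmfsFilter                   # VMFS Filter
    config.vpxd.filter.rdmFilter                    # RDM Filter
    config.vpxd.filter.SameHostAndTransportsFilter  # Same Host and Transports Filter
    config.vpxd.filter.hostRescanFilter             # Host Rescan Filter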

Configure/Edit hardware/dependent hardware initiators (similar to vSphere 4.1)

See the vSphere Storage Guide (page 70).

Enable/Disable software iSCSI initiator (same as vSphere 4.x)

See the vSphere Storage Guide (page 72).
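
From the ESXi Shell, for example, the software iSCSI initiator can be enabled (or disabled) with (a minimal sketch of the equivalent commands):

    # Enable the software iSCSI initiator (use --enabled=false to disable it)
    esxcli iscsi software set --enabled=true
    esxcli iscsi software get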

Configure/Edit software iSCSI initiator settings (same as vSphere 4.x)

See the vSphere Storage Guide (page 72).
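
For example, a dynamic discovery (Send Targets) address can also be added from the ESXi Shell (a minimal sketch; the adapter name and the target address are only placeholders):

    # Add a Send Targets address to the software iSCSI adapter
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.100.10:3260

    # Rescan the adapter to detect the new targets/LUNs
    esxcli storage core adapter rescan -A vmhba33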

Configure iSCSI port binding (new in vSphere 5)

See the vSphere Storage Guide (page 78). This is required for some types of storage and previously was only possible from the CLI. Now it can simply be set from the vSphere Client.
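
From the ESXi Shell the same binding can be done with (a minimal sketch; the adapter and the VMkernel interface names are only placeholders):

    # Bind a VMkernel port (vmk1) to the software iSCSI adapter and verify it
    esxcli iscsi networkportal add -A vmhba33 -n vmk1
    esxcli iscsi networkportal list -A vmhba33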

Enable/Configure/Disable iSCSI CHAP (same as vSphere 4.x)

See the vSphere Storage Guide (page 83).
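
For example, unidirectional CHAP can also be configured from the ESXi Shell (a hedged sketch; the adapter name, the CHAP name and the secret are only placeholders, check the exact options in the Storage Guide):

    # Set required, unidirectional CHAP on the software iSCSI adapter
    esxcli iscsi adapter auth chap set -A vmhba33 --direction=uni --level=required --authname=chapuser --secret=chapsecret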

Determine use case for hardware/dependent hardware/software iSCSI initiator (similar to vSphere 4.1)

See the vSphere Storage Guide (page 63) and also http://vinfrastructure.it/vdesign/vstorage-software-vs-hardware-iscsi/

Determine use case for and configure array thin provisioning (similar to vSphere 4.x)

See the previous point about thin provisioning.
