Reading Time: < 1 minute

After the recent announcement of the new EqualLogic storage, I've noticed some tweets that need a few clarifications.

To take advantage of the new vStorage APIs for vSphere you do not need the new arrays, only a firmware version that supports the required features:

  • VAAI can also be used in vSphere 4.1 (only in the Enterprise and Enterprise+ editions), with either firmware 5.0.x or 5.1.x. The new 5.1 firmware implements a new version of VAAI with an additional feature for thin provisioning (but in this case vSphere 5 is also required).
  • VASA can be used only with vSphere 5 and the new 5.1 firmware, but also with older EqualLogic arrays (check the controller compatibility for the 5.1 upgrade).
  • The new related software, such as the EqualLogic Host Integration Tools for VMware 3.1, is not strictly required to use EqualLogic storage in a virtual environment or to use the vSphere integration.


Reading Time: 2 minutes

Dell announced the new EqualLogic arrays, introducing 2.5-inch drive support that provides greater density and enables users to store more in less space. The new storage arrays include:

  • EqualLogic PS4100 Series — ideal for small-to-medium businesses or remote office locations with growing storage needs. The SANs, supporting up to 36TB in a single array and 72TB in a single group, can scale as storage demands increase by seamlessly adding additional PS4100 or PS6100 arrays into an EqualLogic group.
  • EqualLogic PS6100 Series — designed to provide mid-sized customers with a scalable storage environment, up to 72TB in a single array and 1.2PB in a single group, that can easily accommodate both high-performance and high-capacity drive options. The PS6100 family is ideal for customers looking for a storage solution to support the storage demands of a highly virtualized data center environment where the seamless movement and protection of virtual machines, applications and data is crucial.

With the newly redesigned, compact form factor, PS6100 customers can achieve the same performance for a typical workload using half the number of arrays, and get 50 percent more expansion capacity compared with the previous generation of EqualLogic arrays. Customers can gain up to a 60 percent performance improvement on typical workloads with the EqualLogic PS Series compared to the previous generation. The new controller of the PS6100 series now has 8 GB of cache (the PS6000 series has 4 GB and the PS4000 “only” 2 GB).

These new EqualLogic arrays are equipped with the new 5.1 firmware, which includes new specific APIs for vSphere integration, such as the vSphere Storage APIs – Storage Awareness (VASA) and new vSphere Storage APIs – Array Integration (VAAI) primitives, including thin provisioning awareness. Note that these improvements are firmware related, so they can also be applied to older EqualLogic series (if the controller can be upgraded to the new 5.1 firmware).

Dell also has begun shipping the new EqualLogic FS7500, the company’s latest NAS solution that works with EqualLogic PS Series arrays to deliver the only scale-out, unified storage platform for mid-size deployments. The FS7500 uses the Dell Scalable File System, which offers several advanced features including cache monitoring, load balancing and multi-threading for fast I/O processing.


Reading Time: < 1 minute

Official announcement: Veeam unveils Veeam Backup & Replication v6, extending leadership in virtualization data protection.

Veeam Software today unveiled major enhancements in Veeam Backup & Replication™ v6.

Probably the most evident new feature is the multi-hypervisor support (currently limited to VMware and Windows Server Hyper-V).

Note that vSphere 5 compatibility is not specified, but the related Storage APIs have not changed (or not much), and version 5.0.2 already works “fine” on vSphere 5.

General availability of v6 is expected in Q4 2011. For more information and updates prior to GA, go to http://go.veeam.com/v6-backup-replication.

Reading Time: 2 minutes

In a physical environment, the term CPU is usually used to refer to the physical package (or socket). The real processing units inside this package are called cores (and note that each core can contain more than one ALU and, with the hyper-threading feature, can be seen as more than one logical core). Multiple CPUs usually define an SMP system, multiple cores a multi-core CPU, and multiple CPUs each with multiple cores a more complex system (which usually uses a NUMA architecture).

In a virtual environment, the term vCPU is used to refer to a core assigned to a VM. Multiple vCPUs define a vSMP system, similar to a physical system with multiple CPUs, each with a single core.

The number of vCPUs that can be assigned is determined by the license edition (for more info see http://www.vmware.com/products/vsphere/buy/editions_comparison.html): for all editions except Enterprise+ it was 4 vCPUs in vSphere 4.x and is now 8 vCPUs in vSphere 5; for the Enterprise+ edition it was 8 and is now 32.

But the number of vCPUs can have some impact on guest OS CPU licensing. For example, Windows XP or Windows 7 is limited to only 2 vCPUs seen as “physical” CPUs and cannot use more than this limit… but it can use more cores.

To get around this limit, it is possible to expose to a VM a more complex topology where each virtual socket has more than one core. This can be set with an advanced setting in the vmx file (see the sketch below). Note that starting with vSphere 5 this is also possible from the graphical interface.
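
As an illustration, here is a minimal sketch using the pyVmomi Python bindings (not covered in the original post; host name, credentials and VM name are placeholders) that presents 4 vCPUs to a guest as 2 virtual sockets with 2 cores each; the equivalent vmx advanced option is cpuid.coresPerSocket.

```python
# Minimal sketch, assuming a reachable vCenter (or ESXi host) and the pyVmomi
# bindings; names and credentials are placeholders, certificate handling omitted.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "TestVM")   # hypothetical VM name

spec = vim.vm.ConfigSpec()
spec.numCPUs = 4              # total number of vCPUs
spec.numCoresPerSocket = 2    # guest sees 2 sockets with 2 cores each
                              # (same effect as cpuid.coresPerSocket = "2" in the vmx)
vm.ReconfigVM_Task(spec=spec) # the VM must be powered off for this change

Disconnect(si)
```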

Reading Time: < 1 minute

The list of the vExperts 2011 is now available online. Note that this list may not cover all the vExpert people (the list is not built automatically; each vExpert must add his own entry) and, of course, it does not include the vExperts 2009 and 2010.

Reading Time: 4 minutes

Objective 3.3 – Create and Configure VMFS and NFS Datastores

See also these similar posts: Objective 3.3 – Create and Configure VMFS and NFS Datastores and Objective 3.3 – Create and Configure VMFS and NFS Datastores.

Identify VMFS and NFS Datastore properties (similar to vSphere 4.x)

See the vSphere Storage Guide (page 21). NFS has almost the same functions (including the hardware acceleration introduced in vSphere 5), but it still cannot implement an RDM disk, and for this reason it cannot implement a guest cluster solution (like Microsoft MSCS).

Identify VMFS5 capabilities (new in vSphere 5)

See the vSphere Storage Guide (page 114) and vSphere 5.0 Storage Features Part 1 – VMFS-5.

VMFS5 provides many improvements in scalability and performance over the previous version. The following improvements can be found in VMFS5:

  • GPT partition table (see also vSphere 5.0 Storage Features Part 7 – VMFS-5 & GPT) is used (for newly formatted VMFS5 datastores).
  • Unified block size (1MB for newly formatted VMFS5 datastores); previous versions of VMFS used 1, 2, 4 or 8 MB file blocks.
  • Larger single Extent Volumes: in previous versions of VMFS, the largest single extent was 2TB. With VMFS-5, this limit has been increased to ~ 60TB.
  • Smoother upgrade path with on-line in-place upgrade.
  • Scalability improvements using VAAI and ATS Enhancement.
  • Increased resource limits, such as more file descriptors, smaller sub-blocks, small file support and an increased file count.
  • Mount and Unmount workflow in the vSphere Client.

Differences between newly created and upgraded VMFS-5 datastores (a small sketch to check these properties programmatically follows the list):

  • VMFS-5 upgraded from VMFS-3 continues to use the previous file block size which may be larger than the unified 1MB file block size.
  • VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks and not the new 8KB sub-blocks.
  • VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30720 rather than the new file limit of > 100000 for newly created VMFS-5.
  • VMFS-5 upgraded from VMFS-3 continues to use the MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically and seamlessly switches from MBR to GPT (GUID Partition Table) with no impact to the running VMs.
  • VMFS-5 upgraded from VMFS-3 continues to have its partition starting on sector 128; newly created VMFS5 partitions have their partition starting at sector 2048.
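
Since an upgraded datastore keeps its old block size, a quick way to tell upgraded and newly created VMFS-5 volumes apart is to look at the block size. Here is a minimal sketch with the pyVmomi Python bindings (not part of the original post; connection details are placeholders):

```python
# Minimal sketch, assuming a reachable vCenter and the pyVmomi bindings;
# host name and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)

for ds in view.view:
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):    # skip NFS datastores
        vmfs = ds.info.vmfs
        print("%-20s VMFS %-5s block size %d MB, %d extent(s)"
              % (ds.name, vmfs.version, vmfs.blockSizeMb, len(vmfs.extent)))

Disconnect(si)
```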

For RDM – Raw Device Mappings:

  • There is now support for passthru RDMs to be ~ 60TB in size.
  • Both upgraded VMFS-5 & newly created VMFS-5 support the larger passthru RDM.

Note that:

  • The maximum size of a VMDK on VMFS-5 is still 2TB -512 bytes.
  • The maximum size of a non-passthru (virtual) RDM on VMFS-5 is still 2TB -512 bytes.
  • The maximum number of LUNs that are supported on an ESXi 5.0 host is still 256.

Create/Rename/Delete/Unmount a VMFS Datastore (same as vSphere 4.x)

See the vSphere Storage Guide (page 115, 129, 123 and 131). A minimal example of the rename and unmount operations through the API is sketched below.
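
A minimal sketch with the pyVmomi Python bindings (not part of the original post; names and credentials are placeholders). Unmounting requires, among other conditions, that no VM on the datastore is running on that host.

```python
# Minimal sketch, assuming a reachable vCenter and the pyVmomi bindings;
# names and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "datastore1")   # hypothetical name

ds.Rename_Task(newName="datastore1-renamed")    # rename (runs as a vCenter task)

# Unmount the VMFS volume from the first host that mounts it (vSphere 5 API);
# the datastore must have no running VMs on that host.
host = ds.host[0].key
host.configManager.storageSystem.UnmountVmfsVolume(vmfsUuid=ds.info.vmfs.uuid)

Disconnect(si)
```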

Mount/Unmount an NFS Datastore (same as vSphere 4.x)

See the vSphere Storage Guide (page 128).

Extend/Expand VMFS Datastores (same as vSphere 4.x)

See the vSphere Storage Guide (page 119). Note that two methods are available (as in vSphere 4.x):

  • Add a new extent (same as in VI 3.x): an extent is a partition on a storage device, or LUN. You can add up to 32 new extents of the same storage type to an existing VMFS datastore. The spanned VMFS datastore can use any or all of its extents at any time. It does not need to fill up a particular extent before using the next one.
  • Grow an extent in an existing VMFS datastore (the better and cleaner solution), so that it fills the available adjacent capacity. Only extents with free space immediately after them are expandable (see the sketch after this list for this method).
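
A minimal sketch of the second method (growing an extent) with the pyVmomi Python bindings, connecting directly to a host; names and credentials are placeholders and not part of the original post:

```python
# Minimal sketch, assuming a reachable ESXi host and the pyVmomi bindings;
# names and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret")
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ds = next(d for d in host.datastore if d.name == "datastore1")  # hypothetical name

ds_system = host.configManager.datastoreSystem
options = ds_system.QueryVmfsDatastoreExpandOptions(datastore=ds)
if options:
    # Grow the datastore into the adjacent free space using the first proposed layout
    ds_system.ExpandVmfsDatastore(datastore=ds, spec=options[0].spec)
else:
    print("No extent with adjacent free space found for %s" % ds.name)

Disconnect(si)
```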

Upgrade a VMFS3 Datastore to VMFS5 (new in vSphere 5)

See the vSphere Storage Guide (page 121). For VMFS3 the upgrade can be done with VMs running on it (unlike the VMFS2 to VMFS3 upgrade). Of course, all hosts accessing the datastore must support VMFS5. A small API sketch follows.
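
As a hedged illustration, the in-place upgrade can also be driven through the host storage system API; a minimal pyVmomi sketch (not part of the original post; names are placeholders and the volume path format is assumed to be /vmfs/volumes/<uuid>):

```python
# Minimal sketch, assuming a reachable ESXi 5.0 host and the pyVmomi bindings;
# names and credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret")
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ds = next(d for d in host.datastore if d.name == "datastore1")  # hypothetical name

if ds.info.vmfs.version.startswith("3"):
    # Assumed volume path format: /vmfs/volumes/<vmfs uuid>
    host.configManager.storageSystem.UpgradeVmfs(
        vmfsPath="/vmfs/volumes/" + ds.info.vmfs.uuid)

Disconnect(si)
```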

Place a VMFS Datastore in Maintenance Mode (new in vSphere 5)

See the vSphere Resource Management Guide (page 83). Maintenance mode is available to datastores within a Storage DRS-enabled datastore cluster. Standalone datastores cannot be placed in maintenance mode.

Select the Preferred Path for a VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 158).

Disable a path to a VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 163).

Determine use case for multiple VMFS/NFS Datastores (similar to vSphere 4.x)

Understand the limits of NFS (quite few now), but also the benefits. See also: http://technodrone.blogspot.com/2010/01/vmfs-or-nfs.html.

Determine appropriate Path Selection Policy for a given VMFS Datastore (similar to vSphere 4.x)

See the vSphere Storage Guide (page 159).

Reading Time: 4 minutes

Objective 3.2 – Configure the Storage Virtual Appliance for vSphere

This objective is related to a new product: the VMware Storage Virtual Appliance 1.0 (SVA, usually also called VSA) for vSphere 5. Note that this product is not included in the vSphere editions, but is a separate product with its own specific license (see the VMware site for more details).

Note: to build a virtual environment to test the VSA, see the blog post Getting the VMware VSA running in a nested ESXi environment.

See also this post: Objective 3.2 – Configure the Storage Virtual Appliance for vSphere.

Define SVA architecture

See the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 9) and the VMware vSphere Storage Appliance Technical Whitepaper (page 3) and vSphere Storage Appliance (VSA) – Introduction.

VSA can be deployed in a two-node or three-node configuration. Collectively, the two or three nodes in the VSA implementation are known as a VSA storage cluster. Each VMware ESXi server has a VSA instance deployed to it as a virtual machine. The VSA instance will then use the available space on the local disk(s) of the VMware ESXi servers to present one mirrored NFS volume per VMware ESXi. The NFS volume is presented to all VMware ESXi servers in the datacenter.

Configure ESXi hosts as SVA hosts

The VSA can be deployed in two configurations:

  • 3 x VMware ESXi 5.0 server configuration
  • 2 x VMware ESXi 5.0 server configuration

The two different configurations are identical in the way in which they present storage. The only difference is in the way that they handle VSA storage cluster membership. The two-node VSA configuration uses a special VSA cluster service, which runs on the VMware vCenter Server. This behaves as a cluster member and is used to make sure that there is still a majority of members in the cluster, should one VMware ESXi server VSA member fail. There is no storage associated with the VSA cluster service.

Configure the storage network for the SVA

See the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 21). It requires two different networks (a front-end and a back-end network).

Deploy/Configure the SVA Manager

A VSA installation is started by installing the VSA manager software on an existing VMware vCenter Server. Once completed, a vSphere client is opened and pointed to the VMware vCenter Server, and the VSA manager plug-in is enabled. This creates a VSA manager tab in the vSphere client.

NOTE: In this release of VSA, an instance of VMware vCenter Server 5.0 can only manage a single VSA storage cluster.

Administer SVA storage resources

See the vSphere Storage Appliance 1.0 Administration.

Determine use case for deploying the SVA

VSA enables users to get the full range of vSphere features, including vSphere HA, vMotion, and vSphere DRS, without having to purchase a physical storage array to provide shared storage, making VSA a very cost-effective solution (IMHO the price is not really so cost effective, even in the bundle with Essential+).

VSA is very easy to deploy. Many of the configuration tasks, such as network setup and vSphere HA deployment, are done by the installer. The benefit here is that this product can be deployed by customers who might not be well versed in vSphere. It gives them a good first-time user experience.

VSA is very resilient. If a VMware ESXi server hosting one of the VSAs goes down, or a VSA member suffers an outage, the redundancy built into the VSA ensures that the NFS share presented from that VSA is automatically and seamlessly failed over to another VSA in the cluster.

Determine appropriate ESXi host resources for the SVA

About calculating the VSA cluster capacity, see the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 15); a rough example is sketched below.
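
As a rough, illustrative example (the numbers below are hypothetical, not from the guide): RAID10 halves the raw local capacity on each host, and the VSA then mirrors every exported NFS volume to a second node, so the usable capacity ends up being roughly a quarter of the raw disk space.

```python
# Back-of-the-envelope estimate with hypothetical numbers (3 hosts, 8 x 600 GB disks each)
hosts = 3
disks_per_host = 8
disk_size_gb = 600

raw = hosts * disks_per_host * disk_size_gb   # 14400 GB raw
after_raid10 = raw // 2                       # local RAID10 mirroring: 7200 GB
usable = after_raid10 // 2                    # VSA mirrored NFS volumes: ~3600 GB

print("Raw: %d GB, after RAID10: %d GB, usable: ~%d GB" % (raw, after_raid10, usable))
```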

About the requirements for the hosts, see the vSphere Storage Appliance 1.0 Installation and Configuration Guide (page 17). Note that the current HCL is very small and that RAID10 is the only supported RAID configuration for the VSA cluster.
