
My briefing with Avere Systems, during the last Powering the Cloud event, was with Rebecca Thompson (VP Marketing) and Bernhard “Bernie” Behn (Principal Technical Marketing Engineer).

Avere Systems brings to market NAS optimization solutions designed specifically to scale performance and capacity separately and to take advantage of new flash-based storage media through real-time tiering.

The company was founded in 2008 by a team of seasoned storage experts. Its President and CEO, Ronald Bianchini, Jr., was previously a Senior Vice President at NetApp; before that, he was CEO and co-founder of Spinnaker Networks, which developed the storage grid architecture later acquired by NetApp. The CTO, Michael Kazar, worked on versions of Carnegie Mellon University’s Andrew File System as well as the Andrew Toolkit, an OLE-like windowing toolkit.

The headquarters are in Pittsburgh (PA), and they also have an EMEA office in the UK.

We mainly discussed their product for reducing NAS storage latency.

In a traditional storage solution there are several sources of latency:

  • HDD latency (can be mitigated with faster disks)
  • Storage CPU latency (queuing latency, can be mitigated with a mid/high-end array)
  • Geographic latency (depending on distance and bandwidth)

All the traditional ways to reduce latency increase storage costs.
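Why caching attacks this cost problem can be seen with a little arithmetic. As a minimal sketch (the latency figures below are hypothetical, not Avere's numbers), the latency a client sees is a weighted average of cache hits and misses:

```python
def effective_latency_ms(hit_rate, cache_ms, backend_ms):
    """Weighted average latency seen by clients for a given cache hit rate."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * backend_ms

# Illustrative values: 0.5 ms for a flash cache hit, 10 ms for an HDD-backed filer.
print(effective_latency_ms(0.9, 0.5, 10.0))  # 90% hit rate: close to flash latency
print(effective_latency_ms(0.0, 0.5, 10.0))  # no cache: full back-end latency
```

With a high hit rate, most requests never pay the back-end (or geographic) latency at all, which is cheaper than making the whole back-end fast.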

In the Avere solution there are two different roles:

  • Edge Filer (close to the host side): reduces latency, increases performance (by accelerating read, write, and metadata operations), and gains linear performance scaling through clustering.
  • Core Filer (the traditional storage): can be capacity-optimized, and heterogeneous vendors can be mixed in the same cluster.

Compared to other flash solutions, this approach solves several issues:

  • Filer-side flash: flash is attached to specific filers, and the hottest virtual machines must be placed on them. Some flash may go unused, and it cannot accelerate remote filers.
  • Server-side flash: flash is attached to specific ESXi servers (or other virtualization hosts), and the hottest applications must be located on those hosts. Caching can only be read-only (due to the lack of high-availability protection for write-cached data), and it requires custom drivers.

With the Avere FXT Edge Filer there is a Global Flash Pool that performs read/write caching not only for data but also for metadata, because the edge terminates the NAS protocol and acts as a “proxy” for the core filer (in this way locking can also be managed by the edge filer).
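The proxy idea above can be sketched in a few lines. This is a toy model, not Avere's implementation: the class and path names are invented, and real edge filers also handle write-back, locking, and invalidation. It only shows how terminating requests at the edge lets both data and metadata be cached, so only misses travel to the core filer:

```python
class CoreFiler:
    """Stand-in for the back-end NAS: authoritative but far away/slow."""
    def __init__(self):
        self.files = {"/vm1.vmdk": b"disk-image-bytes"}  # hypothetical content
    def read(self, path):
        return self.files[path]
    def getattr(self, path):
        return {"size": len(self.files[path])}

class EdgeFiler:
    """Terminates client requests; caches data and metadata from the core."""
    def __init__(self, core):
        self.core = core
        self.data_cache = {}
        self.attr_cache = {}
    def read(self, path):
        if path not in self.data_cache:        # miss: forward to the core filer
            self.data_cache[path] = self.core.read(path)
        return self.data_cache[path]           # hit: served locally
    def getattr(self, path):
        if path not in self.attr_cache:        # metadata is cached too
            self.attr_cache[path] = self.core.getattr(path)
        return self.attr_cache[path]

edge = EdgeFiler(CoreFiler())
edge.read("/vm1.vmdk")   # first read goes to the core filer
edge.read("/vm1.vmdk")   # second read is served from the edge cache
```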

With more than two nodes, it is possible not only to scale the solution (with linear scaling up to 50 nodes), but also to have an HA configuration where each node has another as a peer, yet all are seen as one, with a transparent global namespace based on the A3 architecture.
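The internals of the A3 architecture are not described in this briefing, but a generic way several nodes can present a single namespace is deterministic placement: every node (and client-facing request router) hashes a path to the same owning node, so the cluster behaves as one system. A minimal sketch under that assumption:

```python
import hashlib

def owner_node(path, nodes):
    """Deterministically map a path to one node so every node agrees on ownership."""
    digest = hashlib.md5(path.encode()).digest()
    return nodes[int.from_bytes(digest, "big") % len(nodes)]

nodes = ["edge1", "edge2", "edge3"]  # hypothetical cluster members
print(owner_node("/vm1.vmdk", nodes))
```

Because the mapping is a pure function of the path, no lookup table has to be kept consistent across nodes for reads of already-placed data.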

It is also possible to reach optimal utilization of the Edge Filer capacity: considering the 100:1 off-load typical for VMware, only 1% of the traffic transits the network. All the current use cases are based on VMware as the virtualization solution, and they are also developing specific off-load features.
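The 100:1 figure translates directly into network traffic: for every 100 operations served at the edge, one is forwarded to the core. A one-line check of that arithmetic:

```python
offload_ratio = 100                 # edge-served ops per op forwarded to the core
core_fraction = 1 / offload_ratio   # share of traffic that crosses the network
print(f"{core_fraction:.0%} of the traffic transits the network")
```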

On the front-end side it currently supports SMB2 (maybe SMB3 in the future) and NFS. On the back-end side, currently only NFS (in the future they may also implement specific protocol support, for example for public cloud storage solutions).

One interesting use case for this type of solution is the ROBO (Remote Office/Branch Office) scenario: here we can have central storage at the headquarters and several ESXi + Edge Filer nodes in each remote office!

For more information, see also the full report list of the Powering The Cloud 2012 event.

About Andrea Mauro

Virtualization & Cloud Architect. VMUG IT Co-Founder and board member. VMware VMTN Moderator and vExpert (2010, 2011, 2012, 2013, 2014, 2015). PernixPro 2014. Dell TechCenter Rockstar 2014. MVP 2014. Several certifications including: VCDX-DCV, VCP-DCV/DT/Cloud, VCAP-DCA/DCD/CIA/CID/DTA/DTD, MCSA, MCSE, MCITP, CCA, NPP.

