A few days ago I had an interesting call with Eric Keohan and Jonathan Klick about the Infinio product for storage acceleration in VMware environments.
Infinio is a start-up founded in 2011, when Columbia computer science professor Vishal Misra and his fellow researchers saw the potential for their work to solve real-world problems. Vishal enlisted experienced software industry entrepreneurs and top-notch engineers (including co-founders Arun Agarwal and Dan Rubenstein) to make that vision a reality.
They exited from stealth mode during Tech Field Day 9 and announced their innovative product: a caching appliance on each ESXi host that accelerates reads from NFS datastores in a VMware vSphere environment.
The solution is really interesting and fits into the set of host-side caching solutions designed to accelerate your existing storage. The interesting and almost unique point is that deployment is really fast (only a few minutes) and it can be activated (or deactivated) even in a production environment, without any outage, new device drivers, or host reboots, making it a truly zero-downtime solution.
Another interesting point is that it uses RAM to accelerate the storage, gaining SSD-like performance without SSDs!
Basically the deployment requires just a few simple steps:
- Download the Infinio Accelerator installer (it can be used in trial mode for 30 days and removed without any outage)
- Discover your virtual infrastructure and choose which NFS datastore you want to accelerate (currently only one)
- Deploy and enable the Accelerator engines and get better performance from the storage system you already have
Currently this solution is designed only for NFS datastores and only for VMware vSphere, but a block-level datastore version is planned for the future (and probably support for other hypervisors as well).
The deployment approach is also really interesting: only one management IP is needed for the entire solution; each VA (one per host) uses an auto-generated IP (in the APIPA range) on a dedicated VLAN (basically it is built on the vMotion network, which provides an isolated and suitable network for connecting all the hosts). To configure the VA as a real “NFS gateway” without interruption, the VMkernel NFS interface is moved to a new VLAN (usually the last one available, i.e. 4094), where the VA receives the requests and then passes them to the real NFS target. Currently, due to this auto-deployment approach, distributed virtual switches are not yet supported.
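To give an idea of what “auto-generated IPs in the APIPA range” means, here is a minimal Python sketch of picking link-local addresses from 169.254.0.0/16 (the APIPA range defined by RFC 3927). This is just an illustration of the general technique, not Infinio’s actual address-assignment code; the function name and logic are my own assumptions.

```python
import ipaddress
import random

# The APIPA / IPv4 link-local range (RFC 3927).
APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def random_apipa(rng: random.Random) -> ipaddress.IPv4Address:
    # Hypothetical generator: RFC 3927 reserves the first and last /24
    # (169.254.0.x and 169.254.255.x), so we keep the third octet in 1..254.
    third = rng.randint(1, 254)
    fourth = rng.randint(1, 254)
    return ipaddress.IPv4Address(f"169.254.{third}.{fourth}")

rng = random.Random()
addr = random_apipa(rng)
assert addr in APIPA_NET
print(addr)
```

Because the addresses are link-local, they never conflict with the routed management network, which is why a single management IP suffices for the whole cluster.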
All the VAs build a shared and deduplicated global cache, and there are some graphical tools to monitor its efficiency and effectiveness.
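The idea behind a deduplicated cache can be sketched in a few lines of Python: blocks are indexed by a content hash, so identical blocks (from any datastore or VM) are stored in RAM only once. This is purely an illustration of the general technique, not Infinio’s actual implementation; the class and method names are hypothetical.

```python
import hashlib

class DedupCache:
    """Minimal sketch of a content-addressed, deduplicated read cache."""

    def __init__(self):
        self._blocks = {}   # content digest -> block data (stored once)
        self._index = {}    # (datastore, offset) -> content digest

    def put(self, datastore: str, offset: int, block: bytes) -> None:
        digest = hashlib.sha1(block).hexdigest()
        # Identical content maps to the same digest, so it is stored only once.
        self._blocks.setdefault(digest, block)
        self._index[(datastore, offset)] = digest

    def get(self, datastore: str, offset: int):
        digest = self._index.get((datastore, offset))
        return self._blocks.get(digest) if digest else None

    def unique_blocks(self) -> int:
        return len(self._blocks)

cache = DedupCache()
cache.put("ds1", 0, b"A" * 4096)
cache.put("ds1", 4096, b"A" * 4096)   # duplicate content: cached only once
cache.put("ds2", 0, b"B" * 4096)
print(cache.unique_blocks())  # -> 2
```

Three cached block locations, but only two unique blocks in memory: that is why deduplication makes a RAM-only cache practical.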
Note that each VA uses 2 vCPUs and 8 GB of vRAM (currently a fixed value, chosen to minimize the impact on host resources) plus 15 GB of local datastore space (currently you need a local datastore on each host; in the next version a shared datastore will also become a valid option).
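Since there is one VA per host, the total overhead scales linearly with cluster size. A quick back-of-the-envelope calculation, using only the per-VA figures above (the helper function itself is just for illustration):

```python
# Per-VA footprint as stated in the post.
VCPU_PER_VA = 2
VRAM_GB_PER_VA = 8
DISK_GB_PER_VA = 15

def cluster_overhead(hosts: int) -> dict:
    # One VA per ESXi host, so totals scale linearly with cluster size.
    return {
        "vcpu": hosts * VCPU_PER_VA,
        "vram_gb": hosts * VRAM_GB_PER_VA,
        "disk_gb": hosts * DISK_GB_PER_VA,
    }

print(cluster_overhead(8))  # -> {'vcpu': 16, 'vram_gb': 64, 'disk_gb': 120}
```

So an 8-host cluster gives up 64 GB of RAM in total, in exchange for a 64 GB distributed read cache (before deduplication gains).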
Use cases are several, from VDI environments to SMB or general storage acceleration scenarios. Really interesting is that the licensing is very simple and the price is reasonable ($499 per socket); combined with some unique characteristics, this makes the product really interesting (and it can become even more so once it also supports iSCSI storage).
There are also some interesting videos that help in understanding this product and its approach. One is this introduction video:
Another is the one on vDestination, where Gregg Stuard sits down for a chat with Peter Smith from Infinio Systems: