
In my briefing with Actifio during the last Powering the Cloud event, I spoke with Andrew Gilman (Senior Director of Global Marketing) and Ann Karolin Thueland (Director of Marketing EMEA).

The company is a so-called storage "start-up", but it already has its own history (it was founded by Ash Ashutosh in July 2009) and, notably, was born in Massachusetts rather than the "usual" Silicon Valley. Furthermore, compared to many other start-ups that build solutions similar to each other (at least from a conceptual point of view), here the founders identified a specific need and proposed a solution to satisfy it.
They found that most companies were spending a great deal (or too much) on "cold data" such as backups and archives, and even more if we extend this analysis to all the other duplicated data, called "copy data", such as snapshots, data for test and development environments, and so on.

The growth of this kind of data can become critical: starting from 2006, storage for production data has grown linearly, while copy data has grown exponentially. In 2013, the disk cost of copy data even surpassed that of production data (IDC Insight document, March 2013, #239875)! So they invented a way to reduce the space consumed by creating a "Copy Data Management" solution.

Their unique approach is to leave the production storage as is (also preserving existing investments and avoiding the "common" enterprise storage game) and provide a way to handle the copy data. Actifio virtualizes data management to deliver a solution that decouples the management of data from the storage, network and server infrastructure, using this approach:

  1. Capture changes once
  2. Store and move only unique deduped data
  3. Instantly access virtual copies for any use
  4. Single pane, app-centric, SLA-driven
  5. A single, comprehensive solution across your whole environment: virtual / physical, cloud / local, 3rd-party appliances and storage vendors
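Step 2 above ("store and move only unique deduped data") is the heart of any copy data management system. As a minimal sketch of how content-addressed deduplication works in general (this is an illustration of the technique, not Actifio's actual implementation; `DedupStore`, `CHUNK_SIZE` and fixed-size chunking are my own simplifications):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking keeps the sketch simple

def chunk(data, size=CHUNK_SIZE):
    """Split a byte stream into fixed-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

class DedupStore:
    """Content-addressed store: each unique chunk is kept exactly once,
    keyed by its SHA-256 digest."""

    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes

    def ingest(self, data):
        """Store a stream; return its 'recipe' (ordered chunk digests)
        and the number of bytes that were actually new."""
        recipe, new_bytes = [], 0
        for c in chunk(data):
            digest = hashlib.sha256(c).hexdigest()
            if digest not in self.chunks:
                self.chunks[digest] = c   # store only unseen chunks
                new_bytes += len(c)
            recipe.append(digest)
        return recipe, new_bytes

    def restore(self, recipe):
        """Rebuild the original stream from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)
```

Ingesting a second, nearly identical copy stores only the chunks that differ, which is why many copies of the same application data can fit in a fraction of their nominal size.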

It is software-defined, application-centric, and SLA-driven. By virtualizing copy data and creating a single "gold copy" available for instant access and recovery, Actifio dramatically improves service levels while eliminating the need for separate tools.

Their solution, now at version 6.0 (announced last August), is completely software-based, runs on standard (commodity) hardware, and is designed to maximize data capture (also from particular physical environments such as IBM iSeries), reduce data movement, and optimize storage capacity using global deduplication and background compression.

The data capture technique depends on the source type: for VMware environments, the Changed Block Tracking (CBT) function of the vStorage APIs for Data Protection is used; for databases, RMAN (for Oracle) or specific application APIs are used; for file servers, and for physical servers in general, there is an Actifio Connector that also provides snapshots.
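The idea behind CBT-style capture can be sketched in a few lines: the disk keeps a bitmap of blocks written since the last capture, so each backup pass reads only the delta instead of the full disk. This toy model is my own illustration of the concept, not VMware's or Actifio's code:

```python
class TrackedDisk:
    """Toy disk that records which blocks changed since the last capture,
    mimicking the idea behind Changed Block Tracking (CBT)."""

    def __init__(self, n_blocks, block_size=512):
        self.block_size = block_size
        self.blocks = [bytes(block_size) for _ in range(n_blocks)]
        self.dirty = set()  # indices written since the last capture

    def write(self, index, data):
        assert len(data) == self.block_size
        self.blocks[index] = data
        self.dirty.add(index)  # mark the block in the tracking bitmap

    def capture_changes(self):
        """Return only the blocks written since the previous capture,
        then reset the tracking set ('capture changes once')."""
        delta = {i: self.blocks[i] for i in sorted(self.dirty)}
        self.dirty.clear()
        return delta
```

After a capture the tracking set is empty, so an unchanged disk produces a zero-byte incremental, which is what makes frequent captures affordable.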

For applications and operating systems that are home-grown or lack built-in interfaces for quiescing, Actifio's internal snapshot technology can quickly capture crash-consistent views of the application data. In addition, pre- and post-snapshot scripts can be run to ensure that the data is consistent. Actifio leverages advanced Copy-On-Write (COW) snapshot technology for physical server environments such as Microsoft Windows, Unix, Linux, and IBM i servers. Once the copy data is saved using Actifio snapshot technology, customers can leverage Actifio's advanced copy data management features such as single image, global deduplication, compression, remote replication and instant restore.
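A copy-on-write snapshot does not copy the whole volume up front: it saves a block's original content only the first time the live volume overwrites it, and snapshot reads fall back to the live volume for untouched blocks. A minimal sketch of the mechanism (my own simplification, not Actifio's implementation):

```python
class Volume:
    """Block volume supporting copy-on-write (COW) snapshots."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []

    def snapshot(self):
        """Create a snapshot; initially it stores nothing at all."""
        snap = {"saved": {}}  # index -> original block content
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Copy-on-write: preserve the pre-write content for every
        # snapshot that has not yet saved this block.
        for snap in self.snapshots:
            snap["saved"].setdefault(index, self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, snap, index):
        # Changed blocks come from the snapshot's saved copies,
        # unchanged blocks from the live volume.
        return snap["saved"].get(index, self.blocks[index])
```

The snapshot therefore costs space proportional to how much the volume changes afterwards, not to the volume's full size.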

The interesting aspect (and approach) is providing a single tool for all non-production data, which conceptually makes backup systems and archive solutions redundant (or is at least going to replace them). All with multi-tenant, role-based management, usable from various devices (including tablets and smartphones) or via a REST API.

Obviously, considering that it also acts as a backup and data protection solution, it must provide functions suited to this role. On the one hand, it provides numerous functions to bring data back online: restore, clone, and live clone, including the restore of single files, single emails, SharePoint items, and so on, as well as different applications and entire systems (using Bare Metal Recovery). On the other hand, it must provide data protection functions appropriate for geographical redundancy: in particular, it provides replication functions (based on a proprietary technology, but fully integrated with VMware vCenter Site Recovery Manager):

  • Sync Replication: Synchronous replication can be guaranteed between customer sites up to 300KM apart.
  • Async Replication: Asynchronous replication has no distance limitation, and will send data over the WAN as fast as network bandwidth allows.
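The trade-off between the two modes is easy to see in code: sync replication completes the remote write before acknowledging (bounding data loss at zero but adding latency, hence the distance limit), while async acknowledges immediately and ships writes in the background. A toy model of the two behaviors (illustrative only; `Replicator` and its methods are my own names):

```python
class Replicator:
    """Toy model of sync vs. async replication semantics."""

    def __init__(self, mode):
        assert mode in ("sync", "async")
        self.mode = mode
        self.remote = []   # blocks confirmed at the remote site
        self.pending = []  # blocks queued for the WAN (async only)

    def write(self, data):
        if self.mode == "sync":
            # Remote copy is written before the write is acknowledged.
            self.remote.append(data)
        else:
            # Acknowledge at once; replicate later as bandwidth allows.
            self.pending.append(data)
        return "ack"

    def drain(self):
        """Background flush of the async queue over the WAN."""
        self.remote.extend(self.pending)
        self.pending.clear()
```

In the sync case the remote site is always up to date at the cost of round-trip latency on every write; in the async case an outage can lose whatever is still in the pending queue.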

Equally interesting is the ability to create different types of workflows, usable to implement failover tests, but also to create data copies for developers or test environments. All with a multi-tenant, role-based logic.

For each application it is possible to create (manually or with the auto-discovery function) specific SLAs that define the frequency of snapshots, the class of storage to utilize, the replication frequency and type, and the retention, and then map them to applications and/or hosts.
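Such an SLA template can be pictured as a small policy object mapped to applications or hosts. The field names below are illustrative guesses based on the attributes listed above, not Actifio's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SlaTemplate:
    """Hypothetical SLA template; field names are illustrative only."""
    name: str
    snapshot_every_hours: int  # snapshot frequency
    storage_class: str         # e.g. "snapshot-pool" or "dedup-pool"
    replication: str           # "sync", "async" or "none"
    retention_days: int        # how long copies are kept

# Map SLA templates to applications and/or hosts.
assignments = {}

def apply_sla(target, sla):
    """Attach an SLA template to an application or host name."""
    assignments[target] = sla

gold = SlaTemplate(name="gold", snapshot_every_hours=1,
                   storage_class="snapshot-pool",
                   replication="sync", retention_days=30)
apply_sla("oracle-prod-01", gold)
```

The point of the model is that operators reason about policies ("gold", "silver") rather than about individual backup jobs, and the system derives the jobs from the mapping.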

For more information, there is also a technical white paper that describes how this solution works.

Conclusions

The solution developed, and the proposal behind it, is certainly interesting and could redefine how non-production data is managed, by moving from many different solutions and silos to a single environment. The multi-tenant, role-based, SLA-driven interface allows different teams with different skills to manage their data without having to change the internal organization.

The architecture, however, has to be rethought: from different solutions for different needs to a single, unified one for all the "copy data". Consider also that, with the ability to mount data live, part of the production data (like historical data on a file server or archived emails) could be managed according to this model as well.

Of course, many features are already present in specific solutions for particular contexts (like backup), but the idea of bringing all the different needs together in a single solution is still very innovative and will probably be used as a model for other solutions (there are already some backup products that integrate archiving or DR functions, but that is just one part of the entire "cake").

See also: full report list of Powering The Cloud 2013 event.

About Andrea Mauro (2449 Posts)

Virtualization & Cloud Architect. VMUG IT Co-Founder and board member. VMware VMTN Moderator and vExpert (2010, 2011, 2012, 2013, 2014, 2015). PernixPro 2014. Dell TechCenter Rockstar 2014. MVP 2014. Several certifications including: VCDX-DCV, VCP-DCV/DT/Cloud, VCAP-DCA/DCD/CIA/CID/DTA/DTD, MCSA, MCSE, MCITP, CCA, NPP.

