
Tarmin is a Data Defined Storage company delivering key advanced technologies in big data and information management, leveraging the promise of cloud computing to create a new storage paradigm. It focuses on providing active archive, Content Addressable Storage (CAS), Information Lifecycle Management (ILM) and intelligent storage software optimized for secondary storage environments.

I met this company for the first time at the Powering the Cloud 2013 event, and more recently at the latest A3 Communications Technology Live! event.

Their product GridBank is a vertical storage solution for very large companies (typical customers are banks and the finance market, but there are also healthcare and oil & gas use cases) that tries to address the main unstructured data challenges:

  • Data Growth: Exponential unstructured data growth driving increased data storage cost
  • Data Silos: Limited access to data between business units, locations & proprietary hardware silos
  • Data Risk: Increasingly stringent regulatory & legal compliance mandates
  • Data Protection: Multiple solutions required for data disaster prevention, retention, disposal & immutability
  • Data Discovery: Restricted search capability across massively complex data repositories
  • Data Value: Storage is seen as a cost center versus a strategic business enabler

The solution provides:

  • A data-centric rather than media-centric approach
  • Linear capacity scaling across x86/64 hardware
  • Security & identity management integrated at the data level
  • Data access & mobility regardless of size, type & geography
  • Unprecedented metadata access, intelligence & insight

It’s a distributed, policy-based solution that can intelligently handle and manage nodes with different compute power; it can start from one node (or better, two) and grow as you need.

The “Grid” is simply a cluster of those nodes, where your information can be stored with a replication factor of 1 (or more).

[Image: Tarmin GridBank]

Erasure coding is not yet implemented, in order to maintain a high level of performance (this is storage for secondary data, but it still holds the main copy of that data!).

The interesting aspect is the application integration, which can be done in several ways and supports many application types. For some of them, like Exchange, no agent is needed at all (Exchange uses the standard MAPI connection). For others, like SharePoint, an agent is needed (or, in SharePoint’s case, recommended) to improve efficiency.

Standard protocols like CIFS, NFS, HDFS, FTP, WebDAV, HTTP, REST, SOAP and S3 are supported for generic file or object access.
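Just to give an idea of what object access over the S3 protocol could look like, here is a minimal Python sketch using boto3 against a generic S3-compatible endpoint. The endpoint URL, credentials and bucket name are purely hypothetical placeholders, and I’m assuming GridBank’s S3 interface behaves like a standard S3-compatible service (the vendor docs would be the reference for the real details).

```python
import boto3

# Hypothetical S3-compatible endpoint exposed by the storage grid
# (placeholder URL, credentials and bucket; not actual GridBank values).
s3 = boto3.client(
    "s3",
    endpoint_url="https://gridbank.example.com:9020",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store an object (e.g. an archived document) in a bucket...
with open("q4-report.pdf", "rb") as f:
    s3.put_object(Bucket="archive", Key="reports/2013/q4-report.pdf", Body=f)

# ...and read it back later, regardless of which node actually holds it.
obj = s3.get_object(Bucket="archive", Key="reports/2013/q4-report.pdf")
data = obj["Body"].read()
```

The same data should also remain reachable through the file protocols (CIFS/NFS and the others listed above), which is the whole point of the multi-protocol access.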

For the public cloud, some of the supported backends are AWS, Azure and Rackspace.

Disclaimer: A3 Communications invited me to this event and also covered my travel expenses, but I have not been compensated for my time and I am not obliged to blog. Furthermore, the content has not been reviewed, approved or published by anyone other than me.
