StorMagic has announced the full release of StorMagic SvSAN 6.1, which takes the capabilities of SvSAN 6.0 and augments them with a host of new features, including predictive caching, automated storage tiering and a range of new management capabilities.
By triangulating your compute and storage requirements with the patent-pending caching features in SvSAN 6.1, you can simultaneously:
- Reduce footprint
- Boost performance
- Increase capacity
Their patent-pending predictive caching can dramatically increase storage IOPS performance and lower latency for existing server applications. In addition, a suite of new and improved management capabilities further automates and streamlines deployment and monitoring, so you’ll spend even less time on storage management.
New features available in version 6.1 are:
- Multiple VSA GUI deployment & upgrade: SvSAN 6.1 includes the ability to deploy multiple VSAs through a single wizard, reducing the time to deploy SvSAN. In addition to initial deployment, enhancements have been made to the StorMagic dashboard enabling multiple VSAs to be upgraded at the same time. Firmware is selected from a repository and installed onto multiple VSAs, either immediately or staged, where the firmware is uploaded to the VSAs in preparation for a later upgrade, for example out of hours. SvSAN handles the dependencies and performs a health check, ensuring that upgrades have no impact on the environment.
- Automated PowerShell script generation: When deploying SvSAN via the GUI, it is now possible to automatically generate a custom PowerShell script. These scripts can then be used for mass deployments in large environments, removing the need for user interaction.
- SSD write-back caching: The write-back caching feature uses a solid-state disk (SSD) as a cache for slower, low-performance hard disk drives and is suitable for read-intensive workloads. All data is initially written to the SSD, providing low latency and improving application response times for random I/O workloads; the data is then efficiently de-staged from the SSD to its final storage location at a later time. Subsequent reads of data previously written to the SSD are served from the cache, further reducing the number of I/Os going to the hard disks. Caching is enabled on a per-target basis, allowing users to select only the targets that need I/O acceleration and ensuring that only important data is accelerated (a conceptual sketch of the write-back behaviour follows the list).
- Predictive read-ahead: The predictive read-ahead algorithms detect sequential read I/O patterns, with the primary goal of reducing hard disk head movements and I/O latency. On identifying a sequential read pattern, additional related data is pre-fetched from disk into memory ahead of time, ensuring that subsequent reads are satisfied from memory without accessing the disk, resulting in lower access times (see the read-ahead sketch below).
- Data pinning: The data pinning feature allows data to reside permanently in memory. Data pinning has a “learning” mode that records all the accessed data blocks and stores them in a “pin map”. The pin map is then used to load the data from the hard disk into memory before it is needed, ensuring that the data is always available in cache and providing the best performance for specific workloads. This can be used for repetitive workloads such as system boots or end-of-month processing (a pin-map sketch follows the list).
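To make the write-back caching idea concrete, here is a minimal Python sketch of the general mechanism. It is not StorMagic's implementation: the dictionaries standing in for the SSD and hard disk tiers, the integer block identifiers and the background de-stage thread are illustrative assumptions only.

```python
# Conceptual sketch of SSD write-back caching (illustrative only, not SvSAN code).
import threading
import queue
import time

class WriteBackCache:
    def __init__(self, hdd):
        self.hdd = hdd              # slow backing store: {block_id: data}
        self.ssd = {}               # fast cache tier: {block_id: data}
        self.dirty = queue.Queue()  # blocks waiting to be de-staged
        threading.Thread(target=self._destage, daemon=True).start()

    def write(self, block_id, data):
        # Writes land on the SSD first, so the application sees low latency.
        self.ssd[block_id] = data
        self.dirty.put(block_id)

    def read(self, block_id):
        # Reads of recently written data are served from the SSD cache,
        # avoiding an I/O to the hard disks.
        if block_id in self.ssd:
            return self.ssd[block_id]
        return self.hdd.get(block_id)

    def _destage(self):
        # A background worker copies dirty blocks to their final location later.
        while True:
            block_id = self.dirty.get()
            time.sleep(0.01)        # stand-in for the slower HDD write
            self.hdd[block_id] = self.ssd[block_id]
```

The key design point is that a write returns as soon as the data is on the SSD; the slower copy to the hard disk happens asynchronously.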
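In the same spirit, a rough Python sketch of predictive read-ahead, assuming a simple "two consecutive block numbers" heuristic and a fixed prefetch window. The real detection algorithms are not disclosed, so treat this purely as an illustration of the idea.

```python
# Conceptual sketch of sequential-read detection with read-ahead (illustrative only).
class ReadAhead:
    def __init__(self, disk, window=8):
        self.disk = disk            # backing store: {block_id: data}
        self.memory = {}            # prefetch buffer held in RAM
        self.last_block = None
        self.window = window        # how many blocks to prefetch ahead

    def read(self, block_id):
        # A hit in the prefetch buffer is satisfied from memory, with no disk access.
        if block_id in self.memory:
            return self.memory.pop(block_id)
        data = self.disk.get(block_id)
        # Two consecutive block numbers suggest a sequential pattern:
        # pre-fetch the next window of blocks before they are requested.
        if self.last_block is not None and block_id == self.last_block + 1:
            for b in range(block_id + 1, block_id + 1 + self.window):
                if b in self.disk:
                    self.memory[b] = self.disk[b]
        self.last_block = block_id
        return data
```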
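Finally, a sketch of data pinning with a learning mode and pin map, again a hypothetical illustration rather than SvSAN's actual code; the start/stop learning calls and the in-memory structures are assumptions made for the example.

```python
# Conceptual sketch of data pinning via a learning mode and pin map (illustrative only).
class PinnedCache:
    def __init__(self, disk):
        self.disk = disk            # backing store: {block_id: data}
        self.pin_map = set()        # blocks recorded during the learning run
        self.pinned = {}            # blocks held permanently in memory
        self.learning = False

    def start_learning(self):
        # Learning mode records every block the workload touches.
        self.learning = True
        self.pin_map.clear()

    def stop_learning(self):
        # Replaying the pin map preloads the recorded blocks into memory, so the
        # next run of the workload (e.g. a system boot or a month-end job)
        # is served from cache.
        self.learning = False
        self.pinned = {b: self.disk[b] for b in self.pin_map if b in self.disk}

    def read(self, block_id):
        if self.learning:
            self.pin_map.add(block_id)
        if block_id in self.pinned:
            return self.pinned[block_id]
        return self.disk.get(block_id)
```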
As usual, it is available for both Hyper-V and vSphere environments. For vSphere, both the Windows-installable vCenter Server and (as in version 5) the vCSA are supported. The vCenter Server integration is really powerful, and you can manage both the VSAs and the datastores from there.
Too good to be true? Learn more and see how the technology works in detail in the latest edition of our Monthly Webinar Series and the accompanying white paper.
Register for the webinar now and receive your invite.
The webinar is scheduled to take place on Thursday 4th May at 8am PDT / 11am EDT / 4pm BST / 5pm CEST and your host will be Luke Pruen, StorMagic’s Technical Services Director.