
StorMagic SvSAN is an interesting hyperconverged solution that converts the internal disks, flash, and memory of two or more servers into a robust shared-storage appliance.

SvSAN delivers highly available converged compute and storage while requiring the fewest components: two lightweight servers and no physical SAN. For this reason it's a valuable product for ROBO (Remote Office / Branch Office) scenarios, where a two-node infrastructure can be enough.

But one interesting aspect is manageability and simplicity: you can deploy and manage your infrastructure directly from the vCenter Web Client (up to vSphere 6.0 you can also use the legacy vSphere Client) in a very simple way. Note that the vSphere plugin is available both for the Windows version of vCenter and for the vCSA.

Let's start with deploying the two VSAs (Virtual Storage Appliances), one for each node. There are several ways to do this (including importing an OVF file), but I really like the deployment and automatic configuration directly from vCenter using the Web Client (after you have added the StorMagic plugin). Just go to the datacenter view and select the StorMagic tab:

From here you can monitor an existing infrastructure or deploy a new one, starting from the Deploy a VSA onto a host link.

You can choose a host without an existing StorMagic VSA (the others are already filtered out of the view, so it is easy to find one), read and accept the Terms and Conditions, and then start configuring the VSA.

You have to choose a hostname for the VSA (unique on your network) and the domain name (combined, the two must form an FQDN registered in DNS), and then choose a datastore where the VSA VM will reside (physical disk design can be approached in different ways, and maybe I will write a specific post on it). The datastore should be on internal or direct-attached server storage and have enough space (32 GB is enough for version 5.x of the product).
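Since the wizard expects the hostname and domain to form a resolvable FQDN, it may be worth checking DNS up front. A minimal Python sketch, assuming a purely hypothetical VSA name vsa01.lab.local:

```python
import socket

# Hypothetical FQDN: the hostname + domain chosen in the wizard
vsa_fqdn = "vsa01.lab.local"

try:
    # Collect every address the resolver returns (A and AAAA records)
    addrs = {info[4][0] for info in socket.getaddrinfo(vsa_fqdn, None)}
    print(f"{vsa_fqdn} resolves to: {', '.join(sorted(addrs))}")
except socket.gaierror as exc:
    print(f"DNS lookup failed for {vsa_fqdn}: {exc} -- fix DNS before deploying")
```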

Then choose the desired storage allocation technique and which disks will become the shared storage. Usually these are one or more RDM (raw device mapping) disks. Note that SSH must be enabled on the ESXi host, but only for the duration of the deployment (SSH can be disabled immediately after deployment).
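Because SSH only has to stay on for the duration of the deployment, it can also be toggled programmatically instead of through the host's Security Profile. A hedged pyVmomi sketch, assuming hypothetical vCenter (vcenter.lab.local) and host (esxi01.lab.local) names and credentials:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Hypothetical names and credentials -- adjust to your environment
ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    esxi = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                             vmSearch=False)
    svc = esxi.configManager.serviceSystem
    svc.StartService(id="TSM-SSH")   # enable SSH before the deployment
    # ... run the VSA deployment from the StorMagic plugin ...
    svc.StopService(id="TSM-SSH")    # disable SSH right after it completes
finally:
    Disconnect(si)
```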

Using the Advanced options you can build disk pools and manage them (useful if you are not using hardware RAID for the local disks).

Optionally, it is possible to enable read-only caching on the VSA using one (or more) SSDs (this applies to version 5.x of the product; starting with version 6.0 it is also possible to enable a write cache, and SSD management is a little different).

Then there is the Networking page, which can be the longest part of the configuration; the interfaces can be left to acquire IP addresses from DHCP, or they can be set statically (in this case you have to plan the IP addressing really well).

You need at least one interface, but it is usually good practice to separate the management interface from the iSCSI and mirroring interfaces (for example, because those are just crossover cables and/or use faster NICs):

You can define the role of each network interface as well as its IP configuration. Note that you cannot manage jumbo frames from here, but you can do so after the deployment using the management page of each VSA.
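If you go the static route, a quick sanity check of the addressing plan costs nothing. A small sketch using Python's ipaddress module, with purely hypothetical subnets for the three traffic types:

```python
import ipaddress

# Hypothetical plan: one subnet per traffic type, one IP per VSA
plan = {
    "management": ("192.168.10.0/24", ["192.168.10.21", "192.168.10.22"]),
    "iscsi":      ("10.10.1.0/24",    ["10.10.1.21", "10.10.1.22"]),
    "mirroring":  ("10.10.2.0/24",    ["10.10.2.21", "10.10.2.22"]),
}

for role, (subnet, ips) in plan.items():
    net = ipaddress.ip_network(subnet)
    for ip in ips:
        ok = ipaddress.ip_address(ip) in net
        print(f"{role:10s} {ip:15s} in {subnet}: {'ok' if ok else 'WRONG SUBNET'}")
```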

Finally, you have to enter your license; during deployment, the VSA attempts to connect to StorMagic's license server to validate it. If it needs to go through a proxy server to do this, supply the proxy server details. If the VSA does not have Internet connectivity, you can license it later using an offline activation mechanism. Note that each VSA is licensed per TB (usually in chunks of 2 TB).

That's all: the vCenter plugin will upload and deploy the VSA on the host, configure it, activate the license, and register the new VSA so that you can build the shared datastore (once both VSAs are ready). Even if you are deploying the VSA to a remote office, considering how small it is, the process does not usually take more than 5-10 minutes.
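For a remote site it can be reassuring to verify programmatically that the new VSA VM is registered and powered on. Another pyVmomi sketch, assuming the same hypothetical vCenter and a "vsa" naming prefix for the appliances:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk every VM in the inventory and report the VSAs
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name.lower().startswith("vsa"):  # assumed naming convention
            print(f"{vm.name}: power state = {vm.runtime.powerState}")
    view.Destroy()
finally:
    Disconnect(si)
```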

As noted, jumbo frames are not configured by this wizard and, if needed, must be configured from the VSA web page. The same applies to NTP settings and notifications, but those tasks are quite easy to do.
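Keep in mind that jumbo frames must be enabled end to end: the VSA side is handled from its web page, but the vSwitch carrying the iSCSI and mirroring traffic on each ESXi host also needs its MTU raised (as do the VMkernel ports and any physical switches in the path). A hedged pyVmomi sketch, assuming a hypothetical vSwitch1 dedicated to that traffic:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    esxi = content.searchIndex.FindByDnsName(dnsName="esxi01.lab.local",
                                             vmSearch=False)
    ns = esxi.configManager.networkSystem
    for vsw in ns.networkInfo.vswitch:
        if vsw.name == "vSwitch1":   # assumed iSCSI/mirroring vSwitch
            spec = vsw.spec
            spec.mtu = 9000          # jumbo frames
            ns.UpdateVirtualSwitch(vswitchName=vsw.name, spec=spec)
            print(f"{vsw.name} MTU set to 9000")
finally:
    Disconnect(si)
```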

