

Some months ago I described the characteristics of a new Dell product intended to redefine the existing server form factors and the converged server concept: Dell PowerEdge VRTX is the first and, so far, only Dell product (at present there are no similar solutions from other vendors) to merge three different form factors (tower, rack and blade) into one with unique characteristics.

It could be defined as a data center in a box or, to use a quote from Dell, a (private) cloud in a box. Basically it's an integrated tower-format system (complete with wheels if you need to move it) that can also be rack-mounted (in that case it is oriented horizontally and occupies 5U), with most of the functionality of a Dell blade solution plus some unique functions. It includes a server part, a storage part and also a networking part, all in a single system.

Unlike the PowerEdge C line (which has existed for quite some time), in this case the storage is shared between the servers and the servers use the same blade form factor, so they can benefit from the same components as the blade line (although the blades are designed with a specific fabric for this solution). In contrast, the VRTX solution is less dense, but density is not its main purpose or its market positioning.

As written in a previous post, use case scenarios range from VDI projects (where a fundamental aspect is that this solution is able to map PCI-e cards to the blades), to development and test environments, to ROBO (Remote Office/Branch Office), but it could also become really useful in physical data center migration cases.

Each VRTX can host a maximum of 4 half-height blades (M520 or M620) and up to 12 3.5" disks or (alternatively) up to 25 2.5" disks, using NL-SAS, SAS or SSD (even mixing different types, but with the same form factor). Unfortunately, the M420 blades (quarter-height) cannot be used, so 4 is the maximum number of blades. But as mentioned before, density is not the primary goal of this solution. Remember also that the blades are specific to this solution: they use several components from the blade line (and carry the same names), but the M520 or M620 designed for the VRTX cannot be used in an M1000e and vice versa (due to the different fabrics).

This converged solution is also sold at a reasonable (and interesting) price: you can have a complete solution with two blades, integrated networking and shared storage for around $10,000. Obviously, in a fully loaded configuration with 4 blades (with a lot of memory), 48TB of storage (the maximum achievable with the current disks, i.e. 12 x 4TB 3.5" drives) or a lot of SSDs, PCI-e flash cache cards or graphics GPUs, the price can become pretty high.

[Images: Dell PowerEdge VRTX - Front | Dell PowerEdge VRTX - Rear]

As you can notice in the pictures, the servers and storage are placed in the front of the system (which also includes a control display), while on the back there are the fans, the integrated network switch (mounted vertically in the lower part, between the power supplies and the fans), some PCI-e slots (8 in total, including 3 full-height), the CMC (one or two for redundancy) and up to 4 power supplies.

The CMC is the management board of the entire infrastructure and its functionality is very similar to that of the CMC in the M1000e blade solution: in fact, it can also manage the integrated switch and the shared storage, and it gives access to each server's iDRAC.
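Like the M1000e CMC, it also exposes a remote RACADM command-line interface, so routine checks can be scripted. Below is a minimal Python sketch under a few assumptions: racadm is installed on the management workstation, the VRTX CMC answers the same getsysinfo and getmodinfo subcommands as the M1000e CMC, and the IP address and credentials are placeholders.

```python
import subprocess

CMC_HOST = "192.168.0.120"   # placeholder CMC IP address
CMC_USER = "root"            # placeholder credentials
CMC_PASS = "calvin"

def racadm(*args):
    """Run a remote racadm subcommand against the VRTX CMC and return its output."""
    cmd = ["racadm", "-r", CMC_HOST, "-u", CMC_USER, "-p", CMC_PASS, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # General chassis information (model, firmware, service tag, ...).
    print(racadm("getsysinfo"))
    # Inventory and health of the modules (blades, switch, PSUs, fans);
    # subcommand names are assumed from the M1000e CMC documentation.
    print(racadm("getmodinfo"))
```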

A specific mention about the power supplies and fans: first of all, the power supplies are in a standard format (the same as the rack series), with standard power plugs like regular computers. This is a great advantage over the blade solution (and over some high-end servers from other vendors), which instead requires specific power cords and PDUs, making an assessment or a good design of the electrical part (and also the thermal part) really important.

As for the fans, this system is really quiet, considering also that the thermal profile is really optimized and that the entire system is designed for fresh air cooling (the blades are Dell's G12, which already support this option). This aspect makes the solution suitable to be positioned outside a data center as well (of course, in that case physical security could become a risk). It could also be located under your desk: it's really quiet, like a standard PC. Considering that test environments and ROBO were among the scenarios mentioned, this aspect confirms the objectives and the effectiveness in those cases.

It could also be used as a demo environment, but in that case it's not so easy to move, considering that it's not exactly light. Still, since it includes the servers, the storage and also the networking, it could make sense if the alternative is moving 3-4 servers and a shared storage, with the added advantage of having everything already cabled and ready to run.

As for the cloud-in-a-box scenario, it remains the most interesting one, considering also that there is a specific Dell bundle with 3 blades preconfigured as a private cloud environment: one blade for Microsoft System Center and two blades with Microsoft Hyper-V. This does not exclude other solutions (maybe in the future we will see a preconfigured OpenStack or VMware solution) or, of course, custom solutions.

Use cases are not limited to virtual environments: for example, a Microsoft Exchange, SQL Server or SharePoint cluster could be built in a single box and could serve several clients.

One more note about the VDI use case: with the integration of the Dell Quest vWorkspace solution, this could become a killer application, combined with Dell Wyse thin or zero clients.

Unfortunately, the shared storage can only be addressed through the Shared PERC (which also integrates the RAID functions): the lack of a shared storage card with direct access to the disks in a shared-JBOD mode makes this solution unrealistic for some file server implementations, for example using Nexenta or using features like Storage Spaces and Storage Pools in Microsoft Windows Server 2012 and 2012 R2 (in that case it would have been technically possible to build a 4-node active scale-out file server).

Clearly it is not optimal and perfect: while the previous limit is more a marketing (rather than a technical) limit, there are also a couple of limitations that absolutely must be considered in the design phase and that may limit its use in mid-enterprise environments.

[Image: Dell Shared PERC (SPERC)]

The first is related to the shared storage, which is provided by a specific card called Shared PERC (SPERC) installed on the motherboard (at the bottom in the picture). At the moment it represents a single point of failure, since it does not (yet) have redundancy capability. The system is already designed to implement this redundancy: in fact, in the picture we can notice that there is already space for a second card (just above the first one), but for now the problem is the lack of specific device drivers to manage the two cards together.

Since it's a storage solution completely developed by Dell (and not derived from any of the existing PowerVault, EqualLogic or Compellent families), it's new and it's not natively supported by the various operating systems and hypervisors. Dell has already released drivers for the major systems but, of course, developing a single-controller device driver is faster than developing multi-controller drivers with redundancy features.

It might also be interesting to see alternative and more modular storage solutions: it would be a nice idea to adapt the EqualLogic blade array to fit inside the VRTX, maybe in place of the existing shared disks. Physically there is space for it; technically there are some issues that would have to be resolved.

A second limitation is related to networking: the VRTX has an integrated switch, but there is only one and it is only 1 Gbps. So it again represents another single point of failure (in this case a second integrated switch cannot even be planned) and also a possible bottleneck in some scenarios. Again, this product is not targeted at enterprise environments, where rack and/or blade solutions remain the primary choice.

However, there is a technical solution (not the most elegant) to the networking issue: just buy additional network adapter cards, one for each blade, and map them to each server. With teaming software you can then handle proper redundancy between the integrated switch and the additional network adapters.
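As an illustration only, here is a minimal sketch of what the teaming side could look like on a Linux blade, using the standard iproute2 bonding commands; the interface names (eth0 for the port on the integrated switch, eth1 for the add-in PCI-e NIC) and the active-backup mode are assumptions, and on Windows you would use the native NIC teaming instead.

```python
import subprocess

# Placeholder interface names: eth0 = port on the integrated VRTX switch,
# eth1 = port on the additional PCI-e NIC mapped to this blade.
SLAVES = ["eth0", "eth1"]
BOND = "bond0"

def ip(*args):
    """Run an iproute2 command (requires root)."""
    subprocess.run(["ip", *args], check=True)

# Create an active-backup bond so traffic fails over between the
# integrated switch and the external switch reached via the PCI-e NIC.
ip("link", "add", BOND, "type", "bond", "mode", "active-backup")
for nic in SLAVES:
    ip("link", "set", nic, "down")           # a NIC must be down to be enslaved
    ip("link", "set", nic, "master", BOND)   # attach it to the bond
ip("link", "set", BOND, "up")
```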


Virtualization, Cloud and Storage Architect. Tech Field delegate. VMUG IT Co-Founder and board member. VMware VMTN Moderator and vExpert 2010-24. Dell TechCenter Rockstar 2014-15. Microsoft MVP 2014-16. Veeam Vanguard 2015-23. Nutanix NTC 2014-20. Several certifications including: VCDX-DCV, VCP-DCV/DT/Cloud, VCAP-DCA/DCD/CIA/CID/DTA/DTD, MCSA, MCSE, MCITP, CCA, NPP.