The VMAX 40K (and, as much as the new platform, the Enginuity 5876 release) takes the core architectural and philosophical tenets of the Symmetrix family and dials them up to 12.
- Powerful. 3x more performance (both IOPS and bandwidth), and 2x more capacity and density. Wow.
- Trusted. You like what VMAX does? Want to extend that to your other arrays? Great – you can now, using Federated Tiered Storage (FTS).
- Smart. FAST VP in more places (synchronized across SRDF R2s, usable in mainframe use cases), and simple can be smart – Unisphere is now on VMAX!
Read on past the break for a more detailed tear-down of a massive release…
Ok, here we go… Since there is so much in here, I’m going to break this into two parts – first hardware, then (perhaps more importantly) the software updates…
Hardware:
1) New bigger/stronger/faster storage engines = new upper limits of scale and performance.
The storage engine in the VMAX 40K runs on 2.8GHz Intel Xeon Westmere-based chips, and also increases the core density to 24 cores per engine (vs. 16 cores per engine in the VMAX 20K and 8 cores per engine in the VMAX 10K). That means a fully loaded VMAX 40K has half a terahertz of Intel processing goodness under the hood. One thing that is kept – as demanded by the most critical workloads – is the core architectural property of a global memory model across engines as they are added. The other thing is that there is materially more bandwidth and more ports, with a move to PCIe Gen 2 throughout the whole system (much more than a VMAX 20K, even) – (psst… makes you think about how much we could do with the stuff in this video)
At the same time, the VMAX 40K doubles memory density – up to 256GB per engine, which means up to a total of 2TB in an 8-engine config (and remember, a Storage Engine is always an HA active/active pair). BTW – these dense memory configs are driven by the upper end of the envelope; even with VMAX (and just like Isilon and VNX), a good idea is to start small and scale as you need – they all can grow in various ways.
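If you want to sanity-check the scale claims, here's a quick back-of-the-envelope sketch – it uses only the numbers quoted above (2.8GHz, 24 cores/engine, 256GB/engine, 8 engines), nothing more:

```python
# Back-of-the-envelope math for a fully loaded (8-engine) VMAX 40K,
# using only the figures quoted in the post above.
engines = 8
cores_per_engine = 24
clock_ghz = 2.8
memory_gb_per_engine = 256

aggregate_ghz = engines * cores_per_engine * clock_ghz
aggregate_memory_tb = engines * memory_gb_per_engine / 1024

print(f"Aggregate clock:  {aggregate_ghz} GHz (~{aggregate_ghz / 1000:.2f} THz)")
# -> 537.6 GHz, i.e. roughly half a terahertz
print(f"Global memory:    {aggregate_memory_tb} TB")
# -> 2.0 TB
```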
2) 2.5” SAS drive support = much higher density options.
Comparing it to the VMAX 20K, this architectural change means greater density and more efficiency. While VMAX still isn't EMC's densest platform (when you want a PB tank in the smallest space, that title goes to the VNX, with an ultra-dense 1.8+PB in a 19” rack using the 60-drive/4U enclosure) or largest capacity (that would be Isilon and Atmos, which can be many PBs in size) – it means that customers who want everything in one package don't have to trade off density when they go VMAX.
3) System and Drive Bay dispersion
Customers told us loud and clear – these hyper-consolidated use cases can’t require that all the cabinets are physically beside one another. We always planned to extend the dispersion options between bays (the hyper-low latency, shared global memory model of VMAX means inter-storage engine connectivity is key), and I’m glad to say we listened! The total distance is approaching a quarter of a football field, with lots of flexibility between storage engine location and drive bays.
4) eMLC.
This is a big deal. EMC was the first vendor to use SSDs at large scale, across many customers, many years ago. We are very focused on reliability and durability – and hammered the flash and SSD vendors with crazy-stringent requirements and test harnesses. Until recently, with all that, it was our view that SLC was the only drive type that could handle enterprise use cases and withstand the write duty cycle we demand (which is higher than we see at the vast majority of customers – but we aim for that high standard). We shipped more than 24PB of flash last year, and the volumes are climbing every day. The attach rate is now approaching 100%. We're at the point where there is no room for “all disk” array configs in the market – every EMC array is now a “hybrid array” (and we're seeing “all flash” array use cases emerge).
… But flash itself is accelerating (as it ramps in volume), and between MLC improvements and controller optimizations, we started to see that the best MLC was able to BEAT the specs we set. Remember, this still isn't “consumer MLC”, but rather the best the manufacturers have to offer – enterprise-grade MLC, or eMLC.
Here you can see that testing (and this wasn't against a single drive, but rather the end result in a system use case) shows that eMLC can handle real blended workloads (not artificially small IO sizes) without starving read IOs.
Short version? The cost of flash in EMC arrays has dropped 88% over the last 4 years (in $/GB), the new eMLC drives represent a 20% drop in $/GB, and they carry the same EMC support and warranty we use for SLC and for magnetic media.
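For the curious, here's a rough (and purely illustrative) way to translate that cumulative 88% drop into an implied average annual decline – the only input is the quoted figure itself:

```python
# Rough translation of "88% drop in $/GB over 4 years" into an implied
# average annual decline (illustrative only - the 88% figure is the one quoted above).
cumulative_drop = 0.88
years = 4

annual_factor = (1 - cumulative_drop) ** (1 / years)  # fraction of the price kept each year
print(f"Implied average annual $/GB decline: {1 - annual_factor:.0%}")
# -> roughly 41% per year
```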
Software:
1) Symmetrix. Simple. Not two words that have historically gone together :-) Enginuity 5876 brings VMAX management into the integrated management model that EMC is using: Unisphere. Unisphere is winning accolades and customer cheers (while not perfect – I think we still need to optimize more for high-latency use cases) – and now we use it for a common, EMC-standard look and feel. In this first wave, many, but not all, VMAX functions are in Unisphere. All the common provisioning/management use cases are there – but some of the things done by Symmetrix Performance Analyzer (for example) are not merged in yet. Count on those continuing to integrate. We know how important this is for our customers.
Beyond simply the management UI (don't get me wrong, I think it's huge!), there are also improvements in the Enginuity internals – something called the “Dynamic Back End”. This means that more system-wide configuration changes can be done with shorter (or eliminated) configuration locks – ergo easier, faster changes.
2) Federated Tiered Storage (FTS). For our customers, I expect this to be huge. We're continuing to march down the path of “federation”. The arrays are getting so big that “migration” is a huge barrier. Federated Live Migration (the ability to move data and device identity between Symms non-disruptively) was introduced with the last major Enginuity release. Customers should expect this to continue (supporting more complex clustered use cases, carrying replication and richer device attributes) in future releases – but customers have asked for this to work not just between EMC arrays, but across the industry. We continue to try to drive that via the standards bodies (to do it, the arrays need to be able to communicate with one another and “look” like each other – transferring things like WWNs and replication state) – but it won't happen without multiple vendors wanting to do it.
BUT – we could figure out how to put third-party arrays behind our brains and then manage their capacity (after all, others do this and call it “virtualization”). Not only do they then inherit great stuff like FAST VP, our VAAI implementation, and our vCenter integration/management (for you frustrated VMware admins whose storage and storage teams don't know how important this is), but also FLM… and so we have.
Federated Tiered Storage enables you to attach a broad range of 3rd party storage behind the VMAX and have it act as a “tier”. I’m going to add a demo of this – it’s ridiculously easy.
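To make the “it's just another tier” idea concrete, here's a purely conceptual sketch – this is emphatically not how Enginuity is implemented, and the tier names, costs, and thresholds are made up – but it captures the point that a federated (FTS) LUN is just one more place the tiering policy can put extents:

```python
# Conceptual sketch only (not Enginuity internals): an externally-provisioned
# LUN shows up as "just another tier" to the policy engine, so FAST VP can
# demote cold extents to it like any internal tier. All names/values hypothetical.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    external: bool       # True for a 3rd-party array attached via FTS
    cost_per_gb: float   # hypothetical relative cost

tiers = [
    Tier("EFD (eMLC)", external=False, cost_per_gb=10.0),
    Tier("FC 15K", external=False, cost_per_gb=3.0),
    Tier("FTS: 3rd-party SATA", external=True, cost_per_gb=1.0),
]

def place_extent(io_density: float) -> Tier:
    """Toy placement rule: hot extents go to the priciest tier, cold to the cheapest."""
    ranked = sorted(tiers, key=lambda t: t.cost_per_gb, reverse=True)
    if io_density > 100:   # hypothetical IOPS/GB thresholds
        return ranked[0]
    if io_density > 10:
        return ranked[1]
    return ranked[-1]      # cold data lands on the federated tier

print(place_extent(0.5).name)   # -> "FTS: 3rd-party SATA"
```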
We're also applying the lessons learnt from VPLEX – making the support matrix simple. The FTS support matrix follows the “Simple Support Matrix” model, and we're rolling out a program where customers can easily help get their config supported if we don't support it out of the gate.
Oh – one more thing. FTS is FREE.
3) FAST VP for Mainframe.
While I'm not (personally) much of a mainframe fan – there's no denying the strong, critical presence they have at customers. They may not be driving growth like x86-based cloud computing or modern scale-out, shared-little/shared-nothing big-data use cases… but mainframes are still the beating heart of many a customer – and those customers want the best, and want efficiency there too. We (and our customers) were unsure (actually, more like “didn't think it would”) whether FAST VP would help as much with mainframes as it does with open systems. Turns out we were wrong!
After more than 32 million run-hours of FAST VP since launch – it's time to bring the goodness to the old big iron. Here's some testing data on the effect of FAST VP on System z:
4) Snapshot and Replication improvements. These data services are important – and an area where we could improve, so we did.
- SRDF may be the “gold standard” when it comes to replication at very large scale, and is often the customer's choice for their most mission-critical use cases, but for many customers it is overkill. So we embedded the RecoverPoint splitter in Enginuity. This means there is now a simple replication option that I think should be “where we start” design discussions with customers. It has other benefits, too, in the places where SRDF is not a fit – not the least of which is the ability to replicate to non-EMC storage.
- For the customers who need (demand, in many cases) SRDF – one request was to have the FAST VP policy be “mirrored” along with the data – in other words, if there is a need to fail over, the performance envelope stays the same. This, BTW, gives you some idea of the architectural drivers these customers bring – they have requirements that say “if this building is a smoking hole and we fail over to the R2 copy, not only must we have the data, not only must the failover be transparent – but the performance needs to be the same”. If you have that kind of requirement (not just the desire, but a requirement), there are few choices out there – and VMAX with SRDF is one of those very, very few.
- TimeFinder snaps get some pretty material improvements (we also showed some preview stuff on what we're doing with VNX Snaps – will do a post on that too). For virtually provisioned devices, the ability to have very space-efficient, pointer-based snapshots that are also efficient in their use of system cache is a welcome (I would say overdue) improvement. Would love to get customer input on our direction here. For folks who haven't lived with pointer-based snaps, see the conceptual toy below.
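Here's that conceptual toy – again, this is not the actual TimeFinder implementation, just an illustration of why a pointer-based snap is so space-efficient: the snap is created by copying pointers, and new capacity is consumed only for tracks written after the snap is taken.

```python
# Conceptual illustration only (not TimeFinder internals): a pointer-based snap
# just references the source's tracks, and capacity is consumed only for tracks
# written after the snap (redirect-on-write style).
class PointerSnapDevice:
    def __init__(self, tracks):
        self.pool = {i: data for i, data in enumerate(tracks)}  # backing "pool" of track data
        self.next_slot = len(self.pool)
        self.map = {i: i for i in self.pool}                    # device track -> pool slot
        self.snaps = []

    def snapshot(self):
        snap_map = dict(self.map)   # copy pointers only - no data is copied
        self.snaps.append(snap_map)
        return snap_map

    def write(self, track, data):
        # New data goes to a fresh pool slot; the snap's pointer still
        # references the old slot, so the old version survives.
        self.pool[self.next_slot] = data
        self.map[track] = self.next_slot
        self.next_slot += 1

    def read(self, view, track):
        return self.pool[view[track]]

dev = PointerSnapDevice(["A", "B", "C"])
snap = dev.snapshot()                            # costs pointers, not capacity
dev.write(0, "A2")
print(dev.read(dev.map, 0), dev.read(snap, 0))   # -> A2 A
print(len(dev.pool))                             # -> 4 slots: one extra, only for the changed track
```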
All this goodness is in Enginuity 5876 and is common across the VMAX 40K and 20K. The software stack on the VMAX 10K (focused at the entry band of customers in the market for this class of solution) lags the overall software stack a little (no FTS and no Unisphere until 2H'12) – but will catch up shortly.
Phew. That's a LOT (and makes for a long + dense blog post). The actual GA happened quietly before EMC World – so we have been training folks and warming up the machinery for execution at scale. You should have seen the piles of Q2 training for field geeks like me :-) (BTW – working to open this all up to our partners pre-launch going forward.)
There's also something else behind the scenes that I totally dig and wanted to share.
EMCers and EMC Partners – you can play with VMAX 40K and Enginuity NOW (check it out at portal.demoemc.com)
This uses the same “vEnginuity” Virtual Storage Appliance that powers the EMC World vLabs hands-on labs around VMAX use cases. The interesting thing (I've said it before) is that we develop as virtual machines in many (approaching most) cases – but for the first time, this became part of the product readiness process: “don't release unless there is a virtual appliance in the vLab”.
Now, since it's its first “big outing”, vEnginuity still needs to go on a diet, could stand to be more stable, and doesn't yet support all the use cases – but I've seen this before. It was the same scoop with the VNX VSA at the beginning, and the more we use them, the more they harden. The same thing went for VPLEX 5.1 – and while it's not in the GA product yet, the VPLEX VSA accelerated the iSCSI target code (since that makes the VSA much more usable). I think that's cool – and while not earth-shattering, it highlights continued learning and adaptation inside the machine. It's these things that help our field, our partners, and our customers…
In any case – Would love to hear your thoughts on the VMAX 40K and Enginuity 5876 announcement – share!
Am I understanding this correctly, FAST VP will work between VMAX and 3rd party arrays with FTS?
Posted by: John Affatati | June 01, 2012 at 10:39 AM