A big exciting day! – click on the link to subscribe to one of the sessions…
Earlier I posted on how EMC has been tracking the host attach rate to our systems, and how 2 years ago, in the open-systems category, ESX hosts started to ramp up furiously. That had the engineers thinking about what would characterize a storage subsystem for a 100% virtualized datacenter. Over the past 2 years, they’ve been off working on the next generation. The CX4 was the first to incorporate some of this thinking (large-scale multicore, the 64-bit transition, the ability to change I/O interfaces non-disruptively via UltraFlex I/O, Virtual Provisioning), the Celerra updates came next (similar hardware considerations including the modular UltraFlex I/O design, deduplication, vCenter plugins), and now the 3rd shoe drops.
Readers – you know that I’m a big fan of small, and home-brew, and using Iomega for VMware storage – that’s EMC too. So is this – just at the other extreme.
So – what characterizes a “Storage for a Private Cloud”?
Well – it needs to be:
- Efficient – for any given workload, it needs to be as efficient as possible. For some, that means capacity-saving techniques like Virtual Provisioning (there’s a rough sketch of the thin-provisioning idea right after this list). For others, it means IO-saving techniques like EFD and large caches.
- Control – the ability to have vCenter-integrated control (which EMC covers end-to-end – EMC Storage Viewer for the VMware admin, ControlCenter being VMware Ready for the storage admin, and the new provisioning model on the Symmetrix V-Max, which has a built-in facility for “VMware cluster-wide provisioning” all at once), and control at the hyper-scale of the VMware datacenters we’re seeing in enterprises and at service providers. It has to deliver the control needed for 24 x forever operation.
- Choice – the choice to have a common infrastructure literally scale to any workload, any app, any use case, and across geographic boundaries. You need to be able to literally change ANY attribute (software OR hardware configuration – from layout, to brains, to IO) totally non-disruptively. In the mid-range that means one scaling envelope; in the high-end it means a much larger one.
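As promised in the “Efficient” bullet above, here’s a minimal sketch of the thin-provisioning idea behind Virtual Provisioning – in Python, purely illustrative, and definitely not EMC code; the extent size, class names and pool mechanics are my own assumptions for the example. The point: a thin LUN presents its full logical size to the host, but physical extents are only consumed from a shared pool on first write.

```python
# Illustrative-only sketch of thin ("virtual") provisioning.
# Names and sizes are made up for the example; this is not EMC code.

EXTENT_SIZE = 768 * 1024  # bytes per allocation extent (arbitrary choice)

class ThinPool:
    """A shared pool of physical extents that thin LUNs draw from."""
    def __init__(self, physical_extents):
        self.free_extents = physical_extents

    def allocate(self):
        if self.free_extents == 0:
            raise RuntimeError("pool exhausted - time to add physical capacity")
        self.free_extents -= 1

class ThinLUN:
    """Presents a large logical size; maps extents only on first write."""
    def __init__(self, pool, logical_size_bytes):
        self.pool = pool
        self.logical_size = logical_size_bytes
        self.mapped = {}  # logical extent index -> allocated marker

    def write(self, offset, length):
        first = offset // EXTENT_SIZE
        last = (offset + length - 1) // EXTENT_SIZE
        for idx in range(first, last + 1):
            if idx not in self.mapped:      # allocate-on-first-write
                self.pool.allocate()
                self.mapped[idx] = True

    def consumed_bytes(self):
        return len(self.mapped) * EXTENT_SIZE

# A 2TB logical LUN that has only been written sparsely consumes
# only the extents actually touched:
pool = ThinPool(physical_extents=1_000_000)
lun = ThinLUN(pool, logical_size_bytes=2 * 2**40)
lun.write(offset=0, length=10 * 2**20)          # 10MB of real data
print(lun.consumed_bytes(), "bytes actually allocated")
```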
First – a quick summary of Symmetrix V-Max (I’ll leave a lot of the detail to Barry – I’m more interested in the VMware-related elements). It’s a new product – which can be summarized as 2-3 times faster/larger/stronger than the previous generation – and that alone would normally be enough for any new product announcement. But it’s also a new architectural design, one that’s important, that ties together things we’ve been working on for a long time, and that we think will redefine storage-land for a while.
A V-Max Engine is the building block of a Symmetrix V-Max config. It has 8 Intel x86-64 cores, 16 high-density front-end ports (which can be hot-swapped – note the theme that anything can be changed non-disruptively) and 128GB of memory. The “Global Memory” thing is important for reasons that will be clear in a second… bear with me.
You can take these V-Max engines and start with one and scale out. How far out? Let’s use the “out of the gate” configuration:
Here – with 8 V-Max engines, we now have an array that has:
- hundreds of high performance cores
- hundreds of ports
- TBs of cache
- supports thousands of spindles
If you’re curious what this would look like – check it out….
Let me explain the Virtual Matrix design and that global memory design. Literally, the memory of these engines is presented as one global address space, and all nodes can read and write to that common memory – supported by the Virtual Matrix: EMC-developed ASICs that handle the memory access over a very high-speed, low-latency interconnect.
But the architecture is designed to scale to 256 engines, and the Virtual Matrix is designed to span geographic distances.
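If the “one global address space” idea seems abstract, here’s a toy model of it – purely conceptual, mine, and nothing like the real implementation (which lives in the EMC ASICs and the interconnect, not in software): a flat global address gets routed to whichever engine physically holds that slice of memory, and any engine can read what any other engine wrote.

```python
# Conceptual toy only: a flat global address space striped across engines.
# The real Virtual Matrix does this in hardware (ASICs + a very high-speed,
# low-latency interconnect); this just illustrates the addressing idea.

ENGINE_MEMORY_BYTES = 128 * 2**30   # each engine contributes 128GB (per the post)

class Engine:
    def __init__(self, engine_id):
        self.engine_id = engine_id
        self.local = {}              # local offset -> cached value

class VirtualMatrixToy:
    """Routes global-memory reads/writes to whichever engine owns the address."""
    def __init__(self, engines):
        self.engines = engines
        self.interconnect_hops = 0   # count of remote (cross-engine) accesses

    def _owner(self, global_addr):
        return (self.engines[global_addr // ENGINE_MEMORY_BYTES],
                global_addr % ENGINE_MEMORY_BYTES)

    def write(self, requester, global_addr, value):
        owner, offset = self._owner(global_addr)
        if owner is not requester:
            self.interconnect_hops += 1   # would traverse the interconnect
        owner.local[offset] = value

    def read(self, requester, global_addr):
        owner, offset = self._owner(global_addr)
        if owner is not requester:
            self.interconnect_hops += 1
        return owner.local.get(offset)

# One address space: engine 0 writes, engine 7 reads the same global address.
engines = [Engine(i) for i in range(8)]
matrix = VirtualMatrixToy(engines)
addr = 3 * ENGINE_MEMORY_BYTES + 4096          # lands in engine 3's slice
matrix.write(engines[0], addr, "cached track")
print(matrix.read(engines[7], addr), "| interconnect hops:", matrix.interconnect_hops)
```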
Here – now, across geographically dispersed datacenters, you have:
- thousands of high performance cores
- thousands of ports
- hundreds of TBs of cache
- hundreds of Petabytes of storage
- tens of millions of IOPS (something has to feed the 6 million IOPS vSphere cluster – right?)
The next element is the idea of FAST (Fully Automated Storage Tiering), which can tier data within the array (which remember – is now modular, and can be geographically dispersed) completely transparently.
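To give you a feel for what “fully automated” means here, a deliberately over-simplified sketch of a tiering loop – my simplification for illustration, not the FAST algorithm itself; the tier names and thresholds are assumptions: watch the IO temperature of each extent, promote the hot ones toward EFD, demote the cold ones toward SATA.

```python
# Simplified illustration of automated storage tiering (not EMC's FAST code).
# The tier names and thresholds are assumptions for the example.

TIERS = ["EFD", "FC", "SATA"]          # fastest to slowest
PROMOTE_IOPS = 500                     # hotter than this -> move up a tier
DEMOTE_IOPS = 50                       # colder than this -> move down a tier

class Extent:
    def __init__(self, extent_id, tier="FC"):
        self.extent_id = extent_id
        self.tier = tier
        self.recent_iops = 0           # measured over the last sampling window

def retier(extents):
    """One pass of the tiering loop: promote hot extents, demote cold ones."""
    moves = []
    for ext in extents:
        idx = TIERS.index(ext.tier)
        if ext.recent_iops > PROMOTE_IOPS and idx > 0:
            ext.tier = TIERS[idx - 1]
            moves.append((ext.extent_id, "promoted", ext.tier))
        elif ext.recent_iops < DEMOTE_IOPS and idx < len(TIERS) - 1:
            ext.tier = TIERS[idx + 1]
            moves.append((ext.extent_id, "demoted", ext.tier))
    return moves

# A hot extent drifts up to EFD, a cold one drifts down to SATA,
# with no administrator (and no host) involved:
hot, cold = Extent("ext-1"), Extent("ext-2")
hot.recent_iops, cold.recent_iops = 2000, 3
print(retier([hot, cold]))
```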
Ok – take a look at this slide from the VMware/Cisco/EMC keynote at VMworld 2009. Now I can start to apply the decoder ring to what we were talking about. (BTW – we continue to do these – I’m doing one on Wednesday with Cisco at VMware Partner Exchange. With each event, we keep a 1-2 year window on what we’re talking about – so now that you can see how literal we are, you can apply the same filter to subsequent events.)
- The Moore’s Law point meant that it was clear – our focus on Intel and x86 in the mid-range was the right bet, and was becoming a no-brainer for Symmetrix. On the surface it’s the smallest part of the Symmetrix V-Max launch to a customer, but it’s important. It’s one of the reasons we’re able to get 2-3x more on every possible vector (performance, number of objects/replicas). It’s also one of the important ingredients as we continue to scale up in the current generation and the future generations while containing cost at the same time – the Xeon roadmap isn’t slowing down right now, it’s accelerating, and VMware and hyper-consolidated arrays are two of the few things that can chew up all those cycles. BTW – this was a three-way presentation, and we were all trying to show “where we are thinking” – note that for Cisco, “Processors/Servers designed for VMware” clearly meant UCS.
- Something to consider….. Purely as an engineering feat, think about this: the Symmetrix engineering team managed to completely change the hardware layer supporting the platform (including a full instruction-set change and moving from a direct matrix to a virtual matrix model) while keeping the DMX-4 the clear technology/market leader at the high end during the multi-year effort, AND maintaining full look/feel/feature consistency in the Symmetrix V-Max. That’s quite an engineering achievement. For you VMware readers, that’s analogous to going from VI3.5 to vSphere while also going from PowerPC to x86 – obviously not what actually happened, but it makes you think about the scale of the software engineering challenge.
- “Every app that can be a VM should be (and will be!) a VM” – we’re hammering this (because it’s true, and when understood it’s a key to datacenter transformation) – you can literally virtualize almost every x86 workload. Are there exceptions? Sure – but they aren’t worth arguing over. And with VMware’s own shoe still to drop, the bar will clearly be raised again. This means that all the infrastructure supporting it needs to be able to scale to that sort of workload. Sure, it’s not for everyone, but for those of you that say “yes, I need a configuration with thousands of spindles, and hundreds of storage engines, and hundreds of ports – and I need to start small but be able to scale up to that with one point of management” – this is for you.
- “DRS for Storage and Network” and “Aggregate workloads demand I/O QoS” – now it’s clear what we’re talking about: the storage array will be able to move the elements that compose VMs (even parts of VMs!) between tiers and between configurations, and apply storage-engine QoS mechanisms, completely dynamically and automatically. FAST (Fully Automated Storage Tiering) is huge – particularly in the coming world with two very divergent tiers (hyper-fast EFD and hyper-large SATA). Now, I’ve talked about this in a few places before, but the picture is clearer now – this will work without any external policy engine (optimizing with no external input), but it will come to full fruition with vApp metadata and the vStorage APIs in the vSphere roadmap. For example, you may be OK with virtual machine A getting lower performance than virtual machine B – so “balance all available resources” is good, but not as good as “prioritize this VM to deliver these performance characteristics” (there’s a rough sketch of that policy idea right after this list).
- “Technology must contain cost” – efficiency needs to operate at every level: Virtual Provisioning (be thin and fluid), hit capacity goals as effectively as possible (for some workloads dedupe, for others SATA, for others compression), hit IOPS goals as effectively as possible (large caches – but more than anything, EFD is changing the game here) – but there’s another factor: consolidation and management.
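As promised in the “DRS for Storage” bullet above, here’s a rough sketch of that per-VM policy idea – mine, purely illustrative of where vApp metadata plus the vStorage APIs could take this, not a shipping interface. The policy fields and service-level names are assumptions; the point is that a VM’s declared service level can drive both the tier choice and the IO priority.

```python
# Illustrative only: per-VM storage policy driving tier + QoS decisions.
# The policy fields and service-level names are assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class VMStoragePolicy:
    vm_name: str
    service_level: str        # e.g. "gold", "silver", "bronze"
    min_iops: int             # performance floor the array should aim for
    latency_ms_target: float  # acceptable response time

def placement_for(policy):
    """Map a VM's declared service level to a tier and an IO priority."""
    if policy.service_level == "gold":
        return {"tier": "EFD", "io_priority": "high"}
    if policy.service_level == "silver":
        return {"tier": "FC", "io_priority": "normal"}
    return {"tier": "SATA", "io_priority": "low"}

# "Prioritize this VM to deliver these performance characteristics"
# rather than just "balance all available resources":
vm_a = VMStoragePolicy("print-server", "bronze", min_iops=50, latency_ms_target=20.0)
vm_b = VMStoragePolicy("oltp-db", "gold", min_iops=5000, latency_ms_target=1.0)
for vm in (vm_a, vm_b):
    print(vm.vm_name, placement_for(vm))
```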
Think about that…. Hyper-scale, hyper-consolidated environments like the idea of:
- A single vSphere cluster that consists of 64 ESX hosts, with 4096 cores, 32TB of RAM, and 6 million IOPS – the largest thing of its kind, and all centrally managed as an aggregate pool
- A single Cisco UCS design that can have 320 blades with 2590 cores and 62TB of RAM – the largest unified compute system of its kind, and all centrally managed as an aggregate pool
- A single Symmetrix V-Max that can have thousands of cores, thousands of ports, hundreds of TB of memory, and hundreds of PB of storage – the largest hyper-consolidated single array (one that can span sites) – all managed as an aggregate pool.
Even better – through vStorage and vNetwork integration, increasingly these can be managed as a unit (as can anything designed against these philosophies and APIs).
For those of you that aren’t at Symmetrix V-Max scale but like what you see…. EMC spent $1.8B last year on R&D – more than the rest of our industry put together. That’s not to disparage anyone else. Heck, frankly, many projects are easier when you’re small! What it does mean is that you’ll see good cross-pollination here, and it’s a two-way street. Things like FAST, or the dedupe/compression techniques on Celerra, or the ____ on CLARiiON (wait for it!) all cross-pollinate.
Like I said when I started - an exciting new day!
WOW! And I was considering going without a Symm in our new datacenter; I might certainly be thinking otherwise now.
Posted by: David Robertson | April 14, 2009 at 09:41 AM
Excellent posting Chad.
thanks,
Elias P.
Posted by: Elias Patsalos | April 14, 2009 at 11:33 AM
WOW... EMC takes it to the next level.. again.
Posted by: Paul Wegiel | April 14, 2009 at 04:38 PM
I’d like to make a few comments about the four numbered points in your posting.
1. The choice of chip is one of the engineering decisions that goes into every system, but it’s irrelevant to the user, since what counts is what can be done with the system and how much real-world work the total system performs for the user. The Storage Performance Council has recognized the Hitachi Universal Storage Platform V (USP V) as the Highest Performing Storage System in the Industry. The SPC-2 benchmark results reaffirm several key benefits the Hitachi USP V platform provides customers, including:
Greater productivity, increased efficiency, lower TCO, and reduced risk.
How about EMC publishing some industry-standard benchmark figures for the Symmetrix?
2. You talk about hammering the VM message for x86 workloads but make no mention of virtualizing external storage assets. The USP V and the smaller-footprint USP VM can both virtualize Hitachi storage and a wide range of storage systems from other manufacturers. Customers need to be able to virtualize all their storage assets and manage their total storage pool from a single pane of glass with one management interface.
3. The ability to move data between storage tiers has been available from Hitachi since 2000 with the Lightning 9900; since 2004 Hitachi has extended automated tiering with the USP to externally attached storage, and with the USP V to thin moves, copies, replication, and migration. Even now, with the V-Max, EMC customers will still need to use external software to migrate, move or copy data from the DMX to the V-Max.
4. Indeed, as you say, “Technology must contain cost” but what about a very important cost containment – the ability to virtualize and manage external multivendor storage? A single USP V can manage up to 247PB of total storage capacity.
Hu Yoshida has written a blog which goes into this in more detail:
http://blogs.hds.com/hu/2009/04/dont-confuse-symmetrix-v-max-with-storage-virtualization.html
Posted by: XRbight | April 20, 2009 at 03:31 PM
XRbight - thanks for the comment.
1) I'm not sure I agree. Using (and being frank here) commodity components means a more cost-effective design, but more importantly, we've seen a cross-over where x86-64 is simply starting to deliver significantly better performance, and VERY much better price/performance. I suppose I ultimately agree with the point that it's an internal design choice - and that price/performance is the expression of it, which customers will judge. There is another factor though - this lets us leverage everything else that's happening in the x86-64 space, and we've managed to do it while maintaining the global cache characteristics that are one of the defining elements of these enterprise-class arrays.
2) We have published (already) loads of performance testing and results on the V-Max (and many EMC arrays). EMC generally doesn't think that the SPC-1 and SPC-2 tests are particularly useful - I've never found a customer running an SPC-1/SPC-2 workload. Every database workload is so wildly variable that it's tricky. At the really high end, you and I both know that HDS and EMC will put our systems in side by side and run the actual customer workload. We do that all over the market, and generally EMC comes out on top - but every customer varies. Now, we do publish performance results where the benchmark reflects something practical. For example, Exchange Jetstress/Loadgen are pretty close approximations of a given Exchange workload, and we publish those out the yin-yang. My last comment - I think that the SPC-1 and SPC-2 (though this applies to all benchmarks) are particularly off when you're talking about these enterprise-class arrays (of which the USP V is certainly EMC's strongest competitor), which are designed to support a whole ton of different workloads at the same time.
3) Ah, the joy of the "competitive talk sheet" :-) That's why I try not to talk about others - it's too easy to be wrong. You don't need external software to migrate between the DMX and V-Max. Also, I've got to say - in the Virtual Datacenter, for 100% non-disruptive migrations (in all heterogeneous configurations) we're finding that Storage VMotion does a REALLY REALLY good job :-) Some customers want to put a device BETWEEN their host and their heterogeneous arrays (USP V, IBM SVC, EMC Invista), but often the downsides of the "man in the middle" make this a Faustian bargain. It is a legitimate design choice - but I've got to say, if data mobility between heterogeneous platforms is the ONLY use case, I wouldn't want to design for that and give up everything else. I would virtualize everything and use Storage VMotion.
4) It's the same trade-off as above - storage virtualization (as defined by the "stick something between your host and your array" approach of the USP V/SVC/Invista/vFiler) just has not taken off. I think the reason is that the use case is so narrow. I'd also say that "containing the cost" of your arrays by sticking an expensive thing in the middle just doesn't make sense to me.
Hu is certainly entitled to his opinion, and as a strong EMC competitor, I wouldn't expect him to say "yup, it's great, oh well". Barry has done a more complete discussion on this (and I'm sure his site will be ground zero for the HDS/IBM response) here http://thestorageanarchist.typepad.com/weblog/2009/04/1058-v-max-does-what-hi-star-cant.html#more
Posted by: Chad Sakac | April 20, 2009 at 07:02 PM