Today, the CLARiiON CX4 launched - and it is a big release. You can see more details here.
From my perspective, there's lots of cool stuff (read and come to your own conclusions), but as you can guess from my earlier post on I/O consolidation and contention in the VMware environment (a nice way to say potential bottleneck) here, my main interest is in the UltraFlex I/O modules. This blatant marketing blurb has a picture of the back of one of the new arrays - those are all removable (hot-swappable!) I/O blades.
Why are these important? The following are true statements from where I sit, and they lead (at least for me) to a couple of obvious conclusions about where datacenters are headed:
- Any x86 workload can be virtualized, and what can be done, will be done (we've shown only a small sampling of that here, here, and here). There are too many good reasons to do this. This will include all sorts of workloads that, even on their own, have a heavy I/O impact. Put them together and it's straight addition.
- Consolidation ratios are only going to increase. With Intel (and in this cycle, AMD to a lesser extent - but I'm sure they will come out swinging) making a quad-core proc for $250 now, setting clear expectations for 8-core and more in 2009, and memory innovation to come, we will quickly move from 10:1 to 20:1 (I would argue we're already well past that!) to 40:1 to 100:1 and beyond.
BTW, please think about what that sort of hyper-consolidation future implies about: 1) Memory Page Sharing (aka memory dedupe) and about those that CAN do it (VMware) and those that can't (Hyper-V and Xen); 2) whether you care that you can do live, non-disruptive movement of VMs when you have 100 on a single host - is that going to become more important, or less?
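To make the memory dedupe point concrete, here's a toy sketch of content-based page sharing (in Python, with made-up 4KB pages - this is an illustration of the general idea, not how any hypervisor actually implements it): identical pages across VMs get stored once, so VMs running the same guest OS need far fewer physical pages than the sum of their footprints.

```python
import hashlib

def share_pages(vms):
    """Toy model of content-based page sharing: hash each page and keep
    only one physical copy per unique hash; duplicates are shared."""
    store = {}
    for pages in vms.values():
        for page in pages:
            key = hashlib.sha256(page).hexdigest()
            store.setdefault(key, page)  # first copy wins, the rest share it
    return store

# Three hypothetical VMs running the same guest OS share the common page.
os_page = b"\x00" * 4096            # e.g. a common guest-OS page
app_a, app_b = b"A" * 4096, b"B" * 4096
vms = {
    "vm1": [os_page, app_a],
    "vm2": [os_page, app_b],
    "vm3": [os_page, app_a],
}
shared = share_pages(vms)
print(len(shared))  # 3 unique pages instead of 6
```

The higher the consolidation ratio, the more duplicate guest pages there are on a host - which is why this matters more at 100:1 than at 10:1.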
- The bottleneck is moving to the I/O layer (both the network and storage transport and the back-end). This is particularly acute on network and IP storage today (again, know that I'm an IP super-fan, and no fanboi of FC for its own sake) - where many, many GbE interfaces off a single server are common, and blades once again come into vogue - not for power/space/density issues (VMware makes power/space/density per VM the only question) - but rather for I/O aggregation/virtualization/management reasons.
- Above all, flexibility is paramount (i.e. the ability to non-disruptively adapt to unforeseen changes) with things like VMotion and Storage VMotion - and those constructs will increasingly appear in all parts of the infrastructure.
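The "straight addition" point above is worth a back-of-envelope sketch (hypothetical per-VM numbers, purely illustrative): the I/O a consolidated host must drive is roughly the sum of its guests' individual profiles, which is exactly why the bottleneck moves to the I/O layer as ratios climb.

```python
# Hypothetical per-workload I/O profiles before consolidation.
workloads = [
    {"name": "exchange", "iops": 2000, "mbps": 40},
    {"name": "sql",      "iops": 5000, "mbps": 80},
    {"name": "web",      "iops": 300,  "mbps": 10},
]

# Consolidate them onto one host: the host's I/O load is the sum.
total_iops = sum(w["iops"] for w in workloads)
total_mbps = sum(w["mbps"] for w in workloads)
print(total_iops, total_mbps)  # 7300 IOPS, 130 MB/s from a single host
```

Scale that from three guests to forty or a hundred, and it's easy to see why the transport and the array back-end become the thing to watch.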
Now, making storage built for VMware is only part of EMC's strategy - our view is that everything needs to adapt to a world where nearly every host is a hypervisor, and every app is a VM or VM appliance. This affects infrastructure operations (backup/recovery/DR, etc), management (understanding and adapting to pervasive mobility, PtoV mapping and relationships) and skillset (we're at 400 VCPs and still adding at 50/quarter).
BUT storage is an important part of the strategy - so what are we doing about it? Read on....
Today, our mid-range gets updated - and you can have any I/O configuration that you need. It comes first in the CLARiiON family, but anyone who knows EMC knows that our mid-range hardware is shared - i.e. a CLARiiON storage processor is a lot like a Celerra Datamover, so expect to see large scale 64-bit multi-core Intel-based brains with loads of memory and huge backend IO expandability to become standard in all our stuff.
We get slammed and caricatured as a hardware company by many folks, but the reality is that we do billions in software, both pure software and software running on arrays. Ironically, that's more revenue in pure software than the total revenues of those poking fun. I'll speak for myself, and call it "Chad's Axiom": "no solution is just software, no solution is just hardware, and anyone who thinks it's the same solution regardless of requirements is living in a mythical land of unicorns." And, like all axioms, there's a corollary to this one too: "it's more about know-how than it is about any of that other stuff". Listen to infrastructure folks who come to the table with expertise about your CONTEXT. This stuff is complicated - which makes learning, adapting, and innovating fun - for all of us.
That said, we DO make our own hardware - and we do think it's an important part of infrastructure. We're not ashamed of it; we're proud of it. It is a huge undertaking to make a big change like we just did. So why the big change on the back-end? It serves a couple of technical purposes:
- You can have very high-density port configurations in a small footprint, with low power consumption. This is particularly important in iSCSI-centric configurations (and, when we do the same on the Celerra, NFS-centric ones). To put it in perspective, the smallest new CX4 can be configured with 16 1GbE interfaces in a 2U footprint, and the largest can have 32 ports in a 4U footprint. And that's with the CURRENT I/O modules - looking at them, we could fit more ports into a single module in the future. Here's a picture of the CX4-960 storage processor backend (with a bunch of different modules and two open slots).
- It's hot-pluggable. To me, this is critical for the idea to really work - it's the same core idea as ESXi: a plug-and-play, flexible model. If ESXi - along with neat stuff coming soon from VMware and the existing VM HA/DRS capabilities - makes compute (memory/processor) "plug and play", this is the analog on the storage side. We (along with everyone) have had plug-and-play drives for a long time (i.e. the ability to add capacity and IOPs on the fly, non-disruptively, modularly or into a frame), and some have been able to add I/O processing capabilities and ports dynamically (thinking of EqualLogic and LeftHand), but most mainstream vendors have not.
- It's a flexible answer to "what's next". Frankly, the answer here isn't conclusive - and likely won't be for some time. This post discussed 10GbE (which I'm a big believer in), but the question (read the comments) is when, and what PHY? This gives the customer (and EMC!) the opportunity to adapt mid-cycle. Newer, faster hardware = more flexibility, with industry inflection points in the next few years looking inevitable. Will it be 10GbE with software initiators/NAS, hardware initiators (i.e. iSCSI HBAs), or 10GbE FCoE? The answer is likely to be a mix of all three. We're ready. Check out what I have in my grubby little hands - literally some of the few available anywhere in the world!
The one on the left is an 8Gb FC UltraFlex I/O module - 4 ports. The one on the right is a 10GbE UltraFlex I/O module - there are two 10GbE ports, with a diagnostic port in the middle. These are still early (but working) engineering samples. It's interesting to note how massive the heatsink on the 10GbE module is compared with its 1GbE brethren - it highlights the power issue that still exists with all the 10GbE MAC/PHY ASICs. But hey, when they're ready, we are too!
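For what it's worth, the port-density figures earlier in this post work out to the same ports-per-rack-unit at both ends of the line (a quick sanity check - the config labels here are mine, not product names):

```python
# (ports, rack units) from the post: smallest CX4 = 16 x 1GbE in 2U,
# largest = 32 ports in 4U. Density is ports per rack unit.
configs = {"cx4_smallest": (16, 2), "cx4_largest": (32, 4)}
density = {name: ports / ru for name, (ports, ru) in configs.items()}
print(density)  # 8 ports per rack unit across the range
```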
What's true is that all those points are very important in the storage layer of the next-generation datacenter - you will have:
- a smaller number of hosts generating a LOT more I/O
- faster transports (10GbE) - and higher port density where a slower transport (1GbE) is used
- end-to-end QoS mechanisms, since it will be a unified fabric
- the dynamic flexibility to add/remove/change all elements on the fly, non-disruptively
- Certain core features, which were the battleground of yesterday, become a given (virtual provisioning, dedupe, writeable snapshots, dynamic and auto provisioning, performance auto-tuning, etc.) - every vendor has their own spin on these things, and they all have a place in the VMware pantheon - customers should look at them and decide on their merits. We can do them all, and we each think we do them better than the others. Most customers I talk to don't even leverage what they already have :-)
Software was as much a part of this refresh as anything else (virtual provisioning, increases in all major array functions, spin-down, etc.), and as I've said, it's about all the parts of the infrastructure having a management model built for the VMware world (our first steps have been integrating everything with the VC and ESX APIs) - but more on that later.