Today - the CLARiiON CX4 launched - and it is a big release. You can see more details here.
From my perspective, there's lots of cool stuff (read and come to your own conclusions) but as you can guess from my earlier post on I/O consolidation and contention in the VMware environment (a nice way to say potential bottleneck) here, my main interest is in the UltraFlex I/O modules. This blatant marketing blurb has a picture of the back of one of the new arrays - those are all removable (hot-swappable!) IO blades.
Why are these important? The following are true statements from where I sit, and they lead (at least for me) to a couple of obvious conclusions about where datacenters are headed:
- Any x86 workload can be virtualized, and what can be done, will be done (we've shown only a small sampling of that here, here, and here). There are too many good reasons to do this. This will include all sorts of workloads that, even on their own, have a heavy I/O impact. Put them together and it's straight addition.
- Consolidation ratios are only going to increase. With Intel (and in this cycle, AMD to a lesser extent - but I'm sure they will come out swinging) making a quad-core proc for $250 now and setting clear expectations for 8-core and more in 2009, and with memory innovation to come, we will quickly move from 10:1 to 20:1 (I would argue we're already well past that!) to 40:1 to 100:1 and beyond.
BTW, please think about what that sort of hyper-consolidation future implies about: 1) Memory Page Sharing (aka memory dedupe) and those that CAN do it (VMware) and those that can't (Hyper-V and Xen) - there's a toy sketch of the idea right after this list; 2) whether you care that you can do live, non-disruptive movement of VMs when you have 100 on a single host - is that going to become more important, or less?
- The bottleneck is moving to the I/O layer (both the network and storage transport and the back-end). This is particularly acute on network and IP storage today (again, know that I'm an IP super-fan, and no fanboi of FC for its own sake) - where many, many GbE interfaces off a single server are common, and blades once again come into vogue, not for power/space/density issues (VMware makes power/space/density per VM the only question) - but rather for I/O aggregation/virtualization/management reasons.
- Above all, flexibility is paramount (i.e. the ability to non-disruptively adapt to unforeseen changes) with things like VMotion and Storage VMotion - and those constructs will increasingly appear in all parts of infrastructure.
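For anyone who hasn't looked at how memory page sharing actually works, here's a toy sketch of the content-based idea: hash identical guest pages, keep one physical copy, and map every VM to it copy-on-write. This is purely illustrative Python of my own, not VMware's implementation - a real hypervisor does this at the physical page level with chained hash buckets and full byte compares.

import hashlib

class PageSharingPool:
    """Toy model: identical guest memory pages are stored once and reference-counted."""
    def __init__(self):
        self.pages = {}      # sha256 digest -> the single stored copy of that page
        self.refcount = {}   # sha256 digest -> how many VM mappings point at it

    def map_page(self, page: bytes) -> str:
        digest = hashlib.sha256(page).hexdigest()
        if digest in self.pages and self.pages[digest] == page:
            # Same content is already resident: share it copy-on-write instead of duplicating.
            self.refcount[digest] += 1
        else:
            # New content (hash collisions ignored in this toy): keep its own copy.
            self.pages[digest] = page
            self.refcount[digest] = 1
        return digest

pool = PageSharingPool()
zero_page = bytes(4096)              # zeroed pages are common across freshly booted guests
k1 = pool.map_page(zero_page)        # first VM: the page gets stored
k2 = pool.map_page(zero_page)        # second VM: identical page is shared, not copied
assert k1 == k2 and pool.refcount[k1] == 2

Now multiply that by 100 VMs per host booting the same guest OS, and you can see why page sharing matters more, not less, as consolidation ratios climb.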
Now, making storage built for VMware is only part of EMC's strategy - our view is that everything needs to adapt to a world where nearly every host is a hypervisor, and every app is a VM or VM appliance. This affects infrastructure operations (backup/recovery/DR, etc), management (understanding and adapting to pervasive mobility, PtoV mapping and relationships) and skillset (we're at 400 VCPs and still adding at 50/quarter).
BUT storage is an important part of the strategy - so what are we doing about it? Read on....
Today, our mid-range gets updated - and you can have any I/O configuration that you need. It comes first in the CLARiiON family, but anyone who knows EMC knows that our mid-range hardware is shared - i.e. a CLARiiON storage processor is a lot like a Celerra Datamover, so expect to see large scale 64-bit multi-core Intel-based brains with loads of memory and huge backend IO expandability to become standard in all our stuff.
We get slammed and caricatured as a hardware company by many folks, but the reality is that we do billions in software, both pure software and software running on arrays. Ironically, that's more revenue in pure software than the total revenues of those poking fun. I'll speak for myself, and call it "Chad's Axiom": "no solution is just software, no solution is just hardware, and anyone that thinks it's the same solution regardless of requirements is living in a mythical land where unicorns live." And, like all axioms, there's a corollary to this one too: "it's more about know-how than it is about any of that other stuff". Listen to infrastructure folks who come to the table with expertise about your CONTEXT. This stuff is complicated - which makes learning and adapting, and innovating fun - for all of us.
That said, we DO make our own hardware - and we do think it's an important part of infrastructure. We're not ashamed of it, we're proud of it. It is a huge undertaking to make a big change like we just did. So why the big change on the back-end? This serves a couple of technical purposes:
- You can have very high density port configurations in a small footprint, with low power consumption. This is particularly important in iSCSI-centric (and, when we do the same on the Celerra, NFS-centric) configurations. To put it in perspective, the smallest new CX4 could be configured with 16 1GbE interfaces in a 2U footprint, and the largest can have 32 ports in a 4U footprint. And that's with the CURRENT IO modules - looking at them, we could fit more ports into a single module in the future. Here's a picture of the CX4-960 storage processor backend (with a bunch of different modules and two open slots).
- It's hot-pluggable. To me, this is critical for the idea to really work - it's the same core idea as ESXi: a plug-and-play, flexible model. If ESXi, along with neato stuff coming soon from VMware and the existing VM HA/DRS capabilities, makes compute (memory/processor) a plug-and-play model, this is the analogous model for storage. We (along with everyone) have had plug-and-play drives for a long time (i.e. add capacity and IOPS on the fly, non-disruptively, modularly or into a frame), and some have had the ability to add IO processing capabilities and ports dynamically (thinking of EqualLogic and LeftHand), but most mainstream vendors have not.
- Flexible answer to "what's next". Frankly, the answer here isn't conclusive - and likely won't be for some time. This post discussed 10GbE (which I'm a big believer in), but the question (read the comments) is when, and what PHY? This gives the customer (and EMC!) the opportunity to adapt mid-cycle. Newer hardware, adopted faster, means more flexibility - and industry inflection points in the next few years look to be inevitable. Will it be 10GbE with software initiators/NAS, hardware initiators (i.e. iSCSI HBAs), or 10GbE FCoE? The answer is likely to be a mix of all of them. We're ready. Check out what I have in my grubby little hands - literally some of the few available anywhere in the world!
The one on the left is an 8Gb FC UltraFlex IO module - 4 ports. The one on the right is a 10GbE UltraFlex IO module - there are 2 10GbE ports, and a diagnostic port in the middle. These are still early (but working) engineering samples. It's interesting to note how massive the heatsink on the 10GbE module is compared with its 1GbE brethren. It highlights the power issue that still exists with all the 10GbE MAC/PHY ASICs. But hey, when they are ready, we are too!
What is true is that all those points are very important in the storage layer of the next-generation datacenter - you will have:
- a smaller number of hosts generating a LOT more I/O
- a need for faster transports (10GbE), and higher port density where a slower transport (1GbE) is used
- end-to-end QoS mechanisms since it will be a unified fabric
- dynamic flexibility to add/remove/change all elements on the fly, non-disruptively
- certain core features that were the battleground of yesterday becoming a given (virtual provisioning, dedupe, writeable snapshots, dynamic and automated provisioning, performance auto-tuning, etc.) - every vendor has their own spin on these things, and they all have a place in the VMware pantheon - customers should look at them and decide on their merits. We can do them all, and we each think we do them better than others. Most customers I talk to don't even leverage what they already have :-)
Software was as much a part of this refresh as anything else (virtual provisioning, increases across all major array functions, spin-down, etc.), and as I've said, it's about all the parts of the infrastructure having a management model built for the VMware world (our first steps have been integrating everything with the VC and ESX APIs) - but more on that later.
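Just to make that idea concrete before the longer post: one of the simplest things VC API integration enables is mapping every VM to the datastores (and therefore the arrays) it lives on. Here's a purely illustrative sketch of my own using the pyVmomi Python bindings for the VI API - the hostname and credentials are placeholders, and this is NOT the actual integration we ship:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                     # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",              # placeholder VirtualCenter hostname
                  user="administrator", pwd="password",    # placeholder credentials
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)    # walk the whole inventory
    for vm in view.view:
        datastores = [ds.name for ds in vm.datastore]      # datastores backing this VM
        print(vm.summary.config.name, "->", ", ".join(datastores))
    view.Destroy()
finally:
    Disconnect(si)

Once you have that VM-to-datastore-to-array relationship, things like replication, snapshots, and capacity planning can be expressed in terms of VMs rather than LUNs.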
The CLARiiON is looking more and more like a Symmetrix these days and vice versa... :P
Posted by: DenisG | August 06, 2008 at 12:43 AM
Hmmm, is this EMC's first FCoE ready offering?
http://blogs.cisco.com/datacenter/comments/our_fcoe_partners_are_delivering/
EMC also has a WebCast on the new CX4 Platform.
http://www.emc.com/events/2008/q3/08-26-08-clariion-cx4.htm
Posted by: DenisG | August 06, 2008 at 12:54 AM
Hey, you get to play with all of the latest and greatest toys - that's not fair!
Seriously, kudos to EMC for bringing out the CX4. Your customers (me) are taking note of the rate at which you're bringing quality product to market, especially compared to competitors.
Now you say: "the infrastructure having a management model built for the VMware world (our first steps have been integrating everything with the VC and ESX APIs) but more on that later".
You're such a tease! Can't wait to read about that.
Posted by: Virtual_JTW | August 08, 2008 at 08:35 AM
Virtual_JTW - it's gonna knock your socks off!
Thanks for the comment on quality - we take it really, REALLY seriously.
Thanks for being a customer!!!!
Posted by: Chad Sakac | August 08, 2008 at 12:10 PM
Chad! The CX4 looks like the array we've been waiting for! We've made a big investment in 10GbE this past year (partly in anticipation of iSCSI). And the year before that we became 90% virtualized using VMware Server (moving to VI3 in Q4). The CX4 looks like the final piece to our data center puzzle. The problem is I don't think we're going to need more than 10TB of storage at first, do you think it's a good fit? The AX4 certainly looks like an appetizing option, but no 10GbE.
Posted by: Zack | August 08, 2008 at 12:30 PM
Zack, thanks for asking! You have to know that posts like yours literally make our product teams smile - people love their baby. I do too, but it's not the same - it's like your mom telling you that your new baby is cute :-)
While I'm not on the sales side of EMC (i.e. I never use our quoting system), I can tell you (as I actually buy equipment for all these projects and to provide to VMware folks that need them) that the CX4-120 is very cost-competitive.
The modular IO complex definitely adds cost in parts/manufacturing, so it's going to be a while (certainly a generation) before that idea goes DOWN a category (i.e. the AX is an entry-class, but scalable, array; the CX family is a mid-range array).
Seriously, what you should do is get a quote - from any EMC partner and come to your own conclusion. You can literally start with only a few drives, and no advanced features, then add capacity, features, and IO ports as you need them.
Find a partner this way:
https://crm.emc.com/OA_HTML/emc_pvLCdPartnerSearch.jsp
Or, if you want to talk to EMC directly, you can call here: 866-438-3622.
Let me know how your VI3 and EMC deployment goes - and I hope to see you at VMworld!
Posted by: Chad Sakac | August 08, 2008 at 09:01 PM
Chad,
Another great post. Part of me wishes you would post more. On the other hand the way it is now, I know that when you do post something it's more than likely going to be well worth my time to read it.
The timing on this announcement was great. I had a meeting scheduled last week with my TC and sales rep. As fate would have it one of my major topics to discuss was that I was starting to get a bit tight on capacity for my VMFS storage on our DMX (for replicated guests) as well as on my CX-300 (for non replicated guests). Of course this is happening sooner than we had hoped to be upgrading our DMX.
For the most part the guests on the DMX are not there for performance reasons. I think the CX4 gives me a perfect route to go where I can replace the CX-300 and move most if not all of the guests from my DMX to a CX4 as well as add capacity for a few years (hopefully) of growth.
It's looking like the CX4 will allow me to make this investment to provide for our current needs while being comfortable in the knowledge that we have the flexibility we need going forward.
As an added benefit I will be able to free up space on my DMX that can be reallocated to the mainframe and/or open systems applications that share the DMX. This will probably add a year or more of life to the DMX.
The CX4 will dovetail quite nicely with the redesign of our VI3 infrastructure that I am planning to implement in the next 6 months.
The CX4 is another example that reaffirms my faith that we have made the right decision to align ourselves with companies such as VMware and EMC.
Posted by: RodG | August 23, 2008 at 12:41 AM
RodG - thank you - it means a lot to the people who build those products, the people who bring them to you, and those who service you.
It's our pleasure!
Also - FYI - your DMX now supports Virtual Provisioning (Enginuity 5773) - ask your team for an update discussion!
Posted by: Chad Sakac | August 27, 2008 at 09:06 PM
Great post.
Roger L
http://rogerlunditblog.blogspot.com/
Posted by: Roger Lund | December 15, 2008 at 05:18 PM