…how’s that for a headline? :-)
A big part of today’s announcement (just warming up for VMworld :-) might have gotten buried underneath all the coolness of: 1) Unisphere; 2) sub-LUN FAST; 3) FAST Cache; 4) Compression; 5) VAAI firmware support; 6) a huge performance boost at the high end of the Celerra with new Westmere-based gateways; and 7) new lower-cost 100GB and 200GB solid state flash drives… but it was big news:
EMC supports simple, non-disruptive hot-add of native 10GbE FCoE to the huge installed base of EMC CLARiiON and EMC Celerra customers – opening up the converged storage network market like crazy.
So – what exactly am I talking about?
Well – in the VMware context, as this independent survey showed, FC dominates as the existing VMware storage protocol, with iSCSI and NFS growing quickly. What it doesn’t show, but is a fact, is that a huge number of customers are coming to realize the benefits of using NAS and block together. Here’s that same data sliced to show the percentage of customers using 1, 2, 3 or more protocols at once to support their VMware environments.
But the challenge with block storage is very basic – and right now it’s a fork. For customers with existing FC networks (a TON of them), there’s very little movement to iSCSI (some, sure, but not a lot – it’s a rounding error). For customers without an existing FC SAN, their first SAN is often iSCSI (not by definition, but commonly). That means the multitude of existing FC customers, with millions and millions of FC ports, have had institutional resistance to converging their networks onto 10GbE.
What FCoE does is give the masses a simple way to converge their networks. It’s really that simple. Could they converge using just iSCSI and NAS? Of course. You can point to some narrow use cases that can only be met with FCoE and not iSCSI, but it’s a very narrow set (real, but narrow).
As I wrote here a year ago (knowing that we were working on this native FCoE module) – there’s a lot to this story.
So, you could say (and as a technophile, I do) “sure – they could converge on iSCSI/NAS with 10GbE”, but that misses a bit of, shall we say, the human dynamic.
FCoE provides a very simple value proposition: wire once, save money, improve performance, change nothing (operationally) – except, of course, if you need to change a LOT of kit.
Until now, that’s why the primary use case was “host to top-of-rack switch” – which represented no big change. In other words, it was nice, easy incrementalism: “when you buy a new host, consider replacing many wires with two, and using a converged adapter instead of multiple NICs and HBAs (either standalone from QLogic or Emulex, the leading two, with other interesting folks as well – or a converged adapter in your blade server, à la the Menlo or Palo cards in UCS)”.
On the target (the array), the challenge is that incrementalism is harder – you refresh your storage on a longer window, and adding a new array is, well… a disruptive, complex operation for the most part (Storage VMotion, storage virtualization and storage federation generally make it easier). So adding FCoE was a big step function when it was linked to a “get it with the new array you buy…” value prop.
UNTIL NOW.
What we’re delivering is:
- The ability to add native FCoE converged ports to any EMC CLARiiON CX4 or EMC Celerra Unified array sold in the last 2 years or so. That’s a LOT of customers from the leader in this space.
- The ability to do it in just a few minutes, and to do it non-disruptively. It’s a customer-installable option.
- The ability to add it without buying a new array, at the cost of just the UltraFlex IO module – a tiny fraction of the cost of a new array, so small that by comparison it’s practically free.
- The fact that it’s already on the VMware HCL, with many other OSes to come on the EMC E-Lab support matrix.
This opens up this market in a way that hasn’t been true until now (at least in my opinion).
So – how easy is it? Here’s me installing it, in just a few minutes. My 6-year-old could do this. Scratch that – my 4-year-old could do it (heck, they both play Civilization on my iPad :-)
(forgive the shaky cam)
This (amongst other reasons) is why we designed the IO complex (shared across ALL EMC platforms) to have IO interfaces that are modular and hot-swappable (see this post from a year ago on FCoE, and this one from two years ago on 10GbE).
That’s customer investment protection. We’re not perfect at it, but we try to keep it as central to our thinking as we can.
[UPDATE – August 25th, 2010 – a couple of points of clarification, since the PR reads “Q4 availability”: 1) there are a small number of FCoE UltraFlex IO modules available immediately for early customer evaluations; 2) mass manufacturing is expected in October; 3) in cases where there is either a unified configuration or a Celerra gateway on top of a CLARiiON, don’t use the FCoE SLIC until a DART 6.0 update, targeted for November]
FLARE 30 and DART 6.0 have actually been GA since early this month; we tend to do an internal availability period for a while before the public release. I’ve been using Unisphere for a while and it rocks. And yes, before you ask, the DART 6.0 UBER VSA will be out soon :-)
Remember, if you’re upgrading, to use the EMC Procedure Generator – it will walk you through everything you need. Also remember that if you have a Celerra, you need to upgrade to DART 6.0 before you upgrade to FLARE 30 (the Procedure Generator will tell you this; I’m just double-stating it).
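If it helps to see that ordering rule spelled out, here’s a tiny illustrative sketch in Python (mine, not an EMC tool – the Procedure Generator is the real source of truth, and the step names are hypothetical placeholders):

    # Illustrative sketch only (not an EMC tool): encodes the upgrade
    # ordering rule for a Celerra/unified configuration.
    # Step names are hypothetical placeholders.

    def upgrade_steps(has_celerra):
        """Return the firmware upgrade steps in the order they must run."""
        steps = []
        if has_celerra:
            # On a Celerra or unified config, DART goes to 6.0 FIRST...
            steps.append("Upgrade Celerra DART to 6.0")
        # ...and only then does the CLARiiON move to FLARE 30.
        steps.append("Upgrade CLARiiON FLARE to release 30")
        return steps

    for i, step in enumerate(upgrade_steps(has_celerra=True), 1):
        print("%d. %s" % (i, step))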
So – what do you think? Remember, this isn’t a holy crusade for FCoE vs. iSCSI vs. NAS (I’m a huge iSCSI and NAS fan) – it’s about making the move to converged 10GbE networks a no-brainer for our customers – all of them.
Wow, your basement is getting very populated! :) :) :) (Of course, I’m just joking… we all know it’s Jason Boche’s basement! hahaha)
Posted by: Mike Foley | August 24, 2010 at 01:19 PM
Hi Chad,
nice post as usual :-)
Maybe you want to update the older posts with the details of today’s announcement.
regards
Rainer
Posted by: Rainer | August 24, 2010 at 01:37 PM
I love the fact that you were thinking about it 2 years earlier. Someday I will be a vSpecialist, part of a team with influence. I love what you folks do over there.
At the moment I’m just a mere VCP & CCNA.
What would be most beneficial to the team... VCAP-DCD/VCAP-DCA, CCNP, or something else?
Feel free to email me.
Stefan
Posted by: StefanJagger | August 24, 2010 at 03:30 PM
Since I have a new, yet-to-be-powered-up CX4-240 (waiting for FLARE 30) that will connect to a new Cisco UCS, I think you should consider sending me a couple of the new FCoE cards and I'll put them into production with the rest of the system!
Posted by: JeffSessler | August 24, 2010 at 06:15 PM
One more question...
Do the FCoE cards support the use of SFP+ CU (copper) cables, or is it optics-only?
Posted by: JeffSessler | August 24, 2010 at 06:19 PM
Thanks for the updates Chad – good to see EMC delivering, especially w/ the option for older systems. NetApp may have had a lead in shipping native FCoE, but the ecosystem wasn't really ready for end-to-end. It's getting there now w/ additions such as VMware certification of FCoE (available now), UCS supporting direct array attachment (expected in Sept), and core directors (Cisco Nexus 7000 and Brocade DCX) with FCoE support starting to become available. Look forward to hearing from customers on the journey to convergence.
Posted by: Stuart Miniman | August 25, 2010 at 10:53 AM