Well, the “cut right to the chase” answer is more Ethernet.
Looking back at my public record on this, I’m pretty satisfied that I’ve been consistent. Started here. Then here. Then here. And of course, the “Multivendor” iSCSI and NFS posts. Ethernet will be the storage standard in almost all use cases; it’s just a matter of time.
I’m also happy to see that EMC is living up to the commitment I made publicly here when we launched UltraFlex. That commitment was that we were making I/O choice fluid, dynamic and modular – so that as new technologies arrive, as price points hit sweet spots, and as standards mature enough for solid interop – our customers can non-disruptively add them to their existing arrays.
Customers who bought EMC gear with 4Gb FC could add 1GbE iSCSI at launch. Today they can add 8Gb FC, and 10GbE iSCSI and 10GbE NAS connectivity non-disruptively. This means all the thousands and thousands of customers can simply add the new protocols – no forklift, no downtime. Ultraflex modules are customer-installable. Commitment made, commitment delivered.
I think that’s the sort of thing customers look for in vendors that want to be partners.
So – what’s next? Well, obviously FCoE. FCoE is still in its formative stages. Certain parts are mature; others are still coming along. Stu Miniman does a great job covering the current state of the union here. It is, however, mature enough to start using today – just use it first where it’s mature (host to switch). So what about end-to-end models?
We do think that FCoE is important to our customers – and know that they want it end-to-end. As I’ve stated in the past, FCoE isn’t about “extending FC”, or about being “better/worse than iSCSI/NAS”. It’s not “either/or” – it’s “AND”. It’s about removing any reason not to converge the networks. An FCoE adapter is more accurately described as a NIC that also does FCoE.
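To make that last point concrete, here’s a small sketch (hypothetical code, not any vendor’s driver) of what “a NIC that also does FCoE” means on the wire: a converged adapter is an Ethernet NIC that steers incoming frames by EtherType – 0x8906 (FCoE) and 0x8914 (FIP) go to the FC stack, everything else to the regular network stack.

```python
# Hedged sketch of CNA frame steering -- a toy classifier, not real firmware.
ETHERTYPE_IPV4 = 0x0800
ETHERTYPE_FCOE = 0x8906  # assigned to FCoE in the T11 FC-BB-5 work
ETHERTYPE_FIP  = 0x8914  # FCoE Initialization Protocol (discovery/fabric login)

def classify(frame: bytes) -> str:
    """Return which stack a converged adapter would hand this frame to."""
    ethertype = int.from_bytes(frame[12:14], "big")  # bytes 12-13 of the Ethernet header
    if ethertype in (ETHERTYPE_FCOE, ETHERTYPE_FIP):
        return "fc-stack"       # encapsulated Fibre Channel traffic
    return "network-stack"      # ordinary Ethernet/IP traffic

# 12 bytes of MAC addresses, then the EtherType:
fcoe_frame = b"\x00" * 12 + b"\x89\x06" + b"..."
ip_frame   = b"\x00" * 12 + b"\x08\x00" + b"..."
print(classify(fcoe_frame), classify(ip_frame))  # fc-stack network-stack
```

The point of the sketch: FC traffic rides natively in Ethernet frames alongside IP – there is no second fabric, just a second EtherType.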
To that end, we’ve been working furiously in the standards bodies and on interop. We’re actively selling and supporting Cisco’s and Brocade’s FCoE/FC switches.
We intentionally didn’t put Gen1 ASICs in our array targets (even though we uniquely could have upgraded later by simply hot-swapping to Gen2 UltraFlex modules). That would not have been customer-centric thinking – the technology just wasn’t ready. The Gen2 CNAs are now available in volume.
NOTE: for those of you adopting now on vSphere with Gen2 CNAs from Emulex and QLogic – I’ve seen a pop in the number of questions about Gen2 CNAs on the EMC support matrix and the VMware HCL. They will be on the EMC October ESM update. Re: the VMware HCL, the interface vendors go through an IOVP certification harness with VMware, and the harness for vSphere and FCoE CNAs isn’t done yet. It’s a VERY high priority and is firm for Q4 (and will likely be soon).
So – what about end-to-end designs that include the array target? Well – in the same way that at the CX4 launch I posted a picture of the 10GbE and 8Gb UltraFlex engineering prototypes in the basement of my house (a weird thing perhaps, but it highlights that this is real and we’re not just talking smack), here is the UltraFlex FCoE card.
There’s still work to be done – in engineering, in interop, in the standards (see Stu’s post). But….
Commitment made. No marketing-centric efforts here. Move forward with FCoE with confidence in your hosts and in your aggregation switches, and start making FCoE part of your plans. You will be able to use it with your EMC infrastructure. You can count on us to deliver array targets when the time is right.
What do you mean by "10GbE NAS connectivity" in regard to CX4 platform?
Posted by: Krzysztof Cieplucha | September 28, 2009 at 07:54 AM
UNIVERSITY OF ARIZONA CONSOLIDATES NETWORKS WITH FCOE
http://go.techtarget.com/r/9364852/8303685
Looks like it isn't just a "future" for EMC.
Posted by: Steven Schwartz - The SAN Technologist | September 30, 2009 at 12:04 PM
@Krzysztof - The modern Celerra data movers use the same storage engine design we use across EMC's storage platforms (and support the UltraFlex I/O modules). They have had 1GbE and 10GbE support for a while. The future for us is clear - a common platform, and where functionality can't be delivered in a given configuration, the "personality" (the software on the storage engine) is the difference.
@Steven - EXACTLY. Customers should be evaluating and adopting FCoE (on the host-to-switch leg) with confidence, as you can see from the University of Arizona case.
And in that article, the customer stated (paraphrasing) - we're interested in end-to-end, and know it's on the N7K and EMC roadmaps. Bingo. The article (and photo) above, along with our history of delivering non-disruptive UltraFlex I/O modules, is why they should know that when it's ready, they can add it to their EMC infrastructure.
Posted by: Chad Sakac | September 30, 2009 at 12:55 PM
FCoE is really important for people thinking about their DC networks over the next few years. Hence I asked Brad Wong, product manager for Nexus, about the state of FCoE and where to use it today. I could have asked him all sorts of things, but I thought this was the most important. This was from Cisco Networkers this week - see the video here: http://rodos.haywood.org/2009/10/brad-wong-talks-about-fcoe.html
Rodos
Posted by: Rodos | October 02, 2009 at 05:30 AM
Still don't get FCoE... It's packing useful data payload inside a SCSI frame, an FC frame and finally an IP frame... Sorry, but this doesn't make any sense... Useful data is in fact maybe 10-15% of all data transmitted over some sort of physical connection (copper, light)...
My opinion is that FCoE is one step towards a unified protocol (SCSI, FC, IP, IB, etc.), but one that will use a unified frame and increase the useful payload to, I believe, at least 50-60%... The only problem I see is that FCoE is a step in the wrong direction... :(
10GbE is heating up processors (on any kind of bus adapter), and losing 90% of useful data while heating up the machines doesn't seem to be cool technology...
Posted by: Calypso | October 08, 2009 at 07:02 PM
Calypso, FCoE doesn't get encapsulated in IP - only in Ethernet frames. The standard also extends the Ethernet frame size so a full FC frame fits without fragmentation.
Just as importantly, the Gen2 CNAs have a lower TDP and power consumption than what most customers run today, which is Ethernet NICs plus FC HBAs - so, cheaper and lower power. Not less than 1GbE alone, to be sure, but less than 1GbE + HBAs.
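To put numbers on the overhead question, here's a back-of-envelope sketch using the frame sizes from the T11 FC-BB-5 work (an illustration of the arithmetic, not measured data): the Ethernet and FCoE framing around a full-size FC frame is tiny relative to the payload.

```python
# Rough payload-efficiency sketch for a full-size FCoE frame (FC-BB-5 sizes).
ETH_HEADER   = 14    # dest MAC + src MAC + EtherType
VLAN_TAG     = 4     # 802.1Q tag (carries the lossless-Ethernet priority)
FCOE_HEADER  = 14    # version, reserved bytes, SOF delimiter
FC_HEADER    = 24    # native Fibre Channel frame header
FC_PAYLOAD   = 2112  # maximum FC data field
FC_CRC       = 4     # FC frame CRC
FCOE_TRAILER = 4     # EOF delimiter + reserved
ETH_FCS      = 4     # Ethernet frame check sequence

fcoe_frame = (ETH_HEADER + VLAN_TAG + FCOE_HEADER + FC_HEADER
              + FC_PAYLOAD + FC_CRC + FCOE_TRAILER + ETH_FCS)
print(fcoe_frame)                                      # 2180 -- hence "baby jumbo" frames
print(round(100 * FC_PAYLOAD / fcoe_frame, 1))         # ~96.9% of the bits are data
```

In other words, the encapsulation overhead on a full frame is on the order of 3%, not 85-90% - and because there's no TCP/IP layer, there's no per-packet IP/TCP header cost on top of that.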
Posted by: Chad Sakac | October 23, 2009 at 05:10 PM