UPDATED June 10th, 2009 to incorporate more on lossless Ethernet (was in “read more” section, pulled to front)
Congratulations to everyone who worked on the standard.
The FCoE standard celebrates its one-week anniversary today – the T11 standards body ratified FCoE as part of the FC-BB-5 standard on Wednesday, June 3rd.
I’m not so much an FCoE true believer as an Ethernet true believer. Coming to EMC from an iSCSI startup – I guess it was in the watercooler :-) As I’ve interacted with more customers, I’ve gained a better understanding of why FCoE is important. It offers a chance for a unified interconnect covering a whole gamut of use cases. Since all FCoE adapters are by definition Converged Network Adapters, customers can buy one adapter and use it for 10GbE LAN, 10GbE NAS, 10GbE iSCSI, and Fibre Channel via FCoE.
I still stick by the bet I made with Chuck (come on iSCSI!) – he and I disagree often, but boy, the conversation is always fun :-)
And you can see I’ve been interested in this for some time:
(BTW – this was June 2008 - was after a couple of months of being “deep dive” introed to what was at that point the very confidential Cisco project codenamed California)
So why is it important? On its own, iSCSI does not offer what FCoE brings to the table – a real chance to consolidate the networks across all current use cases/workloads (by eliminating the “but what about this host here…” excuse), which in turn enables reductions in cable and port count, lowers space/power requirements, and enables getting to a “cable once” (at least within major generational changes) model.
This doesn’t mean I’m saying iSCSI is bad or even “not as good” as FCoE. iSCSI is undoubtedly less expensive, and runs on a much, much broader set of equipment. iSCSI is the fastest growing storage segment by a long shot for all these reasons, and is often the protocol of choice for customers with no existing shared storage infrastructure. EMC is (and certainly I count myself in that) a huge iSCSI supporter. However, iSCSI does fundamentally have lossy characteristics (using TCP retransmits for data integrity), longer TCP/IP timeout characteristics, and other attributes that keep it from covering every FC use case. And without being able to cover those remaining FC use cases, there wouldn’t be an opportunity for convergence of transport. FCoE is that opportunity.
Now, there’s still more work to be done – as St0ragebear pointed out in the comments, and I think it’s worth pulling up to the front part of this article. First of all – now that the T11 standard is complete, there are other steps for it to become an ANSI standard, including a public review period. The bigger point was originally in the “read more” section, but I’ll pull it up here: Lossless Ethernet (CEE, DCE, IEEE Data Center Bridging) is still not a standard, and this will take more time. This is an important part of the FCoE idea. The IEEE 802.1Qbb (Priority Flow Control) project is approved, but the standard is not done. You can find more about that here: http://www.ieee802.org/1/pages/dcbridges.html … Now back to the original text….
I’m a glutton for punishment and love this stuff – it’s fascinating to look back at the minutes, see the progress, who’s driving what, seeing company names change (Nuova Systems). You can see that all here: http://www.t11.org/t11/docreg.nsf/gfcoemi?OpenView. It’s an easy way to see who’s driving, who’s participating (and how actively) and who’s following – at least from an engineering and standards standpoint.
Stuart Miniman from our Office of the CTO (who was part of the standards process) does a very good video update on FCoE (including examples of QLogic Gen 1 and Gen 2 Converged Network Adapters, or CNAs):
What does this have to do with VMware? VMware is one of the earliest, most potent use cases for FCoE and 10GbE generally. I said it back in June 2008 (and have said as much at various VMworld sessions with Cisco and VMware) and I’ll say it again – massive consolidation, coupled with massive multicore and huge cheap RAM moves bottlenecks to the server I/O layer. It’s fun to look back a year later and be borne out by what’s happened since.
EMC has been supporting FCoE for some time, and now with the standard ratified, it’s exciting times!
Read on for more gory details!
While FCoE devices (CNAs, switches, targets) have been shipping for some time – until now, everything has been pre-standard. And in this case, pre-standard was missing important pieces – not the least of which was the FCoE Initialization Protocol (FIP) – which is really important in cases where there is more than one switch involved – pretty important for any real network. These pre-standard devices were colloquially called “pre-FIP” (think “802.11 pre-N” wireless router). Every FC-BB-5 compliant device must support FIP.
To date – the existing FCoE array targets have used CNA adapters configured in target mode with custom driver stacks which were constructed to work together pre-FIP (as were the initiators). Also, the early Gen 1 CNAs (which were the initiators and targets out there) were massive, with totally separate network and FC ASICs (the early FCoE array targets used these devices). These Gen 1 CNAs won’t be upgradable to the FC-BB-5 standard (at least I know for sure for one of them), which isn’t terrible on the host side (heck, put the VMware host in maintenance mode, vacate VMs, replace, bring it up – lather, rinse, repeat), but it’s much more difficult on the target side.
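Since FIP keeps coming up, a rough picture of what it actually does may help. The sketch below models the FIP phases from FC-BB-5 – VLAN discovery, FCF (FCoE Forwarder) discovery, then fabric login – as a toy state machine in Python. The class, method names, and message strings are my own illustrative inventions, not the actual frame formats; only the phase names come from the standard.

```python
from enum import Enum, auto


class FipState(Enum):
    VLAN_DISCOVERY = auto()
    FCF_DISCOVERY = auto()
    FLOGI = auto()
    LOGGED_IN = auto()


class ENode:
    """Toy model of an FCoE end node running FIP before any FC traffic flows.

    Mirrors the FC-BB-5 phases at a very high level; everything here is
    illustrative, not real frame formats or spec-defined APIs.
    """

    def __init__(self):
        self.state = FipState.VLAN_DISCOVERY
        self.log = []

    def run(self, advertised_vlans, fcf_macs):
        # Phase 1 (optional in the spec): discover which VLAN carries FCoE.
        self.log.append("FIP VLAN discovery request")
        vlan = advertised_vlans[0]
        self.state = FipState.FCF_DISCOVERY

        # Phase 2: multicast a discovery solicitation; FCFs answer with
        # advertisements, and the ENode picks one to log in to.
        self.log.append(f"FIP discovery solicitation on VLAN {vlan}")
        fcf = fcf_macs[0]
        self.state = FipState.FLOGI

        # Phase 3: fabric login carried over FIP. This is also where the
        # fabric hands the ENode the MAC it will use for FCoE data frames.
        self.log.append(f"FIP FLOGI to FCF {fcf}")
        self.state = FipState.LOGGED_IN
        return vlan, fcf
```

The point of the model: a pre-FIP device hard-wires the answers to phases 1 and 2, which works on a single-switch test bench but falls over the moment there is more than one FCF to discover.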
When I first started digging deep here, I was frustrated we weren’t using these Gen 1 CNAs in target mode (so we could show “hey look, we’re actively supporting this!” in a 30 second sound-bite), but as I dug deeper and deeper – I understood the rationale, and over time really started to understand the upside of the fact that we’ve moved to a common, modular I/O interface across our product families.
These UltraFlex I/O modules are hot-pluggable, non-disruptively upgradable PCIe modules (which can also be ultra dense). This means customers have an upgrade path. We ship 1GbE, 10GbE, 1GbE hardware-based iSCSI, 4Gb FC, and 8Gb FC modules, and now that the FCoE standard is ratified, we’re beavering away on an FCoE UltraFlex I/O module.
While this means that by definition EMC won’t be first to ship a product (since we have to engineer a module), it means that no matter what, there is a non-disruptive upgrade path POSSIBLE.
Put another way – while EMC may be at a marketing disadvantage by not being first (the easiest route is to take an off-the-shelf PCIe CNA and configure it in target mode), the customer advantage is that there is no risk. A hardware upgrade of one of those off-the-shelf CNAs in a server is one thing – just VMotion – but for an array target, it’s a little more complex. My 2 cents – I think it’s better for the vendor to suffer a little on marketing positioning than for a customer to need to deal with any of the “new standard” mumbo jumbo. The flip side, I suppose, is that shipping early allows “banging out the early kinks.” My two cents if I were a customer – this is what EMC e-Lab is designed to do: we do it, we take the responsibility end-to-end on your behalf (and they have indeed been going to town, and are now banging away on the FC-BB-5 based gear).
This is not to disparage the choice of others – only that as I dug deeper, I understood the choices our engineering folks made.
The decision was also fundamentally a pragmatic one. The bulk of the benefits of FCoE come from massive cable/port count, power, and management reductions at the host-switch part of the network. Over time, this will extend throughout the network, including the target.
A few common questions I get….
Q: Is 10GbE a standard?
A: Yes, there are several 10GbE standards. The primary differences are in the physical link layer, with corresponding distances and cable types. The major standards include:
- 10G Base-SR (fiber optic cables - most commonly the same orange (OM2) or aqua (OM3) 850nm multimode fiber optic cables used by Fibre Channel) - supports moderate distances which vary depending on the cable (26-82m over OM1/OM2, up to 300m over OM3), and uses relatively expensive SFP+ modules
- 10G Base-CX4 (InfiniBand-like cables with large but lower-cost connectors - the same style used by InfiniBand) - and has a short-distance limit (15m)
- 10G Base-T (10 Gigabit Ethernet over Unshielded Twisted Pair) - this uses Cat 6 UTP over short distances (55m), or the more traditional distance (100m) if using Cat 6a UTP
- 10GbE SFP+ Direct Attach (10GSFP+Cu) - this uses twinaxial copper cable directly into very small SFP+ adapters, and eliminates the need for the optical elements, but is very short-haul distance only (10m)
Q: Is “Lossless Ethernet” a standard?
A: Not quite yet. IEEE 802.1Qbb is an approved project, but currently not an approved standard. This is important to deliver truly lossless (meaning no Ethernet frame is lost) Ethernet. Remember that Ethernet was originally designed to be lossy, with it being “ok” for Ethernet frames to be dropped. This means that network interface cards and 10 Gigabit Ethernet switches must treat Ethernet frames carefully and communicate with each other - buffering traffic, applying priority controls, etc. IEEE is still working on the applicable standards. You can find more here: http://www.ieee802.org/1/pages/dcbridges.html
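To make the lossy-vs-lossless distinction concrete, here’s a toy Python model of a switch port queue. The class and parameters are invented for illustration only: a classic Ethernet port simply drops frames when its buffer overflows (and something like TCP must retransmit), while a Priority-Flow-Control-style port signals the sender to pause (per traffic class, in the real 802.1Qbb) so frames wait at the sender instead of being lost.

```python
from collections import deque


class PortQueue:
    """Toy switch-port buffer contrasting classic (lossy) Ethernet with
    PFC-style lossless behavior. Purely illustrative -- real 802.1Qbb
    operates on 8 priorities using PAUSE-like frames on the wire.
    """

    def __init__(self, depth, lossless=False):
        self.depth = depth          # buffer capacity in frames
        self.lossless = lossless
        self.q = deque()
        self.dropped = 0
        self.paused = False

    def offer(self, frame):
        """Sender offers a frame. Returns True if the port consumed it."""
        if len(self.q) >= self.depth:
            if self.lossless:
                # Lossless: tell the sender to stop (like a PFC pause);
                # the frame stays at the sender and is never dropped.
                self.paused = True
                return False
            # Lossy: classic Ethernet just drops the frame on overflow.
            self.dropped += 1
            return True
        self.q.append(frame)
        return True

    def drain(self, n):
        """Forward up to n frames; un-pause once there is room again."""
        for _ in range(min(n, len(self.q))):
            self.q.popleft()
        if self.lossless and len(self.q) < self.depth:
            self.paused = False
```

Offer 10 frames to a depth-8 port of each kind: the lossy port silently drops 2, while the lossless port drops nothing and instead asserts pause until `drain()` frees buffer space - which is exactly the property FC frames need, since Fibre Channel assumes the transport doesn’t lose frames.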
Q: Where can I go to learn more about FCoE?
A: As a new technology - one of the best places to go to find information is the standards body - in this case T11. You can see by looking at the minutes that Cisco is front and center here. EMC’s David Black (EMC Office of the CTO) is a loud voice there in the standards body (David – you are Mr. Keen – I think you had a near-100% attendance record :-).
Also, Cisco maintains a popular site:
Stuart Miniman (EMC Office of the CTO) maintains a great deal of great FCoE content here: