
June 30, 2009


Comments


T

And don't forget that interop testing is also underway for using DCB with iSCSI and NAS.

Chad Sakac

Absolutely, T, and I would expect some improvements in all use cases from DCB (that's Data Center Bridging, for those who didn't read the thread or the original posts; it's the IEEE term for what is also called Data Center Ethernet/DCE or Converged Enhanced Ethernet/CEE).

That said, as soon as you are using the IP stack for core functions (like iSCSI does for its initiator/target network portals), some of the behavior is intrinsic (again, not good/bad, just different), and the NAS code paths aren't the same as the block code paths in the devices that are out there (again, not good/bad, just different). That expresses itself as different behavioral envelopes; put otherwise, certain things are harder on NAS than on block, and certain things are harder on block than on NAS.

Simon Rohrich

The new Cisco UCS will drive FCoE adoption. It will also make our model of data center build-outs using micro containers work very well.

David Robertson

I have been pushing our network guys to go all Nexus and FCoE in our new datacenter, and I think I have them on my side. It will be an exciting time for us shortly.

Chad Sakac

David - cool, definitely please post your experiences. I just bought 6 N5Ks, almost 200 FCoE CNAs, and a couple of UCS chassis, and will be sharing the experiences also!

Brad Hedlund

Chad,
The reliance on TCP brings an order-of-magnitude increase in complexity and processing requirements at 10GE vs. 1GE. If you understand how TCP windowing affects throughput, you can calculate that maintaining 10GE throughput at 20-30 msec latencies requires as much as 32 MB of buffer on the TOE, which drives up adapter costs. FCoE does not have this issue. Perhaps the next SSD arrays, with much lower latencies, will provide some relief for iSCSI @ 10GE. Until then, FCoE has the clear advantage @ 10GE, IMHO.
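
Here's a rough back-of-the-envelope sketch of that buffer math (purely illustrative Python; the 20-30 msec latencies are the figures above, and the helper function is hypothetical, just the bandwidth-delay product):

    # Bandwidth-delay product: the TCP window (and hence TOE buffer) needed to
    # keep a 10GbE link full at a given round-trip latency. Illustrative only.
    def tcp_window_bytes(link_gbps: float, rtt_ms: float) -> float:
        bits_in_flight = link_gbps * 1e9 * (rtt_ms / 1e3)  # bits outstanding per RTT
        return bits_in_flight / 8                           # convert to bytes

    for rtt in (1, 10, 20, 30):
        mib = tcp_window_bytes(10, rtt) / 2**20
        print(f"10GbE @ {rtt:2d} ms RTT -> ~{mib:4.1f} MiB of window/buffer")
    # Roughly 24 MiB at 20 ms and 36 MiB at 30 ms, which is the ballpark the
    # "as much as 32 MB" figure comes from.

At the sub-millisecond latencies typical inside a single data center, the same math gives buffers well under 2 MB, which is also why lower-latency arrays would ease the pressure.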

Cheers,
Brad

Charlie Dellacona

Chad,

Thanks for continuing this conversation. NFS is a bigger animal; it does things that iSCSI, FC, or FCoE do not do. The real question is simply: why FCoE, with all it requires, when proven iSCSI and infrastructure already exist?

ARPs are not an issue; once resolved, they stay in caches for weeks. The timeouts are settable, and the values can be hardwired in the tables if necessary. Red herring.

You keep alluding to customers who have use cases where iSCSI is not acceptable. Please tell us about one with hyper-stringent SLAs in detail. While you're doing that, make sure it doesn't have IP in its critical path anywhere else (like DNS or AD, or between the app tiers, or the front-end connection to actual users). If TCP/IP is acceptable on the front end, then it is on the back end as well; a timeout is a timeout.

TCP with SACK (selective acknowledgment) eliminates a lot of the problems you are talking about.

Including FCoE in every NIC and switch just needlessly adds cost, complexity, and the bugs they bring when iSCSI already exists and uses proven commodity infrastructure.

As for wiring once, I can already do that with normal NICs and switches, because I don't need FC compatibility with iSCSI.

You may care to promote Ethernet as a medium, but that's not where the big value is; it's TCP/IP as a protocol. That is what turns storage from an island into a network and will bring customers actual value.

"Ships in the night" protocols over a special case single media is NOT convergence. Solutions that leverage commodity hardware, run anywhere in the infrastructure, eliminate specialist skill sets, and aggregate technology demand to drive down cost, that's convergence.

--Charlie

p.s. Sorry I took so long to reply.

p.p.s. I realize the FCoE thing is going to happen. But long ago I railed against ATM for the same reasons. For a while that won too; then it died, I think in part because people kept pointing out it wasn't necessary.

Evan Unrue

Correct me if I'm mistaken here, as I'm still digging through FCoE and learning. My understanding is that if I currently have a datacentre with a significant investment in an FC SAN fabric (Cisco MDS, for example), existing FC SAN skillsets in-house, and a fair few racks of infrastructure, FCoE will allow me to retain my existing investment in my big MDS 9500s and complement my existing FC fabric, consolidating at the edge with FCoE for less complexity at the access layer. Also, my SAN admins still get to manage the SAN in much the same fashion as they normally would.

This sits well, as I would imagine that data centres are more likely to want to complement an existing investment in FC infrastructure while simplifying things at the edge, rather than do a complete tech refresh.

My question relates to complete greenfield sites, not just the top-end datacentre. Bearing in mind that 10GbE gives us significantly more bandwidth at the edge, the overhead involved with TCP/IP may be more than acceptable for some; it comes back to ensuring storage is provisioned over a reliable medium with zero or near-zero packet drops. If 802.1Qbb is a network function, then surely the same 'no drop' CoS value that PFC uses can be applied to iSCSI traffic rather than just FCoE? (That's an assumption; I couldn't find a definitive answer.) That would make the whole lossless Ethernet function less relevant as an FCoE differentiator, and iSCSI over 10GbE a potentially viable storage protocol to deliver all the great things that I/O consolidation brings.
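
As a rough sketch of that assumption (toy Python, not real switch behaviour; the priority assignments and queue threshold are made up for illustration):

    # Toy model of 802.1Qbb Priority Flow Control: the pause/no-drop decision is
    # made per 802.1p priority class, not per upper-layer protocol. The class
    # assignments below (3 = FCoE, 4 = iSCSI) and the threshold are assumptions.
    NO_DROP_PRIORITIES = {3, 4}

    def pfc_should_pause(priority: int, queue_depth: int, threshold: int = 100) -> bool:
        """A no-drop class emits PAUSE when its queue fills; other classes tail-drop."""
        return priority in NO_DROP_PRIORITIES and queue_depth >= threshold

    # Whether the frame carries FCoE or iSCSI/TCP is invisible to the PFC decision:
    print(pfc_should_pause(priority=3, queue_depth=120))  # FCoE class  -> True (pause, no drop)
    print(pfc_should_pause(priority=4, queue_depth=120))  # iSCSI class -> True (same lossless treatment)
    print(pfc_should_pause(priority=0, queue_depth=120))  # best effort -> False (may drop under pressure)

If that holds, a DCB-capable switch can in principle give an iSCSI traffic class the same lossless treatment it gives FCoE; whether a given product exposes that is another question.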

With that in mind, what other arguments are there for FCoE as opposed to iSCSI? I gather Cisco will be doing some pretty cool stuff with it on the UCS servers, most likely with VMware. Looking for an answer without vendor bias is proving tricky.

I work for an organisation that will no doubt be pushing the whole FCoE message soon enough, so this is an objection I may well have to handle. I'm keen to understand further.

Kurt

Evan,
The article below answers a lot of the questions you have. Keep in mind that Cisco created the article and is pushing its own data center design, which seems more appropriate for existing FC installations. Over time, I believe greenfields will surely be TCP/IP based, and I concur with Charlie that this FCoE accommodation is exactly like the old ATM argument. It is just going to take some time. Check out Brocade's website for confirmation that they know full well that native FC is in decline.

http://www.cisco.com/en/US/docs/ECATS/Solutions_Guides/White_Papers/FCoE_NetApp_V12.pdf

TriGuy

So I wonder: if someone could get Nexus-like performance (in fact, even better performance) at almost half the cost, while supporting Layer 3 at the top of rack, end of row, or middle of row, with the ability to virtualize that switch with another switch up to 70 km apart, would they do it? And in regards to the tech refresh for existing FC SAN customers: hello, you already have two or three 802.3 NICs in those same servers anyway, connected to the core LAN switch. One could easily upgrade the LAN switch to 10Gbps and the NICs for just the cost of the support contract on the FC SAN director. The net-net is that a lot of customers see FCoE as an also-ran to iSCSI. As such, why wait? There are scalable 10Gb switches that can be upgraded to 40Gb and 100Gb. Combined with advanced QoS and LAN switch virtualization technologies, one can now connect these monster LAN switches without Spanning Tree, HSRP, or VRRP between data centers up to 70 km apart. Wow, imagine that: vMotion and an iSCSI target being swapped over standard Ethernet between hot-hot data centers at 70 clicks.

