
June 19, 2008

Comments


Duncan

I couldn't agree with you more on these questions. Although iSCSI seems to have a bad rep when it comes to VMware, I've never witnessed a slow setup. iSCSI is easily expandable and definitely the future, and with 10GbE the so-called constraints are all gone.

How do you feel about jumbo frames, and did you test the performance gains when using them with the iSCSI initiator? Although it's not supported yet, I guess it could be really beneficial!
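A rough way to see where a gain might come from (this is back-of-the-envelope arithmetic with standard header sizes, not a measurement of the initiator): jumbo frames cut the number of frames, and therefore the header bytes, needed to move each iSCSI I/O.

```python
# Back-of-the-envelope: header overhead for a 64 KB iSCSI I/O at a standard
# vs. jumbo MTU. Header sizes are the usual fixed values; the 9000-byte MTU
# is the common jumbo-frame setting. Illustrative only.

ETH_OVERHEAD = 18   # Ethernet header (14 bytes) + FCS (4), no VLAN tag
IP_HDR = 20         # IPv4 header, no options
TCP_HDR = 20        # TCP header, no options

def overhead(io_bytes, mtu):
    """Frames and header bytes needed to carry io_bytes of iSCSI payload."""
    payload_per_frame = mtu - IP_HDR - TCP_HDR      # MTU covers IP + TCP + data
    frames = -(-io_bytes // payload_per_frame)      # ceiling division
    return frames, frames * (ETH_OVERHEAD + IP_HDR + TCP_HDR)

for mtu in (1500, 9000):
    frames, hdr_bytes = overhead(64 * 1024, mtu)
    print(f"MTU {mtu}: {frames} frames, {hdr_bytes} header bytes "
          f"({100 * hdr_bytes / (64 * 1024):.1f}% overhead)")
```

Fewer frames per I/O also means fewer interrupts on the host, which is usually where the more noticeable gain shows up.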

Michael Anderson

Today's shipping IB solution is 4x the speed of 10GigE (40Gbit), and 10GigE will likely not ship FCoE until at least late this year. Tell me again why a 4x performance solution is not compelling.

Charlie Dellacona

Don't weaken on your FCoE stance! The last 30 years are rife with stories of the highest-end customers finding the new disruptive technology unacceptable. They always wind up adapting to the lower-cost solution. Single-network convergence is such a powerful idea that it will be irresistible, and that means the whole IP stack - not just the media. FCoE is just a last gasp.

Ole André Schistad

Great post on a topic which has interested me for quite some time.

Personally, I've always found the idea of a block-level storage protocol running on top of Ethernet a very compelling one. At the same time, I've always been skeptical about using TCP + IP as a storage protocol (i.e. iSCSI), due to the extra processing (which equals latency) required by the added layers of indirection.

I've never quite been able to make sense of why iSCSI is such a great invention. I mean, sure, IP is a familiar idiom to most IT pros, but I wouldn't say that FC is all that hard to "get" either. And the argument that iSCSI is a routable protocol simply makes no sense at all to me... if you intend to stick a router in between your storage clients and the storage system, I can only assume that your performance requirements are so low that you would probably be better off hosting your data on a NAS-type solution; whereas if you are at all concerned about latency, you are probably going to want to put your clients and storage on the same logical IP network anyway. In which case, TCP + IP helps you... how exactly?

FCoE, on the other hand, is a storage-specific protocol with low latency, and it also uses a familiar idiom, i.e. Ethernet. It does not route out of the box (though Ethernet networks can easily be extended across WAN links), but IMHO this is a good thing.

I wish I had access to a storage system with both FC and iSCSI heads, so I could benchmark how the theoretical differences between the two protocols appear in practice, but I would be very surprised if EMC for instance didn't have a whitepaper or ten on the subject.

What I am primarily interested in is how, all other things being equal, iSCSI affects the IO/s metric for random IO on large datasets - a worst-case workload for any storage idiom. That kind of workload tends to strip away all the boosts from caching, prefetching and the other clever things you might stick in a storage system to improve performance, leaving the raw average spindle seek times and the protocol processing overhead as the primary bottlenecks.
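As a rough sketch of that arithmetic (every number below is an assumption for illustration, not a measurement), random-IO throughput per spindle is roughly the reciprocal of the per-I/O service time, and the protocol's processing cost is just one term in that sum:

```python
# Toy model of random-I/O throughput per spindle: service time is dominated
# by seek + rotational latency, with transfer time and protocol processing
# overhead added on top. All figures are assumed, for illustration only.

AVG_SEEK_MS = 4.0      # assumed average seek of a fast FC/SAS spindle
ROTATIONAL_MS = 2.0    # ~half a rotation at 15k RPM
TRANSFER_MS = 0.05     # assumed time to move an 8 KB block over the wire

def iops(protocol_overhead_ms):
    """Achievable random IOPS per spindle for a given per-I/O protocol cost."""
    service_ms = AVG_SEEK_MS + ROTATIONAL_MS + TRANSFER_MS + protocol_overhead_ms
    return 1000.0 / service_ms

for label, overhead_ms in [("FC    (assumed 0.02 ms/IO)", 0.02),
                           ("iSCSI (assumed 0.10 ms/IO)", 0.10)]:
    print(f"{label}: ~{iops(overhead_ms):.0f} IOPS per spindle")
```

Under these assumptions seek and rotation dominate, so even a generous protocol-overhead gap barely moves the per-spindle number - which is exactly why I'd like to see it measured rather than argued.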

Maybe I'm completely off the mark here? Curious, anyway..

--
Ole

Brian

On the cable-plant question, I think single-mode fiber will rapidly displace copper, especially as we move to higher speeds (10 Gig and higher). Existing Cat 5 cable needs to be replaced anyway, and fiber is future-proof up to 1 terabit. The Blazar active optical cable from Luxtera, with CMOS silicon photonic transceivers at both ends, is already being trialed.

David

Great Post.

David Black

"I like grey, cheap, flexible cables, not orange, expensive, cables"

I can't resist - orange cables are just so yesterday - aqua is the new orange :-).

Seriously, orange is the standard sheath color for multi-mode OM2 optical fiber. If one is going to spend the money on new multi-mode optical fiber for a data center today, it should be on OM3 laser-optimized fiber (needed for 300m runs @ 10Gig), and its standard sheath color is aqua.

Enkiguy

Good article - but all the arguments you make work even better with true virtual I/O. 10 x 10GigE NICs on a server? Well, maybe if you really like 3U servers with 5 NIC cards in them! We're seeing that Xsigo's virtual I/O (multiple 10Gig/1Gig/FC links over InfiniBand) is the way to go. Two connections per server (for reliability), and VMware can have all-you-can-eat network connections to the outside world.

Brian Johnson

Take a look at the best practices white paper that I wrote regarding 10G and VMware vSphere 4.

Simplifying Networking using 10G Ethernet -- http://download.intel.com/support/network/sb/10gbe_vsphere_wp_final.pdf

Brian Johnson
Intel Corp -- LAN Access Division
PME - 10G and Virtualization Technologies

Brian Johnson

AnandTech's article - 10Gbit Ethernet: Killing Another Bottleneck?

http://it.anandtech.com/IT/showdoc.aspx?i=3759

a

like it



Disclaimer

  • The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Dell Technologies and does not necessarily reflect the views and opinions of Dell Technologies or any part of Dell Technologies. This is my blog; it is not a Dell Technologies blog.