I got a lot of questions on the 10GbE post, which is great. Some of the answers involve graphics, so rather than comments, I'm just going to do a post....
Here are the original questions: http://virtualgeek.typepad.com/virtual_geek/2008/06/10-gigabit-ethe.html#comments
Ok - one at a time (apologies for the delay - I've been swamped):
To Duncan:
Jumbo frames don't hurt, but they don't help that much either. Below is a table from some pure research with an Exchange 2007 ESRP workload (handy because it exhibits both small block 8KB random I/O during the normal workload, and large block (64KB and larger) sequential I/O during the checksum and backup phase of the test). I also like it because it's an app workload, not an artificial one. This test was with VMware ESX 3.0 with the MS iSCSI initiator in the guest, but similar more recent tests show the same behavior using the VMware iSCSI SW initiator in 3.5.
J1 = Jumbo frames on; J0 = off.
Takeaways: for the IOPS-driven small block workload, the difference is a fat nada. For the throughput-driven large block workload, there's some difference - in a 4hr backup job, it shaved off 12 minutes.
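To put rough numbers on why, here's a quick back-of-the-envelope sketch (assumed standard header sizes, ignoring iSCSI PDU headers and digests - this is an illustration, not data from the test):

```python
# Back-of-the-envelope: wire overhead for a single iSCSI I/O of a given size,
# with standard vs. jumbo frames. Assumes the usual Ethernet (18B incl. FCS)
# + IP (20B) + TCP (20B) headers per frame; iSCSI PDU headers and digests are
# ignored to keep the sketch simple.
import math

FRAME_HEADERS = 18 + 20 + 20   # Ethernet + IP + TCP bytes per frame

def wire_overhead(io_bytes, mtu):
    payload_per_frame = mtu - 40            # MTU covers the IP + TCP headers
    frames = math.ceil(io_bytes / payload_per_frame)
    wire_bytes = io_bytes + frames * FRAME_HEADERS
    return frames, (wire_bytes / io_bytes - 1) * 100

for label, io in (("8KB random", 8 * 1024), ("64KB sequential", 64 * 1024)):
    for mtu in (1500, 9000):
        frames, pct = wire_overhead(io, mtu)
        print(f"{label:16s} MTU {mtu}: {frames:3d} frames, ~{pct:.1f}% overhead")
```

Either way you're talking about single-digit percentage points of framing overhead, which is why the small-block IOPS numbers barely move.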
I'm going to do an IP best practices post (I get a lot of questions re: multipathing configs) and I'll post the data we found. Jumbo frames **CAN** hurt you, though, in the sense that you need to have the discipline to make sure you enable them end-to-end from the source to the target; otherwise you get fragmentation and those nice iSCSI PDUs get sliced up incorrectly. What's more important is something basic - design the IP network like you would an FC SAN. Isolate it (physically or via VLANs), and build in redundancy. Don't route it. Also, disable the Spanning Tree Protocol on the ports used for iSCSI (or on the switch as a whole). This Cisco doc says it best:
"For the purposes of creating the iSCSI test environment, it is recommended to keep the configuration simple and assign an IP subnet that is to be used for both the iSCSI initiator (s) and the Cisco MDS 9000 Series IP services module's Gigabit Ethernet port"...
Ah - KISS. Now, where I have seen iSCSI problems is in high-throughput (usually streaming backup) workloads. For this, as I said earlier, multipathing (in VMware ESX, or for that matter any OS) is important. You should have multiple iSCSI targets configured (VMware doesn't support multiple connections per session yet with their software iSCSI initiator, and depends on multiple target IPs for network-level load-balancing, or multiple targets for storage-level multipathing). EqualLogic does this by default (in their case every LUN is an iSCSI target); NetApp, Celerra, and CLARiiON (they have LUNs behind iSCSI targets) all require a bit of thinking by the admin (but not a lot). The Celerra is particularly sweet - you can configure up to 1000 iSCSI targets.
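To illustrate the multiple-target layout idea (a toy sketch only - made-up portal IPs and LUN names, not ESX or array configuration):

```python
# Toy illustration of the "multiple iSCSI targets" layout: spread LUNs across
# several target portals so each session/path (and thus each NIC) carries
# traffic. Portal IPs and LUN names are made up - this is not ESX or array
# configuration, just the placement idea.
portals = ["10.1.1.10", "10.1.1.11", "10.1.1.12", "10.1.1.13"]
luns = [f"lun{n}" for n in range(8)]

layout = {p: [] for p in portals}
for i, lun in enumerate(luns):
    layout[portals[i % len(portals)]].append(lun)   # simple round-robin spread

for portal, assigned in layout.items():
    print(f"target portal {portal}: {', '.join(assigned)}")
```

The point is simply that traffic only spreads across paths if the LUNs are spread across targets/portals in the first place.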
To Michael:
I hear you loud and clear, Michael, and IB is an engineer's dream of a transport. To tell you the truth, I don't KNOW why it's not compelling, but it seems to have been relegated to the sidelines - certainly not by what I post, but rather by the market as a whole. For example, there's a startup called Xsigo that has a really cool IB switch and has made an IB CNA focused specifically at this VMware use case. Yet, in spite of all that goodness, I haven't seen it go far, and I have been following closely. My theory is that it has more to do with the cable plant - which is why I suppose (and this is 100% supposition) that 10GbE won't take off (i.e. capture the mass market and rapid adoption curve) either so long as the physical layer is an optical cable. That train of thought is what made me ask my last question: does it have to be CAT6, or would something similar (like SFP+ Twinax) be good enough, or is this a solution in search of a problem (I don't think so, which is the basis of the post)? Why do you think IB hasn't taken off?
To Charlie:
Charlie - you know me, I will never lose my iSCSI passion :-) This new FCoE hasn't even tempered my thinking on it - I'm still (and will always be) Mr. iSCSI. Like I said in the post - **iSCSI WILL BE THE MARKET MAJORITY OF BLOCK IN THE FUTURE**. Heck, I even have a standing bet with Chuck and others on the exact degree of market share dominance by 2012 (and I say that by any measure: capacity/ports/revenue). What I'm saying is that for some smaller (but very important) portion of the market, the need for lossless transport is a deal-breaker, so something will need to coexist with iSCSI. I see your argument - that convergence is the full stack - but I haven't heard that from customers; I've heard that to most, it means the CapEx. But, let's be clear - Ethernet as the physical/link layer is a given, and iSCSI immediately makes that a converged solution, period.
To Ole:
Ole, of course, we've tested it a million ways. Long and short: iSCSI works well with IOPS-focused workloads with normal latency requirements. At the 8KB I/O size without jumbo frames, it adds a few percentage points of overhead (i.e. iSCSI PDU fragmentation with the standard Ethernet frame size and the associated TCP/IP overhead). A few percentage points is not a big deal. It's higher at large I/O sizes (64KB and larger), but still small enough that most people don't care except in the academic sense.
But you're right - the point isn't that iSCSI is routable; routing iSCSI is a BAD idea. Storage expects latency that's measured in milliseconds if it's block, and hundreds of milliseconds if it's NAS (of course NAS can have millisecond latencies also, but as a protocol, it's designed to expect and operate with more). Ethernet switches add latency measured in microseconds, which means they are effectively invisible from a performance standpoint. Routers, of course, add latency in milliseconds. iSCSI is a block protocol, so you do the math, and come to the conclusion you did - the routable thing isn't why iSCSI is successful.
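If you want to actually do that math, here's a tiny sketch with assumed (typical, not measured) numbers:

```python
# Rough latency budget for a block I/O, with assumed (not measured) numbers:
# array service time in ms, each L2 switch hop in microseconds, each routed
# hop in milliseconds. Switch hops vanish in the noise; routed hops don't.
ARRAY_MS  = 5.0     # assumed small-block service time at the array
SWITCH_US = 20.0    # assumed per L2 switch hop
ROUTER_MS = 1.0     # assumed per routed hop (often far worse under load)

def total_ms(switch_hops, routed_hops):
    return ARRAY_MS + switch_hops * SWITCH_US / 1000.0 + routed_hops * ROUTER_MS

print(f"2 switch hops, 0 routed hops: {total_ms(2, 0):.2f} ms")
print(f"2 switch hops, 2 routed hops: {total_ms(2, 2):.2f} ms")
```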
We've done versions of the table I posted earlier - here is a recent big brother (a joint EMC/VMware solutions testing effort) at 8 times the scale:
Here was the ESX view of the storage subsystem - note the I/O reaches around 16K IOPS, but maxes out at 100MBps - for iSCSI, a piece of cake:
Here was the CLARiiON's world-view - the SPs were pretty busy, and I can tell you that with this workload, read cache is totally useless - the host-generated IO is almost exactly the backend I/O:
So, iSCSI (if you design the iSCSI network like an FC network) performs fine; the encapsulation and Ethernet frames add a few percentage points of overhead and one or two milliseconds of latency - not bad. The processing power? We've found that saturating two full GbE NICs with iSCSI burns about one core on a modern multicore CPU. For those of us in the era of processor-bound systems, you look at that initially as a big deal, then think about it and go "so what - buying a quad core vs. a dual core costs me $100".
Now - what about a throughput-dominated workload like Business Intelligence (or an aggregate guest-based backup from an ESX server with 20+ VMs on it - hence Avamar)? In that case, throughput matters, and while it's eminently possible on iSCSI, with 1GbE it's a bit unwieldy - I mean, who is REALLY going to have a bundle of 8-10 cables just for IP storage per ESX box? Think about the cabling mess as you scale that up. Now... 10GbE - another story entirely.
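As a rough cabling sanity check (assumed usable throughput per link, not benchmark data):

```python
# How many links does a streaming/backup workload need? Assumed usable
# throughput: ~110 MB/s per 1GbE link, ~1100 MB/s per 10GbE link.
import math

def links_needed(target_MBps, per_link_MBps):
    return math.ceil(target_MBps / per_link_MBps)

for target in (400, 800):   # assumed MB/s of aggregate backup / BI traffic
    print(f"{target} MB/s: {links_needed(target, 110)} x 1GbE "
          f"or {links_needed(target, 1100)} x 10GbE")
```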
BUT - you are right in the sense that if you have an existing FC infrastructure (both hardware and people conversant in WWNs), iSCSI solves a non-existent problem for you - which is why, in spite of its incredible growth (36% CAGR or more), there is very little iSCSI in larger enterprises. Not because it's not good, but rather because it solves a problem they solved before.
It brings me back to my original view - if you have the FC infrastructure (hardware and knowledge), don't underestimate the power of being able to ping your storage target. If you do, what will be the impetus for the next change (change being inevitable)? I think it's that VMware and mass consolidation, in its second wave, is going to make 10GbE mandatory just for networking, which then leads to "ah, just converge the IP and Storage networks dammit!"
To Brian:
Brian - thanks for the input - I'm curious what others say.... I'm surprised that there hasn't been more of a move here already toward an optical plant. But most enterprises are still very much using Cat 5e/Cat 6 even for core uplinks, and aggregating/trunking like mad. Heck, Verizon runs FiOS to the home :-) It just makes me wonder if there is a cost threshold that must be crossed, or perhaps, as I think, it's more basic. People who are networking people are comfortable with twisted pair - and it's less logical than it is emotional (we are all human beings after all :-)
**iSCSI WILL BE THE MARKET MAJORITY OF BLOCK IN THE FUTURE**
Love iSCSI, but I don't think it's in the cards... FCoE with I/O convergence and support for lossless Ethernet will kill it. I'd be willing to make a bet on this. I look at Cisco, Brocade, QLogic, and Emulex, and see the heavy investments made there, and it looks bleak for iSCSI.
Posted by: Nick Triantos | June 24, 2008 at 08:00 PM
Re: Optical plant
The slow displacement of copper by fibre is probably down to a number of things:
- Cost. There is still a difference and we all have accountants to answer to.
- Handling. Fibre has a higher "handle with care" threshold.
- Inertia. Installed base of copper plant and ports.
Everything seems to ship these days with built-in GigE RJ45. Fibre will reign supreme when the same can be said for SFPs.
Posted by: Marty | June 24, 2008 at 11:02 PM
Nick - I'm furiously working the FCoE angle myself also. I'm totally a "touch it/see it" kinda guy. I like the story, and I see the logic. As I said in the original post: "there are some workloads at our largest customers that demand "lossless" (look up per-session pause) and ultra-low latency (where literally a few ms is make/break)."
What I'm saying isn't that FCoE won't be huge - I'm eagerly awaiting my new Cisco/Brocade/Emulex/QLogic toys for the lab for joint solutioneering, and expect a hands-on perspective in weeks.
Rather - what I'm saying is that I don't think it's an either/or. I think iSCSI is more natural in the entry-to-mid market. This is particularly true if you can't get the parts at your local Fry's (like SFP+ Twinax). IMHO (and everyone, take a breath, we can discuss this like gentlemen/ladies) FCoE will rule the high end, and the mid-market and entry will be iSCSI. The latter two markets are larger than the first. We won't lose sight of the first (where EMC was successful first), but we need to invest heavily in the others as well.
Now, I'm ALWAYS game for a nice bet with a respected colleague - so what will it be :-)
You can join in the bet here: http://chucksblog.typepad.com/chucks_blog/2007/10/yes-we-occasion.html
Posted by: Chad Sakac | June 24, 2008 at 11:06 PM
Marty - thanks for the comment. I'm really interested in reader perspective here, because I think the physical/link layer aka cable plant question is the crux.
I hear you. So, SFP+ Twinax is the only thing that gets the cost/handling equation even close. But your point on RJ-45 vs. SFP is right on.
So here's the question - if that (using RJ-45 connectors with twisted pair at the super-high frequencies demanded) can't be done within the inherent power/distance/cost limits, does that translate into 10GbE being in the same limbo-ish state as IB? I'd have to imagine that if people could ship 10GBASE-T with CAT 6A or CAT 7, they would be doing it now - right?
Posted by: Chad Sakac | June 24, 2008 at 11:25 PM
There's a great FCoE book that goes into the weeds from Silvano Gai titled "Data Networks and Fibre Channel over Ethernet". Silvano used to be with Nuova Systems, now part of Cisco, and is one of the brains behind FCoE. For anyone who doesn't have it and is interested in getting a serious FCoE brain dump, I highly recommend it.
As with every new technology, initially, FCoE adoption won't be big. In fact, I don't suspect we'll see arrays natively doing FCoE for another 18-24 months. The currently available CNAs (both initiator and target) need some work. However, I agree, in the long term (i.e. 5-6 years), it'll become the dominant interconnect on both the mid and high end. Low end, it'll be a matter of economics, which means if the pricing is competitive people will go for it.
Because in technology history tends to repeat itself, I suspect we'll end up seeing a little of what is starting to happen with SATA/SAS/FC drives, where SAS is starting to expand into both the FC and SATA sides. It's more cost-effective than and as fast as FC, and more reliable and faster than SATA...
Posted by: Nick Triantos | June 24, 2008 at 11:49 PM
Great post! First of all, I'll post this here, but maybe the right place was http://virtualgeek.typepad.com/virtual_geek/2008/06/10-gigabit-ethe.html#comments.
I'm arriving late (heavy workload!!!!! ufff).
First of all, a question, Chad: I think the picture of the 10GbE performance w/ 3 VMs (in the original post: http://virtualgeek.typepad.com/virtual_geek/2008/06/10-gigabit-ethe.html#comments) was achieved using VMDq, wasn't it? (I've seen it somewhere else but I don't remember where...). Keep an eye on VMDq, it's great!
Related to your questions, I agree with your core premise (VMware's consolidated I/O demands a converged, but virtualized, I/O fabric) and I also think that 2009 is the inflection point year for 10GbE (**IN THE US**).
I think that InfiniBand and 10GbE will coexist for a long period of time, just like FC and iSCSI.
Let me explain my point:
One thing I think is crucial in this discussion is the SEGMENTATION issue. What are we trying to solve with I/O virtualization? Performance? Too many NICs?
In a typical server virtualization environment we normally use 6-8 NICs plus 2 for storage (2 HBAs: FC or iSCSI) - that makes for a lot of adapters.
One of the problems that I/O virtualization can address is bandwidth, but it is not the major issue, IMHO.
If we treat 10GbE's performance as the major advantage, we will see that in three years 10GbE *is not enough*, and servers will begin to have 6-8 10GbE NICs, just like now! (but 10 times faster).
I think the MAJOR issues that I/O virtualization has to solve are SEGMENTATION and over-subscription. The MINOR issue is bandwidth. It's important too, of course, but if we really need BW, we don't virtualize the I/O. If we really need CPU, we don't use server virtualization, and if we really need I/O, we don't use I/O virtualization. "Virtualization is about over-subscription."
I am sure you'll agree with me that the requirement for a high number of network adapters is *primarily* due to segmentation issues rather than raw performance (BW) issues. (An ESX server with 12 NICs doesn't normally need 12Gbps!!!)
Normally you need 12 NICs because you need to segment your network layout, and that means having RJ-45 physical copper ports to plug in somewhere.
Segmentation is not a TECHNICAL limitation (there are a number of technologies - VLANs and VMware Port Groups - that allow you to logically segment all of the networks above).
It is typically a design decision based on best practices (bad best practices) and customers' internal POLITICS (powerful forces!!!!!!!!!!). Usually the reason is that people don't understand or don't trust Ethernet virtualization techniques (VLANs).
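To make that concrete, here is a toy sketch (invented names and VLAN IDs, not a real design or configuration) of how the same traffic classes collapse onto a couple of trunked uplinks:

```python
# Toy sketch: the traffic classes that usually justify 8-12 physical NICs,
# collapsed onto two trunked uplinks via VLANs. Names and VLAN IDs are
# invented for the example - not a real design.
traffic_classes = {
    "service console / management": 10,
    "vmotion":                      20,
    "vm network A":                 30,
    "vm network B":                 31,
    "iSCSI storage":                40,
}
uplinks = ["vmnic0", "vmnic1"]   # two trunked physical ports carry everything

for name, vlan in traffic_classes.items():
    print(f"port group '{name}' -> VLAN {vlan} over {' + '.join(uplinks)}")
```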
What happens is that:
ON THE ONE HAND: the bigger the project is (the bigger the customer is), the more stringent the politics are.
ON THE OTHER HAND: we know that the bigger the customer is, the more FC is preferred.
ON THE THIRD HAND :-) The big, big enterprises that started with FC have almost no iSCSI adoption in their large data centers.
I think this will be a possible market for InfiniBand: big, big customers.
In my opinion, this technology is appealing because:
a) VMware is supporting it in VI 3.5
b) InfiniBand technology can be "bridged" into legacy datacenter I/O architectures such as standard Ethernet and Fibre Channel devices. (No one wants to replace their datacenter network infrastructure: not with 10G, nor with IB.)
Bridge: basically, by installing a single InfiniBand (IB) host adapter into each server, you can create a number of "virtual ports" that map into the IB switches and in turn into the IB bridges to connect to your legacy Ethernet infrastructure.
This technology allows you to "expose" the same networks you plug into the IB bridges all the way into the ESX servers, using a mix of virtual IB Ethernet adapters and VMware Port Groups.
Having said this, I think that both technologies are great: 10GbE (check out VMDq and Jumbo Frames!!!! I really like iSCSI) and InfiniBand. I think both will coexist and we will not see massive adoption of IB (but not for a technical reason).
Chad, thnx for the graphs and info!
What is your opinion?
Posted by: Jon Blazquez | June 25, 2008 at 10:32 AM
Many thanks for the great answer to my question re: iSCSI and performance.
First of all, I realize now that I should have phrased myself differently. My post was written in the context of FCoE versus iSCSI (but this was not clear in my post, sorry).
I also read, and agree 100% with, your wish to avoid religious wars on protocol X versus protocol Y. So please read the following as being written tongue-in-cheek, with no inflammatory intent whatsoever :-)
Now; the compelling reason for iSCSI (as I see it) is the familiarity and low cost of the equipment, and not the fact that it runs on IP (since IP only really matters if you need to route, which you definitely do not want to with storage anyhow).
On seeing your benchmark results I have to admit that my initial argument regarding overhead is moot - easily solved by throwing hardware at the problem - but I'm still slightly puzzled as to why iSCSI was formalized and embraced by the storage industry whereas FCoE still isn't for sale.
If we agree that being routable is not an argument, I'm really curious as to why the industry, way back then, chose to pursue SCSI over IP rather than SCSI over Ethernet (or FCoE, as it turned out).
Disregarding the separate problem of bandwidth for the moment, I would have thought that it would be cheaper and just plain easier to implement a "SCSIoE" protocol, and reuse the framework of FibreChannel to handle multipathing, than to basically write a whole new stack of protocols, including a new discovery mechanism, if the main purpose of the exercise was to replace expensive and unfamiliar equipment with cheap, off-the-shelf, components.
But maybe this is only obvious in hindsight? Or maybe the implementation of SCSI (okay, Fibre Channel) over Ethernet is a harder problem to solve than doing the same over IP?
Again, I am extremely curious about the whole subject, and definitely not trying to ride a hobby horse here :-)
Posted by: Ole André Schistad | June 25, 2008 at 10:45 AM
You are right that the cost of optical/fiber has been the gating factor to its adoption in data centers etc. But contrary to common perception, it is not the fiber that is expensive, but rather the optical transceivers. Traditionally, optical transceivers are made of esoteric materials such as lithium niobate, indium phosphide, gallium arsenide, and other III-V class semiconductor materials. They are also manufactured in discrete parts and hand-assembled, so their cost is very high... in the range of $350-700 per port. This has made them appropriate for the longest-haul segments of the network, and as costs have come down, fiber has moved to the local, metro, and now to the home. If you look at the gross margins of an optical transceiver company (20-30%), you can see how expensive they are and how electronic transceivers in the data center have a significant advantage. The change now is that silicon photonics is coming of age, and a company like Luxtera is now manufacturing optical transceivers in standard CMOS semiconductor processes. Optical transceivers in CMOS have cost parity with electronic transceivers, which makes fiber cabling in the data center inevitable. When you consider that Cat 5 cabling needs to be replaced anyway to support 10 gig transfer rates, the decision to adopt Single Mode Fiber is obvious, I think. There's a very good white paper on this topic at the Luxtera website.
Posted by: Brian | June 25, 2008 at 12:28 PM
I personally think that iSCSI has all the performance capabilities of a Fibre Channel system; it just depends on how you customize it. It's also far more affordable and accessible.
Posted by: TSS | June 25, 2008 at 04:48 PM
Quick comments to the questions/comments (and wow - love the dialog - keep it coming!)
Jon - I think that you're right that traffic segmentation and QoS mechanisms become critical with the consolidated network - there are emerging I/O virtualization technologies on both the network and FC sides. These ideas are equally critical regardless of the physical and link layers. BUT - the question is which will be the big winner.
Ole - the SCSIoE (aka FCoE) makes sense using your logic, but if you were fundamentally a networking person (no offense), you would say: "storage is an application, and therefore must exist above the transport layer in the stack". That's certainly the purist view, and you do get some good things. Also, without lossless Ethernet, you really DO need transport-level retransmits.
Brian - you had me, you had me, and then you lost me with the vendor plug. At least, if you're going to do it, make it subtle. Your point on the cost of the transceivers is true.
TSS - I agree, up to a point. When you need high throughput, can you do it with iSCSI? Of course. BUT, with 1GbE you end up aggregating a lot of links (eventually this becomes a non-trivial problem). 10GbE of course makes this moot - but then we're back to my main question (less iSCSI vs. FCoE): what will be the cable plant?
Posted by: Chad Sakac | June 25, 2008 at 07:27 PM
Chad - On the cabling plant, it still looks like we're trying to predict another market that is not there yet.
As you mention, your choices are optical, 10GBASE-T and Twinax.
Twinax has good price and power, but it will be limited to only those environments where you are really staying in the rack or row - it does not plug into structured cabling environments (which almost all data centers have today). Cisco's FCoE solutions require 10GbE today, and optical and 10GBASE-T are too expensive for 2008 deployments.
For some reading on cabling infrastructure, try http://www.ethernetalliance.org/attachments/127_10GBASE_T2.PDF for details on 10GBASE-T (good analysis of data center distances; it also shows the shipments of Cat 5e, 6, and 6a - while 6a is "preferred", 6 should be OK for many, and that means that many customers will be able to use their existing cabling). Many vendors (including Cisco) are working hard at bringing down the power and price of these 10GBASE-T solutions, and they should start showing up in the next year.
As for those trying to push for all optical, customers aren't ready to rip out their existing cable plants if they can be reused. 40G/100G may require new infrastructure, but that is many years away. It looks like 10GBASE-T will be able to extend the existing install base. Copper in the racks and optical to the core is pretty common.
Looks like customers will have options...
Posted by: Stuart Miniman | June 26, 2008 at 05:39 PM
Chad - "I'd have to imagine that if people could ship 10GBASE-T with CAT 6A or CAT7, they would be doing it now - right?"
Having just read that PDF that Stuart linked to, the most interesting info that answers your question is the installed plant percentages over time graphic (page 25).
It looks like people have, or will have in the near future, installed plant and structured cabling of high enough quality to run 10GBASE-T if and when the price is right.
The best comment in relation to price is on page 11:
"Historically, the common wisdom for justifying widespread adoption of the next speed of Ethernet has been achieving 10 times the throughput for three to four times the cost. In that respect, by 2007, 10 Gigabit Ethernet had not yet reached its full potential. The cost of deploying 10 Gigabit Ethernet is still much higher than the desired three times to four times the cost of Gigabit Ethernet. Also, at least on the end-node side, the throughput is much lower than true 10 Gb/s in many cases."
So I think it's going to be a case of continuing to install high quality copper running 1000BASE-T until such time as 10GBASE-T hits the price/performance sweet spot.
Posted by: Marty | June 27, 2008 at 05:58 AM
Guys, this dialog is fantastic... Thank you. So, I read the Ethernet Alliance doc (thank you, Stuart). They highlighted one thing (not called out specifically, but noted throughout), which was that the PHY for 10GBASE-T over UTP requires high, high power - in the 7-10W range. This means that it's not a fit for LOM. Couple this with the point that Marty calls out (3-4x the cost for 10x the throughput as the breakpoint), and I wonder if maybe I'm wrong, and it won't be 2009, but perhaps 2010, that is the year of 10G Ethernet...
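To make that breakpoint concrete, a tiny sketch with purely hypothetical per-port prices (not real quotes):

```python
# The "10x the throughput for 3-4x the cost" rule of thumb, checked against
# hypothetical per-port prices. Nothing here is a real quote.
GBE_PORT_COST = 100.0                  # hypothetical 1GbE port cost
for tengig_cost in (1000.0, 350.0):    # hypothetical 10GbE port costs
    ratio = tengig_cost / GBE_PORT_COST
    verdict = "meets" if ratio <= 4 else "misses"
    print(f"10GbE at ${tengig_cost:.0f}/port is {ratio:.1f}x GbE -> "
          f"{verdict} the 3-4x adoption threshold")
```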
The point about building any new cable plant with Cat 6A or 7 is a darn good one.
Still - I would never bet against innovation - can't wait to see. This is the fun of the IT industry....
Posted by: Chad Sakac | July 03, 2008 at 10:41 AM
Oh, one more thing I forgot to mention - the power thing... It was interesting to see the way they planned to shift to lower transmit dB levels for distance-based power consumption.
The VMware use case we've been discussing is a "top of rack" aggregation model - so short-distance transmit/receive options are a possibility.
Lastly - it's telling to look at the date of the article - August 2007. That's a long time ago, yet we haven't seen the 10G LOM they've been talking about.
Posted by: Chad Sakac | July 03, 2008 at 10:52 AM
As one of the people who was involved in iSCSI from the beginning, I thought I'd comment on some of Ole's questions around iSCSI.
"If we agree that being routable is not an argument," We *definitely* don't agree - that approach locks the storage access topology to the network LAN/VLAN configuration; requiring the initiator and target to always be on the same LAN or VLAN is a serious restriction. I completely agree that iSCSI routing should not be done via a stereotypical IP router that adds many milliseconds of delay - try layer 3 support in a layer 2/3 Ethernet switch. Also, FCoE is "routable" in that an FCF can in principle forward FCoE traffic across different LANs or VLANs. See p.75 of my good friend Silvano's book if you don't believe me ;-).
"I'm really curious as to why the industry, way back then, chose to pursue SCSI over IP rather than SCSI over Ethernet (or FCoE, as it turned out)." Ethernet was definitely not lossless at the time, and not having to reinvent TCP was a significant advantage. Without TCP, a dropped Ethernet packet usually costs a SCSI timeout and a SCSI I/O redrive; if that redrive times out, the SCSI I/O is usually failed. Another thing that made a significant practical difference at the time was that iSCSI could be (and was) implemented entirely in software on existing hardware. In contrast, FCoE in practice requires new hardware to support lossless Ethernet, starting with switches.
"Disregarding the separate problem of bandwidth for the moment, I would have thought that it would be cheaper and just plain easier to implement a "SCSIoE" protocol," That'll be a $5 fine (a nickel's just not worth what it used to be) for trivializing what it takes to reinvent TCP. It's easy to design a reliable transport protocol; designing one that is robust to "stupid network tricks" that happen in real networks is a lot harder. I did see at least one attempt at a SCSIoE design at the time the iSCSI work was being done; a lot of time was spent (wasted, IMHO) there in reinventing TCP.
"and reuse the framework of FibreChannel to handle multipathing," That was actually tried; the result was called iFCP - see RFC 4172. On the technical front, iFCP ran into problems caused by Fibre Channel's dynamic address (FCID) assignment and use of FCIDs in ELS payloads.
"than to basically write a whole new stack of protocols, including a new discovery mechanism," What new stack??? iSCSI and iSNS are the only two protocols that are widely used in practice, and ESX does not support iSNS.
"if the main purpose of the exercise was to replace expensive and unfamiliar equipment with cheap, off-the-shelf, components." That purpose was a subject of debate at the time. Things have turned out as Chad described - iSCSI has thrived in markets below Fibre Channel's enterprise market. If you don't think this matters, I suggest "The Innovator's Dilemma" (Christensen) as interesting reading.
Posted by: David Black | July 09, 2008 at 08:28 PM