
March 31, 2010


Understanding more about NMP RR and iooperationslimit=1

Comments


Vaughn Stewart

Chad - Great data, thanks for sharing. I would add that these results closely resemble what we have seen in our in-house testing, which is to say the difference from the EMC-reported performance results is 8.4%.

Look at the results in relative terms, with the highest value representing 100% of obtainable I/O:
PP/VE = 100%
NMP RR default = 91.6%
NMP RR 1 IOP = 96.6%
NMP RR 1496702496 = 98.3%
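The normalization Vaughn describes is straightforward: divide each configuration's throughput by the best result. A minimal sketch of the method (the absolute IOPS figures below are hypothetical placeholders chosen to reproduce the ratios above; they are not the numbers from the whitepaper):

```shell
#!/bin/sh
# Hypothetical raw throughput numbers (IOPS) -- NOT the actual test results.
# Only the normalization method is being illustrated here.
ppve=120000        # PP/VE (best result, treated as the 100% baseline)
rr_default=109920  # NMP RR with the default iooperationslimit (1000)
rr_iops1=115920    # NMP RR with iooperationslimit=1

awk -v base="$ppve" -v a="$rr_default" -v b="$rr_iops1" 'BEGIN {
  printf "PP/VE          = %.1f%%\n", 100 * base / base
  printf "NMP RR default = %.1f%%\n", 100 * a / base
  printf "NMP RR 1 IOP   = %.1f%%\n", 100 * b / base
}'
```

Swapping in your own measured IOPS for the placeholders gives the same kind of relative comparison against whatever your best-performing path policy is.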

I would suggest that vSphere customers should feel very confident that NMP can address their most demanding workloads, wouldn't you agree?

Chad Sakac

@Vaughn - I would say that NMP RR is excellent, and a great choice. PP/VE is better.

This workload didn't drive network or HBA congestion, a high degree of random variation in the workloads and IO sizes from the guest, or target array port congestion - those are all the conditions where adaptive/predictive queuing is better. Creating that kind of test harness is difficult, but not impossible. Conversely, that's exactly the day-to-day reality at large-scale customers all over the world.

Lab tests tend to be relatively "clean". The real world is messy.

Also, the key is that PP/VE doesn't just change the PSP (Path Selection Plugin) behavior, but also the SATP (Storage Array Type Plugin) behavior. Things like automated path discovery and proactive path testing (even in periods of no IO) - the bigger you are, the more important those operational/management things are.

Look, that's not to pooh-pooh NMP RR - EMC supports it, embrace it, and it's free. If you look, there are boatloads of "free in the box" optimizations in the native, free SATP for EMC platforms (along with others) - the result of work between the engineering teams.

Like I've said: "NMP in the past = not so good; NMP in vSphere with RR = better; PP/VE = best." PP/VE is also not free.

Thanks for the comment!
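The setting this thread keeps referring to is applied per device. A sketch of the relevant commands (the naa. device identifier below is a placeholder, and the exact flag spellings should be verified against your release's esxcli reference; the syntax changed between ESX/ESXi 4.x and 5.x):

```shell
# ESX/ESXi 4.x syntax (sketch; naa.xxxx is a placeholder device ID).
# Set the path selection policy for the device to Round Robin:
esxcli nmp device setpolicy --device naa.xxxx --psp VMW_PSP_RR

# Lower the IO operation limit from the default (1000) to 1,
# i.e. switch paths after every IO:
esxcli nmp roundrobin setconfig --device naa.xxxx --type iops --iops 1

# Confirm the device's current policy and round-robin settings:
esxcli nmp device list --device naa.xxxx

# ESXi 5.x moved these under the "storage" namespace, roughly:
# esxcli storage nmp psp roundrobin deviceconfig set \
#     --device naa.xxxx --type iops --iops 1
```

Note the limit is set per device, so scripting a loop over all devices from a given array is the usual approach in larger environments.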

Jan

Any testing already done with VNX and vSphere 5 on this?

Also, the full whitepaper you refer to on Powerlink does not include test results for the other NMP tests; it would be great to see graphs for those as well.
One other thing I was wondering about: the test in this post refers to 4x FC HBA ports, but the VMware esxtop output in the whitepaper only shows 2x FC HBAs in use... Are you speaking about the same tests?

