
December 16, 2009

Comments


Marco

If you are talking about that many users, wouldn't you want InfiniBand and a LOT of read and write caching (SSDs) on your storage?

Andre Leibovici

I find it interesting how each white paper produces a completely different set of numbers, leaving us pretty much blind in a pitch-black environment.

Duncan's paper says "the read/write ratio 40/60 percent, sometimes even as skewed as 10/90 percent. The fact is that they all demonstrate more writes than reads."

The VMware paper (Storage Considerations for VMware® View 3) says "It suggests that over 90 percent of the average information disk I/O consists of read operations."

Any comments in regard to that?

Andre
http://myvirtualcloud.net

Chad Sakac

@Andre - I TOTALLY agree. In the View 3 timeframe, I personally was thinking 90:10 read/write. I can't speak beyond the projects I'm seeing personally, which are around 50:50. I think one of the most important things we could deliver to help our customers would be tools to quantify exactly what they have. Every workload is also wildly divergent.

@Marco - 8Gbps and 10GbE are more than enough - people often confuse high IOPS (throughput) with high MBps (bandwidth). The bandwidth peaks usually come during periods with large IO sizes and lower IOPS.
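To illustrate that distinction with a quick sketch (the workload figures below are assumptions for illustration, not numbers from any of the projects discussed here): the same array can look busy on IOPS or on MB/s depending on IO size.

```python
# Rough illustration: a workload can be IOPS-heavy or bandwidth-heavy
# depending on IO size. All numbers below are assumed, not measured.

def bandwidth_mb_per_s(iops, io_size_kb):
    """Convert an IOPS rate and average IO size into MB/s."""
    return iops * io_size_kb / 1024.0

# Steady-state VDI: many small IOs -> high IOPS, modest bandwidth.
print(bandwidth_mb_per_s(10_000, 4))    # ~39 MB/s

# Large sequential burst: fewer, bigger IOs -> low IOPS, high bandwidth.
print(bandwidth_mb_per_s(2_000, 256))   # ~500 MB/s
```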

David

My personal experience is we're seeing more writes than reads. Why?

- We're using folder redirection for user data to CIFS shares.
- We're never stingy with RAM. Windows caching helps save a few IOPS. (And with RAM getting cheaper, I suspect this will be the case in more setups.)
- Even an idle desktop will tend to page stuff out rather than read stuff in.

I suspect Chad's right - either SSD or cache will help a lot. Unfortunately, no one has published any numbers, so real-world sizing is a black art.

Andre Leibovici

@Marco, supposing a peak of 12 IOPS per user, the bandwidth is 0.75 MB/s per user (if using 64k blocks). For 10,000 users that's 7.5 GB/s. You only need that bandwidth at the storage side, as a single host will never support all 10,000 users. In any case, it's better to have everything connected to your core switches if using NFS.

I suppose here we are, now down to intelligent caching and spindles!

Please someone correct me if I'm wrong.

Andre
http://myvirtualcloud.net
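A quick check of Andre's arithmetic above, using the same assumed 12 IOPS per user and 64 KB IO size from his comment:

```python
# Recompute the numbers in the comment above: 12 IOPS per user at an assumed 64 KB IO size.
iops_per_user = 12
io_size_kb = 64
users = 10_000

per_user_mb_s = iops_per_user * io_size_kb / 1024.0   # 0.75 MB/s per user
aggregate_mb_s = per_user_mb_s * users                # 7,500 MB/s in total

print(per_user_mb_s)            # 0.75
print(aggregate_mb_s / 1000.0)  # ~7.5 GB/s needed at the storage side
```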

Marco

FalconStor has products called HotZone and SafeCache that work with SSDs; we are going to test with them and will post details on the web about how it went.

Scott Brightwell

I just did a performance analysis for a customer with ~400 VDI users attached to a DMX (so we can get great performance data), who wanted to "take VDI to the next level" while redeploying on an NS platform to free up Tier 1 capacity. Their users are happy with performance today, and the team wanted to keep it that way. In the morning these users were each driving more like 20-25 IO/s as they all came online, for an aggregate of about 10k IO/s. This dropped to a steady morning work rate of about 5k IO/s. The afternoon dropped them down to 1,000 IO/s. The read/write ratio was always around 80/20.

This was a rather naive configuration, with fully provisioned desktops dedicated to each user, apps and data stored directly in the guests, etc. We figured they needed about 100 x 15K FC drives if they wanted to handle the morning boot storm as effectively as on the DMX. We recommended changing the way they deploy desktops: using View Composer, linked clones from a master image, and removing the apps, profiles, and user data from the guests and putting them on file shares. In this way, they could get their entire storage footprint down to about 600GB.

The trick is, this mere 600GB of capacity still has to perform the same as 100 x 15K FC drives. The need for a single RAID group of EFDs was obvious.
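As a rough sketch of that kind of spindle math (the per-drive IOPS figures and the RAID 5 write penalty below are common rules of thumb I'm assuming, not numbers from Scott's analysis):

```python
# Hypothetical spindle count for a 10,000 IO/s boot storm at an 80/20 read/write mix.
# Per-drive IOPS and the RAID 5 write penalty are assumed rules of thumb.

def drives_needed(front_end_iops, read_ratio, write_penalty, iops_per_drive):
    reads = front_end_iops * read_ratio
    writes = front_end_iops * (1 - read_ratio)
    back_end_iops = reads + writes * write_penalty   # each write costs extra back-end IOs
    return back_end_iops / iops_per_drive

# ~180 IOPS per 15K FC drive -> roughly 90-100 drives, in line with the estimate above.
print(drives_needed(10_000, read_ratio=0.8, write_penalty=4, iops_per_drive=180))

# ~2,500 IOPS per EFD -> a single small RAID group of flash covers the same load.
print(drives_needed(10_000, read_ratio=0.8, write_penalty=4, iops_per_drive=2_500))
```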

Brian Whitman

@Marco, on the 64k block comment: we too saw that XP's published block size is 64k. Based on this we looked at sizing an array for 3,000 users at 10 IOPS each, and it came out to some insane numbers, like 2 fully populated NS960s to serve the throughput (not the IOPS). After taking a look at what block size XP was actually using (on a test harness running common office apps), we saw that the majority was 4k, with some 8k and 16k and very little 64k. We generally use a 4k block size when sizing the storage.
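To put that block-size sensitivity in numbers (3,000 users at 10 IOPS each, as in the comment; the block sizes compared are the ones Brian mentions):

```python
# Bandwidth required for 3,000 users at 10 IOPS each, under different assumed IO sizes.
users, iops_per_user = 3_000, 10
total_iops = users * iops_per_user   # 30,000 IOPS regardless of block size

for block_kb in (64, 16, 8, 4):
    mb_s = total_iops * block_kb / 1024.0
    print(f"{block_kb:>2} KB blocks -> {mb_s:,.0f} MB/s")

# 64 KB blocks imply ~1,875 MB/s; 4 KB blocks only ~117 MB/s for the same IOPS,
# which is why the assumed block size dominates the bandwidth side of the sizing.
```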

Duncan

This is also something that needs to be taken into account when looking into deduplication. I've seen multiple vendors offer it with VDI, since most of these images are similar. But when you calculate all the IOPS you might end up with the same number of disks, so what's the point?
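A minimal sketch of that trade-off, with assumed numbers (the desktop count, dedup ratio, and per-drive figures are purely illustrative): dedup can shrink capacity dramatically while the IOPS requirement, and therefore the disk count, stays the same.

```python
# Hypothetical: 1,000 desktops at 20 GB each, 90% deduplicated, still ~10 IOPS per desktop.
# Drive figures (450 GB usable, ~180 IOPS per 15K spindle) are assumed rules of thumb.
capacity_gb = 1_000 * 20 * 0.1        # 2,000 GB after dedup
total_iops = 1_000 * 10               # 10,000 IOPS, unchanged by dedup

drives_for_capacity = capacity_gb / 450   # ~4.4 drives
drives_for_iops = total_iops / 180        # ~55.6 drives

# The larger of the two wins: IOPS, not capacity, sets the spindle count.
print(max(drives_for_capacity, drives_for_iops))
```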

Andre Leibovici

Have you ever thought about not running VDI from enterprise shared storage?

I have actually written an article about running VDI from local storage, and perhaps using SSD drives.

The IOPS rules still apply; however, if it is possible to make the VMs disposable, then there is no real need for expensive storage arrays.

Read it at http://myvirtualcloud.net/?p=448

comment system

Great post! When taking a chance with VDI or I/O, you should give a second and third thought to the efficiency, quality, and flexibility of those protocols. Don't ever be afraid of spending some extra money to save your neck!

Robert

Always measure when you want to deploy VDI.
We did, and the values we got were quite stunning. :)

I posted short results on my blog:
http://ultrasub.nl/2011/05/05/vdi-and-iops/

It turned out our developers use a lot more IOPS than all the vendors would like you to think. Of course this is a specific situation, but always, always measure before you design!



Disclaimer

  • The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Dell Technologies and does not necessarily reflect the views and opinions of Dell Technologies or any part of Dell Technologies. This is my blog; it is not a Dell Technologies blog.