So – do these ideas work together? Do they complement each other or compete? What’s the source of these questions Chad – and why are you asking them? :-)
(BTW – this topic is one I’m covering at VMworld, along with some of the testing data)
If you’re interested, read on…
Storage IO Control (SIOC), along with Network IO Control (NIOC) represent VMware’s first forays into expanding the IDEA of DRS (which is in essence distributed resource prioritization) beyond CPU cycles and memory – and into storage and network IO subsystems.
DRS works via vMotion to redistribute VM workloads across a cluster, and also affects CPU scheduling on a single ESX host. DRS works like a charm, with many, many customers now using it in fully automated mode.
For IO though, what was needed was a distributed (multi-host and cluster-wide) resource admittance control model – where IO through the VMkernel is throttled based on priority.
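To make the idea concrete, here's a toy sketch of that kind of latency-driven admittance control, loosely inspired by the PARDA approach (this is NOT VMware's implementation – the function, constants, and the share-based term are all made up for illustration). Each host smoothly shrinks its issue-queue depth when observed datastore latency exceeds a congestion threshold, and grows it back when latency recovers:

```python
# Illustrative sketch (not VMware's code) of a PARDA-style per-host
# queue-depth control loop: blend the current issue-queue depth with a
# latency-scaled target, so depth shrinks under contention and grows
# back when latency drops under the threshold. All names/constants
# here are hypothetical.

LAT_THRESHOLD_MS = 30.0    # SIOC-style congestion threshold
GAMMA = 0.2                # smoothing factor (0..1)
MIN_DEPTH, MAX_DEPTH = 4, 64

def next_queue_depth(current_depth, observed_latency_ms, shares_fraction):
    """One control step for one host.

    shares_fraction is this host's fraction of the datastore's total
    shares; it adds a small bias so higher-priority hosts keep more
    queue depth during contention.
    """
    target = (LAT_THRESHOLD_MS / observed_latency_ms) * current_depth \
             + shares_fraction * MAX_DEPTH * 0.1   # small share-based boost
    depth = (1 - GAMMA) * current_depth + GAMMA * target
    return max(MIN_DEPTH, min(MAX_DEPTH, int(round(depth))))

# Under contention (latency over the threshold) the depth shrinks...
print(next_queue_depth(32, 60.0, 0.5))
# ...and when latency falls back under the threshold it grows again.
print(next_queue_depth(16, 15.0, 0.5))
```

The point of the sketch is just the shape of the mechanism: it's a distributed feedback loop on observed latency, not a static per-host cap, and shares bias who backs off hardest.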
The seminal research that forms the foundation of SIOC is the paper entitled PARDA – published here (and picked up by folks like Duncan and others). Very smart folks behind this.
So – think of this from a storage standpoint for a moment – can you use SIOC in conjunction with array-based auto-tiering? First – the short answer is YES, but I can imagine the concern folks have. SIOC (read the PARDA paper) assumes that admission algorithms are looking at the underlying datastore as a linear resource (all parts perform consistently) – which is not true with an auto-tiered datastore. Also – what happens if the time window used for PARDA sampling and for storage array auto-tiering are in the same order of magnitude – do they conflict?
These are all reasons why I could see why I'm hearing many people say "no". That all said - I think I know the underlying source of this - in some of the pre-launch training on SIOC, there was a slide that showed a diagram and stated that SIOC and array tiering were not supported together.
After the pre-launch internal training, the work was completed by VMware and their storage partners that determined that auto-tiering models ARE supported with SIOC.
BTW – it wasn’t just EMC; other vendors who do automated sub-LUN tiering have also been working with VMware on this topic.
So – here are the reasons why SIOC and EMC FAST (and generally, sub-LUN automated tiering approaches) are complementary:
- SIOC is awesome in that it spans all storage platforms.
- SIOC’s relatively short sampling window and core algorithms mean that it’s excellent at dealing with burst behavior and optimizing overall system behavior in the middle of an IO contention crisis (i.e. when guest latency exceeds the threshold). Without some action like changing the backend performance or Storage vMotioning the VMs (or the IO workload changing), the throttled prioritization will continue – which is a good thing, definitively better than giving everything equal admittance.
- Sub-LUN and LUN-level FAST have sampling windows that are orders of magnitude larger, because actually moving the data takes resources to accomplish – you don’t want to do it instantly, but rather over a sustained period. That longer sample window means it doesn’t muck with the SIOC algorithm, but it also doesn’t provide instant relief. Unlike SIOC (at least until automated policy-based Storage vMotion is instituted), FAST will redistribute the data based on usage of the chunks – lowering the response times of the busiest, potentially back under the SIOC threshold.
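To make the long-window behavior concrete, here's a toy sketch of sub-LUN tiering (again, hypothetical – this is NOT EMC FAST's actual algorithm, and the chunk-count capacity is made up): count IOs per chunk over a long sampling window, then promote the hottest chunks to the fast tier and demote what fell out:

```python
# Toy sketch of sub-LUN auto-tiering: after a long sampling window,
# the chunks with the most accumulated IO earn a slot on the fast
# tier; chunks that went cold get demoted. Purely illustrative.
from collections import Counter

FAST_TIER_CHUNKS = 2   # hypothetical fast-tier capacity, in chunks

def rebalance(io_counts, current_fast_set):
    """One rebalance pass: pick the hottest chunks for the fast tier,
    and report which chunks need to be moved up or down."""
    hottest = {chunk for chunk, _ in
               Counter(io_counts).most_common(FAST_TIER_CHUNKS)}
    promotions = hottest - current_fast_set   # cold tier -> fast tier
    demotions = current_fast_set - hottest    # fast tier -> cold tier
    return hottest, promotions, demotions

# Chunks 7 and 3 got the most IO this window; chunk 9 went cold.
io_counts = {3: 5000, 7: 9000, 9: 120, 12: 40}
new_fast, promote, demote = rebalance(io_counts, {9, 3})
print(new_fast)   # {3, 7}
print(promote)    # {7}
print(demote)     # {9}
```

Note how the decision is driven by sustained usage over the whole window, not an instantaneous latency spike – which is exactly why it composes with SIOC's short-window throttling rather than fighting it.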
I would argue that VMware SIOC + EMC sub-LUN FAST = Storage DRS… today. Detection and resource redistribution in both:
- the short-term window via queue admittance – analogous to CPU scheduling on the ESX host
- the longer-term window via moving hot sets of blocks to higher-performance storage, and cold sets down to lower-performance storage – analogous to DRS vMotioning VMs across the cluster to optimize for the fact that some VMs drive sustained load while others stay consistently low.
BTW - while there’s more real-world data to gather (expect to see it soon), testing is suggesting you leave everything at the defaults with sub-LUN FAST in most use cases (where the storage pool consists of tiers at both the low end and the high end).
So – net: SIOC is cool, and very useful. Sub-LUN Tiering and SIOC together deliver both the “crisis mitigation and optimization” and “longer timescale resolution” to storage contention. That’s simple, efficient, and good.
Cool new world!!!
"VMware SIOC + EMC sub-LUN FAST = Storage DRS… today"
Or not today. Unless your version of today = sometime in the future when EMC releases FAST v2 to the public/GA....
Posted by: Mcowger | July 23, 2010 at 06:29 PM
@Mcowger - LOL - fair enough. You've got to remember I live in a strange land where I've been using sub-LUN FAST, and vSphere 4.1 for the last 3 months :-)
You won't have to wait very long :-)
Chad
Posted by: Chad Sakac | July 23, 2010 at 07:23 PM
Chad, slightly off topic here, but I need to ask. Has VAAI in 4.1 changed the old best practice of roughly 25 VMDKs per LUN? Or is the integration (say with VMAX) tight enough where I can have very large LUNs and not worry about SCSI reservations?
Posted by: Michael | August 04, 2010 at 09:42 PM
@mcowger - as you saw, sub-LUN FAST GA'ed shortly after the post.
@michael - yup, it sure does. Don't rush to change yet, all the BPs are in the process of being updated.
Posted by: Chad Sakac | January 23, 2011 at 07:52 AM