Well – in a nutshell – more, for less.
The vStorage APIs for Array Integration (VAAI) are something we previewed back at VMworld 2009.
Now that vSphere 4.1 is officially out, we can talk about it without tapdancing around a lot of stuff.
I did a webcast on this topic the week before last, where I had to step carefully around saying “vSphere 4.1” or committing to dates/functions – but now you know… All the info in the webcast was technically accurate, and is now officially decoded :-)
The topic is very interesting and the effect of these hardware acceleration offloads can be very pronounced.
Just one example (of many)… using the Full Copy API:
- We reduced the time for many VMware storage-related tasks by 25% or more (in some cases up to 10x)
- We reduced the CPU load on the ESX host and the array by 50% or more for many tasks (in some cases up to 10x), both by lowering the impact during the operation and by shortening its duration.
- We reduced the traffic on the storage network by 99% for those tasks. Yes, you read that right – 99%. During these activities (Storage VMotion is what we used in the example), the storage network traffic can be heavy enough (240MBps worth in the example) that it impacts all the other VMs using the storage network and storage arrays. There’s a little back-of-the-envelope illustration of this right after the list.
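To make the 99% number feel less like magic, here’s a trivial back-of-the-envelope sketch (plain Python, with hypothetical sizes – only the ~240MBps figure comes from the example above). Without the offload, the ESX host reads every block from the source datastore and writes it back out to the destination, so roughly twice the VMDK size crosses the storage network; with Full Copy, the host sends copy commands and the array moves the blocks internally.

```python
# Back-of-the-envelope only: illustrative numbers, not measurements.
GB = 1024 ** 3
MB = 1024 ** 2

vmdk_size = 100 * GB       # hypothetical VM being Storage VMotion'd
copy_rate = 240 * MB       # the ~240MBps of copy traffic from the example

# Legacy software datamover: blocks are read into the host and written back
# out, so roughly 2x the VMDK size traverses the storage network.
legacy_bytes = 2 * vmdk_size
legacy_minutes = legacy_bytes / copy_rate / 60

# Full Copy offload: only copy commands/status cross the network; assume
# ~1% of the legacy traffic as overhead (hedged - the post cites ~99% less).
offload_bytes = 0.01 * legacy_bytes

print("software datamover: %d GB on the wire, ~%.0f min at 240MBps"
      % (legacy_bytes / GB, legacy_minutes))
print("full copy offload : %.1f GB on the wire" % (offload_bytes / GB))
```

The absolute numbers don’t matter – the point is that the bulk data movement simply stops crossing the fabric.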
And like Intel VT with vSphere 4, these improvements just “appear” for customers using vSphere 4.1 and storage arrays that support these new hardware acceleration offloads.
As cool as the Full Copy API is, I think hardware accelerated locking will have just as much impact. Locking gets trotted out by protocol passionistas as a reason “why NFS is intrinsically better than VMFS” – which I don’t agree with. There are differences which mean that for some use cases NFS is the fit, and for others VMFS is the fit – the only thing that is always true is that it’s handy to have both as options. I’ve talked about this at length, and if you want to understand more, read here and here.
In my experience, most customers never hit any issues with VMFS locking. In fact, its “invisibility” is a good thing – VMFS remains one of the simplest distributed filesystem implementations. BUT, it can be an issue in certain use cases (and stuck locks suck). Ideally you wouldn’t have to think about anything for a datastore other than “is it big enough, does it deliver the performance the VMs need, and does it do that as efficiently as possible?” That is what hardware accelerated locking gets you in vSphere 4.1 – metadata updates literally have no impact on anything else using the datastore (VMs or ESX hosts). Mucho cool.
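If it helps to picture the difference, here’s a deliberately over-simplified toy model in Python – purely conceptual, VMFS does not look like this internally. The old behavior is like taking a lock on the whole LUN for every metadata update, while hardware accelerated locking (an atomic test-and-set executed by the array) behaves like a compare-and-swap on just the one lock record involved.

```python
import threading

class WholeLunReservation:
    """Pre-4.1 style (conceptually): a metadata update reserves the whole
    LUN, so every other host's I/O to that datastore briefly queues."""
    def __init__(self):
        self._lun = threading.Lock()

    def update_metadata(self, record, work):
        with self._lun:              # nothing else touches the LUN meanwhile
            work(record)

class AtomicTestAndSet:
    """4.1 hardware accelerated locking (conceptually): the array
    compares-and-swaps a single on-disk lock record, so only a host
    contending for that exact record is affected."""
    def __init__(self):
        self._owners = {}                  # lock record -> owning host
        self._atomic = threading.Lock()    # stands in for the array's atomicity

    def try_lock(self, record, host):
        with self._atomic:
            if record not in self._owners:   # test...
                self._owners[record] = host  # ...and set, in one atomic step
                return True
            return False

    def unlock(self, record, host):
        with self._atomic:
            if self._owners.get(record) == host:
                del self._owners[record]
```

The toy’s point: the scope of contention shrinks from “the whole datastore” to “this one lock record”, which is why metadata-heavy moments (powering on a pile of VMs, growing thin VMDKs, snapshot operations) stop stepping on everything else sharing the datastore.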
To understand more, and see resources to help visualize (and also a demonstration) – read on!
Where do these integrations occur?
Well, here’s my diagram of the places where storage can integrate with vSphere. The parts affected under the topic of VAAI have red boxes around them.
To understand the dialog that goes with this, listen to the webcast (link below). As a point of note – while all vendors are working hard to integrate with VMware (which is good – it highlights the importance of VMware in customer environments), to date, as far as I know, EMC is the only vendor to have products available that integrate with each of the areas in green. BTW – “co-op” means it’s not an integration API per se, but an area where there is a lot of cooperative development.
I’ve decided to upload the presentation directly in PPT format (with some of the “vSphere.ahem” stuff now correctly showing vSphere 4.1) – I figure it will help customers and EMC partners more than a hamstrung PDF would. Sure, my competitors will get it and steal a few slides here or there, but hey – who cares, they’re welcome :-)
The presentation has some slides that when put in presentation mode, show the detail of what’s occurring “with/without” the APIs in vSphere 4.1. You can download the FULL presentation here.
You can also watch the recording of the webcast here.
I also decided to post the demonstration of one of the hardware acceleration offloads (the “Full Copy” API) during a Storage VMotion operation (thanks to the EMC Cork Virtualization Solutions crew for doing some of the key work – fellas, you rock!):
It’s notable that on NFS, EMC (and NetApp) can today get a similar effect to the Full Copy API (better in some ways, worse in others, and done differently) – the array snapshots an individual file (managed via a vCenter plugin), and then the VM is customized and registered via the vCenter APIs.
Ultimately, as you can see from the integration diagram, at EMC we’re working to extend vStorage integration (in future vSphere releases) to leverage NFS datastores (the VAAI support in vSphere 4.1 is block only). This would manifest itself in VMware operations being hardware accelerated without the need for vCenter plugins (like VAAI today, it would all be “under the covers”).
What do you need to benefit from this hardware acceleration of storage functions?
- Well, of course, you need vSphere 4.1. VAAI is supported in Enterprise and Enterprise Plus editions.
- If you’re an EMC Unified (an EMC Celerra purchased in the last year – NS-120, NS-480, NS-960) or EMC CLARiiON CX4 customer, you need to:
- Be running FLARE 30 (which also adds Unisphere, Block Compression, FAST VP aka sub-LUN automated tiering, FAST Cache and more). You can read more about all the FLARE 30 goodness here and here if you’re interested in more detail about what’s new in that release. FLARE 30 is going to be GA any day now…
- Also, ESX hosts need to be configured to use ALUA (failover mode 4). If you’re using a modern EMC Unified or CLARiiON CX4 array, using ALUA (with the Round Robin PSP or PowerPath/VE) with vSphere 4.0 or vSphere 4.1 is a best practice (for iSCSI, FC and FCoE). We will be automating this configuration shortly in the always-free EMC Virtual Storage Integrator vCenter plugin, but for now it’s pretty easy to set up manually (and there’s a quick way to sanity-check the host-side VAAI settings in the sketch after this list).
- If you’re an EMC VMAX customer – it will be a bit longer, but not much: VAAI support is in the next major Enginuity update, scheduled for Q4 2010.
- It is supported on all block protocols (FC, iSCSI, FCoE)
- When does a VAAI offload NOT work (i.e. the datamover falls back to the legacy software codepath) even if all of the above are true? (There’s a small pre-flight checklist sketch after this list.)
- The source and destination VMFS volumes have different block sizes (a colleague, Itzik Reich, already ran into this one at a customer, here – not quite a bug, but it does make it clear that consistent block sizes are a “good hygiene” move)
- The source file type is RDM and the destination file type is non-RDM (regular file)
- The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
- The source or destination VMDK is any sort of sparse or hosted format
- The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
- The VMFS has multiple LUNs/extents and they are all on different arrays
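To make those fallback conditions easy to reason about, here’s a small pre-flight sketch in Python that just encodes the list above. The field names and data shapes are invented purely for illustration – there’s no such API – it’s the rules that matter.

```python
# Hypothetical pre-flight check: encodes the fallback conditions listed above.
# All field names here are made up for illustration.

def full_copy_will_offload(src, dst):
    """Return (True, "") if the copy should stay hardware accelerated,
    otherwise (False, reason) for why the software datamover is used."""
    if src["vmfs_block_size"] != dst["vmfs_block_size"]:
        return False, "source and destination VMFS block sizes differ"
    if src["file_type"] == "rdm" and dst["file_type"] != "rdm":
        return False, "RDM source going to a non-RDM (regular file) destination"
    if src["vmdk_type"] == "eagerzeroedthick" and dst["vmdk_type"] == "thin":
        return False, "eagerzeroedthick source going to a thin destination"
    if src["vmdk_type"] in ("sparse", "hosted") or dst["vmdk_type"] in ("sparse", "hosted"):
        return False, "sparse or hosted format VMDKs are not offloaded"
    if not (src["aligned"] and dst["aligned"]):
        return False, "transfer not aligned to the device's minimum alignment"
    for ds in (src, dst):
        if len({extent["array"] for extent in ds["extents"]}) > 1:
            return False, "multi-extent VMFS with extents on different arrays"
    return True, ""

# Example: a Storage VMotion between datastores with mismatched block sizes.
src = {"vmfs_block_size": 1, "file_type": "vmdk", "vmdk_type": "thin",
       "aligned": True, "extents": [{"array": "CX4-A"}]}
dst = {"vmfs_block_size": 8, "file_type": "vmdk", "vmdk_type": "thin",
       "aligned": True, "extents": [{"array": "CX4-A"}]}
print(full_copy_will_offload(src, dst))
# -> (False, 'source and destination VMFS block sizes differ')
```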
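And circling back to the host-side requirements further up: in vSphere 4.1 the VAAI primitives surface as three advanced settings on the ESX host – DataMover.HardwareAcceleratedMove (Full Copy), DataMover.HardwareAcceleratedInit (Block Zeroing) and VMFS3.HardwareAcceleratedLocking (hardware accelerated locking) – all enabled by default. Here’s a quick sketch that reads them; it assumes it’s run somewhere the esxcfg-advcfg utility is available (e.g. the ESX service console), and the exact output wording may vary by build, so treat it as illustrative.

```python
# Sketch: read the vSphere 4.1 VAAI advanced settings on an ESX host.
# Assumes esxcfg-advcfg is on the PATH (e.g. run from the service console).
import subprocess

VAAI_SETTINGS = {
    "/DataMover/HardwareAcceleratedMove": "Full Copy offload",
    "/DataMover/HardwareAcceleratedInit": "Block Zeroing offload",
    "/VMFS3/HardwareAcceleratedLocking": "Hardware accelerated locking",
}

for option, name in VAAI_SETTINGS.items():
    result = subprocess.run(["esxcfg-advcfg", "-g", option],
                            capture_output=True, text=True)
    # esxcfg-advcfg -g prints the current value; 1 = enabled (the default),
    # 0 = disabled. Exact wording of the output may differ by build.
    print("%-30s %s" % (name + ":", result.stdout.strip() or result.stderr.strip()))
```

If your array and hosts meet the requirements above and these are set to 1, the offloads just happen – no per-VM or per-datastore configuration needed.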
So – add this VAAI support to the long list – one more EMC/VMware technical integration (we have 58 at last count!!)
I’m sure that I’ll be asked left, right and center… “which other storage vendors support VAAI?” I’m overt about the fact that I work for EMC, but I try to make this a useful blog for all. Unfortunately, on this one the right answer is: “talk to your storage vendor”. I also expect everyone to talk about it and issue press releases. Eventually, I’m sure these features will be universal (to varying degrees and with varying implementations).
- A good guidepost for customers would be your vendor’s Site Recovery Manager support. There were a huge number of vendors pointing to “we stand behind SRM!” at launch who didn’t actually have SRAs for a year :-)
- Another good guidepost: look back at the VAAI sessions at VMworld 2009 – they’re a good signal of which folks have been the most active in developing all the VAAI goodness with VMware.
- Another tip – don’t listen to anyone other than your vendor on a technical topic (a competitor will always throw the other guy under the bus) – simply ask your storage vendor to say “supported, or will be supported on this date”… and I would suggest that you ask for a demo – if it’s going to be GA in 6 months or less, it’s running in a lab. If they can’t show it to you (and give you specifics), I would start wondering.
Speaking of more for less, while this is a technical post (and I try to keep marketing off) – this spoof video with Erik Estrada (what a good dude!) was too funny (and too apropos – after all, we’re talking about efficiency, and doing more with less!) to not post.
So… put another way - “In a nutshell – what does vSphere 4.1, VAAI and EMC support mean for you?”
What Intel VT did for compute in vSphere 4….
…VAAI-enabled EMC arrays do for storage in vSphere 4.1
So – what do you think about vSphere 4.1 and VAAI (I think it’s cool!)? What would you like to see next?