
July 13, 2010

Comments


Roggyblog

This is a great feature; my only wish is that more vendors *cough* 3Par *cough* would support it.
Sorry Mr. Farley, this is definitely a 1-up for EMC :P

Simon Long

@Roggyblog, have you seen this: http://www.3par.com/news_events/20100713.html

Thanks for the great write-up Chad, that's helped make things a little clearer for me.

CC

Chad ... check the PowerPoint link again. I am getting a "resource not found" error.

Nicholas Weaver

disclaimer: I work for EMC - but you know that Simon :)

I see 3Par announced the *development* of the plugin. Does that mean it is GA? Looks like an alignment announcement.

Roggyblog

I had not seen that... that's great news!

Chad Sakac

@CC - thanks - the link should be fixed now!

MauroAyala

Craig, the ppt link is ok now :)

Thanks Chad, really useful for explaining this to my customers - though I'll have to do it in Spanish.

marc farley

Sorry Mr. Roggyblog, but please check out my blog post today on StorageRap. If you don't know how to find it I'll post a link, but I don't really like link-dropping in comments.

Yes, thanks for the good write-up Chad. There is a lot to array integration. One fine point I'd like to make: copying EZT to thin works very well on 3PAR arrays with zero detect - no problem, and without an expansion in capacity.

Chad Sakac

@ Marc - great to see 3PAR working hard to integrate with VMware, good competition is always respected and welcomed. The note of how EZT on 3PAR works differently is a good one.

I would of course note that, in theory, an array supporting the hardware-accelerated zeroing can now choose not to physically write the zeroes that arrive via SCSI WRITE SAME :-) This means that the whole "catch zeroes before they are written via an ASIC" approach is no longer a unique thing in the VMware use case :-)

While EMC arrays have a "zero reclaim" capability for zeroes that have already been written, in the VMware use case the VAAI support actually lets us avoid writing the zeroes in the first place.

So, the "gotcha" really applies only to legacy EZT VMDKs, not new ones.


Vaughn Stewart

--- NetApp Disclaimer ---

Chad,

Solid post with lots of solid info on VAAI. It's nice to see the joint engineering efforts of the industry come together to enhance SAN technologies. I think it's very honest to say that with the first release of VAAI, VMFS becomes more like NAS in terms of integration and scaling with VMware.

I do disagree with some of the 'mis-information' shared in your post when you discussed NFS. Are your critical views representative of differences in capabilities between EMC's SAN and NAS arrays?

In order to not muddy up your blog, I'll share my thoughts for those interested here:

http://blogs.netapp.com/virtualstorageguy/2010/07/vsphere-41-and-vstorage-apis-for-array-integration-vaai.html

Again, very solid post.

Marc Farley

Vaughn, are you saying that VMware's own VMFS file system is not as well integrated with vSphere as NetApp's NFS? If so, I find that slightly hard to swallow. You can make arguments about how well certain functions work in each, but it's a bit arrogant to say what you said here. Did I miss news from NetApp today that was something more than an alignment with VAAI at some undisclosed future time?

Nicholas, and speaking of alignment announcements, I didn't respond to your question previously, but the release of 3PAR's VAAI plug-in will be in September.

Chad, the "in theory" part of your discussion about block zeroing meant what? Is there something a little less vague that you can say about EMC's zero-processing technology, such as how it works and when it might be available and on what platforms? Recognizing zeroes as they are read, for instance during a Full Copy operation, and reducing the capacity consumed by the copy is not the same thing as creating an EZT VMDK using Write Same.

Chad Sakac

@Vaughn - my comments were not in any way critical of EMC's NAS implementation, which, while not perfect (I don't think ANY product is perfect), I think competes well with any competitor's - and its market growth backs that up (50% Y/Y in Q1; Q2 will be announced shortly).

What I meant is that:

I wish there were NFSv4 and NFSv4.1 (and eventually pNFS) support in vSphere, so that an NFS datastore could be scaled out over multiple ports and multiple processors, and scale-out NAS were possible - as can be done with VMFS today.

I wish that NAS devices (EMC's and others') had failover characteristics as consistent across a broad envelope of use cases as block devices have (NAS failover can consistently be in the tens of seconds, but it varies under different conditions and can stretch to minutes) - I wish that failover behaviour were not a function of a variety of conditions, as is already true of how block devices and VMFS operate today.

But - those wishes are (today) wishes (and active engineering projects).

Historically, I wished that VMFS metadata-update scaling (blown out of proportion in many cases, but a valid concern in some - just like some of the beefs against NFS, several of which I listed above) were more like NFS metadata-update scaling - and in vSphere 4.1 it is.

I wished that we could accelerate VM-level copy operations on VMFS like we can on EMC NAS - and in vSphere 4.1, we can.

These things go back and forth, and I could go back and forth on strengths and weaknesses - but I don't think that would be too fruitful. You and I (and our respective orgs) are working on updated NFS client work, continuing to tighten failure conditions, and driving vStorage API hardware acceleration support for NFS (and Storage IO Control support). People can move forward, regardless of protocol, with confidence. For the vast majority of use cases (the VAST majority) - if that is the debate, wow - IMO, the customer is missing focus on bigger problems.

I'm not a protocol passionista - I believe you get leverage from both protocols.

@Marc - thanks for the question. The caveat listed above is due to a EZT VMDK written on a generic device (for a moment assume this), can't leverage the extended copy command because it's not a 1:1 block range map. If the zeros are never written to disk (but allocated as far as the host is concerned), then EZT is always, thin in practice at the array level. from my understanding (not claiming to be an expert on 3PAR), this is done in the ASIC in your platforms. With VAAI, when the SCSI WRITE SAME command is issued (hardware accelerated zero/init) - in the Enginuity implementation (checking on the FLARE implementation) - the zero is never written (and never needs to be reclaimed), only allocated - so very similar effect (though obviously vSphere 4.1 specific, and the 3PAR approach is general).

@Marc - thanks for the question. The caveat listed above is because an EZT VMDK written on a generic device (assume this for a moment) can't leverage the extended copy command, since it isn't a 1:1 block-range map. If the zeroes are never written to disk (but are allocated as far as the host is concerned), then an EZT VMDK is effectively always thin at the array level. From my understanding (not claiming to be an expert on 3PAR), this is done in the ASIC on your platforms. With VAAI, when the SCSI WRITE SAME command is issued (hardware-accelerated zero/init) - in the Enginuity implementation (checking on the FLARE implementation) - the zero is never written (and never needs to be reclaimed), only allocated - so it's a very similar effect (though obviously vSphere 4.1 specific, whereas the 3PAR approach is general).

Sameer

Hey, how can I create the VAAI filters through the CLI in vSphere 4.1?

marc farley

Chad, I think 3PAR's VAAI WRITE SAME implementation is probably very similar. A 3PAR array doesn't write zeroes and have the ASIC detect them - it just doesn't write them. As for the different topic - full copy of EZT VMDKs to thinly provisioned volumes - I can see why this wouldn't work for most TP implementations, because the clones made this way could be huge - especially if somebody made a bunch of clones of large EZT VMDKs. I wrote a blog post on this here: http://www.storagerap.com/2010/07/clarifying-vaai-capabilities-and-implementations.html
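
A back-of-the-envelope illustration of why that matters - all of the numbers below are invented purely to show the capacity blow-up being described:

    # Invented numbers, purely to illustrate the capacity blow-up described above.
    vmdk_size_gb   = 200    # size of each eagerzeroedthick VMDK
    data_in_use_gb = 20     # blocks the guest has actually written
    clones         = 10

    # Full Copy to a thin target with no zero awareness: the zero fill gets copied too.
    naive_gb      = clones * vmdk_size_gb      # 2000 GB consumed
    # Zero-aware copy (or zeroes that were never materialised in the first place):
    zero_aware_gb = clones * data_in_use_gb    # 200 GB consumed

    print(naive_gb, "GB vs", zero_aware_gb, "GB")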

Matt

Does anyone know if there will be VAAI support for EMC NX4 devices? We are a small setup and don't have the high-end devices. Just curious whether smaller shops will get this feature on the lower-end storage units or not.

John

For VMFS "hardware accelerated locking", remember that the new 'Atomic COMPARE AND WRITE' command proposed for SBC-3 has not yet been finalized in the T10 SCSI standards, nor has SBC-3 yet been ratified.

So it's kind of silly to me to go championing a feature that no shipping hardware works with. Sure, VMware got its parent company, EMC, to do some early prototyping and add support to a small set of arrays. But most vendors won't add support until the standard is finalized (for fear it could change), and then it takes firmware updates to add support... which take a long time. Additionally, people with older arrays could be out of luck, as vendors may choose not to go back and add support.

While interesting, it's not quite the victory you portray (yet).
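
For anyone who has not met the primitive being referred to here, it behaves roughly like a compare-and-swap on a single lock sector: "write my lock record, but only if the sector still holds the value I last read", executed atomically by the array. A minimal sketch of those semantics follows (illustrative only - not the SBC-3 wire format, and the lock-record layout is invented):

    # Semantics of an atomic compare-and-write, as used for VMFS on-disk locks.
    import threading

    class LockSector:
        def __init__(self):
            self._data = b"\x00" * 512
            self._mutex = threading.Lock()   # stands in for the array doing the check atomically

        def compare_and_write(self, expected, new):
            """Write `new` only if the sector still holds `expected`; report success."""
            with self._mutex:
                if self._data == expected:
                    self._data = new
                    return True
                return False                 # another host won the race; re-read and retry

    sector = LockSector()
    free = b"\x00" * 512
    mine = b"host-A-holds-this-lock".ljust(512, b"\x00")
    print(sector.compare_and_write(free, mine))  # True  - lock taken in one round trip
    print(sector.compare_and_write(free, mine))  # False - the kind of contention that used to need a LUN-wide SCSI reservation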

Andrea Mazzai

Looks like the PPT presentation is offline again!

Chad Sakac

Apologies Andrea - ppt link now fixed.

Jose Manuel Carballo

Great post! You've done a great job. I hope this new feature will be available soon from vendors other than EMC.

BlueShiftBlog

Great post Chad and thanks.

I was wondering if there were any numbers surrounding how the VAAI features may impact general snapshot performance.

I can imagine how the hardware-assisted locking might improve a scenario where several snapshots are active concurrently on the same volume.

However, it's not clear to me to what extent VAAI might improve a scenario where a large log/delta file is being merged on snapshot close for a single VM. I'm thinking the hardware-accelerated zeroing might help somewhat, depending on what the VMDK looks like? Thanks!

Derek

One feature that I would like to see, if it's not present in 4.1 and VAAI, is to have ESX send 'zeroize' commands to the array whenever a VMFS block is no longer assigned to a VM. Why? If I Storage vMotion a 200GB VM from LUN1 to LUN2, I want the array to reclaim all of the previously allocated storage on LUN1. The only way the array will know the data is no longer valid is if VAAI tells the array to zeroize the blocks.

Another feature VMware needs to implement is guest OS integration with VAAI, such that when NTFS deletes a file, VAAI tells the array that xx blocks aren't needed and zeroizes them. This is the same concept that the Veritas SAN volume manager uses with some arrays, such as the 3PAR, to reclaim freed NTFS space.

If VMware implemented both of these features, then VMFS LUNs hosted on 'advanced' arrays would stay thin throughout their entire lifecycle. That would eliminate the need to use sdelete or fill up a VMFS volume with an eager-zeroed VMDK to trigger the array's zero reclamation features.
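
What is being described here is essentially dead-space reclamation: once VMFS (or the guest filesystem) stops using blocks, something has to tell the thin array so it can hand the capacity back. A minimal sketch of the concept (illustrative names only; this is not how ESX or any particular array implements it):

    # Illustrative sketch of dead-space reclamation on a thin LUN; all names invented.
    class ThinPool:
        def __init__(self):
            self.backed = set()          # physical blocks currently backing LUN blocks

        def write(self, lba):
            self.backed.add(lba)

        def reclaim(self, lbas):
            """What a host-issued 'zeroize'/reclaim command would trigger: the array
            frees the physical blocks behind ranges the host no longer uses."""
            self.backed -= set(lbas)

    pool = ThinPool()
    vm_blocks = range(0, 1_000)
    for lba in vm_blocks:
        pool.write(lba)                  # the 200GB VM lives on LUN1
    # Storage vMotion moves the VM to LUN2 and VMFS frees the blocks on LUN1...
    pool.reclaim(vm_blocks)              # ...but only an explicit reclaim lets the array shrink
    print(len(pool.backed))              # -> 0 instead of staying fully allocated forever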

Storagemeat.blogspot.com

Chad, actually happened across this while reading about the 10 GigE stuff today. Since you mentioned other vendors in the comments, thought I'd point out that HDS also supports VAAI on the AMS2000 line with firmware 0893/B or higher. We've got more detail at http://storagemeat.blogspot.com/2010/08/geeking-out-with-vaai.html.

Thanks!

Tomi Hakala

Chad,

VAAI is supported on CLARiiON with FLARE 30, and on VMAX in Q4 this year, but is there any news on when VAAI will be supported with VPLEX?

Derek

I did some performance testing of VAAI with a 3PAR T400 array, and wow, 3PAR screams when creating zeroed VMDKs. VAAI performance is 20x faster than without VAAI.

http://derek858.blogspot.com/2010/12/3par-vaai-write-same-test-results-upto.html

Gguglie

Hi Chad,
I would like to use some of your slides in a post about VAAI on my blog. In particular, I would like to use the pictures about the storage APIs with/without VAAI, translating them into Italian (the post will be in Italian).

Can I use them? (giving credit to you of course!).

Thanks
Giuseppe

The comments to this entry are closed.


Disclaimer

  • The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Dell Technologies and does not necessarily reflect the views and opinions of Dell Technologies or any part of Dell Technologies. This is my blog; it is not a Dell Technologies blog.