
March 14, 2011


Comments


Jonas Nagel

Hi,

We have many concerns about using NFS on Celerra with NS/VNXe (or VNX with Data Movers), because we're fully virtualized (no physical servers anymore) and are wary of (v)DM fail-overs.

Since a fail-over involves a re-mount and potentially a "quick" fsck of the file system, re-mounting the NFS export can still take a while, depending on the size of the FS. We know that in the past, DM fail-overs could take as long as 15 minutes. We've also heard that recent DART codes have largely improved re-mount times, but the delay is not eliminated, and even 90 seconds is, in our opinion, way too much. And what happens if the NFS file system becomes corrupted during fail-over and fsck does its full check, or fails to re-mount the file system (might this still happen?)?

For the above reasons, and given the comparably minuscule fail-over times of iSCSI and FC, we have mentally relegated NFS solutions to craplication usage only.

What's your take on this?

Chad Sakac

@Jonas - thanks for being an EMC customer.

Speaking frankly (as I always try to do), NAS failover time remains less than ideal. There have been massive improvements over the past: failovers now complete well under the recommended 125s timeout value (which the EMC VSI plugin automatically sets) on ESX hosts.
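For context on where that 125s comes from: it falls out of the ESX NFS heartbeat advanced settings. The sketch below uses the values commonly recommended at the time (NFS.HeartbeatFrequency=12, NFS.HeartbeatMaxFailures=10, NFS.HeartbeatTimeout=5); treat the exact formula and values as an assumption and check VMware's documentation for your release.

```python
# Back-of-the-envelope: ESX marks an NFS datastore unavailable after roughly
#   NFS.HeartbeatFrequency * NFS.HeartbeatMaxFailures + NFS.HeartbeatTimeout
# (assumed formula; the commonly recommended advanced-setting values follow)
heartbeat_frequency = 12     # seconds between heartbeat probes (NFS.HeartbeatFrequency)
heartbeat_max_failures = 10  # consecutive missed heartbeats tolerated (NFS.HeartbeatMaxFailures)
heartbeat_timeout = 5        # seconds to wait on each probe (NFS.HeartbeatTimeout)

datastore_down_after = heartbeat_frequency * heartbeat_max_failures + heartbeat_timeout
print(datastore_down_after)  # 125 seconds: the window a NAS failover has to beat
```

In other words, a fail-over that completes in under ~125s never surfaces to the guests as a dead datastore, which is why the VSI plugin extends the defaults to these values.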

That said - there's still a lot of work to do. Through 2011, the goal of the NAS team is to get it to under 30s always, through a variety of methods (this 30s mark would let it operate in almost all cases without extending timeouts).

As a general answer to "what's my take": if I were a customer, I wouldn't use just VMFS on block or just NFS. I think judicious use of both is the most flexible choice. There are simply still use cases which are different enough that each has its place.

For some reason I don't fully comprehend, many customers want to "just use one". I can partially comprehend the appeal (operational simplicity), but if you're going to look at the world through that lens, then you need to pick the one which covers the superset of SLAs, not the subset - which is one of the reasons many customers (regardless of storage vendor) go VMFS/block only. This is reflected in your thinking.

To me, that's a "too bad" choice (to limit yourself to just one) - you miss out on leveraging the relative strengths of one vs. the other.

There are, BTW, three ways that failover time can be brought down even further:

1) PERHAPS: clustered NAS solutions coupled with near-term changes in vSphere where failure/retry uses DNS rather than IP (on failure, the fileserver doesn't "boot"; the client is redirected to an already-working node). Working on validating this.

2) DEFINITELY: continued acceleration of "classic" NAS failover (see above). There are lots of things that can be done here (partial boot state, changes to the core filesystem), and all of them are being done.

3) DEFINITELY (but further out in the roadmap), use of NFS v4.1 (multiple sessions) or better yet, pNFS coupled with clustered NAS. This is a variant of 1, but depends on vmkernel support for NFS v4.1 or pNFS.

Hope this helps!

Jonas Nagel

Well it's good to hear that you're as concerned about the issue as we are.

Concerning the "mix and match" of technologies, it's not that I (or we) would disagree. Mostly it's tied to the storage device model that is, or was, in use, or, as you mentioned, to complexity issues for those (e.g. SMB customer admins) who need to maintain the solution in the end.

We really hope for 1) and/or 3) - 1) because it's allegedly what NetApp does (I can't tell; so far I've only seen them from the outside, blinking in racks).

Paul Aviles

Chad, if you go with the patch, does it affect any future DART upgrades?

We are using an NS-120FC, all NFS, presenting datastores to vSphere for full data center virtualization, and I really like the simplicity of it.

Regards,

Paul

The comments to this entry are closed.


Disclaimer

  • The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC. This is my blog, it is not an EMC blog.
