It’s the worst-kept secret in storage land – EMC has something BIG coming. In FACT, at the AWESOME EMC Customer Appreciation Party last night at VMworld, I ran into MANY customers who had ALREADY purchased this new thing :-)
I’ve talked in the past about the VNX development train and code-naming scheme (mountain ranges) here. I’ve also talked before about OUR approach to the “how do you change the engine of a car while barreling down the highway?” problem. We have several major pieces of code: two that are hardened and enhanced via the VNX release train (the block abstraction and filesystem code), our management and control stack (Unisphere), and our “abstractor” codebase (C4LX – hardened and enhanced via the VNXe release train – I talked about that here), which enables the two other chunks of code (which are completely modular) to optionally run on the same hardware where that is optimal, and separately when that’s better.
The big realization here was that we were out of gas with the block abstraction code in FLARE, and the previous-generation VNX would be the end of the line for that chunk of code. BTW – I want to be crystal clear about this. I’m SURE I’m biased here, but… The current-generation VNX is the simplest, most efficient, and most VMware-integrated platform on the market – ahead of the other folks out there. Here are the stats:
- 71,000 systems shipped (EMC’s midrange is the most widely deployed mid-range storage platform on the market)
- 4.1 EXABYTES shipped.
- 600 customers with more than a PETABYTE of VNX deployed.
- 60% of the systems ship with some flash, and in total, VNX alone has shipped more than 235,000 SSDs.
That’s wildly successful, and for that – THANK YOU CUSTOMERS! Along with the Isilon barrage, EMC’s share in the Unified and pure NAS market has dramatically increased over the last few years. But – being the best you can be is all about continuously getting better, and dealing with disruption.
What disruption am I talking about? Well, two primary factors:
- Flash becoming a higher proportion of the “IO persistence mix” (and in some cases, the right answer being “the all-flash array”)
- The Sandy Bridge and Ivy Bridge (and later) CPU generations moving to a new architecture – where if you can’t scale like crazy across cores with a high-bandwidth interconnect, you can’t ride the performance curve
That second point can’t be overstated. With Westmere, the days of per-core performance scaling stopped, and multi-core scaling started in full force. Storage stacks push parts of the CPU and interconnect differently than general-purpose workloads, and virtualization alone isn’t always the answer to achieving higher thread-bound performance.
We needed to do something big. This is all about a “Multi-Core” ground-up rewrite of core elements of the block pooling and abstraction part of the VNX codebase. Multi-Core RAID (huge improvements in underlying FAST VP behavior and RAID performance, and part of the basis of active-active and more). Multi-Core Cache (improvements to FAST Cache). The net is a huge performance increase!
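To make that multi-core point concrete, here’s a minimal Go sketch (mine, not EMC code – the structures and numbers are made up purely for illustration) of why a design that shards work per core scales where a single global lock doesn’t:

```go
// A minimal sketch of global-lock vs per-core-sharded scaling.
// Hypothetical structures - nothing here is VNX/MCx code.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

const ops = 1_000_000

// globalCache: every operation hits one mutex - cores queue behind it.
func globalCache(workers int) time.Duration {
	var mu sync.Mutex
	cache := make(map[int]int)
	var wg sync.WaitGroup
	start := time.Now()
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < ops/workers; i++ {
				mu.Lock()
				cache[id*ops+i] = i
				mu.Unlock()
			}
		}(w)
	}
	wg.Wait()
	return time.Since(start)
}

// shardedCache: each worker owns a private shard - no contention at all.
func shardedCache(workers int) time.Duration {
	shards := make([]map[int]int, workers)
	for i := range shards {
		shards[i] = make(map[int]int)
	}
	var wg sync.WaitGroup
	start := time.Now()
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < ops/workers; i++ {
				shards[id][i] = i // private shard: no lock needed
			}
		}(w)
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	n := runtime.NumCPU()
	fmt.Printf("global lock:     %v\n", globalCache(n))
	fmt.Printf("per-core shards: %v\n", shardedCache(n))
}
```

Run it on a many-core box and the gap grows with core count – the global lock serializes every core, while the sharded version keeps them all busy. That’s the spirit of the MCx rewrite, massively simplified.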
[REDACTED – TUNE IN ON SEPT 4TH]
This is important NOT because people pick these platforms only for performance (because if that was the main driver, an all-flash array might be better), but because they want hybrid arrays to DO A LOT OF THINGS. After all, this product category is defined by the “it slices, it dices, it juliennes fries!” characteristics at a wide variety of scales (from small entry, to large!) and workloads (from virtualization, to archives, to backups, to SMB 3.0 to Oracle on NFS, to all sorts of funky stuff).
In that category, customers simply expect rich data services. They expect those data services on both Block and NAS (which these days are getting more and more blurred).
These all depend on the core code that does the block abstraction (and on top of which there is a NAS block-to-file mapping).
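For intuition only, here’s a toy Go sketch (emphatically NOT VNX internals – the slice granularity and names are invented) of that layering: a thin LUN maps virtual addresses onto pool slices on demand, and the NAS layer is just one more mapping stacked on top:

```go
// Toy sketch of block abstraction layering - hypothetical, not VNX code.
package main

import "fmt"

const sliceBlocks = 256 // blocks per pool slice (made-up granularity)

// Pool hands out slices on demand - the "block abstraction".
type Pool struct{ next int }

func (p *Pool) allocSlice() int { s := p.next; p.next++; return s }

// ThinLUN maps virtual slice index -> physical pool slice, allocated on first write.
type ThinLUN struct {
	pool  *Pool
	slice map[int]int
}

func (l *ThinLUN) physBlock(lba int) int {
	v := lba / sliceBlocks
	s, ok := l.slice[v]
	if !ok {
		s = l.pool.allocSlice() // allocate-on-write: thin provisioning
		l.slice[v] = s
	}
	return s*sliceBlocks + lba%sliceBlocks
}

// The NAS layer is just another mapping on top: file offset -> LUN LBA.
func fileOffsetToLBA(fileStartLBA, offset, blockSize int) int {
	return fileStartLBA + offset/blockSize
}

func main() {
	lun := &ThinLUN{pool: &Pool{}, slice: map[int]int{}}
	lba := fileOffsetToLBA(1024, 8192, 512) // the file block at offset 8KB
	fmt.Println("physical block:", lun.physBlock(lba))
}
```

Every data service that touches addresses – snapshots, thin provisioning, dedupe, XCOPY – lives in or on top of that mapping layer, which is why rewriting it is such a big deal.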
Another example of how this is a BIG change (an engineer who knows the IO path will recognize that this represents a fundamental change in the IO path) is XCOPY. Personally, I haven’t been satisfied with VNX when it comes to its XCOPY implementation. We can do better. XCOPY is dependent on the core metadata and block redirection code (by the way, so are snapshot behavior and thin device behavior). In the next-gen VNX, there are no longer the internal MLU queues that were problematic, so the next-generation VNX doesn’t have the previous generation’s “less than ideal” XCOPY behavior (BTW, some of these benefits are being back-ported to the Inyo codebase for existing VNX customers).
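To illustrate why the metadata path matters so much here, a sketch of the general idea (hypothetical interfaces – not the real VAAI/SCSI or VNX APIs): an offloaded XCOPY is a single command the array can satisfy internally – potentially as a pure metadata update on a redirect-on-write design – where a host-driven copy pushes every block over the wire twice:

```go
// Package xcopysketch - an illustrative sketch (not the real VAAI/SCSI API)
// of why XCOPY offload beats a host-driven copy.
package xcopysketch

// Array models a block target. XCopy stands in for the SCSI EXTENDED COPY
// primitive; all method names here are hypothetical.
type Array interface {
	Read(lba int) []byte              // one block over the wire to the host
	Write(lba int, data []byte)       // one block over the wire to the array
	XCopy(src, dst, blocks int) error // array copies (or just remaps) internally
}

// hostCopy is the software fallback: 2*blocks transfers through the host.
func hostCopy(a Array, src, dst, blocks int) {
	for b := 0; b < blocks; b++ {
		a.Write(dst+b, a.Read(src+b))
	}
}

// offloadCopy issues one command; on a redirect-on-write design the array
// can satisfy it with metadata updates instead of moving data. If the
// array declines, fall back - conceptually what ESXi does with VAAI.
func offloadCopy(a Array, src, dst, blocks int) {
	if err := a.XCopy(src, dst, blocks); err != nil {
		hostCopy(a, src, dst, blocks)
	}
}
```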
These underlying changes not only dramatically improve overall system performance, but enable new functionality like block-level deduplication, active/active storage models, and VDM mobility.
Here’s a demonstration of the new block-deduplication code in action (obvious use cases are EUC and Server virtualization).
You can download the high-rez version of this video here!
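If you’re curious what block-level deduplication means mechanically, here’s a minimal Go sketch (illustrative only – the 8KB block size and in-memory maps are my assumptions, not the VNX implementation): fingerprint each fixed-size block, keep one copy per unique fingerprint, and turn duplicates into references:

```go
// Toy fixed-block deduplication - illustrative, not VNX code.
package main

import (
	"crypto/sha256"
	"fmt"
)

const blockSize = 8 * 1024 // assumed granularity for this sketch

// dedupe maps each block's fingerprint to a single stored copy and
// returns per-block references into the store - the essence of
// block-level deduplication.
func dedupe(data []byte) (store map[[32]byte][]byte, refs [][32]byte) {
	store = make(map[[32]byte][]byte)
	for off := 0; off < len(data); off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		fp := sha256.Sum256(data[off:end])
		if _, ok := store[fp]; !ok {
			store[fp] = data[off:end] // first copy wins; duplicates become refs
		}
		refs = append(refs, fp)
	}
	return store, refs
}

func main() {
	// 100 identical "cloned guest OS" blocks (think VM templates) dedupe to one.
	data := make([]byte, 100*blockSize)
	store, refs := dedupe(data)
	fmt.Printf("logical blocks: %d, unique stored: %d\n", len(refs), len(store))
}
```

One hundred identical “cloned guest OS” blocks collapse to a single stored block plus references – which is exactly why EUC and server virtualization are the obvious use cases.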
Here’s a demonstration of the new symmetric active-active storage model in action (in Rockies it’s limited to traditional non-pooled LUNs, but that’s not intrinsic – expect it to expand over time).
You can download the high-rez version of this video here!
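For the multipathing-minded, here’s a toy sketch (hypothetical types and names – not a real PSA/NMP API) of what symmetric active-active changes versus classic ALUA: instead of steering IO only to the owning SP’s optimized paths, every path to both SPs can serve IO at full performance:

```go
// Package aasketch - a toy contrast of ALUA vs symmetric active-active
// path selection. Hypothetical names, not a real multipathing API.
package aasketch

type Path struct {
	SP        string // owning storage processor, "A" or "B"
	Optimized bool   // ALUA state: only the owner's paths are optimized
}

// usablePathsALUA: classic ALUA LUN - IO should stay on the owning SP's
// optimized paths, so only one SP does the work at a time.
func usablePathsALUA(paths []Path) []Path {
	var out []Path
	for _, p := range paths {
		if p.Optimized {
			out = append(out, p)
		}
	}
	return out
}

// usablePathsSymmetric: symmetric active-active - every path to every SP
// is optimized, so round-robin can spread IO across both SPs at once.
func usablePathsSymmetric(paths []Path) []Path {
	return paths
}
```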
Here’s a demonstration of perhaps my favorite new feature, non-disruptive VDM mobility. This is a big change – VNX NAS failover and migrations have dramatically improved over the last few years, and VDM mobility pushes that to a whole new level. In the demo below you can see a migration from an “N-2” platform (in this case an NS-480) to a current platform (VNX using Rockies code), and the migration is fast and simple. BUT – it’s also a migration from a non-VDM into a VDM. AFTER this last migration, all NAS migrations (and VDM movement on a platform) can be non-disruptive. That’s big for our customers, and fills an important gap.
You can download the high-rez version of this video here!
It’s also funny – before something new appears, the internal “battle cards” between vendors start to fly in an attempt to get their field ready. It being the era of the internet, nothing much stays secret. So, I’ve seen the “battle cards” that others have prepared. As is always the case (I’m sure it’s the case when we do it too!), much in them is wrong (one more reason why I try to follow the mantra in the EMC Presales Manifesto: be a force for good – never go negative!). They suggest that “nothing has changed” in the NAS stack in the next-generation VNX. Umm – nope, that’s not true.
VDM mobility is one example. We also worked in a ton of improvements around core transactional NFS performance (multi-writer locking, m-time handling and much, much more). Customers using VNX NFS with vSphere and Oracle dNFS can expect roughly 60% better IO latency, and roughly 3x more total IOps with similar configurations on the next generation platform.
Hey - I know it's VMworld - but look, there's a big EMC megalaunch coming up on Sept 4th (and there MIGHT be even more to it than what I just alluded to :-)...
This is about 1/4 of the news for next week. There’s a pile of deeply cool announcements, including something coming from way down within EMC. I can’t say what, but I can give you a clue… Think “Rivers”
TRUST ME… TUNE IN :-)
Are you a VNX customer? Happy? What could we do better? What do you want to see? Are you interested in the next gen? Feedback welcome!
Comments on what those of us with current-gen VNX platforms should expect out of this release, especially with regards to deduplication and latency improvements? Lack of dedupe, especially on LUNs with VMs, has been a bit of a headache for us as far as storage costs. It will be a shame if the performance penalty for enabling dedupe is too high for early adopters.
Posted by: S.Fuller | August 28, 2013 at 02:54 PM
I sure hope EMC will put in the effort to back-port the Rockies MCx code for the current line of VNX.
Posted by: Adam | August 28, 2013 at 11:01 PM
Hi Chad,
Yes, I am a customer, and one of the larger ones in my country. What we would like to see is more info for existing VNX customers who have stuck with EMC through the transition. One big question I have – I understand that the MCx software will not run on current VNX CPUs due to core counts, but can the current storage processors be upgraded by sliding them out of the chassis and putting new ones in? I know not in place, as the RAID code has changed and thus the data would be lost, but this could be avoided by temporary loan disks and moving the LUNs to them during the upgrade, could it not? Also, if this is all too hard, then give us an upgrade path moving forward: a forklift requirement might get signed off this time, but with the way the market is moving, monolithic forklift upgrades will be un-achievable in the near future. And last of all is the EMC disk price – does EMC understand that the cost of their disks is the sole reason they have competitors? Yes it has warranty, yes it has support, but if I could get them for market price (here in NZ that is less than 1/4 the EMC price) then I would have no problem paying for replacements as needed, and I could then finally expand storage to retain the data the company wants without vast costs for scaling storage (I might have closer to 1PB instead of 100TB).
Lastly, I think it might be time for a VSI blog update, as the EMC website shows the VSI vSphere web client but the install docs for the latest version all mention the C# client.
p.s. VNX cloud edition might be a better fit but :) care to share?
Posted by: Brett | August 29, 2013 at 08:06 PM
S. Fuller and Adam, from my understanding there may be limited backporting of MCx features, but only when they don't require the performance boost that MCx provides. For example, don't expect block de-duplication or smaller slice sizes in the VNX.
Ben
Posted by: Ben Conrad | August 31, 2013 at 10:03 AM
Hi Chad,
Will the VNX Simulator that's used by partners for training and testing also be getting a software update, or is that not being considered?
Posted by: Michel | September 04, 2013 at 11:05 AM