Last week we announced the next-generation VNXe (the VNXe 3200), and today we demonstrated it at EMC World.
There’s the tale of the tape (functions, parameters, behaviors – great blog post here, and another from the awesome Jason Nash @Varrow here), but there’s another story.
The VNX is a “Type 1” storage architecture (tightly coupled, clustered – see my “storage architectures” post here).
This “phylum” in the universe of persistence software stacks is the sweet spot for:
- “Swiss army knife” behaviors – able to serve many functions all moderately well – in essence, “well rounded”.
- Scales down (small) and up (big) extremely well, but doesn’t scale “out” well.
Scaling DOWN is as important as having huge upper-end envelopes (though people focus on BIG hero numbers or "scales to infinity!"). Many customers need something that: 1) does a little of everything; 2) has deep integration into a broad set of use cases (all sorts of application stacks); and 3) supports lots of environments (hypervisors, physical environments and more) – and they need it to be small (both physically and economically).
I’ve described earlier how we’ve embarked on a multi-year journey to re-architect the VNX family. It’s not easy when you have a huge installed base, and there are other industry examples of mature, widely deployed platforms that need a "full overhaul" without missing a beat – all while barreling down the highway at 200km/hour.
History is littered with architectures that didn’t successfully make this transition.
We’re about halfway down the path with the VNX architecture, and I think the team took a genius approach: abstracting functions, changing them one at a time, and using release vehicles to harden the new code with as little customer risk as possible.
The VNXe and MCx are critical milestones in this journey, now complete – and this is the story behind the story.
Read on!
Think of the VNX as: “the platform where we introduce new functions/features/performance into the VNX code stack”. MCx in the Rockies release (which delivered huge changes to process/core handling, flash scaling, new indirect memory mapping for data services, and the basis for a new active-active model) is one example.
Think of the VNXe as: "the platform where we merge and integrate those new features into more tightly unified and simplified code stacks and customer packages". The original VNXe was the first at-scale GA test of the "C4 Unity" code stack (talked about that here), which enables us to run all the VNX services in a common kernel, with a variety of innovative encapsulation techniques (full OS virtualization the way most people think of it being just one). The first-generation VNXe incorporated the block and file code stack/behaviors from early-generation VNX (specifically FLARE 19 and DART 5.6 – hence its features/functions), and all of the block storage presentation was the iSCSI-on-file stack (using the DART 5.6 iSCSI target).
It’s critical to internalize this. While the VNXe shares "familiness" and a functional codebase with the VNX (common features, user experience)… it has a single Linux-based kernel (no "block stack" + "file stack") and fits it all into a much smaller footprint (no "Storage Processors" + "Data Movers"). A litmus test: the VNXe has no trace of any Windows kernel.
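To make that "iSCSI on file stack" layering concrete, here is a toy Python sketch – every class and method name is hypothetical, invented purely for illustration, not actual VNXe code – of what presenting block storage through the file stack implies: the LUN is backed by a file, so every block I/O becomes a file I/O underneath.

```python
# Conceptual sketch only: hypothetical names, not VNXe code.
# First-generation VNXe presented block LUNs through the file stack's
# iSCSI target, so a "block" device was really a file underneath.

class FileStack:
    """Stands in for the DART-style filesystem layer."""
    def __init__(self):
        self.files = {}

    def create_file(self, name, size_bytes):
        self.files[name] = bytearray(size_bytes)
        return name

    def pwrite(self, name, offset, data):
        self.files[name][offset:offset + len(data)] = data

    def pread(self, name, offset, length):
        return bytes(self.files[name][offset:offset + length])


class FileBackedLUN:
    """Block presentation layered on top of a file (the old indirection)."""
    BLOCK = 512

    def __init__(self, stack, name, size_bytes):
        self.stack = stack
        self.backing = stack.create_file(name, size_bytes)

    def write_block(self, lba, data):
        # A block write is really a file write underneath.
        self.stack.pwrite(self.backing, lba * self.BLOCK, data)

    def read_block(self, lba):
        return self.stack.pread(self.backing, lba * self.BLOCK, self.BLOCK)


if __name__ == "__main__":
    stack = FileStack()
    lun = FileBackedLUN(stack, "lun0", size_bytes=1 << 20)  # 1 MiB toy LUN
    lun.write_block(0, b"hello".ljust(512, b"\x00"))
    print(lun.read_block(0)[:5])  # b'hello' -- served via the file layer
```

The point is just the layering: in the first-generation box, the block personality rode on top of the file stack rather than sitting on a native block stack of its own.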
This VNXe 3200 release merges the MCx code into the C4 package.
- To a customer (more important than how we constructed it!), this means the following:
- VNXe customers get all the goodness delivered to the largest customers in the VNX family. The value of "starting small" matters as much to some customers as "scaling up" does to others.
- Starting small is POTENT: 2U, 8 Sandy Bridge cores, 48GB of system memory, eight 10GbE ports, up to four 8Gbps FC ports, 25 spindles in that 2U package (and expandable), full flash support including dense SSD configs, scaling ultimately to 500TB of capacity. This isn’t a "small" box – it just comes in a small package, at a small price. For many customers, this is everything they need.
- Full FAST VP and FAST Cache capability on the entry platform (matching the most recent VNX).
- Dramatic 3x uplift in overall system performance (matching the most recent VNX MCx code).
- Full protocol support (FC, FCoE, iSCSI, NAS, etc.).
- All of the other goodies around enterprise app/use case integration – VSI, ESI, and, later in the year, AppSync and ViPR Controller support.
- To a student of persistence stacks (and the storage industry), it means this:
- One more major chunk of the VNX family transition is complete.
- We continue to stress and expand the footprint of the new “Unity” code stack in VNXe. I suspect that the “dividing line” between VNXe and VNX will just disappear over time.
- This is a critical (almost final) step in a VERY interesting case study in modernizing widely deployed, mature products (something rarely done successfully in our industry).
So – how simple is it?
… And what if you have a lot of them?
… And what if you want application integration?
So… What’s next?
- Well – the NAS portion of the VNX codebase is awesome, but it is due for a refresh. Customers really dig what VNX does here today, but there’s room for more. The rich Virtual Data Mover functionality we’ve introduced in VNX is great (it layers on top of the core NAS code), but a whole new next-generation transactional, "tightly coupled" clustered file system (totally different from scale-out filesystems like the one in Isilon) would be good. Think next-generation scale, file/block indirection (bolting the file snapshot and replication tightly to the block mapping) and more. In fact… this is already underway. The filesystem in the new VNXe is directly "bolted into" (not "on top of") the block abstraction model. In other words, there are no "dVols", and the snapshot mechanism for block is in fact the same as the mechanism for the filesystem (see the rough sketch after this list).
- After one more "sync" point with VNXe (bringing in the new filesystem codebase), the Unity codebase conversion is complete, and we will have literally "changed every part of the car while barreling down the highway". New core block stack (MCx), new core NAS stack designed for the next decade (and a very high flash mix) – all running on a common, tightened, updated kernel. At that point, we can take that single code stack and scale it to different segments and applications.
- The Unity codebase in the VNXe is also the right vehicle for making the "SDS version" of the VNX code stack (for its data services) widely available – it’s simpler, it’s smaller, and its kernel is Linux and can be redistributed (and therefore be "free" of some of the external restrictions that have kept the current "VNX VM" under lock and key inside the walls of EMC and EMC vLab). This is the essence of "Project Liberty" (more on this in an upcoming post – wait for it!!). What could it be used for? One example would be Test/Dev; another would be a multi-tenant use case (everyone gets their own VNX!… all running on a big ScaleIO pool).
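To make that "bolted into, not on top of" distinction concrete, here is a rough Python sketch (all names are hypothetical, and a simple redirect-on-write scheme stands in for whatever the real mechanism is – this illustrates the concept, not EMC code): both the block and file personalities sit directly on the same block-mapping code, so snapshotting either one exercises the identical mechanism, with no dVol layer in between.

```python
# Rough conceptual sketch: hypothetical names, not EMC code.
import copy

class BlockMap:
    """Shared mapping layer: logical block address -> physical extent."""
    def __init__(self):
        self.map = {}          # lba -> (extent id, data)
        self.next_extent = 0

    def write(self, lba, data):
        # Redirect-on-write: new data always lands in a fresh extent.
        self.map[lba] = ("extent-%d" % self.next_extent, data)
        self.next_extent += 1

    def snapshot(self):
        # A snapshot is just a frozen copy of the mapping table;
        # physical extents are shared, not copied.
        return copy.copy(self.map)


class LUN:
    """Block personality sits directly on the block map."""
    def __init__(self):
        self.bm = BlockMap()

    def snapshot(self):
        return self.bm.snapshot()   # same mechanism...


class FileSystem:
    """File personality also sits directly on the block map (no dVols),
    so its snapshot is literally the same call as the block snapshot."""
    def __init__(self):
        self.bm = BlockMap()

    def snapshot(self):
        return self.bm.snapshot()   # ...as the block snapshot


if __name__ == "__main__":
    lun, fs = LUN(), FileSystem()
    lun.bm.write(0, b"block data")
    fs.bm.write(0, b"file data")
    print(len(lun.snapshot()), len(fs.snapshot()))  # 1 1
```

Contrast that with the older layering, where the filesystem sat on dVols above the block mapping and needed its own snapshot and replication machinery.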
I have to salute the great teams that work on the VNX within the Enterprise and Midrange Storage Division. The VNX and VNXe are our “workhorse” platforms. They serve and delight hundreds of thousands of customers – and it’s great to see us bring innovation to those customers who need a little bit of everything. Thank you team!
Are you a VNX or VNXe customer? Are you happy – and what do you think of the course we’re working to chart for you?
I am an existing VNXe customer (3100 with a couple of additional DAEs). It's been a pretty decent box for the workload we've assigned it. I'm liking the look of the 3200 with the addition of FAST Cache and FAST VP, as well as the better performance monitoring/reporting.
A few questions regarding the 3200:
What does the release of the 3200 mean for the ongoing release of updates for the 3100? Will the 3100 still get feature updates?
Is there an upgrade path to take my existing 3100 to the 3200?
Could I take existing DAEs and attach them to the 3200?
Does VNXe now support rebalancing (so when new disks are added, existing data is rebalanced across the new disks as well)?
And not a question, but a comment: The VNXe was originally targeted at the "IT Generalist", but in doing so, the UI was dumbed down too much IMHO. The number of RAID options available has increased over time, but I did hit a situation where we got differing I/O performance between two volumes because the VNXe had automatically decided to lay out each volume across a different number of disks, and there was no way to change it without deleting the volumes! Hopefully EMC will be making the UI more fully featured moving forward. More access to tuning and performance metrics would be very useful – even to those of us who are not dedicated storage admins. Thanks!
Posted by: JR | May 05, 2014 at 10:02 AM
@JR
There is no upgrade available from the VNXe3100/3150/3300 to the new VNXe3200. No hardware is compatible between the two generations, including the DAEs.
Posted by: Tom | May 06, 2014 at 05:31 AM
Hi JR,
I work for EMC and recently launched the EMC VNXe3200 into the market. Thank you for your questions here.
The VNXe architecture has gone through such a massive shift with regard to processors, memory, and overall design that we cannot accommodate data-in-place upgrades or backport the new features to the 3100. The 3100 will receive fixes, but no major new enhancements like FAST Suite support.
We do offer tools to assist with migration, such as PowerPath Migration Enabler, and can recommend better options once we know more about your environment. You cannot take existing DAEs and attach them to the 3200 – the hardware is just too different. VNXe does support rebalancing when using the FAST Suite.
Also, I really liked your comment re: "IT Generalist" – although we stopped using that term for this launch, we are still focused on not requiring folks to be storage experts.
One thing you can see in our new UI (which you can download for free by googling "vnxe3200 simulator") is that we balance this nicely. We use industry-standard terms like LUNs, RAID Groups, and Storage Pools versus more confusing terms like Virtual Pools (which were actually LUNs). We don't force-fit RAID groups for disk types as we once did, and we allow the customer to make the final call. The team has learned a lot since the VNXe3100, and I believe it shows in the final version of the VNXe3200.
Posted by: Brian Henderson | May 06, 2014 at 10:30 AM
Is the VNXe 3200 still a Linux code base, or has it migrated to the same code base as the VNX line?
Posted by: Dave Carlton | May 07, 2014 at 10:14 AM
Is there a possibility to move disks from 31xx or 33xx series devices to the new 3200 series devices - or are the disks incompatible as well between devices?
Posted by: MV | June 10, 2014 at 07:41 AM
Is FCoE actually supported? The EMC page doesn't list it ... curious because a few customers have asked about FCoE.
https://store.emc.com/us/Product-Family/EMC-VNXe-Products/EMC-VNXe3200-Hybrid-Storage/p/VNE-VNXe3200-Hybrid-Storage#Specifications
Posted by: sddy | September 05, 2014 at 11:35 AM