[UPDATE: 5/5/15 - 10:08am PT] - to clarify - the current vVNX code doesn't YET support VVOLs. My point was that it will be the first vehicle to try VVOLs in the VNX family in Q3 via an update. Download, try, use - and soon VVOLs will arrive on vVNX, and then native support will follow.
Project Liberty is here. The world will be able to download and use a software-only VNX – completely frictionlessly.
Yup, completely free, for any use you see fit. No timebomb. Of course, like all the things we’re doing in this vein (everything!) it only comes with best-effort community support.
Step 2 – leverage the community.
Step 3 – have fun :-)
We aren’t the first to do this (we put out the virtual NAS stack for community use a long time ago) – but I’m glad we’re the first to do it for the full VNX functionality.
- vVNX enables people to learn, play, and use – great for developing tools against the APIs (see the sketch below the chart).
- vVNX enables people to deploy a robust storage stack for industrial/OEM use cases.
- vVNX enables interesting burst-to-cloud scenarios where you want to replicate to a public cloud.
- vVNX data services on, well, anything. You can use the vVNX filesystem, or the dedupe/compression engine.
- … and a TON of other use cases like below:
Some of the use cases on this chart we’ve already delivered. For example, the vVNX embedded in a VMAX3 is the embedded NAS functionality.
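On the “developing tools against the APIs” point above – here’s a minimal sketch of what that can look like. Everything in it is a placeholder or an assumption: the management IP and credentials are made up, and the endpoint and field names follow Unisphere REST API conventions as I understand them – verify against the API docs that come with the vVNX bits.

```python
# Minimal sketch: list storage pools on a vVNX via the Unisphere REST API.
# Assumptions: the API lives at https://<mgmt-ip>/api, wants basic auth plus
# the X-EMC-REST-CLIENT header, and exposes pools at /api/types/pool/instances.
import requests

VVNX_IP = "192.168.1.50"          # hypothetical management IP
AUTH = ("admin", "Password123!")  # hypothetical credentials

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab appliance with a self-signed cert
session.headers.update({"X-EMC-REST-CLIENT": "true"})

resp = session.get(
    f"https://{VVNX_IP}/api/types/pool/instances",
    params={"fields": "name,sizeTotal,sizeFree"},
)
resp.raise_for_status()

# Each entry wraps the actual object in a "content" dict; sizes are in bytes.
for entry in resp.json()["entries"]:
    pool = entry["content"]
    print(f"{pool['name']}: {pool['sizeFree'] / 2**30:,.0f} GiB free "
          f"of {pool['sizeTotal'] / 2**30:,.0f} GiB")
```

From there it’s a short hop to building monitoring, reporting, or provisioning tooling around the appliance.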
There are some small caveats with the vVNX relative to the full VNX/VNXe, but it’s a pretty small list (no FAST Suite, no FC, no RecoverPoint, no HA variant). And of course, performance is highly variable depending on the hardware you use…
There is, however, an important story behind the story – what’s happening with the VNX. For more – read on!
- The vVNX puts the Unity codebase on display. The VNXe 3200 is a completely unified, modularized (containerized) codebase that runs on a lightweight Linux kernel. That makes the data services usable in all sorts of light form factors and different ways. I think there’s some merit to the internal debate about whether to stop calling it a VNX (though VNX is a well-regarded and successful brand).
- When you download the vVNX, you’ll see that it’s actually newer than the latest VNXe. This is evident from the fact that it uses a Unisphere that is 100% HTML5, with no Java dependencies (yeah, finally!). Likewise, the rollout of VVOL support on VNX and VNXe will come first on the vVNX, then on the VNXe, then the VNX. NOTE: to be explicit – the current vVNX bits don’t yet have VVOL support; it will come via a vVNX update in Q3, and native support will follow. My point here is to look at the vVNX as the first thing to get new capabilities and a good vehicle to learn – some capabilities are already ahead of the VNXe and VNX, and VVOLs will be another example of that.
- There continues to be an evolution in the VNX codebase – with some of that appearing in the VNXe 3200 update.
Check out this demo of the vVNX in action:
.. And check out this latest demo of VVOLs on vVNX – which shows the continued evolution of Unisphere as well.
With the Unity transition nearly complete – the integrated/containerized codebase is starting to stretch its wings. We announced one of those examples today: the VNXe 3200 all-flash variation.
The VNX/VNXe family is at its heart a hybrid – a hybrid workhorse, like this Clydesdale. It does many things well: block, NAS – and broad data services.
It can do it in tiny packages, and at small entry costs.
No – it’s not the fastest. No – it’s not scale-out. It never will be.
But – that enables something amazing:
A VNXe 3200 with 3TB of Flash, capable of 75,000 IOps, fully unified for block and NAS use cases, and in 2U – for $25,000. Wow.
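A quick back-of-the-envelope on that headline config (using the list price above, and decimal TB-to-GB):

```python
# Back-of-the-envelope on the headline VNXe 3200 all-flash config above.
price_usd = 25_000
iops = 75_000
flash_gb = 3 * 1000  # 3TB of flash, decimal

print(f"${price_usd / iops:.2f} per IOp")     # -> $0.33 per IOp
print(f"${price_usd / flash_gb:.2f} per GB")  # -> $8.33 per GB of flash
```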
While there are a variety of configurations – we are making things simple for customers and the channel. A single price that includes the software. Channel partners are able to quote and deliver with no oversight (none) from EMC.
The table below shows a variety of all-flash configurations in this tiny but powerful package. The “*” on Price is the street price, and it will be orderable from the EMC Store (http://store.emc.com)
It’s notable that we jam 25 spindles into that tight little 2U chassis – so you can add a lot more capacity via HDDs in addition to the SSDs.
That’s a lot of power in a little package :-)
The VNXe 3.1 Operating Environment also has massive performance boosts (around 30-50% for SSD-dense IOps and around 10-20% for SSD-dense system bandwidth – less with HDD-bound configs, of course). Oh – and in performance land, VNXe gets larger FAST Cache configurations (up to 400GB), and people who know VNX know that FAST Cache is good :-)
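To make that uplift concrete, here’s what those percentages mean against a purely hypothetical baseline (the 55,000 IOps figure below is made up for illustration only):

```python
# Hypothetical illustration of the 3.1 OE uplift ranges quoted above.
baseline_iops = 55_000  # made-up pre-update baseline, for illustration only

iops_low, iops_high = baseline_iops * 1.30, baseline_iops * 1.50
print(f"SSD-dense IOps: {iops_low:,.0f} - {iops_high:,.0f} after the update")
# -> SSD-dense IOps: 71,500 - 82,500 after the update
```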
Performance on iSCSI-attached configs didn’t jump as much – mostly neutral, something we will continue to tweak.
Also, that VNXe 3.1 OE update includes the first appearance of the next-gen VNX transactional filesystem – UFS64. I wouldn’t advise using it for broad purposes in this first release, but you can see where we are going. It supports 64TB use cases and has filesystem shrink/reclaim. UFS64 needs more work – but you can see what’s starting to happen: the Unity codebase is getting features first (vs. in the VNX).
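Here’s a sketch of what exercising that shrink/reclaim capability could look like programmatically. Heavy caveat: the resource and action names below follow later Unisphere/Unity REST conventions and are assumptions, as are the IP, credentials, and resource ID – check your release’s API reference before trying this.

```python
# Sketch: shrink a UFS64 filesystem via the management REST API.
# All names and IDs here are assumptions - verify against the API docs.
import requests

VNXE_IP = "192.168.1.60"  # hypothetical management IP
FS_RES_ID = "res_1"       # hypothetical storageResource id for the filesystem

s = requests.Session()
s.auth = ("admin", "Password123!")  # hypothetical credentials
s.verify = False                    # lab appliance, self-signed cert
s.headers.update({"X-EMC-REST-CLIENT": "true"})

# POST actions need a CSRF token, which the API returns on a prior GET.
r = s.get(f"https://{VNXE_IP}/api/types/filesystem/instances")
r.raise_for_status()
s.headers.update({"EMC-CSRF-TOKEN": r.headers["EMC-CSRF-TOKEN"]})

# Resize the filesystem down to 10 TiB - shrinking is exactly the direction
# that UFS64's shrink/reclaim support makes possible.
resp = s.post(
    f"https://{VNXE_IP}/api/instances/storageResource/{FS_RES_ID}"
    "/action/modifyFilesystem",
    json={"fsParameters": {"size": 10 * 2**40}},
)
resp.raise_for_status()
print("shrink request accepted")
```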
BTW – it’s not only VNXe that gets goodies – VDM development in VNX is bringing a full Metrocluster alternative for VDMs and NAS. There’s also a pile of great updates through 2015 for VNX customers.
Now, all that said… I think it would be disingenuous not to acknowledge what’s going on in the storage industry here (and you can see the impact on other vendors that depend, for the most part, on VNXe/Unity-like architectures – ahem, you can probably infer who I’m talking about, and it’s not just the big one).
The segment of storage in which VNX/VNXe participates (Type I transactional storage architectures in the $10K-$1M price bands) is surely being disrupted along several dimensions:
- what would have been large-capacity VNX “Tier 2” block in the past could be served by SDS + servers today. For example, it could be a ScaleIO/commodity configuration or a VNX – the answer would depend on economics and density, and at some point the elasticity and horizontal scale-out of ScaleIO would tend to win.
- what would have been a low-latency transactional block workload in the past could today be served by an AFA with rich data reduction services. For example, we’re seeing lots of people deploy XtremIO for transactional workloads of all types (database, virtualization, EUC, EPIC, you name it) and absolutely loving it.
- what would have been a small pile of NAS “Islands” of VNX in the past would today be served by a scale-out NAS offer, and some (the storage backing sync and share or collaboration) stored on object storage. For example, the growth in Isilon is like wildfire (we had a single customer buy 100PB in Q1!!), and our object stores (Elastic Cloud Storage and Atmos) are growing quickly (many EBs sold).
- what would have been a small VNX attached to a set of servers for general-purpose virtualization in the past might today be served by a hyper-converged appliance offer. For example, at small scale, a customer might be delighted by VSPEX Blue using VSAN, and at large scale, by today’s new VxRack offer using ScaleIO (more on this later).
- what would have been a VNX deployed to service a specific application workload like Exchange (which in the past was the driver behind many purpose-deployed arrays) would today likely see that workload land in Office 365 :-)
That’s why I’m glad that EMC has leading products and solutions in all these spaces :-)
That all said - what does remain the sweet spot of the VNX family (and I think this is its true strength – all the others above are cases where the “Type I” tightly coupled clustered storage architecture ruled for years but is now being succeeded by alternate designs) is:
… a multi-purpose workhorse that happens to come in a small size (and grows pretty darn big before its lack of scale-out and dual-controller architecture shows). In those use cases – cost, footprint, performance, capabilities – VNX and VNXe smoke everything else out there.
It’s interesting to see people come to the conclusion that hyper-converged isn’t as dense, that AFAs don’t scale down quite as far (and in general have less mature NAS), and that scale-out NAS doesn’t do transactional NAS that well…
Try any other way to get something super-easy to use, deeply integrated with vSphere/Hyper-V, with block and NAS, 2-8TB of flash, 50,000-100,000 IOps, and rich snapshot, compression, and dedupe functionality – all in 2U. It’s a short list of viable choices :-)
Hi Chad,
Are you able to give any insight into when the VNX will get support for VVOLs?
I assume from your comments in the past that the current VNXe3200 will get an update later this year and the VNX will require new VNXe-like hardware in the form of the VNX3.
Many thanks
Mark
Posted by: Mark Burgess | May 04, 2015 at 03:17 PM
@Mark - as always, thanks for the Q!
Support for the VVOLs on the VNX family will arrive in Q3 2015 on vVNX, then 1H 2016 for the VNXe. I'd share the VNX plan, but still don't feel comfortable as it's a moving target. We know we need to bring VVOL use cases to the VNX customer base.
I do think we're a little behind, yes - and the teams are working on it furiously. What I'm seeing is that 2015 is the year of "trying on" VVOLs - and with VVOL 2.0, support for remote replication arrives in the VASA API. This means that while we ARE behind (acknowledged!) - our commitment to be there across the portfolio remains solid.
We were there from the get-go, and furiously in sync (as you can see from the years of demos at VMworlds past), but as we got to the finish line for the vSphere 6 release - our release schedules just didn't match up (and VVOL support is, in my view, a major release).
It's not a pretty situation, but we are on it!
Posted by: Chad Sakac | May 13, 2015 at 04:23 PM
Can the OVA be deployed using VMware Workstation? Are there any modifications to the VM settings in order to get it to work?
When the OVA is imported into VMware Workstation, once powered on, it appears to hang during the boot process.
bks
Posted by: Brian Shimkus | May 21, 2015 at 01:50 PM