The steady progress toward making all our EMC intellectual property available in both “traditional” hardware appliance and software-only (SDS data plane) forms continues!
Today – VPLEX Virtual Edition is GA!
What is it?
- VPLEX (in both hardware appliance and vApp form – they have all the same features and goodness) creates an active-active storage model that can be geo-distributed. Every piece of data exists at both sites, but unlike a “replica”, it can be accessed simultaneously (via a distributed coherent cache) across distance – see the toy sketch after this list.
- VPLEX supports doing this in a few very compelling configurations:
  - within one site (called “VPLEX Local”), used to eliminate whole arrays as a point of failure, or to make disruptive migrations a thing of the past.
  - across two sites at synchronous distances – think up to 10ms RTT latency (called “VPLEX Metro”) – used to make two data centers look like one, to help with data-center-level migrations, to create stretched Oracle RAC clusters, and, very commonly, to create stretched vSphere clusters (aka vMSC – you can read about that here).
  - across two sites at asynchronous distances (think thousands of km) and high latencies (called “VPLEX Geo”). This has (for now) more constrained use cases (it is not supported for stretched vSphere clusters, for example), because the use case requires the ability to withstand partition scenarios where the two sides are not in sync.
- VPLEX can also form a “leg” of a RecoverPoint continuous data protection or continuous remote replication session (i.e., crazy-rich local replicas and very feature-rich remote replicas).
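To make the “active-active with a distributed coherent cache” idea concrete, here’s a deliberately tiny toy model (my own illustration, NOT VPLEX’s actual protocol – the Site and DistributedVolume classes are invented). It captures the two properties that matter: a distributed write isn’t acknowledged until both sites hold the data (which is why every write pays at least one inter-site round trip, and why Metro is bounded by RTT), while reads are served from the local coherent copy:

```python
# Toy model of an active-active distributed volume. Illustrative only --
# NOT VPLEX's real cache-coherence protocol; all names here are made up.

class Site:
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # this site's local copy of the data

class DistributedVolume:
    def __init__(self, site_a, site_b, inter_site_rtt_ms):
        self.sites = (site_a, site_b)
        self.rtt_ms = inter_site_rtt_ms

    def write(self, lba, data):
        # A distributed write is acknowledged only after BOTH sites hold
        # it, so the host sees local latency plus one inter-site RTT.
        for site in self.sites:
            site.blocks[lba] = data
        return self.rtt_ms  # the added latency the host observes, in ms

    def read(self, reading_site, lba):
        # Reads are served from the local coherent copy -- no trip
        # across the inter-site link is needed.
        return reading_site.blocks.get(lba)

a, b = Site("DC-A"), Site("DC-B")
vol = DistributedVolume(a, b, inter_site_rtt_ms=10)  # Metro-class link

added_ms = vol.write(lba=0, data=b"hello")  # write lands at site A...
print(vol.read(b, lba=0), f"(+{added_ms}ms per write)")  # ...and reads at B
```

That per-write round-trip tax is also the intuition behind the distance limits: light in fiber covers roughly 1,000km in about 10ms round trip, which is why synchronous (Metro) configurations are specified in RTT rather than kilometers.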
In fact, embedded in today’s awesome news is a critical new replication topology called “MetroPoint”, which has been one of the most requested features – the ability to have two active sites with both local continuous protection and a remote DR copy (a single remote copy, reflecting that the two sites hold one dataset). Here’s what it looks like:
Now, what’s the scoop with the Virtual Edition in particular?
- The FULL awesomeness of VPLEX is now available in a simple, low cost software-only version (technically a vApp).
- There’s a killer new plugin that enables every VPLEX activity (with the physical appliance or the virtual edition) to be done right from vCenter. Of course, this snaps right into the EMC VSI plugin (the “one ring to rule them all” – all EMC vCenter plugins snap into VSI).
NOTE: Until “future vSphere” releases, VPLEX VE is iSCSI only – FC through vSphere is cumbersome today (current FC NPIV = ick) – but expect this to continue to change.
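Since the VE front end speaks iSCSI, one quick sanity check from a host is simply probing the standard iSCSI portal port (3260). A trivial sketch – the hostname below is hypothetical, and this checks TCP reachability only, not an actual iSCSI login:

```python
# Probe an iSCSI portal for basic TCP reachability. Port 3260 is the
# standard iSCSI port; the hostname is a made-up placeholder.
import socket

def portal_reachable(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(portal_reachable("vplex-ve-frontend.example.com"))
```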
What’s really interesting to me about these new “software only” instantiations of storage stacks is that even when they are “feature equivalent” to the hardware appliance (though often not with the same performance envelope), they enable use cases that couldn’t be done before. We’ll show one example at Area 52 at EMC World this year (hint hint!), but here’s an immediate example:
With VPLEX VE, it’s possible to create an active-active storage model in and out of cloud providers – enabling non-disruptive workload bursting and non-disruptive migration for workloads that weren’t designed for this behavior at the application layer – with no hardware dependencies.
Using a Network Function Virtualization (NFV) VPN like Vyatta together with VPLEX Virtual Edition, you could have a model to do exactly this (active-active across data centers and on- and off-premises service providers) – WITH NO HARDWARE DEPENDENCY. Fascinating! Here’s what I mean:
BTW – there’s a TON that’s new in VPLEX land (which highlights how much goes on across the spectrum of the full portfolio – this is one part of one release on one day):
- Improved UI, Monitoring and Reporting
  - User troubleshooting and identification of volume-level performance issues.
  - System-level performance of VPLEX and attached storage.
  - Statistics on IOPS, latency, and throughput (bandwidth) for reads and writes are available for each volume.
  - No add-on tools required to view the data.
- Storage Analytics (this is about vCOps integration)
  - Users are alerted to issues so they can be addressed before they impact availability.
  - Analytics support comes via ESA 2.3 for VPLEX.
  - Predictive analysis covers VPLEX and attached EMC storage.
  - Heat maps are part of the alerts.
  - Included is a VPLEX adapter for vCenter Operations Manager to receive analytics directly from vCenter.
- VAAI XCOPY Support (see the sketch after this list)
  - Brings performance increases and frees up CPU cycles by offloading data copy operations to VPLEX.
  - Reduces storage network traffic, as all data movement is between VPLEX and the arrays.
- IPv6 Support
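To show why the XCOPY offload matters, here’s a conceptual contrast of the two copy paths (a toy sketch of the idea, not real VAAI/SCSI plumbing – the ArrayLeg and Vplex classes are invented for illustration):

```python
# Toy contrast: host-based copy vs. XCOPY offload. Illustrative only;
# ArrayLeg and Vplex are made-up stand-ins, not a real API.

class ArrayLeg:
    def __init__(self):
        self.blocks = {}

def host_based_copy(src, dst, lbas):
    # Without XCOPY: each block is read up to the host and written back
    # down -- the payload crosses the storage network twice and burns
    # host CPU cycles along the way.
    for lba in lbas:
        data = src.blocks[lba]   # array -> host
        dst.blocks[lba] = data   # host -> array

class Vplex:
    def extended_copy(self, src, dst, lbas):
        # With XCOPY: the host issues one small command and the data
        # moves only between VPLEX and the arrays -- nothing transits
        # the host's initiator path.
        for lba in lbas:
            dst.blocks[lba] = src.blocks[lba]

src, dst = ArrayLeg(), ArrayLeg()
src.blocks = {lba: b"data" for lba in range(4)}
Vplex().extended_copy(src, dst, range(4))  # one command; VPLEX does the rest
assert dst.blocks == src.blocks
```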
BTW – one closing thought on this one: you can bet your bottom dollar that almost the full EMC portfolio will be available as software-only data planes by the end of 2014. You can see why it becomes even more important to have a software-only control abstraction layer (the ViPR Controller) that they can all integrate with – that way there is a simplification, a “harmonization”, of all the various innovations in the different data plane stacks.
Cool times!
Hey Chad - awesome news. One quick question (asked on Twitter too). You note Metro is 10ms RTT. Historically that's been 5ms RTT. Is 10ms the new standard?
Congrats,
Tom Queen
Varrow
Posted by: Tom Queen | April 03, 2014 at 07:48 PM
Hi,
Is it planned for VPLEX (hardware) to allow non-disruptive software updates on the directors?
We stopped our POC with VPLEX because we found that updates result in two roughly 30-second outages for all LUNs currently served by the rebooting VPLEX director. This was not acceptable for us.
EMC engineering says this behavior is "by design".
Regards, Hermann
Posted by: Hermann Weiss | April 04, 2014 at 04:20 AM
Very interesting Chad! Since it is a virtual appliance, not a kernel module, how exactly does the IO traffic go from VM to spindle? Say the VM is running on ESXi 1, and the VPLEX appliance is on ESXi 2. The iSCSI block has to leave the ESXi 1 vmnic and go to the ESXi 2 vmnic. How does VPLEX, being an appliance, ask ESXi to redirect the traffic to the VPLEX appliance?
How do we handle the availability of the virtual appliance? I'm assuming there are 2 VPLEX appliances running in the cluster. How do they do active/active? In the example above, does the SCSI block travel to both appliances? If not, and the first appliance happens to die or stop responding, how many milliseconds does it take for the second appliance to take over?
Thanks from Singapore
e1
Posted by: Iwan 'e1' Rahabok | April 04, 2014 at 10:22 PM
@Hermann
As far as I know, VPLEX does NDUs by design. Maybe it was some kind of bug? Was the KB checked? (e.g., there was a DU event in combination with FOS 7.0.1 on Brocade switches)
@Chad
When can we expect to see technical documentation and/or white papers for VPLEX/VE on the EMC sites?
Posted by: Daniel Saffran | April 07, 2014 at 05:54 AM
1: VPLEX/VE is limited to 5ms, not 10ms, since we have not completed the testing beyond that.
2: NDU of a director is expected – anyone not experiencing NDU should open an SR. If someone responded that an outage is by design, I need to know who said it, and an SR# to track it down.
3: There is a minimum of 4 vDirectors per cluster in a full HA config. Any single vDirector's loss does not impact availability.
Posted by: Rickbrunner | April 09, 2014 at 01:03 AM