And so, off come the covers! ViPR = Virtualization Platform Re-imagined = Project Bourne.
Like all things, the story behind the story is half the fun. If you hop in the way-back machine here, about 2 years ago a couple of ideas were bouncing around inside EMC. It's important to understand that this enhances and extends traditional storage virtualization – it doesn't replace it. To understand why – read the post.
This is a really big idea, and will be a really big blog post. I apologize in advance – but IMO it’s warranted.
If you're interested (in the origin story, the strategic view, the architectural model, and what's next) – READ ON, and please comment!
So, there are two “Big Ideas” behind ViPR.
- The first idea was that storage was missing something like OpenFlow – some sort of programmable API that could be used to define services, decoupled from the storage "data plane" platform itself. This idea is most true within a company with a broad storage portfolio, or with customers who require multiple platforms (versus those that are best served by one general-purpose platform – like a VNX). Invariably, these all have different API models, different service levels… We needed a storage control plane "abstractor"/"virtualizer"/"transmogrifier". Now, it's notable that this idea of storage API harmonization has come up before, and has only gotten so far (think WideSky, think SMI-S). SMI-S is getting a little revitalized with Windows Server 2012, but isn't taking the world by storm. It also lacks critical service abstraction/tenancy concepts that are needed. SO WHAT HAS CHANGED? Two things. The first is the emergence of the vCloud Suite, OpenStack, CloudStack and a relatively small number of "cloud OS layers" as the policy control layer that something like ViPR can interface to (vs. the infinity of things that would have needed to support SMI-S/WideSky 5-10 years ago). The second is that people are looking at infrastructure as "programmable services" rather than "boxes to manage" – that's a big change. Within EMC – a couple of different teams were running with this idea of control abstraction – and one early attempt was called Project Orion (faithful Virtual Geek readers may remember that here – I STILL get requests for this – so here it is!). Project Orion was subsumed into Project Bourne. It would need to be able to create logical virtual storage pools across arrays, across distance, with rich RBAC models, rich multi-tenancy models, and present it all via a modern RESTful API (there's a rough sketch of what a call to that kind of API might look like just after this list).
- The second idea was that data services are changing. Good-old block and NAS storage will always be with us – with their well-known models and interfaces. But the world of next-generation object and non-POSIX-compliant filesystems like HDFS beckons for new use cases. The trick is that ideally, you would want a modular way for data plane services to snap in. In some cases, they just pass through the data plane (e.g. ViPR in front of block VMAX, or HDS), but there are interesting cases where they are hybrids (think of this example: storage models where you toggle between a full POSIX-compliant filesystem and a rich object metadata API – all for the same data). In other cases, you want to layer rich object and HDFS mechanics on top of varying degrees of persistence layer – from the most basic DAS, to things like Isilon, or even high-end block mechanics.
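To make the "programmable control plane" idea a little more concrete, here's a rough sketch of what provisioning against an abstracted, tenant-aware REST API could look like. To be clear – the endpoint, auth header, and payload fields below are placeholders I made up for illustration, not the actual ViPR API.

```python
import requests

# Hypothetical endpoint, auth header and payload -- illustrative only,
# not the real ViPR API surface.
VIPR_API = "https://vipr.example.com:4443"
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

pool_request = {
    "name": "tier2-block",
    "description": "General purpose block pool spanning two arrays",
    "protocols": ["FC", "iSCSI"],
    "tenant": "engineering",                      # multi-tenancy scoping
    "protection": {"snapshots": True, "remote_copies": 1},
}

# One call against the control plane abstraction; the controller decides
# which underlying arrays can satisfy the requested policy.
resp = requests.post(f"{VIPR_API}/virtual-pools/block",
                     json=pool_request, headers=HEADERS, verify=False)
resp.raise_for_status()
print("Created virtual storage pool:", resp.json().get("id"))
```

The specific fields aren't the point – the point is that the consumer asks for a service level scoped to a tenant, not for a LUN on a specific array.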
People will ask an obvious question: don’t we already have storage virtualization with things like EMC VPLEX, VMware Virsto, NetApp vFiler, HDS VSP and the like? Answer is “yes, but not as far as we need it”.
These architectures have one thing in common: They are an abstraction layer focused on the data plane, and also abstract the control plane – but the control plane is tightly coupled to the data plane abstractor. I can visualize everyone saying “huh?” :-)
Think of it this way – in networking land, the Data Plane is the mechanics of how Ethernet frames and IP packets are forwarded/routed. The Control Plane is how those forwarding/routing decisions get made. Data plane interactions happen in nanoseconds (which is why, even with SDN operational models and network virtualization technologies, there are still physical switches with ASICs and merchant silicon). Conversely, the Control Plane in networking land operates on timescales where it CAN be done in software on commodity x86 hardware – and, more importantly, decoupled from the data plane hardware. Here's a summary for networking.
| | Networking Control Plane | Networking Data Plane |
| --- | --- | --- |
| Timescale | milliseconds to minutes | nanoseconds |
| Example | BGP, RIP, firewall configuration | VLANs |
| Manifestation | L3/L4 network topology | L2 fabric |
The fact that the control plane can be decoupled from the mechanics of Ethernet frame forwarding is something whose architectural impact cannot be overestimated IMO (and this is just one man's opinion). It means that overlay networks can be created easily and quickly, without mucking with all the physical gear. It's not that you COULDN'T create virtual networks and have programmable APIs with traditional models where the control plane was tightly coupled to the hardware doing the data plane work – it's that people didn't. I think there is a real reason. The risk of getting something really big really wrong when you're constantly touching the physical layer goes way up. The likelihood that you can make the datacenter's networking service programmable goes WAY up when it's logical and in software, like with VMware Nicira. To me, that's really a platform for Network Virtualization.
There's a furious debate going on in networking land about "what is SDN, really?" and "what is Network Virtualization?". While it's easy to dismiss as "useless debate, caused by vendor SDN-washing" (and there is some of that), I think it reflects real architectural ideas. A great example of the debate is at the always awesome Scott Lowe's blog here (I would encourage you to check it out, contribute to the dialog, and consider the analogies to ViPR).
So – back to storage land… All storage virtualization platforms were really "Data Plane abstractors" – in storage land, the "Data Plane" is "write/read this SCSI block" or "give me some file handles I can jam to" (both of these are analogous to VLANs). Furthermore – with the storage virtualization platforms, you inherit the control plane of whatever was jammed in front of the other storage.
Now – there are some differences between storage and networking which cause the analogy to become stretched. Consider:
| | Storage Control Plane | Storage Data Plane |
| --- | --- | --- |
| Timescale | milliseconds to minutes | microseconds to milliseconds |
| Example | "Tier 3" (with a long set of attributes of what that means) | Block, NAS, Object I/O |
| Manifestation | data service policy | data service characteristics |
One big difference is that at storage data plane timescales, it IS possible to run the storage data plane in software on current commodity (COTS) hardware. In fact, most sane storage vendors do this already. The rate of software innovation is always faster than hardware. If you don't HAVE to solve a problem in hardware (and there are classes of problems for which this is true outside the storage domain), software is the way to go.
This is certainly true of the EMC portfolio. All our platforms use COTS hardware – and where there are outliers (for example the current VMAX RIO inter-engine communication protocol demands a little custom hardware), we try to engineer it out of systems over time.
So – why doesn’t all storage hardware come in “pure software + bring your own hardware” flavors?
I think we will see this over time, but there is something in storage land which is a little unique. Persistence. The storage layer in IT is where the data lives. It's why the worst days in IT land are when something goes wrong with this persistence layer. It's also why "mature" storage stacks are so valued (it takes 5 non-compressible years to harden the persistence and data services layer of a block stack, and about 7 years to do the same for a filesystem stack). This is why you see so many ZFS startups – they are leveraging a mature filesystem stack to try to jumpstart that hardening process.
BTW – the hardest persistence stacks to engineer are DISTRIBUTED persistence block and NAS stacks (which is why they are rare, and so valued).
There's one other thing. With storage, many of the failure conditions are still at the hardware layer – and with COTS/software combos, this is harder to get right. Any readers using VMware VSA, or Nexenta, or LeftHand VSAs, or the NetApp Edge stuff? I know, great – right? Do you have experiences? If so, feel free to share – good or bad. I certainly do (I try to play with everything). They work great – but when something fails, it's less than perfect. When a disk fails and I'm running on COTS, or virtualized on vSphere, I don't get the basic feedback one expects from a storage array. I expect to see a warning in the UI/API and clear notification of WHICH disk failed, including physical notification on the disk enclosure itself. Frankly, if I were using this in prod, I would want someone standing behind me with hot-spares.
This doesn't mean "software + bring your own COTS hardware" is a bad idea for storage – but rather that "software + run it on COTS hardware packaged and supported by a vendor" will tend to have longer legs than it does in compute, as an example. This is only true so long as the economics are about the same. Price-compare an HA clustered Nexenta config (with hardware on their HCL) against a VNX or a VNX competitor, and you'll find they are about the same at moderate-to-large scale.
That all means we'll see smaller-scale software data plane stacks before we see large-scale ones. It means you'll see them more commonly in clustered rather than distributed flavors. It also means we'll first see things that don't actually persist data, but offer data plane services (think RecoverPoint vRPAs and virtual VPLEX as examples).
Ok – so here’s where we are so far in this long blog post (thanks for bearing with me so far!) :
- “Software Defined Storage” is as inevitable as “Software Defined Networking”
- Like SDN, SDS is a topic about BOTH the architecture of the control plane and the data plane.
- The SDS control plane abstraction needs to be cloud scale (just like SDN).
- The SDS control plane abstraction needs to be distributed (just like SDN).
- “Storage Virtualization” as people mostly think of it today is great, but insufficient to be SDS – as there is no control plane abstraction that is open, that decouples the control plane from the hardware entirely.
- Just like in SDN, with SDS the data plane is changing – but SDS doesn't require that the data plane also be delivered as software + bring-your-own-hardware.
- SDS will differ from SDN in that we will see varying degrees of "software only + bring your own hardware" data plane implementations over time.
So – with that all said – we think that ViPR is the first SDS platform – an open control plane abstractor, and an open data plane pluggable model.
Here’s a demonstration that shows how virtual storage pools can be created and how data plane services work.
This picture is the logical representation of what ViPR looks like.
I have to stop myself from saying "Project Bourne" – ViPR was developed by a new business unit within EMC – and is one of those examples of "Organic Innovation" (our R&D budget and our M&A/VC funding budget are about 50/50 on an average annual basis). That team is called the "Advanced Software Division" ("ASD") and this is their story (and it started a while back – see hints here) :-)
The team that developed Bourne is split between Seattle and Santa Clara. The bulk of the Seattle team came from the Azure team where they learned a lot about cloud-scale operations, and the storage challenges inherent in that sort of thing. They were joined by many folks from VMware (VMware-ites and EMC-ites often flow between big projects like this one).
This is important from a VMware standpoint because it harmonizes and accelerates deep integration across the full EMC platform families – but it also represents a route whereby rich vCO, vCAC and VASA support immediately jumps across the whole storage industry.
Here’s a quick demonstration of how EMC ViPR integrates with vCAC, vCO, and VASA.
There’s another VERY VERY important thing to point out here. EMC’s strength is our broad portfolio – but it’s also our weakness:
- "One thing cannot do it all" – long ago, we concluded that the idea of "can we shrink VMAX down to meet small customer requirements?" is as silly as "can we take our 'general purpose' VNX – which is great at many things, but not the best at any one (and that's its purpose in life) – and make it scale up to the largest enterprises, or 'bolt on' a true scale-out model?". Fundamentally – different workloads drive different core architectural models.
- “We must embrace innovation in all forms” – long ago, we concluded that if we thought “we can singularly out-innovate every startup, every university student with a great idea” – well, we would be stupid. So we invest healthily into both organic innovation and innovation through M&A and VC funding.
But – the weakness is that EMC customers get more and more complexity, as each of these platforms has its own control plane and API. Furthermore, it slows down our development of things like VSI, ESI and AppSync – as each of these has to be built against every different way of interfacing with our platforms.
ViPR is that cross-portfolio API transmogrifier.
Here’s a demonstration of how ViPR simplifies and abstracts the control plane of storage services and storage policy – and how it plugs into the full EMC Storage Resource Management (SRM) Suite – and adds the core concepts of Multi-tenancy to everything it presents.
For those of you that aren't VMware fans, or are worried about "closed", the great news is that this is totally open. In fact, out of the gate, ViPR also accelerates OpenStack support, with Cinder (block), Glance (catalog) and Swift (object) API models. ViPR complies with all the major object storage APIs (Atmos, AWS S3 and Swift). It doesn't yet have every function that Atmos supports, or Centera API compliance – but you can imagine where we are taking this.
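As a quick illustration of what "S3-compatible" means in practice, here's a minimal sketch using the standard AWS SDK for Python pointed at a ViPR object endpoint. The endpoint URL and credentials are placeholders – an actual deployment would use whatever the ViPR data services expose.

```python
import boto3

# Illustrative only: assumes ViPR exposes an S3-compatible endpoint at this
# made-up address, and that you already have object-store credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://vipr-objects.example.com:9021",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

s3.create_bucket(Bucket="video-masters")
s3.put_object(Bucket="video-masters", Key="clips/intro.mov", Body=b"...")

# The same bucket and objects should also be reachable via the Swift and
# Atmos API heads -- that's the whole point of a multi-API object service.
print(s3.list_objects_v2(Bucket="video-masters").get("Contents", []))
```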
The southbound side – the APIs that drive and automate the underlying storage platforms to create the virtualized storage pools – is also open. Out of the gate, EMC-provided plugins are included – which cover many 3rd-party arrays (for those of you following closely, iWave ITO accelerated this part of ViPR). At EMC World, we demonstrated what will surely be a popular combination – EMC and NetApp.
We are publishing source code and examples for all the critical elements: integrating northbound, creating southbound controller plugins, and full-blown examples for anyone to create their own data services.
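To give a feel for what a southbound controller plugin has to do – discover, provision and export against a platform's native management API – here's a purely hypothetical sketch of the shape of that contract. The class and method names are mine, not the published plugin SDK; look to the actual examples for the real interfaces.

```python
from abc import ABC, abstractmethod


class ArrayDriver(ABC):
    """Hypothetical adapter between a control plane and one array's native API."""

    @abstractmethod
    def discover(self) -> dict:
        """Return the array's pools, ports and capabilities."""

    @abstractmethod
    def create_volume(self, pool_id: str, size_gb: int, tenant: str) -> str:
        """Provision a volume in the given pool and return its native ID."""

    @abstractmethod
    def export_volume(self, volume_id: str, initiators: list) -> None:
        """Mask/map the volume to the requesting host's initiators."""


class ExampleThirdPartyDriver(ArrayDriver):
    """Stub showing where calls to the array vendor's own API would go."""

    def discover(self) -> dict:
        return {"pools": [], "ports": []}

    def create_volume(self, pool_id, size_gb, tenant):
        return "native-vol-0001"

    def export_volume(self, volume_id, initiators):
        pass
```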
Let's talk turkey for a bit – the core architectural model. First things first – ViPR is a virtual appliance, and we only expect it to be used this way. This is the architectural diagram of the logical components:
One thing that's worth pointing out is the core scale-out infrastructure services. This is pretty cool. ViPR is built for cloud scale. We've learned from huge Atmos deployments that the core metadata must be distributed, so ViPR uses Cassandra for metadata and distributed configuration data. Likewise, ZooKeeper is used as a distributed service tracker.
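If you haven't played with ZooKeeper before, the basic pattern is worth a look. The snippet below (using the open-source kazoo client, with made-up hostnames) shows the ephemeral-node trick distributed services commonly use to track which peers are alive – it illustrates the general pattern, not ViPR's internal implementation.

```python
from kazoo.client import KazooClient

# Pattern sketch only -- not ViPR code. The ZooKeeper hostnames are invented.
zk = KazooClient(hosts="zk1.example.com:2181,zk2.example.com:2181")
zk.start()

zk.ensure_path("/services/controller")

# An ephemeral, sequenced node disappears automatically if this controller
# instance dies, so the rest of the cluster always has a live membership view.
zk.create("/services/controller/node-", value=b"10.0.0.11:8443",
          ephemeral=True, sequence=True)

print("live controller nodes:", zk.get_children("/services/controller"))
```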
What about the data services? Equally cool.
Out of the gate, the one that will get the most use will be Object on File. It's important to know that, like all object storage models, there is always some sort of underlying filesystem that provides the persistence layer. In this case, we think the combination that will prove the most popular will be to couple Isilon with ViPR – that way one gets a high-performance, scale-out, distributed file persistence layer that is mature and solid, with a rich object model on top. Hint, hint – imagine a software-only Isilon + ViPR combo :-)
What does this give you? Well you can:
- Access a set of objects directly on the underlying file storage device, with native performance
- Dynamically toggle between Object Mode and File Mode via API (there's a rough sketch of this workflow just after this list):
  - Object Mode – full Object API access, no file access
  - File Mode – objects in buckets are mapped 1:1 to files, with native R/W access to the files; no object access other than the toggle API
- Full Native API support for Amazon S3, OpenStack Swift APIs, Atmos Object API and Atmos File System
- API extensions to the S3 and Swift APIs for byte-range updates, atomic append, etc. – i.e., better than the real thing :-)
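To make the toggle concrete, here's a rough sketch of the workflow from the bullets above. The endpoint and "mode" parameter are invented for illustration – the point is that the same bucket flips between object and file personalities without copying any data.

```python
import requests

# Hypothetical endpoint and "mode" toggle -- a sketch of the workflow, not the real API.
VIPR = "https://vipr-data.example.com:9021"
AUTH = {"X-Auth-Token": "<token>"}

# 1. Switch the bucket into File Mode so editors can work on the content as
#    ordinary files (e.g. over NFS) at native filesystem speed.
requests.put(f"{VIPR}/buckets/video-masters/mode",
             json={"mode": "file"}, headers=AUTH)

# 2. ...curate/edit the files in place -- no import/export copy step...

# 3. Toggle back to Object Mode and serve the same content via S3/Swift/Atmos.
requests.put(f"{VIPR}/buckets/video-masters/mode",
             json={"mode": "object"}, headers=AUTH)
```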
This turned out to be a huge deal for one of the beta customers – who has millions and millions of objects ingested every day, and constantly has to import/export between object storage and filesystem storage models as they curate the content in the filesystem (which demands high bandwidth for video editing), and then ultimately present it back as object. ViPR radically simplifies this, as you can toggle back and forth easily.
Here’s a demonstration of how this Object/File use case works.
Remember – out of the gate (and many will jump to erroneous conclusions), there are certain use cases that today only Atmos 2.0 covers (and ViPR does not yet), and that only Centera covers (until the Centera APIs merge with Atmos). That said, it's obvious that over time, it's in our customers' best interests (as well as EMC's) to merge the EMC object stack implementations.
Likewise – out of the gate, there are certain things that VMAX Cloud Edition's abstraction and portal do that ViPR does not, but VMAX Cloud Edition is a "platform-specific" instantiation of the value proposition of an "abstracted control plane" – in the fullness of time, it's obvious it's in our customers' best interests (as well as EMC's) to merge the EMC control plane abstractions. Customer and market reaction to VMAX Cloud Edition has been enormous and hugely positive. ViPR offers the same abstraction model (but not the consumption packaging by default) – for all storage arrays and architectures.
Know that we're targeting a rapid service release for ViPR that would add native high-performance HDFS support, with a similar "do the basics well, and add capabilities that are unique" approach to the one we're taking with Object out of the gate.
So – is this a pipe dream – or is it real? Oh, it’s real alright. Just like XtremIO, it’s been in a quiet “directed availability” for months. There are customers large and small, service providers and all sorts of industry verticals pounding on it. It’s in production within EMC IT. We’re going to keep that Directed Availability model going for a couple more months to pound on it – but you can expect the GA date very soon.
That said – this is a big idea, and I guarantee we'll end up over-selling it, people will get it wrong, and as it works through DA into GA, we'll discover problems. This isn't something I or anyone wants, but it's inevitable when there's something big, something new, and people are wrapping their heads around it. We will iterate. This is a BIG IDEA, and if we can get it right – it will change a lot. It's very analogous to the early days of SDN, which are themselves still just getting started.
I would highly encourage you to check ViPR out at the EMC Hands on Labs, and after EMC World, it will be in vLab to play with!
As always – feedback, comments – WELCOME!
It would have been nice to see ViPR using a NetApp filer or HDS block. All these demos use EMC products. Is it meant to virtualize ALL storage platforms?
Posted by: [email protected] | May 07, 2013 at 02:41 AM
great post - thanks
Posted by: rehan | May 14, 2013 at 12:02 PM
Hi Chad,
Great post on ViPR – any word as to when VPLEX will be supported with ViPR? Ideally, I would love to see the ability for ViPR to understand VPLEX in its entirety, meaning that you can configure the VNX/VMAX/third-party arrays just as the demo showed, but then configure in a VPLEX 'array' and tell ViPR to provision to VPLEX, perform the discovery/encapsulation on VPLEX, and then provision from VPLEX out to the host server(s). Once this functionality is available I believe that we will be in a much better position to pitch the 'software defined storage' idea to customers.
I personally love VPLEX and what it can do but have always thought it was missing some key functionality that I think ViPR will bring to the table. I have lots of other thoughts/questions but will hit you up on EMC e-mail after I have had a chance to play with the labs and try to figure them out myself :)
Cheers,
Justin
Posted by: Justin Mirsky | May 17, 2013 at 10:07 PM
Chad, Great post! Thanks for the detail.
Posted by: Joemuniz | May 25, 2013 at 11:52 PM
Very Nice article with demos and use cases. Does VPLEX manage IBM DS8000 arrays too? Thanks a lot.
Posted by: Venkatesh Krishnamurthy | July 23, 2013 at 03:58 PM
@Venkatesh - yes, you can have VPLEX be in front of IBM DS8000s
Posted by: Chad Sakac | July 25, 2013 at 04:27 PM