A big part of “software defined” is certainly about running the data plane on commodity hardware. But just as big is the part about control plane abstraction. In the storage domain, a big part of the open EMC SDS strategy continues to be to create a software-only control abstraction for ALL storage. I mean this in every way:
- open to all protocols (file, block, object, HDFS – you name it)
- open to all storage vendors (not just EMC – the list keeps expanding, recently adding a ton of 3rd-party platforms: HP, HDS, Oracle, NetApp – and most recently including my friends at SolidFire!)
- open to all architectures (physical arrays – but also things like the ViPR Object/HDFS and ScaleIO SDS data planes that run on commodity hardware)
- open to all use cases, and tightly integrated northbound into the vRealize stack, into OpenStack (via Cinder – and BTW, the ViPR object stack can be used as a Swift-compliant object store), into the Microsoft stack and more. And the community is starting to integrate it with all sorts of open-source automation tools: Python, Ruby, Golang, Puppet, Chef and many, many more (a quick sketch of what that looks like is right below).
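To make that last point concrete, here's a minimal sketch (Python, using the requests library) of driving that control abstraction: authenticate once against the ViPR Controller REST API, then enumerate every array behind it with a single call. The host, credentials and port are placeholders for your own environment, and the exact response shape may vary by release – treat this as illustrative, not gospel.

```python
import requests

VIPR = "https://vipr.example.com:4443"  # placeholder controller address

# ViPR hands back an auth token via basic auth against /login
login = requests.get(VIPR + "/login",
                     auth=("sysadmin", "ChangeMe"),  # placeholder credentials
                     verify=False)                   # lab / self-signed cert
token = login.headers["X-SDS-AUTH-TOKEN"]

headers = {"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"}

# One call enumerates every array behind the abstraction, EMC or otherwise
resp = requests.get(VIPR + "/vdc/storage-systems", headers=headers)
for system in resp.json().get("storage_system", []):
    print(system["name"])
```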
The other upside is how easily you can “bolt in” new platforms without changing other things you do.
Put it this way… EMC has always believed in a “right tool for the job” approach to storage (including when the job is “give me one thing that does everything moderately well” – and that’s a VNX). The upside to this philosophy is that it is the BEST way to meet workload and customer requirements.
The downside: the more storage types you need, the more complex management gets – UNLESS you abstract the control plane from the storage platforms themselves.
Here’s an example (and a real world customer example):
- Many customers are starting to evaluate and pick AFAs for transactional workloads, migrating some workloads off their existing hybrid arrays (particularly those that are a good dedupe/compression fit).
- Let’s say they pick an AFA, even perhaps the most awesome one (I may be biased!) EMC XtremIO.
- What happens to all the automation they’ve built around their existing stuff (whether it’s EMC or not)? What if their cool new AFA doesn’t integrate with the vRealize suite? They’re stuck – getting to use the “cool new thing” means changing how they operate and how they integrate – and they’re going to need to update all sorts of tools. That sucks. And in general, the cool new things are missing all that automation integration work by definition, because they are new.
This is why it’s always a good idea to put something (and software-only) in between your platforms and your automation tools. Here’s a sneak peek at the next ViPR Controller release, which has the XtremIO integration!
BAM. Immediately, XtremIO is not only the best AFA out there, but also the AFA with the most integration with all sorts of heterogeneous tools (including the vRealize Suite – in this case the elements formerly known as vCAC and vCO).
Anyone using ViPR doesn’t need to do a thing to their processes and tools to leverage XtremIO as a new storage option for workloads that will benefit from it.
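To illustrate why (a hedged sketch, not production code): a provisioning request targets a ViPR virtual pool, not an array-specific API. When the admin points that pool at XtremIO, the script below doesn't change at all. The URNs are deliberately elided (they're environment-specific), and the field names are modeled on the ViPR block-volume API – illustrative only.

```python
import requests

VIPR = "https://vipr.example.com:4443"  # placeholder controller address
headers = {"X-SDS-AUTH-TOKEN": "<token from /login>",
           "Content-Type": "application/json"}

payload = {
    "name": "app-vol-01",
    "size": "100GB",
    "count": 1,
    # The virtual pool (not this script) decides whether the volume lands
    # on a VNX, a VMAX, or the shiny new XtremIO.
    "vpool":   "urn:storageos:VirtualPool:...",   # elided, env-specific
    "varray":  "urn:storageos:VirtualArray:...",  # elided, env-specific
    "project": "urn:storageos:Project:...",       # elided, env-specific
}

resp = requests.post(VIPR + "/block/volumes", json=payload,
                     headers=headers, verify=False)  # lab / self-signed cert
print(resp.json())
```

Swap the backing array behind the virtual pool and the same automation keeps working – that’s the whole point.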
This is why more and more customers are using ViPR (check out the photo from the SAP team here – and you’ve got to love it when a customer sends this sort of photo!).
BTW - thanks to the great EMC team that made the demo: Todd Day, Chris Rigano, Rich Barlow, Mike Lee, Alex Candelaria, and of course the awesome EMC ViPR and XtremIO teams!
Join the ViPR community here: https://community.emc.com/community/products/vipr! You can get ViPR and use it for free (so long as you don’t need support beyond the community) here: www.emc.com/getvipr!
So… are you using ViPR? Why not? What are we doing right/wrong?