This is bound to be a little bit of a controversial post. So – what has VMware announced with VIO? It seems to me that VMware is accelerating their work with Openstack – which is a great thing, IMO!
First thing to understand – this isn’t a new thing – both VMware and EMC are working in the Openstack ecosystem furiously. In fact (and this is really impressive) with Icehouse, VMware was a very material contributor – within the top 10 (you can see contributions via Stackalytics here), so no one can say this is a “new thing” for VMware.
VMware and EMC both are “upping” our investments in Openstack every day.
My read on the VIO announcement is that VMware will be building a fully integrated distribution that primarily suits customers already invested in the vSphere/vRealize suite who are looking for a simple, great way to extend that existing investment to Openstack.
To understand this a little more (and get to my “controversial” point – which is one of those “Federation” topics) – read on!
First of all – man oh man – I’ve talked to 80+ customers 1:1 over the last few months – and Openstack is everywhere!
It’s interesting to see how quickly Openstack (and, interestingly if not directly related, Linux containerization) has become a “mainstream” technical dialog with customers.
It’s also interesting – almost to a one, those customers are ALSO large-scale VMware customers supporting their critical infrastructure, and very happy with vSphere for the application stacks that look to “infrastructure resilience”.
For the most part, the customers I talk to are using Openstack for either “new applications” (with much lower infrastructure resilience expectations) or “test and dev”, and may be in the “early play” phase (though some are in full-on prod).
Ok – some context first (forgive the fact that this is rudimentary for people who live in this space – surprisingly, it isn’t obvious to many people).
Openstack is a framework, not a “product” – and the underlying “virtualization” compute/network/storage components and implementations can come in all sorts of forms.
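That pluggability is literal: each Openstack service loads its backend from a config option. Here’s a minimal sketch of what the “100% open source” form looks like in practice – these are Icehouse-era option names with illustrative values, so treat them as an assumption and check your distro’s docs:

```ini
# nova.conf – pick the hypervisor driver (here: KVM via libvirt)
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

# neutron.conf – pick the networking plugin (here: ML2 with Open vSwitch)
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# cinder.conf – pick the block storage driver (here: Ceph RBD)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
```

Mechanically, swapping a “red block” for a “blue block” in the diagrams below is mostly a matter of changing these values (e.g., a VMware driver under Nova) – the Openstack APIs on top stay the same.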
Here’s the first form (names in red blocks are examples, not exclusive):
In this particular example, the components are all 100% open source.
Sidebar: in my recent travels, it was surprising to me that some customers I talked to feel that Redhat’s acquisition of Inktank may make Ceph less “open” (these people were biased toward the Mirantis or Canonical Openstack distributions, as you might expect), with a generally icky response to Redhat’s comments at the Openstack summit in Atlanta vis-à-vis support for RHEL in other Openstack distros…
In my experience, it’s also not uncommon to hear people acknowledge openly that these 100% opensource choices are not the best technically in one dimension or another – but they bias towards “if I can’t see the source code, I don’t want it”. Let’s call this group opensource “purists”.
Here’s the next form (again names in red and blue blocks are examples, not exclusive):
Here, some of the components are “closed source”, some are open source. In my experience, this often happens in the networking and storage domains, less so in the compute domain under Nova. Why? This is where there’s a pretty big delta between the opensource offerings and the “closed source” ones. For example, some customers run VNX (or NetApp) with the Cinder driver, or ScaleIO under Cinder, or use the ViPR Object stack vs. Ceph.
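That blended model shows up directly in Cinder’s multi-backend support – open and closed source backends can sit side by side behind the same Cinder API. A hedged sketch (the exact driver class paths varied by release, so these are illustrative, not authoritative):

```ini
# cinder.conf – two storage backends behind one Cinder API
[DEFAULT]
enabled_backends = ceph-rbd,emc-vnx

[ceph-rbd]
# opensource backend: Ceph RBD
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = CEPH

[emc-vnx]
# closed-source array behind its Cinder driver (illustrative class path)
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
volume_backend_name = VNX
```

The operator then maps Cinder volume types to a `volume_backend_name`, so tenants pick a tier (“gold”, “bronze”) without knowing – or caring – what’s underneath.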
NSX’s (formerly the Nicira team’s) long history of Openstack contributions to Neutron means that customers often choose NSX under Neutron. In my experience – this is the blended model used by many. Let’s call this group opensource “pragmatists”.
Interestingly – in my experience with customers - the value “delta” (how beneficial the “closed stack” needs to be over the “opensource alternative”) needs to be quite material to overcome the traction that simple open access and vibrant communities tend to drive. This early traction then creates inertia as things move from “test/dev” into “production”. More on this later!
And the last form looks like this (again names in blue blocks are examples, not exclusive):
Here, all of the components used by the Openstack framework are “closed source”. This “all in closed” has been rare to date.
This “rareness” is interesting – after all, it comes in spite of the fact that many of those same customers have some or all of those ingredients already deployed, already supporting their existing apps.
The VIO initiative by VMware is targeted at making this easier. It seems to me to be most useful to customers wanting to add Openstack to an existing VMware-powered SDDC model. Let’s call this group opensource “dabblers”.
Here’s the controversial point – here’s the basic dilemma I see playing out at customers as they look at new applications that are not expecting infrastructure resilience (inside EMC we call these “Platform 3” applications)… and remember that the SOLE purpose of all infrastructure, including Hybrid cloud IaaS is to support the application stack:
*P2 = “Platform 2” application stacks, which almost always have some RDBMS (e.g. Oracle) at the bottom, and may be custom J2EE/.NET apps or off-the-shelf enterprise software stacks. P2 applications depend on infrastructure resilience/backup/DR/etc… They are the kind of apps where, if someone said “what happens if I shut off this rack?”, someone would start screaming about “do you want to invoke the DR plan!”.
*P3 = “Platform 3” app stacks, which have a data fabric at the bottom – not a RDBMS - are built on PaaS/microservice architectures, and have little to no expectation of infrastructure resilience. They are the kind of apps that if someone said “what happens if I shut off this rack?”, the developer should say : “I’ve built this app with full geo-dispersion and expectations of eventual consistency models – go right ahead”.
BTW – if the developer says “huh?” – you’re in a pickle – you have likely got a problem!!
So – what do you do faced with the decision tree above? Do you build your P3 applications (which don’t expect infrastructure resilience – resilience is expected to be built into the application itself at the PaaS/microservice architecture level) on top of the same IaaS stack that is built to support P2 applications (which expect infrastructure resilience, HA, DR, backup and all sorts of SLAs)? Clearly – they will have different envelopes (economically, operationally, and technologically).
Again, there is an element of opensource “purist”, “pragmatist” and “dabbler” (and there is NO value judgment in those labels – they are just common groupings I see as I roam) in the answer here:
- A “purist” says: “f@#$ no! – the profile of an infrastructure-resilient IaaS stack vs. a non-infrastructure-resilient stack is too different! Start building a ‘P3 IaaS stack’ NOW – and do it on 100% opensource stuff” (this basically ignores the existing apps and workloads entirely – and has a certain beauty/acceleration that I get, but never seems to work if you’re not a startup with no P2 workloads). This is why these purists don’t buy the “vSphere value add” of things like DRS, FT, HA, heck even vMotion. They don’t care if the storage is resilient, or 5x9s, or does DR. They think all this stuff doesn’t matter (and it doesn’t for many P3 workloads – but many P2 workloads COUNT ON IT).
- A “pragmatist” says: “let’s start using Openstack, and use as much opensource stuff as we can on commodity hardware, but let’s not be silly – let’s leverage what we have where we can, and if we don’t think the opensource option is hardened enough – we’ll look at closed source options. As the new application loads increase, we can pivot our IaaS spend from one side to another – ultimately ending up with parallel P2 and P3 IaaS stacks tuned for the very different requirements of each”.
- A “dabbler” says: “let’s start using Openstack, and just layer it on our existing infrastructure built to support our P2 stacks, after all this P3 stuff is a really small percentage of our app workloads”.
Now – factor in that for most enterprises – RIGHT NOW, their entire business, entire IT investment, entire app stack and personnel are mostly P2. AND that simultaneously, they know they need to change, and have all sorts of pressure to get into mobile app dev, to embrace public IaaS and PaaS models, and to get into the business of analytics.
HERE’S THE CONTROVERSIAL POINT:
- VMware’s VIO is (IMO – remember this is MY blog, so this is my opinion) an awesome (perhaps the best, as it comes to fruition) answer for the “dabbler” (number 3 above). EMC will of course support VMware here – but I think it’s an INSUFFICIENT answer for the EMC federation.
- EMC must pursue the best routes for the “pragmatist”. This means, sure, supporting VIO – but it means accepting that many customers will have complex open blends of stacks that we must embrace to the fullest (I’m sure VMware will continue to contribute to Neutron and Nova and other projects outside VIO, of course). I mentally struggle to see “purists” or “pragmatists” saying “I’m good with running all my Openstack instances on VSAN” – not because VSAN isn’t great, but because VSAN then not only requires (“locks in?”) ESX, but also vCenter, all of which then leads you into the VIO category, which makes sense for the “dabbler” vs. the “pragmatist”. Beyond that, we’re going to need to also:
- Double down on our own contributions directly to Openstack. In Juno you’ll see us playing an even greater part in direct contributions than we have in the past.
- Continue partnering like crazy with the big distributions (I personally see Mirantis, Redhat, Canonical most often – but anecdotes are not as good as data – check this out from the Openstack Atlanta summit: http://www.slideshare.net/ryan-lane/openstack-atlanta-user-survey), as these are the way most customers “start”, and also partner with the “integration ecosystem” that surrounds Openstack.
- Increasingly make our IP (like ScaleIO, which is a great software + commodity transactional storage stack) easy to access and open – so people can “plug it in” more easily. Not easy – as it’s a big cultural shift for us.
Sidebar: earlier I mentioned that the “value delta” of closed source vs. open source needs to be profound to overcome the “easy access” of opensource as these projects transition. For the customers reading this that I’ve talked to over the last few months – this is why I’ve been asking you: “is it sufficient if our stuff like ViPR object/ScaleIO is available easily and freely without support, OR do you think we need to go a step further and evaluate opensource options?” – and thank you for your input! I’m listening.
BTW – when it comes to “purists”, I suspect Pivotal and others will pursue that route to the fullest.
Now – if I’m a customer – I look at this and say “Great, I have choice. Do I think I’m best described as an Openstack ‘dabbler’, ‘pragmatist’ or ‘purist’ and regardless of how I answer, I have all three federation companies in EMC, VMware and Pivotal working in my interest – and I can pick the path that suits me”.
It’s another hint of why in the Federation – we have VSAN and ScaleIO (more on that on a post tomorrow!). So long as VSAN “locks in” ESX/vCenter (which are awesome – so if you like that choice, and are OK with storage “captive” to it, then it’s a good choice), it’s INSUFFICIENT for the Federation – we need an “open” software-only scale out transactional storage stack (for customers that pick KVM with Openstack with a non-VMware distribution, or want perhaps physical hosts under Openstack management and use Docker containers).
What do you think? What are you doing re: Openstack (are you a purist/pragmatist/dabbler) – and what do you think of my view?
I agree with most of your comments. The reason I like VIO is that it gives one access to VMware assets via its API, so end users and developers can utilize that API to make use of whatever "stack" one is using (KVM+Ceph), (ESX+EMC), etc. I've been investigating VOVA for the last month to get an understanding of how the configuration works, and it seems really interesting. It also allows one to leverage existing VMware infrastructure to test deployments.
I am a bit interested to note that the above diagrams list ViPR as more of an object store as opposed to block. It would seem to me ViPR could be used as block at least as much as object, so I'm not sure I understand the diagram above.
Also, regarding the comments about Red Hat and Ceph, I wonder if the community will see a large delay in CentOS packages on Ceph's site for newer releases of Ceph as opposed to Red Hat's. That is currently the case (but is easy to work around at the moment); however, I don't want to speculate, as I'd rather let the Ceph/Inktank community discuss this over my opinion of the matter.
On a separate note, why is typepad requesting to "manage my G+ profile"? That...seems a bit involved, and unless those granular things are explained, I would advise anyone merely using the federation to post.
Posted by: Fletch Hasues | August 26, 2014 at 03:08 PM
Fletch,
Cinder block services in VIO is implemented via VSAN so ViPR/ScaleIO is not required.
Ken
Posted by: Kenneth Hui | August 29, 2014 at 04:16 PM
@Ken, @Fletch - I think the key observation I'm trying to make is that VIO, for BETTER and for WORSE, has a core linkage to the full vSphere stack. So, yeah - you can use VSAN to bolt right into Cinder as Ken notes - thus not needing the ScaleIO stack or the ViPR Controller. vSphere doesn't have an Object stack - hence the ViPR Object stack.
My observation (from the customers I talk to) - they are split. Many feel (like you do, Fletch) that they like that it gives them leverage from their existing vSphere assets. Others feel as strongly that they don't want to be "pinned" into the vSphere stack under the openstack framework.
Posted by: Chad Sakac | September 03, 2014 at 12:15 AM