While VMworld will contain many surprises*, one thing I don’t think will be a surprise is that the new “phrase du jour” will be the “Software Defined Datacenter” (SDDC). The blog post that Steve Herrod did on the topic (and the subsequent one following the Nicira acquisition) has become the place for a “definition” of the term. It’s a formalization of stuff VMware has been saying, driving, and innovating around for some time – so it isn’t new, but it is a new way of putting it all together.
In my own mind, the SDDC involves several key principles:
- That the policy is enforced in a software layer that abstracts all hardware into compute, networking and storage services which are consumed by all applications. This can be the vCloud Suite, it can be OpenStack – but it’s something like that.
- That the control plane of infrastructure (the thing that is the interface for that policy and tells the hardware what to do) gets decoupled from the data plane of infrastructure (the stuff that actually does whatever the hardware is there to do). This control plane will run on commodity hardware, as pure software, be completely programmable, and is likely to be something pretty open. For Software Defined Networking (SDN) – this is OpenFlow. What will be the decoupled, software-based, programmable (and run-on-commodity-hardware, likely pretty open) layer for the Software Defined Storage (SDS) world?
- That this decoupling changes what people look to at the data plane of infrastructure. This is the layer that does the business of the hardware itself. For CPU, it computes. For memory, it stores and recalls – but doesn’t retain. For networking, it forwards frames and packets. For storage it persistently stores information. The change that the SDDC movement demands of the data plane creates pressure to run on commodity hardware, and changes the relative priority of features and architectures – with a distinct shift to architectures becoming more important, not less.
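To make the control-plane/data-plane split concrete, here’s a minimal sketch in Python. Everything in it is hypothetical – `ControlPlane`, `DataPlaneNode`, and the placement rule are names I’ve made up for illustration, not any real VMware or EMC API. The point it shows is the OpenFlow-style division of labor: the data plane only does the mechanical work (store blocks), while all policy and placement decisions live in a separate software layer that programs the nodes.

```python
# Illustrative sketch only (hypothetical classes, not a real product API):
# a pure-software control plane that owns policy and programs "dumb"
# data-plane nodes, mirroring the OpenFlow-style decoupling above.

class DataPlaneNode:
    """Does only the mechanical work of storage: persist and recall blocks.
    It holds no policy of its own -- rules are pushed down to it."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}   # (volume, block_id) -> data
        self.rules = {}    # volume -> rule installed by the control plane

    def install_rule(self, volume, rule):
        self.rules[volume] = rule

    def write(self, volume, block_id, data):
        # A node refuses work it was never programmed for.
        if volume not in self.rules:
            raise PermissionError(f"no rule for {volume} on {self.name}")
        self.blocks[(volume, block_id)] = data


class ControlPlane:
    """Pure software, hardware-agnostic: it is the interface for policy,
    decides which node serves which volume, then programs the nodes."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.placement = {}  # volume -> node

    def provision(self, volume, rule):
        # Trivial illustrative placement policy: pick the least-loaded node.
        node = min(self.nodes, key=lambda n: len(n.rules))
        node.install_rule(volume, rule)
        self.placement[volume] = node
        return node


nodes = [DataPlaneNode("array-a"), DataPlaneNode("array-b")]
cp = ControlPlane(nodes)
target = cp.provision("vm-42-disk", {"replicas": 2})
target.write("vm-42-disk", 0, b"hello")
```

Note what swapping the placement logic would require here: a change in one software layer, with zero changes to the nodes – that is the whole appeal of the decoupling.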
I’ve made this the center of SPO3338 (I’ll post a follow-up including the presentation) – but for the crux, read on!
- The storage community will need to create a model where policy can be applied at a VM-level of granularity. To see what VMware and EMC are doing around that – check out this post on VM Granular Storage. EMC is investing heavily here, and I would argue leading the way.
- The storage community needs to create an abstracted, decoupled “Control Plane” for storage. Think of an “OpenFlow” of storage. This has been attempted in the past (think WideSky – shudder), but never quite gotten right. I would argue the two ways it got mangled were either that there was no place to put this (now there is – in the “Datacenter OS” layer), or that people tried to wrap in data plane abstraction (aka “storage virtualization”) – which is a very, VERY hard problem – and also creates crazy vendor dynamics. EMC is investing heavily here, and I would argue leading the way. We’ve hinted at “Project Bourne” for a while, and I’m staying quiet for now, but suffice it to say there is more to come.
- The attributes of value, and where they show up will evolve.
- Control/administration of things like storage, backup, security and more will move UP. They will move to the “SDDC layer”, and even up to the application layer. This is about embedding things like vSphere Data Protection, or RSA DLP, or controls like SDRS, or using vFabric Data Director and just embedding DDBoost. Even further up the stack, this is about using RMAN for Oracle backup in an integrated way, or SAP LVM. If someone shows you their great admin UI for whatever – I would ask: “what are you doing so there is NO UI?”
- Fundamental infrastructure attributes will shift in priorities. In certain use cases – the “utility/swiss-army knife” model wins (think EMC VNX-ish) based on simplicity, efficiency, and scale. I would call out that “scale” these days can be enormous by many people’s standards – the VNX successor will approach a million IOPS and huge capacity scale. In cases where “purpose built” is warranted – the fundamental values of “scale-out” will be very critical (think EMC XtremIO/Isilon-ish) – and as people are realizing, it’s REALLY, REALLY hard to “bolt on” scale-out. It’s an architectural approach that pervades all parts of an architecture. In all cases – deep policy integration like the VM Granular Storage APIs will really matter. Also – programmability is VERY important. In every case – people not running as software on x86 architectures that leverage the commoditization of hardware – well, suffice it to say that I disagree with them.
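To illustrate the VM-granular policy idea from the bullets above, here’s a small hypothetical sketch. None of these names (`StoragePolicy`, `Datastore`, `place_vmdk`) are real APIs – they’re assumptions for illustration. The shift it shows: service levels are declared per virtual disk and checked at placement time, instead of a VM inheriting whatever LUN-wide service level it happens to land on.

```python
# Hypothetical sketch of VM-granular storage policy (not a real API):
# policy travels with the virtual disk, and placement is a policy match,
# not an accident of which LUN the VM was dropped onto.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Declared per-VM / per-vmdk, not per-array or per-LUN."""
    replicas: int          # minimum protection level required
    max_latency_ms: float  # worst acceptable latency

@dataclass
class Datastore:
    name: str
    replicas: int          # protection the pool actually provides
    latency_ms: float      # typical latency the pool actually delivers

    def satisfies(self, policy: StoragePolicy) -> bool:
        return (self.replicas >= policy.replicas
                and self.latency_ms <= policy.max_latency_ms)

def place_vmdk(policy, datastores):
    """Return the first datastore compatible with this disk's policy,
    or None if nothing in the pool can honor it."""
    for ds in datastores:
        if ds.satisfies(policy):
            return ds
    return None

gold = StoragePolicy(replicas=2, max_latency_ms=5.0)
pool = [Datastore("sata-pool", replicas=1, latency_ms=20.0),
        Datastore("flash-pool", replicas=2, latency_ms=1.0)]
chosen = place_vmdk(gold, pool)   # -> flash-pool
```

The useful property is the failure mode: if no datastore satisfies the policy, placement returns nothing rather than silently degrading the VM’s service level – which is exactly the kind of behavior a policy-driven SDDC layer should surface.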
I think that people who say the SDN movement and the SDS wave to come put the people who make infrastructure (think EMC et al.) in strategic peril miss the point. Everyone in high-tech is perpetually in strategic peril :-) The real risk is that you get yourself into a position where, for cultural or business reasons, you cannot adapt/move and cannibalize yourself as the world changes – and cannot lead that change in some way yourself. I’m glad to say that at EMC, I know this isn’t the case, and I don’t think it’s the case at VMware either!
Will be a great, fun VMworld!!!
*personal editorial: One thing I think people will be surprised at is the change in the vCloud Suite pricing – moving to a no-limitations model. People will come up with all sorts of philosophies about why, why not, how, good/bad, etc – I’ve actually been looking forward to seeing the general reaction. IMO, I always thought the changes with the vSphere 5.0 launch were good – but perception IS reality. I’m impressed that VMware listened, and responded – twice. The first was to increase the original vRAM allocations, and now to move to an unlimited model. Everyone makes mistakes. The biggest mistake is being too arrogant and not listening to your customers. [added] – double that for the fact that they made upgrades free for Enterprise Plus customers.