I figure after one super-nerd out post I deserve to make a "strategy" post.
That last iSCSI one... phew. It's already wildly popular, though, and the feedback suggests that the extra effort needed to make it "multivendor" was the right way to go. I thought so too, but imagine trying to get agreement on stuff when each of our products works differently... it wasn't easy, but it paid off in the contributions of all. The band will get back together, and we're already working on some followups....
I've been debating this "BIG picture" post for a while, for a couple of reasons:
Why should I write the post?
- Every customer I've met for the last 6 months is looking to transform their datacenter. This isn't about any one point technology. "How do I spend to save?"... "How can I leverage virtualization to transform operations?"... "I need to change something fundamental - I can't keep doing things the way I am....".... "How do I get to the 100% Virtualized Datacenter I can see... and trusted vendors - WHAT ARE YOU DOING TO HELP ME GET THERE?"
- The consensus seems to be building, and things are starting to be discussed. It's so exciting, so intriguing - and the lack of clarity has some customers thrashing for more info (and the press speculating wildly, and more often than not, missing the point).
- We're getting some folks talking about it at a high level - See EMC's Chuck Hollis talk about it here and here. See Cisco's Padmasree Warrior talk about it here. See VMware's Steve Herrod talk about key ideas about openness here. Reading these is good - because I'm trying to go down deep, and they provide the "50K ft. view".
- I'm hearing similar things from a lot of customers: "on the one hand I like my trusted vendors working together, but is it a conspiracy/collusion?" (answer on that one immediately: no - it is not a conspiracy, and there is no collusion. What there IS, is a common vision of the next few years)
Why shouldn't I write the post?
- In spite of the technical focus of the blog, I'm a person with feet in two worlds here at EMC. I'm a VP here at EMC - which helps me have a "broad view" of everything we're doing. I do run the overall VMware affinity program - which means not only the deep engineering work, but also strategy, the alliance, our field sales/specialists and marketing - my job is to make sure EMC is pointing in the right direction in this revolution. But that doesn't stop me from playing with the toys every chance I get :-) As a geek I try to keep my blog focused on technical dialog. Strategy stuff is useful, but I generally leave that to Chuck.
- I'm bound up in the same crazy NDAs that Chuck refers to - and have been for a long time.... So it is a bit of a minefield.
- Some of the pieces are coming in the next few months, some in quarters, some in years. As an engineer by training - my instinct is to hold on to stuff until it's DONE - and this isn't about any given product, but rather our overall strategy.
So - between Chuck's posts, Padmasree's, and my natural desire to be open... I'll try and map it out, stepping carefully around the NDA landmines....
Ok - remember all - when I post something here, it's me, Chad Sakac. It's not an EMC position. I know it's hard to separate, but it's important. These are my personal views, my personal observations.
Customers are telling me consistently that they are looking to transform to a new datacenter and IT model. They use different words when describing it. Here are some variants:
- "I want to make IT a service back to the business - literally with an SLA model"
- "I see technologies coming together to enable something... I don't know what to call it except 'global datacenter optimization'"
They know they need to do it for all those reasons, but no one wants to undertake one of those "here's a vision - now stay with me and one day you will benefit" efforts - particularly these days.
- They need to save money at every step - constant capital expense saving.
- They need to become faster, more flexible at every step - constant refinement in both operational expense and speed.
- They need to use less power and become more green at every step.
- We've all been around the block enough to know it's got to be open, built on standards (Scott Lowe was totally right on that here)
It seems from my conversations with our partners, we share that vision - each with our own lens of course.
- Cisco sometimes calls it Datacenter 3.0 or Datacenter of the Future or Next Generation Datacenter (NGDC).
- Their focus seems to be: "in this 100% virtualized datacenter, what's the best way to compute and transfer information?"
- EMC folks have been calling it the NGDC.
- Our focus is: "in this 100% virtualized datacenter, what's the best way to store, backup, protect, manage and secure the information?"
- VMware calls this a Virtualized Datacenter (with the Virtual Datacenter OS at the core).
- Their focus seems to be: "how can we best enable the 100% virtualized datacenter - both by what we do ourselves and connecting the parts others do?"
Unfortunately, we're also using different words to describe the same thing. Personally - I like the term Chuck is using: "private cloud". Eventually we'll agree on what to call it. But we agree right now on what some of its key technical characteristics are....
Point 1: It presumes a 100% virtualized datacenter (at least as far as x86 workloads go). What can we do to make any x86 workload a candidate for a VM, and how do we help customers accelerate that transformation?
Point 2: Every layer of the physical infrastructure (CPU, memory, network, storage) needs to be transparent. Transparent means "invisible". This implies a lot - including that the glue in the middle, like a general-purpose OS, needs to provide the "API models" for those hardware elements to be transparent.
Point 3: Every layer of the physical infrastructure needs to be able to think/understand/respond to "VM objects" (or more accurately, groups of VMs that define applications and application SLAs). These groups of VMs that define the application become central - both as a way to get fast value (Virtual Appliances), and as the unit the infrastructure supports. Long and short - the network and storage need to be "VM-aware".
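To make Point 3 concrete, here's a purely hypothetical sketch (every name in it is mine - this is not a shipping API from anyone): an "application" is a group of VMs plus an SLA, and a VM-aware storage layer responds to that SLA instead of to hand-carved LUNs.

```python
# Hypothetical illustration only - what it might mean for storage to
# "think in VM/application objects" rather than in LUNs and spindles.
from dataclasses import dataclass

@dataclass
class VM:
    name: str

@dataclass
class Application:
    """A group of VMs plus the SLA that defines the app - the object
    the infrastructure would respond to."""
    name: str
    vms: list
    sla: str  # e.g. "gold", "silver", "bronze"

# A VM-aware storage layer maps an application SLA to a placement
# decision, instead of an admin provisioning per-server by hand.
SLA_TO_TIER = {
    "gold": "FC, RAID10, replicated",
    "silver": "FC, RAID5",
    "bronze": "SATA, RAID5",
}

def provision(app: Application) -> dict:
    """Return a per-VM storage tier driven entirely by the app's SLA."""
    tier = SLA_TO_TIER[app.sla]
    return {vm.name: tier for vm in app.vms}

exchange = Application("Exchange", [VM("mbx1"), VM("mbx2")], sla="gold")
print(provision(exchange))
```

The point of the toy: the unit of management is the application (the group of VMs and its SLA), and the storage layer derives everything else from it.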
I'm going to stop here, and point out an analogy for "Transparency" and "infrastructure that responds to the objects". It's very analogous to the 3D video card experience.
- In the early days, 3DFX (remember them?!) needed to work directly with the game developer - and you needed detailed knowledge about how the graphics card worked - what memory model, registers, etc. This is analogous to how we manage network/storage in the Virtualized world today - armies of specialists who manage the systems, and use paper processes between them as "APIs".
- The next phase was where the 3DFX folks wrote a general-purpose (but 3DFX-only!) API called GLIDE, which interfaced (loosely) with OpenGL. This made it much easier for developers to write games, but you still needed focused folks who only made things work with 3DFX hardware.
- The last phase was Microsoft creating DirectX 3.0 (like many Microsoft things - 1.0 and 2.0 were really bad). This enabled a broad ecosystem of 3D card vendors, huge simplification of the game development process - and in general, took 3D graphics mainstream.
A few years later, 3D gaming is mainstream - DirectX is at the core, including on your Xbox360, Nvidia and ATI rose to become mainstream, and 3DFX is gone.
Morals to the story?
- Early days are a time for innovators to create a new category, just like 3DFX did. They are usually proprietary/API-less as people figure out what's a good idea or a bad idea. This is the era that VMware started, and continues to lead. I think we are now starting to evolve out of that era into...
- The next phase is about the efforts of leaders to create standard API models. VMware has become more than a hypervisor. Paul Maritz calling it the "Virtual Datacenter OS" is putting a label on something that is new - something that doesn't have an existing category. The Virtual Datacenter OS is not just a hypervisor that lets you consolidate servers, but rather the "glue" that sits between the user, the application, and all the infrastructure - using APIs and standards to manage the hardware layers. An OS does that for a single machine. A Virtual Datacenter OS does that for a datacenter. vCompute, vStorage and vNetwork are not rebranding exercises (though they are also how VMware is sorting their own functional capabilities into those categories), but are the key intersection points where the compute, storage, and networking infrastructure can interface and link transparently into the Virtual Datacenter OS. The other major intersection points are the vCenter APIs.
- The last phase is about the mainstream, and mass standardization. Why did 3DFX disappear (I still have an old card as a memento) and the 3D card market become commoditized? Well - the key was that the control point was always the OS, and what drove mass adoption was an API. 3DFX couldn't move from being an innovator to being the owner of a new control point. What's instructive now is that the OS isn't the control point anymore. Customers are realizing that in a Virtual Datacenter OS world, these JeOS (Just Enough Operating System) models are all that's needed to support an application. You literally just need a run-time for the application. The thing that interfaces with the full hardware layer is the Virtual Datacenter OS.
Is the hypervisor a commodity? Sure. Will others catch up with VMware? I don't know about that - I'm fortunate enough to play with tech that is months out (BTW - my home-brew lab works well with all the next releases from VMware). VMware continues to out-invest and out-innovate in the hypervisor and hypervisor-manager category.
My take? The key for VMware is that as a company they look at that new competitive landscape and recognize: being ONLY the best hypervisor for the Virtual Datacenter OS leads to VMware being the nVidia of our analogy, not the Microsoft. nVidia's early chips weren't as good as 3DFX's in general, but were better by the time DirectX rolled around - which made them the best-of-breed choice for graphics cards (a title AMD/nVidia now go back and forth on in a cyclical way). On that path, VMware may remain the best choice for your hypervisor... but being the Virtual Datacenter OS leader may lead them to being the next Microsoft.
I can say from my interactions with them that they understand this well. They are continuing to innovate - making the best hypervisor, sure - but also creating the best Virtual Datacenter OS, which is materially different.
What about the hardware vendors? Didn't DirectX commoditize the 3D card industry? The example is instructive here again. What happened was that the hardware platform that best adapted to the new API model (nVidia) won. Likewise, people who build "servers built for vCompute", "networking built for vNetwork", and "storage built for vStorage" will win (if indeed history repeats itself).
Note that this is all being done transparently (none of the stuff above is a news flash - perhaps only how it's presented), using standards that exist (x86, Ethernet, FC/iSCSI, NFS), and trying to create standards that are new (VMsafe, OVF, FCoE, pNFS, SR-IOV, MR-IOV). I'm constantly reminded that EMC must work to be the choice for customers, partners and for VMware on our own merits. The independence of VMware is real, and very, very important.
So - what are we doing?
Point 1: To drive "100% Virtualization"
- Requires a virtualization layer that can literally meet the scaling, performance and availability goals of any x86 workload. VI today isn't quite there, but I think the next release is darn close, if not there. It needs to be able to do that for servers and for clients - "any workload" means "any workload".
- Server designs that can deliver scale and performance for that sort of workload. We're in early days here, but already the first generation of "servers built for VMware" (example: the Dell R800 and R900 series) is a big leap over general-purpose server designs. At EMC, use of these platforms took us from 7:1 consolidation to 40:1 consolidation, and enabled the wave of virtualizing our mission-critical apps. The next generation is another quantum leap forward.
- More proof-points and "guide posts" to help customers virtualize the tough targets. This is a MAIN point of all that Joint Reference Architecture work I continually point at like here, here, here, and here.
- EVERY EMC product is being turned into a Virtual Appliance. Some we sell, some we use for partners, some we use for development. There are more than 20 right now, and every product that runs on x86 is going that way. We're doing this as an ISV and think other ISVs will too.
- Physical adaptability (i.e. the increase/decrease CPU/memory model) needs to extend into the networking and storage stacks. People will REALLY start to see "purpose-built servers/network/storage" for VMware in 2009. Hint: if a platform hasn't gone through a major architectural hardware and software refresh in the last year, then when people say "we built this for VMware", what they are really saying is "we think our pre-existing architecture is well adapted to VMware". We've been at this for several years now, and 2009 is the first year where enough chunks will be visible for the piece parts to become clear. I wasn't kidding when I said that the "modularity" of the CX4 was "revealing our VMware IO strategy". I should have said "part of our strategy". There are other shoes that will be dropping.
Point 2: To drive "API integration"
- This is going to need a great deal of automation (and a part of that will need to be human trust). Think of it this way.... What would server virtualization buy you if you couldn't "let go" of the way you provisioned a server ("I need a dual-core machine with 8GB of RAM, and I need to know where it is at all times")? The answer is "not much". People now trust it for CPU/memory, even though they don't know at any given time where their app is running. They do, however, still manage networks and storage in static ways - the same way they did before things started virtualizing.
- The first phase of vNetwork and vStorage integration that will hit this year will be Cisco's VN-Link and EMC PowerPath for VMware - both shown at VMworld in Sept, and both in synchronous pre-release cycle with VMware. These are about making sure that the virtual world is able to do everything the physical world can do. They make sure that the datacenter CAN be 100% virtualized (along with, of course, the key developments in the next major VMware release).
- RSA is working with VMware on the next VMsafe round. Currently the APIs don't exist (VMsafe was originally designed for anti-virus use cases) - but since it provides a point where you can see every instruction and every packet at a fundamental and scalable level, the Virtual Datacenter model enables a new way to do Data Loss Prevention, Encryption and Intrusion Detection that you simply can't do in physical environments.
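The "let go" model in the first bullet above can be sketched in a few lines - a toy, DRS-style placement routine (the names and numbers are mine, and this is nothing like VMware's actual algorithm) where the admin specifies demand and the scheduler decides where things run:

```python
# Toy illustration of "letting go": the admin declares what a VM needs,
# and a scheduler - not a person - decides which host it lands on.
def place(vm_demands, hosts):
    """vm_demands: {vm_name: GB of RAM needed}; hosts: {host: free GB}.
    Returns {vm_name: host} - the admin never knows (or cares) where."""
    placement = {}
    free = dict(hosts)  # don't mutate the caller's view of capacity
    # Place the hungriest VMs first, each on the host with most headroom.
    for vm, need in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)
        if free[host] < need:
            raise RuntimeError(f"no capacity left for {vm}")
        placement[vm] = host
        free[host] -= need
    return placement

print(place({"web": 4, "db": 8}, {"esx1": 16, "esx2": 12}))
```

The same mental shift - declare the requirement, trust the layer underneath to satisfy it - is what still hasn't happened for networks and storage.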
Point 3: To create infrastructure that understands and responds to "VM/Application objects"
- BUT... the next phase is where things really get blended - where thin provisioning is integrated, where management tasks are integrated, where "VM object awareness" is added, and where networking policy portability really takes off. We're still in early days here, but this is where the advanced R&D is going.
- Making the physical layer transparent means a lot. I can't say much here until VMworld Europe. Suffice it to say... what would be the minimal, most transparent storage management GUI for this virtual datacenter? Hint - look up what Allocity (where I came from into EMC) did. What does something 100% transparent look like?
- Everyone agrees that "what is the management point?" is not yet determined. vCenter is surely a critical new management point - so expect to see core management capability for EMC storage integrated into vCenter in the very near term. It will be done in an open way - no collusion, ever. We'll leverage existing open APIs to create plug-in extension models. BUT at the same time, we will continue to integrate into the vCenter APIs (see here - which we've been doing for more than a year!) for integrated views in management frameworks that are "home" to people other than VMware administrators.
Done right, the Private Cloud and Public Cloud can share applications transparently, and the "Public Cloud" infrastructure layers can "read the same bar-codes". Clearly the infrastructure needs to be a bit different (management model, federation, multi-tenancy, scale and price points are all different), but they need to be linked.
This ain't about consolidating servers (though it includes that too!). It **IS** about the next big transformation we all see coming in the IT space we deal in. We're gearing up, and as leaders in our respective spaces, focusing our resources and driving towards a vision.