I’m really excited about having VCE as part of EMC. We will take the Converged Infrastructure market that we created together with Cisco, and accelerate it to a whole new level. Now – I’ve been listening to a lot of the dialog (interesting SIT coverage on it here – listen in from 36:40 onwards) around “what does this mean?” and “why?”
Here’s my 2 cents in simple engineering thinking. Black and white. Yes and No. A “how to make strategic decisions like this flowchart” :-)
Now, I’m not Joe Tucci, or John Chambers, or David Goulden, or Pat Gelsinger – those guys operate at a strategic strata where I don’t, so I don’t have access to all the information. BUT – to me, at its core, this is actually really, really simple to understand. So let’s deconstruct the answers to each of these questions in the flowchart :-)
There’s also one critical thing to understand. There was always an uncertainty that people expressed about “what happens if the partnership dynamic changed?” The answer was always considered in the JV founding – which was this exact outcome. EMC takes on the VCE business. There is no longer an uncertainty.
As always, I’m not trite. I don’t use “sound bites”. I demand (ask?) intellectual rigor of the reader. But the answers you get through rigorous thinking and deep dialog are far more CORRECT than short, arbitrary speculation.
To see how I think those questions in the flowchart get answered – which contains all you need to know about VCE, this news, and hints on what’s next – read on, dear reader!
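In the spirit of “simple engineering thinking” – here’s the flowchart’s decision logic sketched as code. This is purely my own illustrative rendering of the questions below (the function, question keys, and wording are mine, not an official artifact):

```python
# Illustrative sketch of the Q1-Q6 decision flowchart (my reading of it,
# not an official EMC/VCE artifact). Each question is a yes/no answer.

def vce_decision_flowchart(answers):
    """Walk the flowchart; `answers` maps question ids to True/False."""
    conclusions = []
    if not answers["q1_ci_wins"]:          # Q1: will CI dominate IaaS infrastructure?
        return ["CI isn't strategic - stop here."]
    if answers["q2_multiple_ci_types"]:    # Q2: are there multiple CI segments?
        if answers["q3_vblock_winning"]:   # Q3: winning in Integrated Infrastructure?
            conclusions.append("Don't mess with a winning formula.")
        if not answers["q4_cmbb_winning"]: # Q4: winning in CMBB/hyper-converged?
            conclusions.append("Build the best CMBB offer (EVO:RAIL appliance).")
        if not answers["q5_rackscale_winning"]:  # Q5: winning in Rack Scale?
            conclusions.append("Build the best Rack Scale CI offer.")
    if answers["q6_competency_transfers"]: # Q6: does VCE's competency apply broadly?
        conclusions.append("Apply VCE's CI competency across all segments.")
    return conclusions

# The EMC answers as argued in this post:
emc_answers = {
    "q1_ci_wins": True,
    "q2_multiple_ci_types": True,
    "q3_vblock_winning": True,
    "q4_cmbb_winning": False,
    "q5_rackscale_winning": False,
    "q6_competency_transfers": True,
}
for step in vce_decision_flowchart(emc_answers):
    print(step)
```

Run it with the EMC answers and the conclusions of this post fall out mechanically – which is exactly the point: the strategy follows from the answers, not the other way around.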
Q1: “Do you believe that CI is how eventually most enterprises will deploy infrastructure supporting IaaS?”
A: My personal (Chad’s opinion!) answer is “YES, YES, A THOUSAND TIMES YES!”.
Every customer I meet wants to accelerate towards Hybrid IaaS, and the CI model takes out variation and accelerates to the outcome.
Choosing CI also helps focus attention on what actually makes IaaS work, which is the API and abstraction focus along with management, orchestration, automation. It is NOT about the hypervisor, server, network, or storage themselves.
This is why EMC’s lead posture with customers is “we want to help you build a Hybrid Cloud – and we’ve packaged it up in the Federation SDDC (with SDN layer) and Enterprise Hybrid Cloud solutions (no SDN layer) – which you should deploy on a Vblock for on premises, and partner with a service provider”. Simple, clear, emphatic.
As Eddie pointed out on the SIT episode, there’s a reason why EMC people bang on this drum so loudly. Frankly, if we don’t help customers build well-run Hybrid Clouds… well – thank goodness our service provider business is growing fast, because it means that over time there won’t BE Enterprise IT on premises. I don’t know if Cisco feels the same degree of urgency re the potential disruption to their enterprise networking and server business.
Furthermore – customers that don’t get the degree of IaaS disruption going on (and try self-assembly while stuck in organizational silos) can sometimes get to an OK outcome – though many don’t. Those that go Vblock go fast. I know of a customer that went from “I want the Enterprise Hybrid Cloud solution” after seeing it on Sept 8th to having the full stack (including the vRealize Suite and vCloud Air with full self-service portal, chargeback, and workflows that included backup and protection) up and running on Oct 10th. The only thing faster is to sign up for a public IaaS cloud.
Lastly – this isn’t just my opinion, it’s a FACT. The market speaks. You don’t get to a $2B run rate and 6 quarters of 50% growth by “accident” or by “simply having a good sales force”. Customers want CI – period. The only question is HOW LONG it will take before this is the dominant way people deploy infrastructure.
Now – like Eddie noted in the Speaking In Tech episode, this posture is EMC’s strategic view, and it shows up in how we drive CI in the marketplace. Our good sales teams talk about storage (all flash! scale out NAS!, VMAX3!, SDS!) to be sure. Our BEST sales teams listen to their customers – and those customers want to know about “how do I get to IaaS agility”, “how do I build new applications faster” and “how do I use data better”. The answers are not “XtremIO is great!” or “UCS is awesome!” or “ACI baby!”.
In my experience – while sometimes my Cisco brothers and sisters share that posture, it depends on the individual field personnel. Sometimes it’s all about “UCS is the best compute/network platform”, or sometimes “ACI is the way”. I acknowledge that EMC sometimes has the same narrow focus on a point part of the technology stack (the most obvious example is the focus on all-flash arrays and XtremIO) – but I can authoritatively say that EMC would answer that first question as “YES”, which leads us to say that as EMC we MUST prioritize and invest in CI to extend our #1 position. Every vendor should consider how they would answer that same question. The EMC answer then leads to the next question…
Q2: “Do you believe that there will be multiple types of CI? Examples include Integrated Infrastructure, Common Modular Building Blocks (CMBB), Rackscale Architectures (RSA)”
A: I firmly believe that there are multiple types of CI, and this is also the broad EMC strategic view, so consider this a “YES!”.
I put out a taxonomy for CI back here – and since then I’ve refined it with others like Brad Maltz. So – Vblock is in the “Integrated Infrastructure” category. But to deny the reality of “Common Modular Building Block” hyper-converged appliances as a form of CI is delusional.
Likewise ignoring the requirements we’re getting from customers and service providers for “Rack Scale” CI that focuses on supporting “design for failure” workloads (aka “Platform 3” PaaS apps and modern data fabrics) would also be delusional.
Each of these have almost ORTHOGONAL design requirements. I would posit that the following statements of strengths/weaknesses apply not only to the examples I cite, but all the examples in that architectural “phylum” of CI.
- Integrated Infrastructure CI systems have a superpower: at moderate scale, they are the most economical and have a broad range of compute/persistence mixes. Perhaps most of all, since they are composed of broadly deployed infrastructure elements, they have BROAD support for the data services that classic enterprise applications expect. If you have a large virtualized SAP landscape running on an Oracle database and expect to protect/replicate it the same way you have in the past – Integrated Infrastructure is your best bet. Integrated Infrastructure systems have a kryptonite: they struggle to scale down. The smallest one is still half a rack, and costs around $300K. This is intrinsic.
- Common Modular Building Block “hyper-converged” CI systems have a superpower: they can start smaller than anything, and are the simplest things in the world. They have a kryptonite: as they scale, you are always scaling compute, storage, power, etc. together – in a relatively low-density way. Also, while they are still great for a broad set of workloads that depend on resilient infrastructure, they don’t (yet) have the sort of data services that some of the larger “platform 2” workloads expect. CMBB tends to have a sweet spot of SMB/ROBO – not because it CAN’T scale, but because that’s where it fits best.
- Rack Scale CI systems have a superpower: they are built from broadly disaggregated and flexible commodity hardware and focus on “new application” (aka “platform 3” or “built for failure”) PaaS stacks and data fabrics. They can operate on a great cost basis, and are simple at scales that start moderate and get really big. They use a ton of open source tools. They have a kryptonite: they are not good for running traditional app stacks that demand very firm infrastructure-level resilience and rich data services (like massive-scale replication).
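To make the “almost ORTHOGONAL design requirements” concrete, here’s a toy selector that picks a CI “phylum” from workload traits. The category names come from the taxonomy above; the function, parameters, and thresholds are entirely my own illustrative assumptions, not product guidance:

```python
# Toy CI-category selector. Categories mirror the taxonomy above;
# the decision criteria are illustrative assumptions only.

def pick_ci_category(scale, needs_rich_data_services, platform3_native):
    """scale: 'small' | 'moderate' | 'large'."""
    if platform3_native and not needs_rich_data_services:
        # Design-for-failure apps and data fabrics: resilience lives in the app,
        # so premium infrastructure services are a waste of money.
        return "Rack Scale"
    if scale == "small":
        # Starts tiny and turn-key - the CMBB (hyper-converged) sweet spot.
        return "Common Modular Building Block"
    # Moderate-to-large "platform 2" workloads that expect rich infrastructure
    # services (replication, QoS, enterprise backup) at every layer.
    return "Integrated Infrastructure"

print(pick_ci_category("large", True, False))    # e.g. a virtualized SAP landscape
print(pick_ci_category("small", False, False))   # e.g. a remote office / SMB
print(pick_ci_category("large", False, True))    # e.g. a PaaS / data-fabric farm
```

Three different answers from three different workload profiles – which is exactly why one architecture can’t win all three segments.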
This is a real and representative view – not the product of some “creative mind”, but rather the aggregation of the voice of many customers of every stripe, on every continent, in every vertical.
So – to the question “are there multiple architectural segments of CI?” – I can authoritatively say that EMC would answer “YES”, which leads us to say that if we want to be the leader in all the categories, we must participate and have great offers in each category. Every vendor should consider how they would answer that same question. The EMC answer then leads to the next question…
Q3: “Is VCE and Vblock doing well in the Integrated Infrastructure segment?”
A: YES. By any measure, there are a TON of very happy customers, billions of dollars of revenue, and a growth rate that is phenomenal. Short answer.
Note: some people speculate about “is it a profitable business” – the answer is also YES. Remember that in the JV structure the profits flowed to the parents. Revenue showed up on the parents’ top lines as “product” – so when Cisco and EMC reported earnings, part of the UCS/Nexus growth was the result of customers choosing Vblock (likewise for the EMC components), not reported as “VCE”. The expense was the annual funding events where the parties funded VCE operations. So you would see an expense line from EMC and Cisco, but the profit was embedded in the product numbers.
So – the answer to the question of “VCE and Vblock – doing well in Integrated infrastructure?” - I can authoritatively say that EMC would answer that question as “YES”, which leads us to say DON’T MESS WITH A WINNING FORMULA.
Customers are telling us that the formula of UCS, Nexus, EMC, vSphere and the vRealize Suite – assembled and delivered together is GOOD, and in the segment where “Integrated Infrastructure” is strong (moderate to large scale, workloads that demand rich infrastructure service at every layer of the stack) Vblock is kicking A$$. So – don’t mess with the formula :-) This will accelerate Vblock with exactly that formula.
- EMC will keep buying hundreds of millions of dollars (!) of UCS and Nexus gear from Cisco, the same way the distinct VCE joint venture did previously. That UCS and Nexus hardware is best thought of as part of a “product supply chain” where the “whole product” is a Vblock. I suspect Cisco and Cisco’s distribution will keep selling it versus turning down hundreds of millions of dollars of product revenue :-)
- VCE as part of EMC will keep working with Cisco on a joint roadmap with EMC and VMware, which manifests as a CI roadmap for Vblocks. You would of course expect that from a vendor (Cisco) to a customer (VCE as part of EMC) buying hundreds of millions of dollars of your stuff every year, right?
- VCE will continue to have deep expertise in how to engineer the system, and how to manufacture and ship the Vblock system, because after all – that’s their product that ultimately the customer buys.
The 10% Cisco investment is in effect a validation for customers in the market of the three points above. I’ve sometimes heard people skeptical about technology alliances and partnerships. Frankly, if I were a customer – whether you trust me or not – I’d look closely at the first point and think about it. That is a fundamental business partnership – and it is the reinforcing function.
Done! Simple! The only material difference with the VCE change is:
- Accelerated decision making and action (simpler governing structure for VCE) – this is important, and good.
- An accounting change where the whole revenue booking value will show up on EMC’s books (and there will be an operational expense of buying the UCS/Nexus gear).
Q4: “Is VCE and Vblock doing well in the Common Modular Building Block segment?”
A: NO.
Currently there is no VCE offer in the CMBB segment, and while this segment is currently much smaller than Integrated Infrastructure (think roughly $1-2B vs. $5-10B), it is clear that there is a growing market there. This is the domain of EVO:RAIL, of Nutanix, of Simplivity, of Pivot3, and so on…
There are very few “really small” Vblock customers. This isn’t because there couldn’t be – but rather that Integrated Infrastructure (Vblock or any other example) does not “scale down” well past a certain point. The smallest one would be a half-rack or so. It would cost somewhere north of $250K.
If you are a small customer, or a remote site in an enterprise, you would MUCH rather have an architecture that starts at 2U, a few hundred VMs, a turn-key install in 15 minutes, and a price that would be a fraction of the smallest Vblock.
In those use cases, you simply don’t prioritize things that flow from “can I deploy this enterprise SAP landscape” (rich enterprise data services and networking capabilities). You are A-OK with “I replicate the VM, and I just configure an IP and a VLAN”.
EVO:RAIL is a great, GREAT solution in this marketplace. But, of course, there is NO UCS platform that fits the EVO:RAIL specifications (2U, 4 nodes, internal storage). This means that a VCE Vblock could never be the vehicle for an EVO:RAIL offering.
So – if you’re EMC, and the answer to Q4 is “NO”, and you believe that CI is important, and that there are multiple segments and architectures, you need to do something. And if you want to be #1 in all CI segments, your CMBB CI offer has to be the best. Ergo – EMC must have the BEST EVO:RAIL appliance (the hardware of all EVO:RAIL platforms is the same), which is about integration with EMC support, the global supply chain, and richer backup and replication capabilities. This will GA in Q1.
Now, jump to Q6 as in the flow-chart and ask yourself the question there.
OK, so let’s now continue to the next question…
Q5: “Is VCE and Vblock doing well in the Rack Scale Architecture segment?”
A: NO.
Remember that what we’re hearing is that these “rack-scale architectures” are predominantly used for “designed for failure” applications and next-generation data fabrics that DO NOT EXPECT RICH INFRASTRUCTURE SERVICES. This is that whole “platform 3” thing.
Can you run them on Vblocks? Sure. Is that the best way? Well – if for you “platform 3” workloads are a small sidecar, and you have a ton of “Platform 2” workloads that need a ton of infrastructure robustness, yeah.
BUT – if they are a big part of what you are doing or contemplating doing, putting those workloads on ANY CI solution with rich, premium infrastructure with incredible capabilities for QoS, for resilience, for data services is well… a colossal waste of money.
So – if you’re EMC, and the answer to Q5 is “NO”, and you believe that CI is important, and that there are multiple segments and architectures, you need to do something. And if you want to be #1 in all CI segments, your Rack Scale CI offer has to be the best.
EMC Global Platform Engineering does have 3 “general purpose compute” platforms we use for broad purposes.
These are manufactured by EMC (with OEM partners of course – Foxconn, Intel, and others) – and range from “compute/memory dense modular” on the left (codenamed “Phoenix” – used in EVO:RAIL), to the one in the middle, familiar as a “VNX2 storage engine” but more accurately thought of as a “2U 2N” server, to the one on the right, a 2U 1N server that is storage/memory/compute dense (up to 4 sockets). We don’t sell them “naked”. They arrive in CI forms (EMC’s EVO:RAIL appliance), or as “storage things” like VNX, XtremIO, Isilon etc.
We also have these “GB/IOps blocks”:
These range from capacity oriented (360TB+ per 4U, SAS connected), traditional SSD/HDD performance dense (120 x 2.5” form factor, SAS connected) to the strange beast on the right (NAND IOps machine, PCIe NVMe connected, memory mapped or HDFS/KV store API accessed).
These are analogous to components in the Open Compute Project kit bag – designed to be reliable, but still “commodity”, and to compete in the economic bands where commodity platforms play.
These could clearly be assembled into a fantastic Rack-Scale Converged Infrastructure offering. It could have vSphere/VIO on it, but it could also run OpenStack natively, or even Cloud Foundry dropping Docker containers on physical hardware. It could be used for a dense Cloudera deployment, or the Pivotal Big Data Suite.
BUT – this clearly couldn’t be a VCE Vblock offer.
Q6: “Could VCE’s core competency (leader in engineering, assembling and supporting composite Converged Infrastructure solution stacks) be applicable more broadly?”
A: YES!
This is actually really simple. VCE is 2000+ people who have a TON of experience engineering, building, and supporting real integrated CI stacks. Are they perfect? No. But in talking to Vblock customers – the experience (particularly the high touch, high quality integrated support) is a standout.
Interestingly, while others (think HP as an example) have all the ingredients, VCE is soundly trouncing them in the marketplace. Surely this has something to do with the quality of the ingredients relative to those in a Vblock. But the real reason is that VCE was BORN to support integrated CI. That’s what they do. There aren’t “support silos” in VCE. Their only product (the product they live and die by) is Converged Infrastructure.
Will VCE change the formula of Vblocks? NO – IT’S A WINNING FORMULA. Go back and read Q3 again if you are confused. Vblocks will remain based on V + C + E ingredients. Vblocks will remain an engineered system.
… BUT: VCE as part of EMC could also help accelerate having the best solutions in the industry in OTHER Converged Infrastructure segments (common modular building block hyper-converged and rack-scale platform 3).
Think:
- Prior to this change – there would be people fighting for customers business with Vblock and with EMC’s EVO:RAIL appliance. That’s crazy. Now, one team, building the right solution – and with no “compensation driven conflict”.
- Prior to this change – EMC needed to build up the “integrated support model” for EVO:RAIL, because after it GAs there will be huge volume for our partners. AND that requires fully integrated support for VMware and EMC all as one – no hand-offs. How handy to have the people that do this really well, and have been doing it at scale for years, all in one place now.
- Prior to this change – even if VCE wanted to create a “ScaleIO/ViPR on UCS C-Series” (which would have been possible), that isn’t necessarily the best way to tackle the “COTS, low hardware value, high system value” requirements of Rack-Scale systems.
I think that’s why customers and EMCers and VCE folks are so pumped about this change.
Faithful Virtual Geek readers will realize reading this post, and thinking back to this one, this one, and this one that we’ve been thinking about this for a while. Stay tuned. More to come!
Great insight here Chad,
As a VCE employee I'm really excited about all this. It opens so many doors EMC and VCE would have struggled to open individually.
It now enables VCE to have conversations with customers at every level of the spectrum.
I know it's been said already, but this takes VCE to that next level it would have taken a long time to achieve as a JV.
David
Posted by: David Owen | October 24, 2014 at 01:15 PM
Chad,
Nice post -- high level flowchart summary (executive summary) with optional detailed explanations!
But we probably do want to assure our customers and partners that this VCE+EMC announcement will not impact our ongoing VSPEX investments. There are many more VSPEX architectures/configs being developed, tested and released.
I believe one of EMC's 'super powers' is the ability to give customers CHOICE -- and continuing to give them the choice between VCE, individual platforms (VNX, VMAX, etc.), VSPEX, and other solutions you've described is a testament to how we help maximize the Total Customer Experience, and also continue to develop new solutions that meet their requirements and (hopefully) exceed their expectations.
Posted by: David Lapadula | October 24, 2014 at 03:28 PM
Hi Chad,
Great stuff as always.
I have to mention the "elephant in the room" that is NSX.
I have done a lot of investigation into NSX (please see my thoughts at http://blog.snsltd.co.uk/an-introduction-to-vmware-nsx-software-defined-networking-technology/) and you have to say it looks game changing.
How much impact do you think NSX had on Cisco's decision?
Martin Casado made a great comment (over at http://www.networkworld.com/article/2691482/sdn/vmwares-casado-talks-about-evolving-sdn-use-cases-including-a-prominent-role-for-security.html?page=3) that really stuck with me - "Networks are going to be much simpler and cheaper in the future."
Doesn't sound like something that is in Cisco's best interest to me.
Best regards
Mark
Posted by: Mark Burgess | October 30, 2014 at 04:20 AM
Hi Chad,
I also wanted to get your thoughts on a debate that has been going on over at http://blog.nigelpoulton.com/vsan-is-no-better-than-a-hw-array/.
I understand where Nigel is coming from, but I do not really believe there is any real lock-in with hypervisors or storage - there are just too many options (more thoughts at http://blog.snsltd.co.uk/lock-in-choice-competition-innovation-commoditisation-and-the-software-defined-data-centre/).
What I am also interested in is your thoughts on:
1. EVO:RAIL
I just do not get this -- it looks about as locked-in as you can get, and way too inflexible and expensive for its target market (more thoughts at http://blog.snsltd.co.uk/vmware-evorail-or-vsan-which-makes-the-most-sense/).
I really like what VMware is doing with VSAN and NSX, and what EMC are doing with RecoverPoint for VMs, but EVO:RAIL just does not make sense - or am I missing something?
2. VSAN Kernel Modules v VSA
I completely understand the portability argument with regard to a VSA, but if you were the hypervisor/OS vendor you would not build this as a VM, you would surely bake it into the hypervisor/OS as it will be more efficient (CPU and Memory).
It reminds me of what some of VMware's NSX guys said to me recently:
1. The Kernel Module based Distributed Firewall can do 20Gbs
2. The 3rd party VM based Firewall can do 2Gbs
I think Kernel Modules win the efficiency battle by a fair margin!!!
I know you had some views on this a little while back with regard to ScaleIO.
My understanding was that ScaleIO is Kernel Modules on Windows and Linux, and currently a VSA for VMware, but this would be moving to Kernel Modules soon.
Can you explain why EMC are making these changes for VMware and what the benefits are compared to the old VSA?
Any comments would be appreciated.
Best regards
Mark
Posted by: Mark Burgess | October 30, 2014 at 04:43 AM