I’m not going to bury the lede:
- Software Defined Storage is ready for the majority of x86 workloads (by volume) as hyper-converged infrastructure or pooled storage.
- SANs are best suited for specific workloads that demand very high capacity or performance, or have complex data protection needs.
For some – those two statements will prompt a face-palm, because to them they are painfully apparent.
For others – those two statements will inflame and enrage, because they upset the status quo and their world view.
Perhaps making this more meaningful than the quixotic musings of your neighborhood Virtual Geek – this isn’t just my point of view. This is the official point of view of Dell Technologies, part of which is VMware (which naturally aligns with the above), and part of which is Dell EMC, the #1 provider of external storage arrays. That second part is what lends the statement its heft – it makes it mean much more than if it were just me tilting at windmills.
Leaders innovate, leaders create markets, leaders challenge their own status quo.
Let’s be equally clear – while SDS models power the web-scale clouds, in the Enterprise they are just getting started – the above is a statement of direction, not a statement of the “current state of play”. No one has a crystal ball, but this now dated Wikibon study (from 2012) echoes my observations:
This is corroborated by the negative growth rates of the traditional external storage market. Yes, it’s worth acknowledging the “hot spot” of the AFA market, which continues to grow – but don’t forget, this is mostly substitution – AFAs are taking share in a shrinking external storage market. Remember dear reader – change takes time, is unevenly distributed, but then tends to happen all at once. The AFA and general external storage market is huge – and isn’t going anywhere… at least not overnight.
The trend is also corroborated by the slowing growth of Converged Infrastructure (CI). CI remains “hot” at 2-30% CAGR – a wide swing depending on which CI player you’re talking about. VxBlocks continue to grow at the high end of the range – but a 30% growth rate is still a “slow down” from the 50% and 100% growth rates of years past.
Another corroboration of the trend is the huge growth of vSAN (north of 7,000 customers), ScaleIO (100%+ CAGR) and HCI models like Dell EMC VxRail, VxRack and XC, whose growth rates are measured in multiples of 100% CAGR.
What’s driving this? If you want to understand more dear reader – continue past the break!
The drivers behind what’s happening in the market are not rocket science. SDS models have some inherent advantages that create the first statement (“Software Defined Storage is ready for the majority of x86 workloads (by volume) as hyper-converged infrastructure or pooled storage”):
- Granular scaling model. The scaling model of SDS/HCI is different - which means you can start small, grow, and leverage future commodity economics (something is likely to cost less tomorrow than today).
- Simplicity, particularly at scale. SDS and HCI stacks are generally easier to automate, particularly at scale – which means they have a better OPEX curve. An SDS/HCI model doesn’t always have lower CAPEX than a traditional approach (sometimes it does, sometimes it doesn’t), but factoring in OPEX and this “forward cost of capital” model, the TCO is generally 30% better or more (if this sounds hyperbolic – IDC suggests it’s 50% here).
- Industry standard hardware leverage. SDS and HCI stacks can typically tap into a hardware ecosystem faster – adopting things like NVDIMMs, NVMe, and NGNVM media sooner. This isn’t intrinsic (architectural), but rather a result of the relative degrees of hardware freedom.
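To make the “granular scaling” and “forward cost of capital” argument concrete, here’s a toy model (all numbers – node counts, prices, and the annual price-decline rate – are hypothetical placeholders of my own, not vendor figures) comparing buying five years of capacity up front versus growing an SDS/HCI pool node-by-node as commodity prices fall:

```python
def upfront_cost(nodes_needed, unit_cost):
    """Buy all capacity in year 0, at today's price."""
    return nodes_needed * unit_cost

def incremental_cost(nodes_per_year, unit_cost, annual_decline=0.15, years=5):
    """Buy only what each year needs, at that year's (declining) price."""
    total = 0.0
    for year in range(years):
        # Each year's nodes cost less, assuming a ~15%/yr commodity decline.
        total += nodes_per_year * unit_cost * (1 - annual_decline) ** year
    return total

if __name__ == "__main__":
    up = upfront_cost(nodes_needed=20, unit_cost=10_000)
    inc = incremental_cost(nodes_per_year=4, unit_cost=10_000)
    print(f"Up-front: ${up:,.0f}  Incremental: ${inc:,.0f}  "
          f"Saving: {100 * (1 - inc / up):.0f}%")
```

Under these made-up assumptions the incremental buyer spends roughly a quarter less for the same eventual capacity – the point isn’t the exact number, it’s that deferring purchases into a declining price curve is a structural advantage of granular scaling.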
Now – there are certain things that IMHO are not myths, but real SDS challenges/gaps today – and they create the second statement (“SANs are best suited for specific workloads that demand very high capacity or performance, or have complex data protection needs.”). As a reminder, this is about where external storage arrays and traditional SANs will continue to play – though as you can see from what I’m saying here, it’s a consolidating market.
- Capacity density. Since SDS/HCI are deployed on industry-standard server platforms, even the densest amongst them (e.g. the Dell EMC PowerEdge R730xd) are still nowhere near as dense as purpose-built arrays and enclosures. While OVER TIME it’s likely that server architectures will pull more and more toward “fit for purpose” (think of optimizations towards SDS, or even SDN…) – today, nothing can beat the density of purpose-built bit buckets. Note – it’s not that SDS/HCI models cannot be DENSE, and cost-effective from a density standpoint (particularly with deduplication, compression and erasure coding), but rather the rare air of the “most dense you can get” is reserved for architectures built for purpose.
- Performance – specifically extremes of latency variability. Since SDS/HCI by definition use some form of distributed software stack (data locality or not), and are connected via a general-purpose fabric – they generally fall into a “Type III” architecture (aka “loosely coupled scale out”) in this storage architecture taxonomy. SDS models like this can deliver SMOKING total system bandwidth/IOPS (almost nothing on earth does this better than ScaleIO, as an example), but they cannot match the latency – and more importantly the latency consistency – of a “Type II” architecture with a distributed cache and shared memory fabric. If a few hundred microseconds of variation, or maybe a few milliseconds of delta, freaks you out – then, yup, you have a workload that will likely be on an architecture built for purpose. Note – it’s not that SDS/HCI models cannot be highly consistent or performant – in fact they can cover the majority of workloads very well – but rather the rare air of the “lowest, most consistent latency you can get” is reserved for architectures built for purpose.
- Very complex, very specific data services. If someone wants 3-site replicas, with thousands of consistency groups, with a wide range of sync <-> async topologies… well, the SDS stacks don’t do this, at least not yet. Since all storage arrays at their core are software – it’s not intrinsic that they could not. The core question is: if you were developing an SDS, would you prioritize trying to replicate the SRDF capabilities of a VMAX3 or other highest-end enterprise arrays? I sure wouldn’t. Why? For starters, because that’s a small set of workloads by count (they are very important to customers, though) – and perhaps the hardest set of use cases to target. Furthermore – as people build new apps, they are mostly architecting resilience into the application layer. Lastly, products like Dell EMC RecoverPoint and its competitors are increasingly embracing the SDS option – bringing enterprise replication to the world of SDS and HCI models (if not the very most extreme use cases). Note – it’s not that SDS/HCI models cannot be resilient and have replication capabilities and other data services – in fact they can cover the majority of data services requirements for most workloads very well – but rather the rare air of the “most complex data services” is reserved for architectures built for purpose, and very mature software stacks.
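On the capacity-density bullet, the efficiency side of the trade-off is just arithmetic. Here’s a small sketch (my own illustration, not tied to any specific product) of usable capacity under 3-way replication versus a 4+2 erasure-coded layout – the kind of data-reduction lever that helps SDS/HCI stay cost-effective even without purpose-built enclosures:

```python
def usable_capacity(raw_tb, scheme):
    """Usable capacity for a given protection scheme.

    scheme: ("replica", copies) or ("ec", data_fragments, parity_fragments)
    """
    if scheme[0] == "replica":
        # N full copies: usable is raw divided by the copy count.
        return raw_tb / scheme[1]
    if scheme[0] == "ec":
        # k data + m parity fragments: usable fraction is k / (k + m).
        k, m = scheme[1], scheme[2]
        return raw_tb * k / (k + m)
    raise ValueError("unknown scheme")

raw = 100.0  # TB raw across the cluster (hypothetical)
print(usable_capacity(raw, ("replica", 3)))  # 3 copies -> ~33.3 TB usable
print(usable_capacity(raw, ("ec", 4, 2)))    # 4+2 EC   -> ~66.7 TB usable
```

Doubling usable capacity from the same raw pool is why erasure coding (plus dedupe and compression) narrows, but doesn’t fully close, the density gap with purpose-built arrays.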
Netting it out:
- If you add up the 3 strengths and 3 weaknesses of SDS and HCI models you end up with the simple two sentences at the top of this post.
- You also end up with a pretty clear conclusion that SDS and HCI models can cover the majority of workloads (by volume/count), while the place for SANs and CI is increasingly directed towards important (but not the majority by volume/count) workloads.
Q: What do those statements mean for customers?
A: As you are evaluating your infrastructure paths going forward for workloads that won’t go SaaS or off-premises for a variety of reasons… You should START by assuming that SDS and HCI models will work, and then find exceptions – rather than designing for the exceptions.
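That “start by assuming SDS/HCI works, then find exceptions” posture can be sketched as a simple screening function. The keys and thresholds below are hypothetical placeholders of my own (not Dell EMC guidance) – the point is the shape of the decision: default to SDS/HCI, and flag only the three documented exception cases:

```python
def placement(workload):
    """Default to SDS/HCI; flag only the documented exception cases.

    `workload` is a dict; the keys and thresholds are illustrative only.
    """
    exceptions = []
    if workload.get("needs_max_density", False):
        exceptions.append("extreme capacity density")
    # Hypothetical cutoff: needing sub-500us latency consistency.
    if workload.get("latency_variability_us", 1_000) < 500:
        exceptions.append("extreme latency consistency")
    if (workload.get("sites_replicated", 1) >= 3
            or workload.get("consistency_groups", 0) > 1_000):
        exceptions.append("very complex data services")
    if exceptions:
        return "external SAN / purpose-built array", exceptions
    return "SDS/HCI", exceptions

target, why = placement({"latency_variability_us": 200})
print(target, why)  # flags the latency-consistency exception
```

Note how the default branch is SDS/HCI – the burden of proof sits on the exception, which is exactly the inversion of the traditional “design for the SAN, justify anything else” habit.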
You may be asking yourself - why this forceful black-and-white statement and position, in such a public way?
In the last weeks of 2016 – directed by Michael Dell, Pat Gelsinger, and David Goulden – my partner at VMware (Ray O’Farrell) and I spent a lot of time contemplating what was happening in the SDS and HCI markets, and our fundamental and strategic position.
Ray and I had been discussing this since the early days of 2015 – and what was clear is that we have a winning hand, but that we could do better at playing the hand.
We have the leading transactional SDS portfolio in vSAN and ScaleIO – two very, very strong pieces of intellectual property (IP). Even back in late 2015 it was clear they were starting to really ramp, and customers were digging them both - vSAN and ScaleIO were accelerating. Now in 2017, this has ramped considerably.
Furthermore, we have the leading x86 server platform in Dell EMC PowerEdge. There are already platforms in the PowerEdge family that power the bulk of enterprise SDS/HCI – the R630 and R730, and dense storage configs in the R730xd. The C6320 and FX2 cover additional use cases. And there’s the upcoming wave of Skylake, Kaby Lake, Purley, NVDIMMs, NVMe and NGNVM (3D XPoint and other ongoing NAND improvements) – which you can bet we will lead.
Winning this next phase of IT requires incredible software IP first, but ALSO an incredible x86 server supply chain.
So – we have a great stacked deck with great properties… Yet – we were spending a lot of time fighting internally. When should a customer go vSAN? When should a customer go ScaleIO? Wait – shouldn’t we protect the external SAN market? What about the fact that HCI cannibalizes a huge CI market (Vblock/VxBlock) where we are the 65% market share leader?
When there is a disruptive change – the strategy and position needs to be clear, a call for all the employees in equally bold clear language. It was time for Dell Technologies to take a clear position – and a position that is consistent across the board.
Ray and I started to work with the broader Dell EMC and VMware vSAN/ScaleIO teams – and here at the start of 2017, at the VMware worldwide kick-off and the Dell EMC Field Readiness Session, everyone is hearing the same thing from the leadership.
So… dear reader, what does this mean?
- Clarity. If people from Dell Technologies (whether they are part of Dell EMC, VMware, or others) disagree with the above, they are on an island by themselves – tell them to drop me an email, and I’ll share the memo.
- Focus. You can expect to see us continue to lean in on SDS and HCI, even as we continue to extend our position in our external storage/SAN offers in the high-end, mid-range, and AFA efforts, including in Converged Infrastructure forms. I want to be clear – we lead in the external storage/SAN and CI markets today (which will be much larger than SDS and HCI for years to come) and we will continue to lead those markets. We view this as an “and”, not an “or”. After all, even in markets that are shrinking, there are parts that are booming (AFAs as an example), and it also means it’s time for market consolidation. As the leader in external storage in all forms, and the leader in CI, HCI, and SDS, we have the sticking power customers need. Expect a lot of market consolidation to occur over the coming years.
- Direction. We are clearly stating where we think the world is today, and where it is going.
This is the first of a four-part series, where I’ll explore this further – including the two types of SDS approaches + the three consumption vehicles; navigating the decision trees; putting it all together + what’s next.
As always dear reader – curious what YOU think… What’s YOUR point of view, and are you investigating/using SDS/HCI models? Comments welcome – but also share your actual point of view statistically… I would love input via this survey! I’ll leave it open for a month, and share all the results!