[UPDATE, August 3, 2014, 12:59pm ET] – a couple of things to fix near the bottom of the post…
Today – we could finally share with the market how well XtremIO is doing. I was keeping this quiet until it was public (no one wants to get in trouble with the SEC :-)
From our Q2 results (here):
“EMC has established clear leadership in the all-flash array market with XtremIO, surpassing a $300 million annualized demand run rate in its second full quarter of availability. ViPR adoption continues with the number of customers doubling in the second quarter compared to the first quarter of 2014.”
Wow. By any measure – that’s huge. And I know EXACTLY where this is happening – and it’s not even happening evenly around the globe yet. As EMCers and EMC partners get more consistent and confident in other regions – well, suffice it to say, this is an exciting part of the market.
There was a lot of “naysaying” from others:
- “XtremIO is missing data services”. Answer: we knew the architecture was right for the enterprise (centered on the distributed in-memory metadata and internal object-like approach), and felt confident the engineering team could deliver on their aggressive feature roadmap – and they have. XtremIO now has great snapshots, data-at-rest encryption, and has doubled its already great performance (free, via software!) – and compression that is as good, as constant, and as “always on in all ways” as our dedupe is right around the corner (people are already using it!). Add in things like ViPR Controller support (soon) and ViPR SRM support (in now). You can’t change an architecture. You CAN add features.
- “People don’t need scale-out”. Answer: umm – just under half the customers buy more than 2 X-Bricks – so that’s flat out wrong. Also – as shown in this demonstration – cramming in more and more capacity (via snapshots, dedupe, and more) is useless if you can’t deliver performance against that data – and data services like snapshots and dedupe don’t span “type 1 clustered architectures” when you simply layer on a veneer of “movement and management”.
- “It doesn’t really scale-out”. Answer: We just added the ability to go to 6 X-Bricks (which is 12 nodes) – and the ability to start even smaller, with a 5TB X-Brick. 8 X-Brick support is around the corner. Remember all – XtremIO is a “type III architecture” (defined by a shared memory space), so it’s a tightly-coupled scale-out design. These never scale out to “hundreds of nodes” like a loosely coupled cluster – but on the flip side, they have insane “constant linearity”. The dynamic ability to add and remove nodes is right around the corner.
- … Oh, and there’s more in the roadmap I can’t say publicly.
Look – I’m not saying XtremIO walks on water :-) We need to keep up work on native RecoverPoint support (today, many customers are also using VPLEX with RecoverPoint for stretch clustering and DR). We could improve some large-IO performance use cases. We need official vCOPS support (an unofficial, but still awesome, version is here). There’s of course vVol support as VMware releases that, etc…
…But – wow. I’m not a market analyst, but that’s got to be a huge chunk of the AFA market. XtremIO is (still) a relatively new product (we’re talking about 6 months of general availability!) in a market where others have been there for years. Our win rate vs. the other players when we compete makes me feel almost guilty (we are looking for some additional XtremIO Specialist SEs, BTW).
Oh – if you’re thinking that because XtremIO is new it’s not “hardened” – it has some of the best customer satisfaction and Net Promoter Scores (NPS) of anything we do. Customers LOVE IT.
Also – from out in the field (dated July 17th; I wanted to hold off until after we announced results, because people could have read into my enthusiasm and that of the XtremIO team):
“XtremIO hit a very significant milestone yesterday. After months of pounding with a very heavy workload, our first array in the field hit 1% wear (meaning 99% of the endurance still remains) on the SSDs. The lucky array is owned by Ahead, so congratulations to them for beating the snot out of XtremIO more than anybody else. They managed to pound the array with 35TB of unique writes per day – as measured at the SSDs after dedupe.
As you would expect with our inherently balanced architecture, all the SSDs clicked over to 1% wear together.
And hot off the presses, Ericsson has done the same thing with their array today. It took 2.6PB of total writes to the array, but they are now also at 1% wear.
Every other array in the field is still at 0% wear / 100% endurance remaining, many with well over a petabyte written to them. Projecting out this level of endurance means that our arrays can last for decades. Remember, this isn’t just about cMLC vs. eMLC, it’s about our superior architecture, content addressing, 100% inline data services, in-memory metadata, and XDP – they all contribute to making flash last longer.
When we say endurance is NEVER a concern on XtremIO, we mean it.”
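The “decades” projection in the field report above is easy to sanity-check with the two figures quoted (2.6PB of writes per 1% of wear, and a worst case of 35TB of unique writes per day). A minimal back-of-the-envelope sketch, using a simple linear extrapolation of wear (an assumption, not an XtremIO spec):

```python
# Sanity-check of the endurance projection quoted above.
# Inputs are the figures from the field report, nothing more:
TB_PER_PB = 1000  # decimal units, as storage capacities are typically quoted

writes_to_1pct_wear_tb = 2.6 * TB_PER_PB  # Ericsson array: 2.6 PB -> 1% wear
daily_unique_writes_tb = 35               # Ahead array: 35 TB/day, post-dedupe

# Assume wear accrues linearly: 1% per 2.6 PB implies 100x that to exhaustion.
total_endurance_tb = writes_to_1pct_wear_tb * 100  # 260 PB

days_of_life = total_endurance_tb / daily_unique_writes_tb
years_of_life = days_of_life / 365

print(f"Projected total endurance: {total_endurance_tb / TB_PER_PB:.0f} PB")
print(f"Lifetime at 35 TB/day of unique writes: {years_of_life:.0f} years")
```

Even under the heaviest observed workload, the straight-line projection comes out at roughly two decades of write endurance – consistent with the claim in the quote.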
It’s really fun to bring something awesome to customers.
I’ve said it many times, and I will keep saying it. If you have workloads that need low latency, and need consistently low latency – with rich data services, and really cost-effective flash, but DON’T need things like:
- mainframe support, or huge scale of devices (think tens of thousands) [Update: I want to be clear – while to many customers this seems “impossibly huge”, there are MANY customers that expect their storage platform to support thousands/tens of thousands of devices. It’s just not in the “mid market”]
- insane replication requirements (think thousands of consistency groups, async replication with seconds of RPO, sync replication over tight SLAs) [Update: again, clarity needed here. “Insane” is not meant to mean “crazy” or “inappropriate”. It’s rather meant like “that 1080 inverted with a twist was INSANE!” – in other words, complex, high-end. These are very, very real requirements for some use cases, some applications – and these applications often run huge parts of the world around us. Often these applications cannot be re-written to do “availability in the application stack”, so they are stuck with requiring infrastructure to meet the SLA. They are also replication requirements that not a single one of the new-generation AFAs can meet. None. If you have these requirements, and cannot re-architect your application – you must look at the “Enterprise array” category, and look towards all-flash configurations.]
- corner cases of data integrity (T10DIF as an example)
- also looking for storage in the <$3/GB bands for colder workloads on that same platform (though I would argue…
…Well, if you don’t need that list for some of your workloads, you’re simply missing out if you “stick with” things like EMC VMAXes, 3PAR, HDS and others “just because”. Don’t get me wrong, for some workloads the list above isn’t a “nice to have”, it’s a MUST have – and that’s not the AFA market (none support that list), and for those workloads I think the new VMAX3 is awesome. I see more and more customers “getting this” – the era of the “hyper-consolidated storage platform” has shifted to an “automated, abstracted layer above architectures that fit the workload variety.” Hence our portfolio, hence the importance of the ViPR Controller.
If you need something focused at transactional workloads, and don’t have the requirements above – you should be looking at an AFA. If you’re looking at AFAs, you should look at EMC XtremIO. This is a message for my EMC brothers and sisters, our customers, and our non-customers. Disrupt yourself.