Flash is changing everything about storage. Sounds like hyperbole? It's not. Sometimes people don't "get it". Flash is like someone coming along and saying "I have a new car on the market. It's pretty awesome. It fits in your pocket, but can still get you around. It gets 1,000 mpg. Oh, and it can go 10,000 mph." That would be a pretty radical inflection point, no?
To understand the detail behind this (with crazy awesome demos!), read on!
Again, my analogy may sound hyperbolic, but it's no exaggeration. That's flash relative to magnetic media. Yes, the $/GB is still higher, but it's getting closer every day. For some workloads, $/GB pales compared to $/IOPS, or the relative value of a few hundred microseconds per IO. Sometimes density and power dominate.
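To put (very rough) numbers on that claim - and to be clear, these are illustrative street prices I made up for the sketch, NOT EMC quotes - compare a 15K spinning drive to an eMLC SSD:

```python
# Back-of-envelope math: why $/IOPS can matter more than $/GB.
# ALL numbers below are illustrative assumptions, not real pricing.

drives = {
    #              (capacity_gb, iops,   price_usd)
    "15K SAS HDD": (600,         180,    400),
    "eMLC SSD":    (400,         25_000, 1_600),
}

for name, (capacity_gb, iops, price) in drives.items():
    print(f"{name:12s}  ${price / capacity_gb:6.2f}/GB   ${price / iops:8.4f}/IOPS")
```

Run that and the SSD looks roughly 6x worse on $/GB, but roughly 35x better on $/IOPS - which is the whole point.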
That's why any hybrid array player out there (and hybrids do still tend to be the best architecture for meeting a broad set of workloads) that isn't EVIDENTLY and RADICALLY re-architecting for more and more flash in their arrays is going to be in a world of hurt. If anyone tries to tell you that they "happened to already be architected for flash", they're high as a kite (if they really believe that themselves). In EMC VNX/VMAX land, we're seeing the mix increase from 2-5% flash to 10-20% being common. And it's required radical re-architecture of the software and hardware stacks. There are new hybrid storage players (think Nimble as an example) that are also embracing this spot. "High flash percentage" is the design point of the new VNX (see more on this HERE). It's the design point of the next-generation VMAX.
BUT - is that enough? Nope. The players who are approaching this new technology not simply as a "new ingredient", but ALSO as "a way to build something totally new" are doing things that the Hybrid arrays will simply never be able to do. In the early days, these will be best applied to focused use cases - but YUP, you should expect them to fight the Hybrids for general purpose workloads over time. Within EMC, our philosophy is "bring it on!" For all the big storage vendors, this is a critical "build vs. buy" moment - sitting back is a NON-OPTION. I would posit that ONLY building an "ALL FLASH" version of your existing platform is also a BAD OPTION (in the end, customers will decide).
We looked long and hard before acquiring XtremIO (and I would also posit that big players will struggle to do a successful "build" play here - the internal "antibodies" from the mature stacks are enormous) - and they were, in our view, far and away the strongest player. TRUE scale out. Data services like Dedupe and Thin - ALWAYS ON. Linear performance - ALWAYS. These are huge in general, and in the EUC use cases, they are enormous. We have been shipping and selling XtremIO through directed availability for months now. After evaluation, customers pick it north of 90% of the time. Even though it's still just directed availability, many customers are FORCING US (seriously) to let them buy :-)
Does “Directed Availability” mean “not ready”? NOPE. After all – the ENTIRE VMworld 2013 Hands-on-Lab runs, in effect, on Cisco UCS and XtremIO (with some stuff on older VNX7500s). That’s 480 workstations, hundreds of thousands of IOPS, tens of thousands of VMs being created and destroyed – all deduped in real-time. You can read VMware talking about it in advance of the show here. There’s also a mid-show post with some stats here. The UNMAP process runs at 5GB/s (!). My FAVORITE quote from the VMware team running the show: “these arrays are bored” :-) I will do the traditional “post-show stats” post….
So why not GA? Storage stacks, as Pat said in the opening general session, “are hard”. We really, really want to beat on it before we open the gates – because it will be so popular, there will be no “stop for a second” :-)
Why will it be so popular? Well, beyond the VMworld HoL example, check out this demonstration:
You can download the high-rez version of this video here (warning, BIG 300MB file)!
Better yet, in the words of a customer... Here's an example of the power of an all-flash array, and server flash (the customer is one of the world's biggest retailers).
I'm going to blank out the name of the other folks mentioned here in the spirit of "never go negative".
Regarding their VDI deployment (a large-scale, 10K+ user deployment):
- Before XtremIO:
- _____ _____ did not have sufficient performance to support the VDI environment
- VDI desktops took 4-5 minutes to start up/reboot
- VDI environment was slow and virtually unusable; they needed better performance
- After XtremIO:
- six X-Bricks across two datacenters during Directed Availability, adding three more X-Bricks per datacenter after General Availability when HA becomes available
- easily delivering 260K read IOPS, 180K write IOPS
- Single X-Brick supports 3,500 VDI desktops with 5-second reboots instead of the 4-5 minute reboots with ______ gear (a quick back-of-envelope check follows this list)
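For the curious, here's that back-of-envelope check - a sketch only, where I'm assuming the 260K/180K IOPS figures were measured across all six X-Bricks, and rounding freely:

```python
# Rough sanity check on the customer's numbers above.
# Assumption: the measured IOPS were spread across all six X-Bricks.

read_iops, write_iops = 260_000, 180_000   # customer-reported figures
x_bricks = 6
desktops_per_brick = 3_500

iops_per_brick = (read_iops + write_iops) / x_bricks
iops_per_desktop = iops_per_brick / desktops_per_brick
print(f"~{iops_per_brick:,.0f} IOPS per X-Brick, "
      f"~{iops_per_desktop:.0f} IOPS per desktop during the storm")
```

That works out to roughly 21 IOPS per desktop sustained right through the boot storm - which is why the reboots collapse from minutes to seconds.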
The best part is the customer's own quote: “You guys are light-years ahead of the competition.”
- INX will also be sharing their customer (Texas Community College District) in EUC5858 (Wednesday, Aug 28, 11:30 AM - 12:30 PM, Moscone West, Room 2010) in the schedule builder here.
- We’ll be sharing the effects of XtremSF and XtremIO with Oracle in VAPP5180 (Monday, Aug 26, 11:00 AM - 12:00 PM) in the schedule builder here.
Our view is that it’s NOT all about Hybrid arrays, or even All-Flash Arrays. Server flash plays an increasing role every day. Today the use cases are mostly DAS and extreme low-latency use cases, but technologies like PernixData (caching, IO restructuring) and vSAN and ScaleIO (software distributed storage stacks) will make this a very interesting space to watch.
EMC XtremSF is our PCIe offering, and XtremSW is the software for management, caching, and more. We're now on version 2.0, which has all sorts of goodies: larger capacities, integrated management with Unisphere, and the beginnings of integrated hinting/tagging from the host to the array in some use cases. Here's a demonstration of how simple it is to use with vSphere:
You can download the high-rez version of this video here!
Here's another interesting angle. In testing the top-secret next-generation VNX platform, we ran it with and without XtremSF/SW acting as a cache. Adding a tiny amount of XtremSF/SW caching reduced the load on the array by half, and almost doubled the transactions per second of OLTP workloads.
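The arithmetic behind the "half the load" result is simple enough to sketch. To be clear, the workload mix and hit rate below are assumptions I picked to make the math illustrative, NOT the actual test configuration:

```python
# A sketch of the cache-offload arithmetic (assumed workload mix and
# hit rate, not measured VNX data): if a host-side cache absorbs a
# fraction of the reads, the array only sees the misses plus the writes.

total_iops = 100_000   # hypothetical OLTP load arriving at the host
read_ratio = 0.70      # assumed read-heavy OLTP mix
hit_rate   = 0.70      # assumed XtremSF/SW cache hit rate on reads

reads  = total_iops * read_ratio
writes = total_iops - reads
array_iops = reads * (1 - hit_rate) + writes   # misses + writes hit the array

print(f"Array sees {array_iops:,.0f} of {total_iops:,} IOPS "
      f"({array_iops / total_iops:.0%} of the original load)")
```

With those assumed numbers, the array sees about 51% of the original IOPS - right around "half the load".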
Moral of the story: if you're a customer buying a storage array, push your salesperson to include a little bit of EMC XtremSF/SW - it will make your array go further - and if that array happens to be EMC, you get the bonus of integration from a management standpoint.
Here's that same customer again (they have thousands of retail locations where they want to use VSA-style storage models on a couple of hosts), now evaluating running those VSAs on EMC's XtremSF:
"We are losing a tremendous amount of performance due to network bottlenecks and latency introduced by the _____ VSA. Still, the performance of the XtremSF Flash cards is much better than spinning disks, and represents a very affordable alternative to external storage arrays when counting space and power.
Our use-case is somewhat unique (and very complex), but it does work while keeping everything within the hosts. We can add/remove hosts to the cluster without disrupting the VM guests or the Storage VSAs.
I did not get an opportunity to test the ______ Virtual Storage application solution, but I would expect performance to be quite a bit better. _____ is limited to two hosts, but they use a much more efficient checksum strategy."
“The XtremSF Flash cards are not even flinching. So far, with spinning “rust”, ______ can’t go much past 16MB/Sec read or write. With the XtremSF Flash cards and ______, we’re doing over 200MB during multiple (12) Windows Server 2008 Installs. We’re at the limit of the two disk RAID1 array serving the ISO file.”
“With the other ______ PCIe Flash cards, you must sacrifice a chicken and dance nude in the moonlight to get these cards installed and running correctly. The XtremSF Flash cards were much easier to work with.”
Oh, and of course, we have all sorts of things under evaluation in the crazy EMC R&D labs, from new forms of NAND Flash to entirely new persistent DRAM models, and more.... It's a fun and disruptive time in storage land!
Are you using Flash today? In a Hybrid array (as a cache, a tier, or both), in an All-Flash Array, or in server-side flash use cases? What do you use, and what do you see as a result? Share!!!