You know how, when you hear your own company say something over and over again, sometimes you can still have your own doubts?
I remember the first time Donatelli (formerly at EMC, now at HP) told me: “in 2-3 years, there will be no place for a high-performance rotating rust disk, we’ll only have solid state and very large slow magnetic media” (that was about 1 year ago).
I was a bit skeptical.
I was also talking to a customer about it yesterday, and we debated the cross-over points (not whether it would happen, just the timelines).
This just clears away any of my skepticism.
So – this article is on Anandtech – one of my favorite non-storage-related IT sites. If you don’t want to read the whole article, jump to this page.
Or – let me summarize it for you. Read on if interested, or if after reading the article the “tada!” epiphany doesn’t jump right out at you.
Here’s the standout thing in the article:
- Kingston 40GB drive (the purple line on the chart from Anandtech), based on the 34nm Intel X25-M
- Price: $115 before rebate ($85 after mail-in rebate, but I’m not going to assume that)
- delivered 4,000 4K random write IOps.
- delivered 7,500 4K random read IOps.
Don’t see the big deal? First – look at the odd ones out on the chart. Those are the fast-spinning magnetic media disks.
Still don’t get it? Ok, let me make a comparison. Sure, a 1.5TB 7200 SATA drive can be bought for $115. But it will do 80 4K random write IOps.
So:
- Expressed as GB/$ = the SATA drive is a 38x better deal
- Expressed as IOps/$ = the SSD drive is a 50x better deal for write IO workloads (for reads, it’s 93x better). The SSD delivers 34 IOps per dollar; the SATA drive delivers 0.69 IOps per dollar.
To match the random read IO performance of that $115 Solid State disk, you would need 50 of the 1.5TB SATA disks.
But surely, if you were looking for performance, you wouldn’t use the SATA disk, right? You would probably use a 15K RPM FC disk. Those cost about $1,000, and they do about 200 random write IOps. So you would need 20 of them to do what that $115 SSD can do. That’s 0.2 IOps per dollar – or 170x more expensive than the SSD on an IOps/$ basis. Oh, you think 15K SAS drives are a better deal? They are – than FC disks. A 15K SAS disk on Pricewatch costs about $210, and they also do about 200 IOps. That’s 0.95 IOps per dollar – or 37x more expensive than the SSD on an IOps/$ basis.
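If you want to sanity-check that math, here’s a quick back-of-envelope sketch in Python (the prices and IOps figures are the rough 2009 street numbers quoted above – illustrative, not vendor specs):

```python
# IOps-per-dollar check for the drives discussed above.
# name: (street price in USD, random 4K write IOps)
drives = {
    "Kingston 40GB SSD": (115, 4000),
    "1.5TB 7200 SATA":   (115, 80),
    "15K RPM FC":        (1000, 200),
    "15K RPM SAS":       (210, 200),
}

ssd_price, ssd_iops = drives["Kingston 40GB SSD"]
ssd_value = ssd_iops / ssd_price  # ~34.8 IOps per dollar

for name, (price, iops) in drives.items():
    value = iops / price
    print(f"{name:>18}: {value:5.2f} IOps/$ (SSD is {ssd_value / value:5.1f}x better)")

# GB/$ still favors the big SATA drive, of course:
print(f"SATA {1500 / 115:.1f} GB/$ vs SSD {40 / 115:.2f} GB/$ -> {1500 / 40:.0f}x better")
```

Same $115 – wildly different value, depending on which axis you care about.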
Also – all of the “more for less” math above expresses acquisition cost only. Solid state is also orders of magnitude better in power efficiency and density.
Now, sometimes performance is focused on bandwidth (MBps) rather than IOps – usually for sequential IO workloads, which are very common in backup-to-disk use cases. In those cases, spinning rust does OK, and faster pipes (10GbE, for example), big SATA, and fantastic dedupe are the “efficiency technologies” that enable you to get more for less (we do them all, of course :-)
For most production workloads in the 4K to 64K average IO size range, performance tends to be gated by IOps.
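To make that IOps-vs-bandwidth distinction concrete, here’s a tiny illustrative sketch (round numbers, not benchmarks) showing why a small-block random workload saturates a disk’s IOps long before it saturates any pipe:

```python
# Throughput implied by a given IOps rate at a given IO size.
def mbps(iops, io_size_kb):
    return iops * io_size_kb / 1024

# A 7200 RPM SATA drive doing ~80 random IOps barely moves any data:
for io_kb in (4, 8, 64):
    print(f"{io_kb:>2}K IOs at 80 IOps = {mbps(80, io_kb):5.2f} MBps")
# -> roughly 0.3, 0.6, and 5 MBps – versus ~100 MBps streaming
# sequentially. The head's seek time, not the interface, is the gate.
```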
Ok – fast-forward this just a LITTLE bit. A 32GB commercial MLC-based SSD cost $900 near the beginning of this year. Now, it’s 10x cheaper.
Today, there’s only one manufacturer that has a LOCK on the “enterprise” SSD (or Enterprise Flash Disk) market – a company called STEC – and we can put their drives in all our arrays. We’ve sold out of them for the last 6 quarters. I also got some great customer feedback after I first observed that (this is from a customer who uses EMC arrays with solid state right now). From DaveFW on Twitter:
“RT: @sakacc re:SSD Its had a HUGE impact on our production oracle database already”
“RT: @sakacc re:SSD we had jobs that ran 2 days take 10 hours now with no impact to users”
But today, without automated tiering, and with the $/GB still skewed against SSDs, they are reserved for applications that have low capacity but high IOps workloads. These aren’t unusual (they are everywhere), but they aren’t universal.
There is more to the story, however. EMC is one of Intel’s biggest (in fact, I believe THE biggest) non-server OEMs. We use Intel’s CPUs everywhere, and are now nicely riding the Intel Nehalem roadmap. Pat Gelsinger being here now also helps. I can tell you for a fact that the Intel flash folks are consistently making progress towards enterprise-class SLC. Samsung is pushing hard too. BTW – I’m assuming no “Star Trek” technology like phase-change memory; this will happen with the basic technology that exists today.
What will it mean when an SSD has the same $/GB and 100x better $/IOps than a 15K 500GB SAS disk?
I no longer doubt what Donatelli said.
I also don’t doubt that the arrays of the future will all need the ability to leverage a combination of very large SATA and very large solid state storage. The curve on this is a classic example of the innovator’s dilemma: a disruptive technology coming into an existing market. The external forces will simply be too strong to ignore.
Exciting times in storage land – exciting times!
I’ve got to get myself one of those Kingston drives!!!!
It would be fun to get input from people on when they think the inflection point – the point where the volume of solid state shipped in enterprise servers and enterprise storage exceeds that of high-performance spinning media – will occur… Please, comment!
A $115 40 GB flash drive certainly gets my attention! Man, those prices are coming down faster than any of us had any right to expect.
Thanks for sharing!
Posted by: Chuck Hollis | October 30, 2009 at 09:08 PM
Chad,
Great info, as always!
I'll bet non-direct costs will pull the transition to BEFORE the point where acquisition costs are equalized between the media... Things like heating/cooling savings, needing a much smaller UPS, being able to use flash RAID5 at much higher performance than disk RAID10 (and thereby needing less raw GB), applying dedupe technologies to further reduce the raw storage required for production or backup space, etc.
I'll make a guess that the transition time for when more net new SSDs are sold than FC/SAS drives is late-2012. Can't compare to all disk, if SATA is still used for high density storage.
Posted by: David Lapadula | October 31, 2009 at 12:14 AM
The whole game will change for EFDs with sub-LUN FAST....
http://thestorageanarchist.typepad.com/weblog/2009/09/2023-the-future-of-flash-is-fast.html
Posted by: Matt Proud | October 31, 2009 at 08:32 AM
And here I was thinking, when I bought my Kingston 128GB drive for ~$200 about 6 months ago, that it was a good deal.
These do sound pretty sweet - makes me want to pick up one of those mini-NAS rigs and swap all the disks out for SSDs, to truly calculate the max performance you can get out of them.
This model really makes me wonder if others will adopt it in a deep IO penetration model, akin to how UCS handles the same with its memory architecture..
Posted by: twitter.com/CXI | October 31, 2009 at 03:20 PM
I've been stopping by your blog to learn more about storage. A friend at work (Cisco) mentioned how fast his laptop runs on Windows 7 using an Intel X25-M Gen 2 SSD. I was not a believer until I read your site and the Anandtech article.
Posted by: broderick pride | November 01, 2009 at 10:46 AM
One more thing... how will SSD impact storage networking? Will this drive a need for bigger, fatter pipes... FCoE over 40Gbps or 100Gbps? How does SSD change SAN architectures? Remember... I'm new to storage. :o)
Posted by: broderick pride | November 02, 2009 at 07:19 AM
We have 11 400GB SSD drives in our CX4-960 right now testing our production CIS database, and we have seen an unbelievable reduction in our batch processing time. We just ran a month-end on Saturday that took 3 days on RAID 1 15K on a DMX3, and it ran in 10 hours on RAID 5 SSD on a CX.
Posted by: david robertson | November 02, 2009 at 04:04 PM
What about the limited write endurance of such disks, ranging from 10,000 to 100,000 write cycles for high-end SSDs?
How does a V-Max deal with that limitation?
What happens when you get close to the limit?
Thx,
Didier
Posted by: PiroNet | November 03, 2009 at 10:15 AM
About the 10K to 100K write cycles for SSDs – this is an interesting article:
http://www.storagesearch.com/ssdmyths-endurance.html
the "10 year limit" seems to be bulls***
Posted by: Marcos Janzen | November 04, 2009 at 05:23 PM
PiroNet - that is absolutely NOT correct (as Marcos points out). "flash" as a product category spans a wide band.
The Enterprise SSDs we use (SLC) have individual cells that are good for millions (in fact, tens of millions) of read/write cycles.
In addition, the Enterprise SSDs have a large amount of extra over-provisioned space - ranging from 20% to 100% more actual cells than the rated capacity.
In the end, they have the EXACT same warranty from EMC that our FC disks do - and we don't do that lightly.
We have been selling out of them for 18 months now, and as you can see from Dave Robertson's comment (a customer comment that I didn't solicit - but thank you, Dave!), they are being used in VERY IO-intensive operations.
Long and short - "short lifespan" = FUD. Also, the performance degradation that the early commercial SSDs suffered from (now resolved via TRIM, as Anandtech points out) did not affect Enterprise SSDs. This is one of those "Intel comes and visits EMC every week with the latest flash disks and says 'we're ready now to be a second Enterprise Flash source'" things. One day they WILL meet the spec (and we will be able to use both them and STEC), and at that point flash SSD volume will skyrocket (and prices will plummet).
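To put hedged, back-of-envelope numbers on the endurance point (the assumptions below are mine and purely illustrative - not EMC or STEC specs):

```python
# Wear-leveling spreads writes across every cell, so a rough lifetime
# estimate is (total cells x per-cell cycles) / sustained write rate.
usable_gb   = 73         # assumed usable capacity
extra_cells = 0.20       # assumed 20% over-provisioning (see above)
cycles      = 1_000_000  # conservative SLC per-cell write cycles
write_mb_s  = 80         # assumed sustained write rate, MB/s

total_write_mb = usable_gb * (1 + extra_cells) * cycles * 1024
years = total_write_mb / write_mb_s / (3600 * 24 * 365)
print(f"~{years:.0f} years writing {write_mb_s} MB/s non-stop")  # ~36 years
```

Real workloads don't write flat-out 24/7, so the practical lifetime is even longer.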
Hope that helps!
Posted by: Chad Sakac | November 04, 2009 at 05:57 PM
Solid state is heating up. Anyone interested should sign up for this webcast, taking place on Nov 18th @ 11am Eastern; more info here: http://tinyurl.com/yakxovu
Posted by: mike | November 12, 2009 at 03:14 PM
Hi there. Wow, nothing touches SSD flash drives. Maybe by, say, 2345 we would have 1,300-petabyte SSDs in our cellphones.
Posted by: james braselton | February 19, 2010 at 10:14 AM
You may have seen this, but if this dude was able to get these speeds using PC parts, imagine what EMC is going to develop in the next year or so.
http://www.youtube.com/watch?v=96dWOEa4Djs
Posted by: Matt D Meyer | August 18, 2010 at 10:34 AM