Ok – this could be a REALLY long post – or a short one :-) Dear readers, you know me well enough at this point to know that I suck at short. Also warning – while this was crazy cool tech, there’s opinions littered in this post. With those disclosures out of the way….
…So, I’ve commented before about all the interesting startup action around storage and the trends of flash and virtualization. I’ve also made the observation (coming from a startup myself) that storage startup land is VERY tough. You’re talking about persistent data – one of the most difficult things to displace.
That means that to succeed, you need to innovate fast (something startups are very good at), and you need to find a place where the big guys are a little asleep at the wheel. You need both because being a “little” better isn’t enough.
Having one standout feature will win you some customers, no doubt. But – the key is that massive R&D/M&A budgets and broad technology portfolios that bring integrated value together can be very compelling – and that’s something the big folks can do.
Don’t take this as arrogance on my part – I have a huge soft spot for the startups, and that is a land of incredible energy and innovation. There are also great examples of startups that broke through. While mine barely made it to acquisition, I remember it very fondly.
Speaking for myself and what I see in EMC – there’s a hyper-awareness of disruptive technologies, and a willingness to cannibalize ourselves where it’s the right thing, and at the right time. We believe in Andy Grove’s “only the paranoid survive” mantra.
So – with all that said – what the heck did we show that would result in THAT intro?
Read on…
There is a fundamental truth that the worlds of compute and storage are getting redefined – with functions crossing over, merging, blurring. There are more examples than there is space to list, even in a long Chad blog post. There are also some wildly divergent workloads – some that are an awesome fit for host-based flash cache/storage models, and others that are about the most insanely large, non-cacheable datasets imaginable.
Clearly there are going to be times for shared-nothing host-centric models (move data up to compute), and there are going to be times where moving compute to run co-resident with storage (move compute down to data) makes sense.
In the case where you move compute down to storage – you would want a very fluid scale-out model, where you could add compute/memory/storage all together – and have it become one big, massive pool. For example, imagine a 100-node Isilon cluster where each node was running vSphere at the same time. Couple that with the demo we did here, and whammo.
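To make the “move compute down to data” idea a bit more concrete – and to be clear, this is just my toy sketch, not anything EMC has shipped or announced, and every name in it is made up – here’s the kind of placement heuristic you’d want in a cluster where every node contributes compute, memory, and storage to one pool: run the VM on a node that already holds its data, and only fall back to pulling data across the network when you have to.

```python
# Purely illustrative sketch (not EMC code): a toy "move compute to data"
# placement heuristic for a scale-out cluster. Node/VM names are invented.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_cpu: int                                     # spare vCPUs on this node
    local_objects: set = field(default_factory=set)   # datasets stored locally

def place_vm(nodes, vm_name, needed_cpu, dataset):
    """Prefer a node that already holds the dataset; fall back to any node
    with spare CPU (which means dragging data across the network)."""
    local = [n for n in nodes if dataset in n.local_objects and n.free_cpu >= needed_cpu]
    candidates = local or [n for n in nodes if n.free_cpu >= needed_cpu]
    if not candidates:
        raise RuntimeError("no capacity for " + vm_name)
    target = max(candidates, key=lambda n: n.free_cpu)
    target.free_cpu -= needed_cpu
    return target.name

cluster = [Node("scaleout-node-01", 8, {"genomics-set-A"}),
           Node("scaleout-node-02", 16, {"video-archive-B"}),
           Node("scaleout-node-03", 4, {"genomics-set-A"})]

# Lands on a node that already holds "genomics-set-A" rather than the emptiest node.
print(place_vm(cluster, "analytics-vm", 4, "genomics-set-A"))
```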
Well – that’s what we demonstrated.
I’m not sure that enough people picked it up, but Brian Gallagher also alluded to this at EMC World in May – we are also working on this on EMC’s other scale-out storage platform (also nicely x86 based) – the VMAX. Are you getting this?
And hey, in the cases where moving the data up to compute is the right way – host-based flash (both as disk and as cache models) is going to be very disruptive. And of course – that would need to be integrated with VMware, because the compute workloads are likely virtualized, right?
So… you would want a PCIe-based flash device, but you would also want it tightly integrated with ESX and vCenter. We have talked about Project Lightning before (PCIe-based flash, and a vision of cache coherence and integrated FAST policy), but not about the VMware use case, and vCenter integration.
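Again, just to make the shape of the thing clearer – this is a toy sketch of a generic host-side read cache, NOT Project Lightning code, and the class and function names are all invented – the basic idea is: serve hot reads from local flash, fall through to the shared array on a miss, and write through so the array stays authoritative.

```python
# Purely illustrative sketch of a host-side flash read cache with LRU eviction.
# Not EMC/Project Lightning code; names and interfaces are assumptions.

from collections import OrderedDict

class HostFlashReadCache:
    def __init__(self, capacity_blocks, read_from_array):
        self.capacity = capacity_blocks
        self.read_from_array = read_from_array   # callable: block -> data (the slow path)
        self.cache = OrderedDict()               # LRU ordering, oldest entry first

    def read(self, block):
        if block in self.cache:                  # cache hit: serve from local flash
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.read_from_array(block)       # cache miss: go to the shared array
        self.cache[block] = data
        if len(self.cache) > self.capacity:      # evict the least recently used block
            self.cache.popitem(last=False)
        return data

    def write(self, block, data, write_to_array):
        write_to_array(block, data)              # write-through: the array stays authoritative
        self.cache[block] = data                 # keep the cached copy coherent
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
```

The interesting (and hard) part in real life is everything this sketch waves away: keeping the cache coherent across hosts, vMotion, and array-side FAST policy – which is exactly where the vCenter integration story comes in.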
Well – that’s what we demonstrated.
Pretty cool, eh? Now, to be fair, and in full disclosure – this is a technology preview. No dates were given :-) We still have a ways to go. But – the things going on in EMC-land would make your hair stand on end in sheer face-melting awesomesauce terms if we unloaded it all at once :-)
If you want to see the demonstration, here’s what it looked like:
Hey startups! Game on! We may be big, but we sure aren’t asleep at the wheel :-)
Oh, and if you want to see the other example I was hinting at earlier – check out VSP3205 :-)