So – yesterday was the official launch of a lot for our Midrange storage customers. I’m pumped, as it delivers a lot of the things our customers have been asking about (nice way of saying “demanding”), and also highlights our focus on virtualized datacenter use cases.
This video is me winging it (“one take Chad” :-) – summarizing everything I think is cool about what we’ve announced (and GAed!)
Read on for more details!
Earlier, I did a preview and demonstration of VM-Aware Navisphere (now GA and available as you upgrade to the latest FLARE rev), and we’ve also talked about the (available for a long time and free) EMC Storage Viewer. I will stay away from claims of “better” and “unique” – but to be clear – the closest (only?) thing I’ve seen to VM-Aware Navisphere is Xiotech’s VMware View – which extends their management model in a similar way. Our goal is to solve this for BOTH the VMware Administrator (in their native management model – vCenter – several folks have things along these lines) **and** the storage administrator (in their native management models). Over time, our goal is to make storage management invisible (driven by vApp policy) – but we have a ways to go before we get there (lots of work though – and you can see it in SS5140 next week!)
One thing I mentioned, and now can talk about more is that our view of “VM-Awareness” (at every layer – vCenter-integration, storage element managers) also applies to local and remote replication.
Local VM-Aware Replication:
Replication Manager 5.2 update 2 also GAed.
This adds NFS support to our existing VMware-integrated local replication (snapshots, clones, and continuous data protection).
It also adds SharePoint support to what is already incredibly broad application support (in VMs and in physical configurations) – Exchange, SQL Server, Oracle, SAP and others. I’m not aware of anything that covers this breadth in a single product.
In the VMware use case, it directly integrates with the vCenter APIs to correlate underlying storage objects with datastores (VMFS and NFS), and with the ESX APIs to trigger VM quiescence; via VMware Tools, guest OS volume quiescence; and via application integration, application quiescence and log handling.
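The ordering of those calls is the important part – quiesce from the application down the stack, snap at the array, then resume in reverse. A minimal Python sketch of that sequence (all class and method names here are hypothetical, purely for illustration – this is not the actual Replication Manager code):

```python
# Hypothetical sketch of the quiesce-then-snapshot ordering described above.
# The real work is done by Replication Manager via the vCenter/ESX APIs.

class App:
    """Application integration layer (e.g. database quiesce + log handling)."""
    def __init__(self, log): self.log = log
    def quiesce(self): self.log.append("app quiesce + log handling")
    def resume(self): self.log.append("app resume")

class GuestVM:
    """Guest OS volume quiescence, driven through VMware Tools."""
    def __init__(self, log): self.log = log
    def quiesce_fs(self): self.log.append("guest FS quiesce (VMware Tools)")
    def resume_fs(self): self.log.append("guest FS resume")

class Array:
    """Array-level replica of the storage object backing the datastore."""
    def __init__(self, log): self.log = log
    def snapshot(self, datastore):
        self.log.append(f"array snapshot of {datastore}")
        return f"replica-of-{datastore}"

def consistent_replica(app, vm, array, datastore):
    # Quiesce top-down, snap at the array, resume bottom-up.
    app.quiesce()
    vm.quiesce_fs()
    replica = array.snapshot(datastore)
    vm.resume_fs()
    app.resume()
    return replica
```

The array snapshot only happens once everything above it is quiet, which is what turns a plain crash-consistent snap into an application-consistent one.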
Remote VM-Aware Replication:
If we have VM-awareness in the other areas, but not for remote replication, we’re still missing out.
For example, when you create a Site Recovery Manager protection group, you can only select VMs that are in replicated objects (today VMFS on LUNs, soon NFS datastores also) – but this is a catch-22 – the replication needs to get set up BEFORE. Today, of course, as you set up replication at the storage level, those management interfaces don’t show you WHAT VMs you are replicating.
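The missing correlation is conceptually simple – a hypothetical sketch (illustrative names only, not any product’s API) of the report a VM-aware replication view would produce:

```python
# Hypothetical sketch: map each VM to its datastore(s), intersect with the
# set of replicated datastores, and report which VMs are unprotected.

def replication_report(vm_datastores, replicated):
    """vm_datastores: {vm_name: set of datastores}; replicated: set of datastores."""
    protected, unprotected = [], []
    for vm, stores in sorted(vm_datastores.items()):
        # A VM is only protected if EVERY datastore it touches is replicated.
        (protected if stores <= replicated else unprotected).append(vm)
    return protected, unprotected

protected, unprotected = replication_report(
    {"web01": {"ds_repl"}, "db01": {"ds_repl", "ds_local"}, "app01": {"ds_local"}},
    {"ds_repl"},
)
# Unprotected VMs are the candidates to Storage VMotion onto replicated storage.
```

Note the subtlety the sketch captures: a VM spanning a replicated and an unreplicated datastore (db01 above) is still unprotected.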
Wouldn’t it be ideal if you could see which VMs are replicated and which ones aren’t (so you can either replicate their storage, or Storage VMotion them to replicated storage)?
Well – you get that now with RecoverPoint 3.2. Check this out:
If you want to download this video in high-rez (EMCers, VMware folks, EMC/VMware partners), you can do that in WMV and MOV formats (with me now using a MacBook Pro – I’m mandating both formats for my team – me included :-)
Look - EMC doesn’t have exclusivity on innovation, or VMware integration. VMware still does the right thing and is open with all vendors and APIs. There’s a LOT more EMC needs to do also – including getting these good ideas applied across all platforms and use cases (working on it).
But – this breadth of integration and coverage helps our customers NOW, and also shows our commitment, R&D, and resource focus in this direction.
I think (and have said on many occasions) that customers should ask and push their vendors to demonstrate VMware integration on all fronts. There can be legitimate debate about the best models, and that’s something customers should look at closely (not just listen to marketing – from me – uggh – or anyone).
New I/O options, and higher consolidation
The other thing that is not “VMware integration” (but is related, in my opinion) is that the 10GbE iSCSI cards are also GA. We’ve supported 10GbE for NFS on Celerra for a while (note some of the crazy performance levels I talked about earlier) – this brings 10GbE across the entire midrange portfolio. One thing we have done that is unique is spend the time architecting it so it’s non-disruptive to add (you can even go from 1GbE to 10GbE non-disruptively). This means we have to spend a little more time designing the modules (for hot-pluggability), and a bit more work on the software that really defines a modern array (so it can support device addition/removal) – but in the end, the customer benefits.
I’ve always been a supporter of 10GbE. Look back here for a view on 10GbE and VMware in general. Also note a pattern:
- We committed to the market when we refreshed our midrange hardware (August 2008 – about 1 year ago) that we would be delivering other protocol support in a non-disruptive way when we introduced Ultraflex as an I/O architectural model (here). The core idea was that customers wanted the same degree of non-disruptive upgrade to ALL their infrastructure (including interface and protocol support)
- Notice that I showed an engineering sample on the blog at that time – we take hardening things as seriously as we can. It takes time, and effort.
- Now, we’re delivering on that commitment. I’m proud about that – we try our hardest (not claiming we’re perfect) to do what we say.
More to come on this front :-)
While I/O type is useful ACROSS all different storage use cases, we’re seeing that VMware is one of the earliest use cases of 10GbE at our customers. The introduction of higher-density connectivity (8Gb FC and 10GbE iSCSI), coupled with modern servers designed for VMware and vSphere’s improved scaling, also means that we needed to add a lot of other less sexy, but just as important, improvements under the covers. Massive increases in initiator count (really important in VMware use cases). Increases in the number of hosts, LUNs, and all sorts of other things. These are those limits “no one ever tells you about until you run into them” (not because they are obfuscated – but because it’s the nitty-gritty). Everyone’s got ’em – but this shows how we’re working on the cool things you see, and also the things you DON’T see.
All this (and much more) is live at VMworld – in the EMC booth, and in the VMware booth – you can come, poke at it, and ask questions. There are still days to go – so of course there’s a lot more to come!
I REALLY want to know what more we should be doing on all these fronts – you can tell me at VMworld, and civil comments (even those that vehemently disagree, but are civil please) are always welcome on this post or any….
Looking forward to seeing you in San Francisco – and come to SS5140 (EMC/VMware supersession with Steve Herrod and me) and SS5240 (VMware/Cisco/EMC supersession with Ed Bugnion, Scott Davis and me) – I want to see and talk to you!
When is FLARE 29 going to be GA? :) I just got 6 400GB flash drives yesterday for my CX4
Posted by: david | August 26, 2009 at 05:22 PM
FLARE 29 went GA yesterday morning! David - let me know your experiences!
Posted by: Chad Sakac | August 26, 2009 at 06:55 PM
It's not on Powerlink - only FLARE 28 is. Is it someplace else?
Posted by: David | August 26, 2009 at 08:02 PM
Lemme dig - will get the total answer for you.
Posted by: Chad Sakac | August 26, 2009 at 08:05 PM
I also need to know if it is compatible with Invista 2.2
Posted by: David | August 26, 2009 at 08:06 PM
Great update. Sounds a lot like you're approaching capabilities found in Akorri, etc. Can't wait to see it all next week. Is 10gb also in NS for iSCSI?
Posted by: Keith Norbie | August 27, 2009 at 06:51 AM
@David - the official post to Powerlink is on Sept 2nd. Will dig into the Invista config - will likely take a bit longer for that combo to be completed by elab.
@Keith - 10GbE for iSCSI (and NFS) on Celerra has actually been around for a while - from the small ones to the big ones. In reality, the larger NS-960s are a better 10GbE fit (it takes CPU horsepower to drive the throughput). Re: Akorri - I think they still do some things better than we do in VM-Aware Navi and RecoverPoint (single end-to-end performance view) - but we're not resting on any laurels!
Posted by: Chad Sakac | August 27, 2009 at 10:27 AM
Thanks Chad! I also sent you an email the other day - I'm sure you get 1000's of emails a day - had a question in there for ya if you get a chance. Thanks!
Posted by: david | August 27, 2009 at 10:31 AM
Hello Chad,
Great post as usual.
I'm having trouble being sure of the solution I need for my datacenter:
- one production site with 3 ESXs and about 50 VMs
- thinking of CX4 or Celerra for storage
- backup site with 20 Mb of bandwidth
- thinking of RecoverPoint/SE or Celerra Replicator for replication
My main question i can't find the definitive answer is : "will my replicated VMs and applications such as Oracle be consistent" or do i need replication manager or such ?
Posted by: Did | September 08, 2009 at 08:53 AM
To get application consistency (and more importantly application-integrated recovery) you have to use something that integrates with the app APIs, and at EMC if using array replicas, that's Replication Manager.
Now, it is an awesome tool, simple and easy to use, and multiple access controls mean that app owners can even manage their own jobs.
BUT - I want to make it clear - SQL Server and Oracle honour ACID database properties and are guaranteed to be crash consistent. Recovery and log replay/handling can be done manually. Exchange remains the only standout, where the JET database is not guaranteed to be crash-consistent. The horror days of Exchange 5.5 and earlier, where it was literally 50/50 whether it would restart after a crash, are long gone, however. The vast majority of the time, even JET will recover from a crash - BUT THIS IS NOT GUARANTEED. For perspective on that - remember that an MSCS or WSFC cluster (sometimes called a single copy cluster) in essence uses crash consistency to recover from a server failure.
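To make the "guaranteed crash consistent" point concrete, here's a toy write-ahead-log sketch in Python (hypothetical names, not how any particular database implements it): committed transactions hit the log before anything else, so replaying the log after a crash rebuilds a consistent state and simply discards in-flight work.

```python
# Toy illustration of why an ACID database survives a crash consistently:
# replay the write-ahead log, applying ONLY committed transactions.

def replay(log):
    """Rebuild state from a write-ahead log after a simulated crash."""
    state, pending = {}, {}
    for rec in log:
        if rec[0] == "write":            # ("write", txn, key, value)
            _, txn, key, value = rec
            pending.setdefault(txn, []).append((key, value))
        elif rec[0] == "commit":         # ("commit", txn)
            for key, value in pending.pop(rec[1], []):
                state[key] = value
    return state  # uncommitted writes (crash mid-transaction) are discarded

# Crash mid-transaction: t2 never committed, so its write is rolled back.
log = [("write", "t1", "a", 1), ("commit", "t1"), ("write", "t2", "b", 2)]
```

An array replica taken at any instant is like a snapshot of that log - replay gets you back to the last committed state, which is exactly what "crash consistent" means.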
Hope that helps - and if not - ask for more, happy to provide more detail!
Posted by: Chad Sakac | September 10, 2009 at 10:56 PM