Phew – after 5 days in Vegas, you get pretty cooked. At home now finally and love seeing my family. Well – a quick little summary of the week, and what we announced, showed, and discussed.
Was a GREAT VMware Partner Exchange (PEX) – thank you VMware!
So – what did we see and do?
- VMware’s continued growth is one of the largest drivers for partner growth. PEX attendance was up 77% over last year.
- VMware is super-focused on the partner community. This came through loud and clear in Carl Eschenbach’s keynote.
- We saw this too in the EMC bootcamp. We held a bootcamp on the Monday of the event for the EMC partners present. There were almost 200 people there all day long. Thank you EMC partners! Would love your feedback on the event.
- The VCE roundtable was packed – with 202 partners. Got great questions, and great feedback. Cisco’s posting the video shortly.
- The VCE reception was also fantastic, thanks everyone for attending.
- My team: there were a bunch of new members of the vSpecialist squad – was great to hang out with you!
- The EMC booth: was cool that we pulled together the Vblock 0 prototype to show at the show, and got positive feedback on the demos. Good team on the schwag too – those little wind-up flashlights will help us deal when the inevitable apocalypse arrives :-) Oh, my kids will like them too :-)
- TAP: Got solid feedback on the vSphere roadmap and Storage Ecosystem roadmap sessions from some of the team newbies (of course, the vets all were in the loop on this already)
- VCDX defense for some of the folks on the team. Scott, good luck, I’m confident in you!
- Steve Herrod did some big unveiling. Project “Redwood” (an end-user self-service portal targeted at Private and Public cloud uses) was publicly outed for the first time, as was the next version of VMware View (loads of stuff in here, more to come soon). He also talked about the scaling and feature goals of the next generation of the vSphere platform. Even with all the legalese and caveats at the front of the session, it’s very exciting stuff. My team and I are lucky to have insider front-row seats to everything that’s been going on – so none of this was a surprise to us, but it’s great that it’s getting out there.
- Fun stuff:
- Parties: the tailgate party Cisco and EMC sponsored was loads of fun! Great, exciting Super Bowl game. I have to say that personally, I think The Who can still rock.
- Playing craps with friends – I hit 4 straight yos when it counted, and the money was raining in. Also on the last night with some new friends where I was sucking it, but man, they were rolling like nobody’s business….
- The big party was fantastic – at the House of Blues, with a great 80’s band – the English Beat
- The VMware EBC on wheels – just awesome…. Pictures on that one below.
- The VMware/EMC/NetApp alliance team dinner :-) Pictures and the story on that one below!
In the EMC Bootcamp we spent the day helping partners get more out of their business, and making sure they had all the latest tools we provide to help them help their customers. I did the keynote, “VCE: What’s going on behind the scenes and how can you make the most of it”. This frankly discussed where we are (including gaps) since the VCE launch, as well as providing a technical preview into the integrated 3-company roadmap for the next year.
I also showed the new Celerra NFS datastore VMware capabilities, previewed the next version of the EMC Storage Viewer, and the next version of Unified Infrastructure Manager (UIM v2). I also gave some big hints on the stuff that’s coming at EMC World – each of which will be huge….
If you’re interested in more detail, including demonstrations and screenshots of these new and “arriving so shortly it’s essentially now” functions that we talked about, read on….
The Celerra compression and deduplication engine has been updated and now handles the VMware use case.
Dedupe and compression are both variants of data-reduction technologies. I don't want to be TOO much of a nerd here, but the point is valid.
I want to explain this quickly, because the commonality and difference between compression and deduplication needs to be stated. Why? Because the full-on “whose production data reduction/dedupe/compression is best” fight is inevitably going to start up. Until now – setting aside customers putting NFS datastores on Data Domain (not generally a good idea, as that’s not what Data Domain is designed for; it is designed to be a killer dedupe backup target) – NetApp seemed to be the only vendor in the market with a production capacity efficiency benefit on top of thin provisioning for production datastores.
Data reduction techniques have varying effectiveness/cost (and here cost means "CPU cycles, processing time, performance impact", ergo not $$ but "engineering costs") depending on the dataset. A trivial example:
- A filesystem containing ten files, four of which are EXACTLY the same.
- A filesystem containing ten files that are similar, but not the same.
- File-level dedupe is extremely efficient in the first example (low impact, high capacity efficiency gain).
- Compression is moderately efficient in the second example (low impact, moderate capacity efficiency gain).
- Block-level dedupe is more capacity efficient in the second example (generally higher “cost”, high capacity efficiency gain).
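To make the trivial example concrete, here’s a quick back-of-the-envelope simulation – not EMC code, just a toy sketch where zlib stands in for any compression codec and the two file sets are made up:

```python
import zlib

# Case 1: ten files, four of which are EXACTLY the same.
identical = b"A" * 4096
case1 = [identical] * 4 + [bytes([i]) * 4096 for i in range(6)]

# Case 2: ten similar-but-not-identical files (repetitive content with
# small per-file differences -- compresses well, file-dedupes poorly).
case2 = [b"header " * 500 + bytes([i]) * 100 for i in range(10)]

def file_dedupe_savings(files):
    """File-level dedupe: store each unique file exactly once."""
    raw = sum(len(f) for f in files)
    stored = sum(len(f) for f in set(files))
    return 1 - stored / raw

def compression_savings(files):
    """Per-file compression (zlib as a stand-in for any codec)."""
    raw = sum(len(f) for f in files)
    stored = sum(len(zlib.compress(f)) for f in files)
    return 1 - stored / raw

def block_dedupe_savings(files, block=512):
    """Block-level dedupe: store each unique fixed-size block once."""
    raw = sum(len(f) for f in files)
    blocks = {f[i:i + block] for f in files for i in range(0, len(f), block)}
    stored = sum(len(b) for b in blocks)
    return 1 - stored / raw

for name, files in (("identical files", case1), ("similar files", case2)):
    print(f"{name}: file-dedupe {file_dedupe_savings(files):.0%}, "
          f"compression {compression_savings(files):.0%}, "
          f"block-dedupe {block_dedupe_savings(files):.0%}")
```

Run it and you’ll see the pattern from the bullets: file-level dedupe wins big on case 1 and does nothing on case 2, while compression and block-level dedupe shine on case 2 – at progressively higher processing “cost”.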
Celerra F-RDEv2 (the nerdy engineering name - "File Redundant Data Elimination") is accurately characterized as dedupe and compression. It finds and deduplicates files at the file object level (which is the most efficient, and largest immediate savings for general purpose NAS), and compresses within files. F-RDEv1 skipped files >200MB. This meant that the original release (now out for about 1 year) had little efficiency effect on the “VMware NFS datastore use case” where the bulk of the capacity is in large multi-GB VMDK files.
Of course, the broad use of our NAS devices tends to be dominated by basic unstructured NAS, and we’ve been delivering massive efficiency gains there for our customers for a year now, and that use case was our original design focus.
The march of storage efficiency technologies continues for EMC, as it does in the industry as a whole…. F-RDEv2 (which is GA, and has been and will continue to be free) now has no file-size restriction. This means that on top of Thin Provisioning, it provides roughly an additional 40-50% capacity savings when applied directly to the VMware on NFS use case. Testing has shown that it has no material effect on write performance (even helping in corner cases) and about a 10% impact on read performance. Don’t think of what it’s doing as a “zip” – it’s leveraging core RecoverPoint technology that’s used for real-time compress/decompress.
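As quick back-of-the-envelope math on what “on top of Thin Provisioning” means – my illustrative numbers, not a benchmark (the utilization figure is an assumption; the 45% reduction is just the midpoint of the quoted 40-50% range):

```python
# Illustrative arithmetic only -- assumed inputs, not measured results.
provisioned_gb = 1000      # logical capacity presented to the VMs
thin_utilization = 0.5     # assume only half the thin-provisioned blocks are written
frde_reduction = 0.45      # F-RDEv2 savings applied to consumed capacity

after_thin = provisioned_gb * thin_utilization       # GB actually consumed
after_frde = after_thin * (1 - frde_reduction)       # GB on physical disk

print(f"{after_frde:.0f} GB physical for {provisioned_gb} GB logical "
      f"({1 - after_frde / provisioned_gb:.1%} total savings)")
```

The point being that the two techniques compound: thin provisioning cuts what’s consumed, and F-RDEv2 then cuts what’s stored.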
On a side note, one thing that has been fascinating to watch inside the company has been the acceleration of innovation and integration across the parts of the company over the last 2 years. They’ve moved to an approach called “consumer/provider”, where on the roadmap various teams provide deliverables for some teams and consume those of others. This is most visible within the recently renamed Unified Storage Division (which has CLARiiON, Celerra, Centera, RecoverPoint). For example, the iSCSI stacks from CLARiiON and Celerra have merged. The block virtualization layers (which perform critical elements of things like Thin Provisioning and other cool things to come) in CLARiiON and Celerra are now actually the same codebase (CBFS). Another example is Avamar and RecoverPoint IP being embedded in the NAS code. Anyone who knows engineering at this scale knows that it takes about 2 years minimally for changes to show up. Trust me, there are massive cool payoffs in store here based on the work of the last two years. Oh – EMC World is so close :-)
Another nice thing about the engineering approach used by F-RDEv2 is that it imposes no reduction in maximum filesystem size, is unaffected by any other Celerra feature (snapshots, etc.), and has certain other nice properties, like being able to target datastore- or VM-level objects (and many other things).
When will you be able to get this? Early March, and it is FREE!
This continued march of storage efficiency (in both capacity, power and flexibility dimensions) will not stop…..
Virtual-Machine Level array-based snapshots and clones and dramatically simpler provisioning
The same release that expanded the application of redundant data elimination to customers using Celerras also added the ability to snapshot and clone individual files within a filesystem (in fact, also to clone a file ACROSS filesystems). While inevitably there are areas where each vendor does something before the other, this is one where NetApp got the ball rolling with their Rapid Clone Utility (RCU) and ONTAP 7.2.3 and later. While I don’t claim to be an expert on NetApp, these seem to be very analogous (as always, there are certain things EMC does that they don’t, and vice versa) – customers interested should compare both and evaluate for themselves.
“There’s an App for that” in VMware-land = “there is also a vCenter plugin for that”
So, as the array got this “VM level” operation we extended the vSphere client to make using it simple and easy. It also makes provisioning NFS datastores a lot easier (automatically configuring all the Celerra and ESX host properties), scaling easier (does it across the entire vSphere cluster in a single operation), and also makes expanding datastores a snap.
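To sketch what “across the entire vSphere cluster in a single operation” saves you, here’s a conceptual outline of the steps that would otherwise be a manual pass per host – the function, array, and host names are hypothetical illustrations, not the plugin’s actual API:

```python
# Conceptual sketch only -- hypothetical names, not the plugin's real API.

def provision_nfs_datastore(array, cluster_hosts, name, size_gb):
    """Return the ordered steps for a cluster-wide NFS datastore provision."""
    steps = []
    # 1. One-time array-side work: carve the filesystem and export it.
    steps.append(f"create {size_gb} GB filesystem '{name}' on {array}")
    steps.append(f"export '{name}' over NFS with ESX-recommended options")
    # 2. Host-side work, repeated for EVERY host in the cluster --
    #    automated here instead of one manual pass per host.
    for host in cluster_hosts:
        steps.append(f"apply NFS best-practice settings on {host}")
        steps.append(f"mount '{name}' as a datastore on {host}")
    return steps

steps = provision_nfs_datastore("celerra-01", ["esx-01", "esx-02", "esx-03"],
                                "nfs-ds-01", 500)
print("\n".join(steps))
```

The host-side loop is the part that grows with cluster size – which is exactly why doing it as a single plugin operation matters.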
This also puts the compression/dedupe controls directly in vCenter – as well as the ability to quickly and easily see the capacity savings.
BTW, I don’t want to over-sell this function. I personally think that in the client virtualization (View) use case, customer pain is more about client image management and composition than storage capacity. The problem of client image management and composition can actually be solved much better at the vSphere layer. People who have seen View 4 know this (and partners at PEX got a preview of the next rev, which takes it even further).
Don’t get me wrong, being more efficient is good, but our guidance for customers will not be to use this new function to replace the things that View Composer/ThinApp can be used for. Also, in the View use cases, the most important thing you can do to reduce storage requirements is drive down the per-guest IO workload (follow VMware’s VDI best practices!)
The VM hardware-accelerated snap/clone is more useful in many general VMware virtual machine use cases. BTW – this same idea (hardware-accelerated ESX VM-level snapshot/clone) is coming to block datastores in the vStorage APIs for Array Integration, which EMC will be completely supporting on our current-generation array block targets.
When will you be able to get this? Early March, and it is FREE!
This continued march will not stop….
Here’s a demo of these new EMC Celerra NFS functions… Those of you wondering if you can try it with the Celerra VSA, let me do one more update before you spend the time to make it work (it works with the CMR-11 Celerra VSA that is posted, but is simpler and easier with the one targeted for March).
You can download the high-resolution WMV version here, and the MOV version here.
Previewed the next version of the EMC Storage Viewer vCenter plugin
Customer feedback on the EMC Storage Viewer vCenter plugin has been very positive, and we’re investing even more resources into it now.
The feedback we got was:
- Make configuring Solutions Enabler (a pre-requisite) easier. Note that Solutions Enabler is now available as a Virtual Appliance (like an ever-expanding set of EMC products) here.
- Add performance data (coming in this next release)
- Make features/functions more consistent across both NAS and Block use cases (coming in this next release)
- Give the VMware administrator provisioning control, but only if their portion of the storage array can be carved out. Since most arrays (outside the storage used in a Vblock) serve multiple uses at the same time, there’s reasonable concern that an action for one use could unpredictably impact the others. The way we are implementing this, the storage team can assign virtual storage pools to the VMware team, who can then provision themselves directly within the vSphere client. A screenshot is below.
Previewed the next version of the EMC Ionix Unified Infrastructure Manager (UIM) v2
BTW – my comments here (on Vblocks/the portal in the cloud compute use case) and here (on multitenancy in cloud compute cases) make more sense now that the idea of Redwood is out there. So let me explain it a bit further.
Basically, Redwood takes the vSphere/vCenter layer, which provides big aggregated pools of CPU/memory/network/storage, hides all the complexity, and puts a multitenant front-end on top that enables simple end-user self-provisioning.
But, the pools themselves are assumed static. In the example Steve used during the demo, the end user provisions a VM and starts using it. The VMware administrator sees that the datastore that Redwood selected was getting full, and then uses storage vMotion to non-disruptively move the virtual disks to another less utilized datastore.
What we’ve been working on with Vblock and UIM is to extend the idea of automated infrastructure right down to the infrastructure stack itself that supports vSphere. This would mean that the datastore could automatically expand in the example use case. Or if more or less compute was needed OVERALL within the vSphere cluster, that could be added/removed as needed.
Put another way:
Our design goal is to make it so that if vCenter and Redwood need the vSphere cluster to get 4 new hosts, they appear. Need more datastores? No problem, added – automatically. Using less and want to put the hardware back into a pool for other clusters? No problem, vacate, maintenance mode, remove cluster node, and release unused storage and networking elements – all automatically.
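That design goal can be sketched as a simple reconcile loop – a hedged illustration with entirely hypothetical names; UIM’s real interfaces are not these calls:

```python
# Hypothetical sketch of the elasticity goal -- not UIM code.

def reconcile(cluster, free_hosts, datastores, hosts_needed, ds_threshold=0.85):
    """Grow/shrink a cluster and its datastores to match demand."""
    actions = []
    # Need more compute? Pull hosts from the shared pool, automatically.
    while hosts_needed > len(cluster) and free_hosts:
        host = free_hosts.pop()
        cluster.append(host)
        actions.append(f"add host {host} to cluster")
    # Using less? Vacate, enter maintenance mode, release the host.
    while hosts_needed < len(cluster) and len(cluster) > 1:
        host = cluster.pop()
        actions.append(f"vacate + maintenance mode + release {host}")
        free_hosts.append(host)
    # Datastore running hot? Expand it automatically.
    for name, (used_gb, cap_gb) in datastores.items():
        if used_gb / cap_gb > ds_threshold:
            datastores[name] = (used_gb, cap_gb * 2)
            actions.append(f"expand datastore {name} to {cap_gb * 2} GB")
    return actions

cluster = ["esx-01", "esx-02"]
free_hosts = ["esx-03", "esx-04"]
datastores = {"nfs-01": (450, 500)}   # 90% full
actions = reconcile(cluster, free_hosts, datastores, hosts_needed=4)
for a in actions:
    print(a)
```

The point of the sketch: the infrastructure itself becomes another pool the control layer reconciles against demand, rather than a static floor under vSphere.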
In discussions with service providers and enterprise customers trying to stand up these cloud compute services (IaaS for public or internal use cases), the biggest challenge has been the construction and maintenance of the “end user portal”. Redwood simplifies and automates the layer between the end-user and the vSphere layer - Redwood is focused on making that easier for the enterprise/service provider.
The second most difficult technology challenge, at the infrastructure layer, was trying to link into a bunch of disparate APIs for whatever combination of gear they picked, to simplify and automate infrastructure provisioning at scale. UIM and Vblock make the actual infrastructure itself elastic, in an automated way – UIM is focused on making that easier for the enterprise/service provider.
This highlights the point we’re trying to make with Vblock. It represents a single “product”, not a combination of products (the ideas of the VCE SST and the VCE support model flow from that – a “product” is not sold and supported by 3 companies, but rather by ONE company; these work efforts are hard, much harder than creating the reference architecture). So what is the product? It is a VM-housing black box. We can then optimize to a MUCH higher degree. And while there are others who try to be a “manager of managers”, trying to cover the infinite set of permutations of servers/network/storage/OS is nearly impossible to nail. UIM is not analogous to BMC BladeLogic. It’s analogous to an “Element Manager”, where the element is a Vblock – whose constituent elements are known and very well defined.
It’s then possible to create a product that can not only manage multiple Vblocks from a single console, but also apply multitenancy models to the infrastructure if needed. (Now that everyone has seen Redwood, my earlier point – and what Chuck has been saying here – is clear: customer/customer multitenancy comes from the end-user portal. Enforcement of multitenancy through the rest of the stack doesn’t solve the “who watches the watchers” problem – that’s the translation of the Latin quote Jonathan from NetApp uses in his comment in the thread. That question is really “what stops the service provider from being able to get at my stuff?”)
Actually, if you go back and watch the tapes, Steve Herrod actually mentioned something coming to solve that too – he mentioned “watch closely to see things new about securing the cloud and demonstrating that capability and compliance posture to the end-customer”. It won’t be long… wait for it… wait for it…
Pictures of some of the fun..
Ben Matheson (VMware), Ed Bugnion (Cisco) and I opening up the tailgate party on Sunday…
So – what’s the story on the VMware/EMC/NetApp alliance dinner?
It’s no secret that I have a lot of respect for NetApp, both as a company and their technology. I also like having a competitor that is investing in VMware’s success in the market, and alongside EMC’s storage efforts (putting aside the fact that we also invest down the RSA and Ionix tracks). Good competition pushes us both continuously.
When it comes to VMware-focus and integration, in my eyes (which may be off-base), it’s basically EMC and NetApp and then the others are so far back that it doesn’t matter. That doesn’t mean they don’t have VMware integration, and that they aren’t fine, fine products, but they just don’t invest as much in this specific space.
Look, it’s not just engineering/product – look at PEX for partner coverage. Ditto at VMworlds past and future. Whether it’s sessions, PR, marketing, customer stories. Heck, even the pure passion that comes out in some of the discussions. Sometimes it drives me crazy when they do something that to me seems “over the line”, but hey, I’m sure they think the same about us – and at least it’s never boring :-)
So, we were having the EMC/VMware alliance team dinner and I found out that the NetApp/VMware alliance team was in the same restaurant. I wanted to propose we all just park our badges at the door and have dinner together, but wiser minds on my team suggested that it would be rude to cut into their dinner. But I wanted to do something, so I sent over a couple bottles of champagne with a note. I was worried it might come across “dick-ish”, but apparently not (which is good – that was not my intent). Then Jim Sangster from NetApp kindly came over to the EMC table, I went over to the NetApp table, and we shared a toast.
Was good to meet Jim Sangster (Sr. Director of the VMware Alliance) and Mitchell Ratner (VMware Global Alliance Manager), in the photo at left, and share a toast, in the photo on the right – and a great week overall.
To all the VMware and EMC partners at the event – thank you, and it was great to hang out, learn and have fun!
Thanks a lot for this complete review – there is a lot of useful information here that I need in my day-to-day customer meetings. I love your approach when you talk about NetApp and the value of having good competition. I am proud to be part of the EMC family.
Posted by: MauroAyala | February 13, 2010 at 01:59 AM
Looks like a lot of exciting developments coming with NFS and EMC. Anything new for us Clariion customers besides the Viewer? I am personally hoping the new EMC Storage Viewer eliminates the 50 second wait. Thanks for filling us in, Chad!
Posted by: Kent | February 13, 2010 at 03:14 PM
@Kent - EMC midrange customers (both Celerra and CLARiiON) have a BIG treat coming soon - EMC world is right around the corner.....
BTW - if you use EMC Storage Viewer 2.1, the CLARiiON response time has had a big boost... Also MAKE SURE you're using the latest solutions enabler (you can use the 7.1 vApp version if you want) - this also speeds things up a lot.
Posted by: Chad Sakac | February 13, 2010 at 04:40 PM
Good post for those of us that couldn't make it in person... thanks!
Posted by: Simon | February 15, 2010 at 03:58 AM
Chad,
Informative and classy. Thanks for this detailed post. Here's to another year of healthy competition.
Mike Riley
NetApp
Posted by: Mike Riley | February 15, 2010 at 11:00 AM
Chad - is there newer code than Storage Viewer (2.1.0.0) and Solutions Enabler (v7.1.0.0-1099)? That is my current code level, which has the 50-second wait.
Posted by: Kent | February 15, 2010 at 04:00 PM
Great Stuff Chad, as usual. Just when I thought I couldn't be any more astounded by the unified storage platform, you drop these nuggets and tease me/us at the same time!
The value of the Celerra just keeps getting better!
Posted by: Keir Asher | February 15, 2010 at 07:58 PM
Chad - thanks again for the champagne. Thanks also for the nicely calibrated balance of full-on competition and mutual respect.
Garry Wyndham
NetApp
Posted by: Garry Wyndham | February 16, 2010 at 06:08 PM