
April 14, 2009



David Robertson

WOW! And I was considering going without a Symm in our new datacenter; I'll certainly be thinking otherwise now.

Elias Patsalos

Excellent posting, Chad.


Elias P.

Paul Wegiel

WOW... EMC takes it to the next level... again.


I’d like to make a few comments about the four numbered points in your posting.

1. The choice of chip is one of the engineering decisions that goes into every system, but it's irrelevant to the user; what counts is what can be done with the system and how much real-world work the total system performs for the user. The Storage Performance Council has recognized the Hitachi Universal Storage Platform V (USP V) as the highest-performing storage system in the industry. The SPC-2 benchmark results reaffirm several key benefits the Hitachi USP V platform provides customers, including greater productivity, increased efficiency, lower TCO, and reduced risk.

How about EMC publishing some industry-standard benchmark figures for the Symmetrix?

2. You talk about hammering the VM message for x86 workloads but make no mention of virtualizing external storage assets. The USP V and the smaller-footprint USP VM can both host Hitachi storage as well as a wide range of storage systems from other manufacturers. Customers need to be able to virtualize all their storage assets and manage their total storage pool through a single pane of glass with one management interface.

3. The ability to move data between storage tiers has been available from Hitachi since 2000 with the Lightning 9900. Since 2004, Hitachi has extended automated tiering with the USP to externally attached storage, and with the USP V to thin moves, copies, replication, and migration. Even now, with the V-Max, EMC customers will still need to use external software to migrate, move, or copy data from the DMX to the V-Max.

4. Indeed, as you say, "Technology must contain cost," but what about a very important cost-containment capability - the ability to virtualize and manage external multivendor storage? A single USP V can manage up to 247PB of total storage capacity.

Hu Yoshida has written a blog post that goes into this in more detail:

Chad Sakac

XRbight - thanks for the comment.

1) I'm not sure I agree. Being frank here: using commodity components means a more cost-effective design, but more importantly, we've seen a cross-over where x86-64 is simply starting to deliver significantly better performance, and VERY much better price/performance. I suppose I ultimately agree with the point that it's an internal design choice - and that price/performance is its expression, which customers will need to judge. There is another factor, though - this allows us to leverage all the other things happening in the x86-64 space, and we've managed to do it while maintaining the global cache characteristics that are one of the defining elements of these enterprise-class arrays.

2) We have already published loads of performance testing and results on the V-Max (and many EMC arrays). EMC generally doesn't think the SPC-1 and SPC-2 tests are particularly useful - I've never found a customer running an SPC-1/SPC-2 workload. Every database workload is so wildly variable that it's tricky. At the really high end, you and I both know that HDS and EMC will put our systems in side by side and run the actual customer workload. We do that all over the market, and generally EMC comes out on top - but every customer varies. Now, we do publish performance results where the benchmark reflects something practical. For example, Exchange Jetstress/Loadgen are pretty close approximations of a given Exchange workload, and we publish those out the yin-yang. My last comment: I think the SPC-1 and SPC-2 (though this applies to all benchmarks) are particularly off when you're talking about these enterprise-class arrays (and the USP V is certainly EMC's strongest competitor here), which are designed to support a whole ton of different workloads at the same time.

3) Ah, the joy of the "competitive talk sheet" :-) That's why I try not to talk about others - it's too easy to be wrong. You don't need external software to migrate between the DMX and V-Max. Also, I've got to say - in the Virtual Datacenter, for 100% non-disruptive migrations (in all heterogeneous configurations) we're finding that Storage VMotion does a REALLY REALLY good job :-) Some customers want to put a device BETWEEN their host and their heterogeneous arrays (USP V, IBM SVC, EMC Invista), but often the downsides of the "man in the middle" make this a Faustian bargain. It is a legitimate design choice - but I've got to say, if data mobility between heterogeneous platforms is the ONLY use case, I wouldn't want to design for that and give up everything else. I would virtualize everything and use Storage VMotion.

4) It's the same trade-off as above - storage virtualization (as defined by the "stick something between your host and your array" approach of the USP V/SVC/Invista/vFiler) just has not taken off. I think the reason is that the use case is so narrow. I'd also say that "containing the cost" of your arrays by sticking an expensive thing in the middle just doesn't make sense to me.

Hu is certainly entitled to his opinion, and as a strong EMC competitor, I wouldn't expect him to say "yup, it's great, oh well." Barry has done a more complete discussion of this (and I'm sure his site will be ground zero for the HDS/IBM response) here: http://thestorageanarchist.typepad.com/weblog/2009/04/1058-v-max-does-what-hi-star-cant.html#more




  • The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Dell Technologies and does not necessarily reflect the views and opinions of Dell Technologies or any part of Dell Technologies. This is my blog; it is not a Dell Technologies blog.