After looking at my home lab (Building a home VMware Infrastructure Lab), this is my OTHER VMware lab - the one from my day job:
This is my joint EMC/VMware lab in Research Triangle Park. It and its major sister sites (Hopkinton, MA; Santa Clara, CA; Cork, Ireland) are part of our distributed Solutions Office operation - where we test solutions, not interop (interop happens in a much, much bigger lab called eLab, where we do core interoperability testing and things like running our gear through the VMware HCL cert harness).
So what's in this lab?
- more than 450 ESX servers, each running the latest builds (and, of course, beta builds). We keep some physical servers around, but really just to re-baseline against new ESX server builds (i.e. re-baseline physical against virtual, and the previous virtual rev against the current one).
- more than 40 VirtualCenter hosts, plus Lab Manager, Lifecycle Manager, and Stage Manager.
- more than 40 arrays (both EMC and non-EMC), with more than a PB of storage.
- VDI testing isn't done at this site, but in Santa Clara (picked for its proximity to Palo Alto), where we have a rig designed to scale to many tens of thousands of clients, along with VDMv2 and all our integration work there.
- We have fully automated harnesses for Exchange 2007, SQL Server 2005, SharePoint Server 2007, Oracle 10g and 11g, SAP (and many, many others).
- We use a wide variety of load-generation mechanisms, from LoadGen to Quest Benchmark Factory to ORION (see the sketch after this list for the general flavor of how a run gets scripted).
- From a human standpoint, there is a team of around 60 people at EMC who work on these labs non-stop (literally 24x7 - we split work shifts around the clock).
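I won't go into harness internals here, but to give a flavor of what "fully automated" looks like in practice, here's a minimal sketch of a scripted validation run. Everything in it is hypothetical - the run_load command is a stand-in for whatever load generator is in play (LoadGen, Benchmark Factory, ORION, etc., each with its own invocation). The point is just the shape: sweep the scale points, run the workload, record pass/fail for the test report.

    # Minimal sketch of an automated validation run (hypothetical - the real
    # harnesses and load-generator invocations are tool-specific).
    import csv
    import subprocess
    import time

    # "run_load" is a made-up stand-in for the real load-generation tool.
    LOADGEN_CMD = ["run_load", "--profile", "exchange2007", "--duration", "3600"]
    USER_COUNTS = [100, 500, 1000, 5000, 10000]   # scale points to validate

    def run_scale_point(users: int) -> dict:
        """Run one load level and return a summary row for the test report."""
        start = time.time()
        try:
            result = subprocess.run(
                LOADGEN_CMD + ["--users", str(users)],
                capture_output=True, text=True,
            )
            passed = result.returncode == 0
        except FileNotFoundError:
            passed = False   # load generator isn't installed on this host
        return {
            "users": users,
            "elapsed_s": round(time.time() - start, 1),
            "passed": passed,
        }

    if __name__ == "__main__":
        with open("validation_report.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["users", "elapsed_s", "passed"])
            writer.writeheader()
            for users in USER_COUNTS:
                writer.writerow(run_scale_point(users))

A production harness would do far more than this (telemetry collection, failure injection, and so on), but the sweep-run-record loop is the basic pattern.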
It's hard to tell in the photo - but the floor is BOWED - this is on the 2nd floor of the building in RTP.
I can say that last year, the solutions validation testing around VMware was, roughly speaking, a $30M effort.
The purpose of these sites is to REALLY test: build docs that aren't marchitecture, but include failure characterization (i.e. total envelope testing), and cover all the functional use-case scenarios (i.e. how customers actually use our products).
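To make "total envelope testing" concrete: take a fixed config, ramp the load until it stops meeting its service-level target, and record where that edge is - so a config can be sized with known headroom instead of guesses. A toy sketch of that loop (the SLO check here is a simulated placeholder, not a real measurement):

    # Toy illustration of envelope testing: ramp load until the SLO check
    # fails, and report the last level that passed.  check_slo() is a
    # simulated stand-in for a real measurement against the rig.
    import random

    def check_slo(users: int) -> bool:
        """Pretend the response-time SLO is blown somewhere past ~8,000 users."""
        return users < 8000 + random.randint(-500, 500)

    def find_envelope(start: int = 500, step: int = 500, ceiling: int = 20000) -> int:
        """Increase load until the SLO check fails; return the last passing level."""
        last_good = 0
        for users in range(start, ceiling + 1, step):
            if check_slo(users):
                last_good = users
            else:
                break   # the edge of the envelope: this config fails here
        return last_good

    if __name__ == "__main__":
        print(f"Config envelope: ~{find_envelope()} users before the SLO fails")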
The key is that we do this at a wide variety of scales: hundreds of users to tens of thousands, client counts in all ranges, and all sorts of database workloads. Why? Every customer is different - we have solutions targeted at customers just getting started (heck, even consumers now), all the way up to the biggest of the big.
We don't publish all the docs publicly - a policy I personally disagree with. Why? Because competitors are geared to compete with EMC as the de facto "800 lb gorilla", and will rip apart the failure cases (news flash for anyone who does this for a living: everything has a failure limit - the name of the game is to know WHEN a given config will fail, so we can minimize customer pain) and build competitive material on them. Sounds reasonable? Not to me. I think we might as well be wide open with it - I'm not afraid of describing why a scenario failed under a given workload or a given config. I think we should publish it all. What do you think?
Here are some of the lighter (and some of these ain't so light!) examples of doc packages (we usually publish a "reference architecture" and an "applied technology guide"; the big daddy - the "validation test report" - isn't published, except internally to customers, partners, and EMC via Powerlink): http://www.vmware.com/partners/alliances/technology/emc-whitepapers.html
There are also extensive Partner Engineering Centers - e.g. there's one in Redmond across from the Microsoft campus, and one in Santa Clara for VMware. Santa Clara isn't just where we do VDI, but anything that's very intimate with VMware. That joint work resulted in this: 100,000 I/O Operations Per Second, One ESX Host
Long and short - MAN, we take VMware very, very seriously - and we're making the investments in both people (important) and gear (less important, but still important).