UPDATED: 8/20/2009 – 6:39pm EST: caption correction
If you haven’t seen it – this is totally worth checking out.
It’s been a MASSIVE effort on the VMware GETO team, with huge amounts of Cisco and EMC gear to support the Hands-on-Labs, the Best of vSphere sessions, and the Sunday vSphere hands on.
This has got to be the BIGGEST sandbox going on right now :-) It has been a huge labor of love by the GETO team, and by the on-site EMC team supporting them, as we work through all the stuff you expect (and don't expect) with something like this – my hat is OFF to you all. There is still tons to do, and a week left to make sure it shines (through all the unexpected bumps) – looking forward to celebrating together!
You can follow the whole thing here: http://www.winkflash.com/photo/public.aspx?u=vmware-geto
But – these are my favorite pictures… read on…
Below: This has GOT to be the largest number of UCS systems in any one place at this time. Cisco has pulled out all the stops.
Below: Imagine unpacking all this gear that's just constantly pouring into the lab…
Below: Cisco UCS 6120 fabric interconnects (Rodos – a fully populated V-Max interconnect looks VERY similar :-)
Below: Taste varies, but personally, I think the blue LED look of V-Max is cool :-) Each of those Storage Engines is linked by very low-latency serial 10Gbps connections that tie every Storage Engine in the scale-out design into a single unit, including a shared memory space. COOL stuff. These V-Max arrays are supporting the Sunday vSphere hands-on and the Best of vSphere sessions throughout the event.
Below: Tons (literally) of EMC CLARiiON CX4 and EMC Celerra arriving there – these support the majority of the hands-on labs, and similar gear supports everything you see at the VMware booth itself. 10GbE NFS and iSCSI everywhere.
Below: This is the most amazing thing of all – NetApp filers right beside EMC. With all the blog back n' forth (I always try to stay above the fray here on virtualgeek – and that's my commitment to you, the reader) – one would think that, like anti-matter and matter brought into close proximity, they would immediately annihilate each other in a fury of energy, leaving nothing behind :-)
Below: Here you have the bulk of the storage that will be supporting the event, connected into the supporting fabric.
Below: Core networking stuff – including Nexus 7K (2nd picture as core switching)
I must be dreaming...
Posted by: NiTRo | August 20, 2009 at 05:55 PM
Chad, you have the wrong photo for the V-Max engine, that is actually a photo of two Cisco UCS Fabric Interconnects, 6120's! The photo has 17:03 as the time stamp. I can see I am going to have to give you a UCS lesson when we catch up. [cheeky grin]
Posted by: Rodos | August 20, 2009 at 06:30 PM
Thanks Rodos - my bad, corrected the caption. As it happens - a fully populated V-Max matrix interconnect looks a LOT like the UCS 6120 (they are not the same, but look almost identical, except the color of the chassis - which I should have caught).
Fast post = sloppy.
Posted by: Chad Sakac | August 20, 2009 at 06:42 PM
It is pretty insane how much hardware there is--I was over at the hot stage facility with the VMware team a couple of weeks ago getting the "grand tour." Unbelievable. According to VMware, there's enough compute power in there to run--on the low end--over 30,000 virtual machines. On the high end, that could scale over 60,000 virtual machines. Awesome!
Posted by: Scott Lowe | August 20, 2009 at 07:07 PM
Chad you will have to give me a tour of a V-Max chassis. The ship heading to Australia is a little slow and my box of toys does not have its V-Max yet. I wonder if I can find one on Ebay.au.
Rodos
Posted by: Rodos | August 20, 2009 at 09:23 PM
vExperts should get a free tour!!!
Posted by: Dave Convery | August 21, 2009 at 09:04 AM
Chad
Nice post. I just love a plugfest.
For those wanting more detail on the NetApp gear (I specifically told it to play nice next to the EMC gear)...
There are 4 pairs of FAS3170 in the main datacenter, right next to the Clariion racks. That equipment is for the Performance lab. There is another set of 4 pairs of FAS3140 and FAS3170 for the User Self Paced Labs. The USPL gear is serving boot LUNs for 8 HP c7000 blade chassis along with all the NFS datastores for the VMs and associated bits. There are some other neat things happening in there, but I don't know if those details are public yet.
CYa there!
Peter
Posted by: Peter Learmonth | August 21, 2009 at 02:08 PM
Pretty cool! That is a LOT of hardware!
Posted by: Douglas Gourlay | August 21, 2009 at 07:09 PM
Welcome to Virtualization Heaven!! Do they need a hand? I've got two :D
Posted by: Miguel Miranda | August 21, 2009 at 07:44 PM
Definitely floats my boat!!!!!!
Posted by: George D | August 21, 2009 at 10:06 PM
"NetApp filers right beside EMC... ...one would think that like anti-matter and matter – being brought in close proximity, they would immediate annihilate each other in a fury of energy leaving nothing left :-)"
Very funny...
Actually, they must not have been close enough. You didn't see any sparks at all? Blue ones?
Posted by: Paul P | August 22, 2009 at 12:10 AM
Chad,
How does the cooling work in this room? I don't see the typical raised floor with perforated tiles. So is it perhaps row-based cooling? Or perhaps the room is pressurized with cool air and heat is exhausted via chimneys in the cabinets (although I don't see chimneys either). Just curious.
Cheers,
Brad
Posted by: Brad Hedlund | August 25, 2009 at 11:16 AM
For your VMax picture, your caption reads:
"Each of those storage engines, is very low latency serial 10Gbps connection that links each Storage Engine in the scale-out design into a single unit"
Are you sure you don't mean 10GBps (gigaBYTES)? :) From Barry Burke's blog:
http://thestorageanarchist.typepad.com/weblog/2009/04/1056-inside-the-virtual-matrix-architecture.html:
"For the curious, the first generation of the Symmetrix V-Max uses two active-active, non-blocking, serial RapidIO v1.3-compliant private networks as the inter-node Virtual Matrix Interconnect, which supports up to 2.5GB/s full-duplex data transfer per connection – each “director” has 2, and thus each “engine” has 4 connections in the first-gen V-Max."
Posted by: StgGuy | August 25, 2009 at 08:32 PM
I think he means 10Gbps. There are 2 connections: 10Gbps / 8 = 1.25GBps, and 1.25GBps * 2 = 2.5GBps.
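The arithmetic above is just a bits-to-bytes conversion; a minimal sketch of the same check (the `gbps_to_gBps` helper name is my own, for illustration):

```python
def gbps_to_gBps(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second (8 bits per byte)."""
    return gbps / 8

# One 10Gbps serial link carries 1.25 GBps...
per_link = gbps_to_gBps(10)

# ...so two such links give the 2.5 GBps per connection cited from Barry Burke's post.
total = per_link * 2

print(per_link, total)  # 1.25 2.5
```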
Posted by: Don | August 28, 2009 at 08:27 AM