The VMworld 2011 Hands-on Labs are always a tour de force, and one of the most popular activities at VMworld. AFAIK, there is no comparable thing at any other technology conference (we’ve started our own efforts based on the VMworld HoL at EMC World – and are getting there – we had 200 lab seats at EMC World 2011).
Also in my opinion – this is one of those “why go to a conference?” answers. There is NO substitute for the hands-on learning you can get in a place like this (and immediate answers when you need it).
It’s also a technology tour de force. It pushes VMware and partner technology underneath to the limit (sometimes even demanding things beyond the currently shipping products).
This year, the whole shebang ran on vSphere 5, with vCD 1.5 (not the GA release, but builds further out). Much of the underlying infrastructure ran on a pair of EMC VNX 7500 arrays running GA code – each loaded for bear with SSD, FAST Cache, FAST VP, and loads of 10GbE. The bulk of the load ran on NFS, and each VNX was configured with 3 file blades (and one standby blade).
The tale of the tape:
- 131.115 terabytes of NFS traffic
- 9.73728 billion NFS ops
- Average IO size of ~14 KB
- Internal average NFS read latency on the VNXes of 1.484 ms
- Internal average NFS write latency on the VNXes of 2.867 ms
Or more importantly – a total of 13,415 labs run, creating and destroying 148,138 virtual machines over 4 days.
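Just for fun, here’s a quick back-of-the-envelope check on those numbers – a minimal Python sketch, assuming decimal terabytes and the traffic spread evenly over the full 4 x 24-hour window (the labs obviously weren’t open around the clock, so the real sustained rates were higher; the 4-day averaging is my assumption, not the lab team’s):

# Rough sanity check on the HoL storage stats above
# Assumptions: decimal terabytes (10^12 bytes), 4 x 24-hour window
nfs_bytes = 131.115e12          # 131.115 TB of NFS traffic
nfs_ops = 9.73728e9             # 9.73728 billion NFS ops
window_s = 4 * 24 * 3600        # 4 days, treated as 24h/day

print(f"avg IO size:    {nfs_bytes / nfs_ops / 1024:.1f} KB")   # ~13 KB, in line with the ~14 KB above
print(f"avg NFS ops/s:  {nfs_ops / window_s:,.0f}")             # ~28,000 ops/sec
print(f"avg throughput: {nfs_bytes / window_s / 1e6:.0f} MB/s") # ~380 MB/s across the pair of VNXes

Even flattened over 4 full days, that works out to roughly 28,000 NFS ops/sec and close to 400 MB/s sustained across the two arrays.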
Here’s a shot of people enjoying the labs:
This year was also really cool in that the VMware labs were open to partner labs as well. These were scenarios that enabled partners to showcase how their technologies enhance VMware, and they ran on the same public cloud infrastructure. There were 337 people who took those labs (and created/destroyed 5,055 VMs). A huge thank you to VMware for adding these partner labs – happy to participate, and we will do so again!
On top of that, we ALSO had access to the EMC vLabs. These were run out of the EMC booth, and enabled customers to play with the pure platforms themselves in very open-ended labs. These are hosted out of the EMC Demo Cloud – which is how we supported EMC World (and which we will continue to open up to the world more and more).
- Total VMs: 1,718
- Total users: 402
- Total demos: 474
- VSI: 85
- UIM: 22
- Avamar: 55
- VNX: 89
- VNXe: 32
- Archer: 6
- Isilon: 49
- VPLEX: 46
- VAAI: 35
- Atmos: 20
But – back to the VMware Hands-on-Lab…
Here’s the “scene” as you walk into the lab area – the “lair”. It’s like a deep, dark, dungeon. Aka, just like home :-)
Here’s me seeing the vCenter Ops dashboard for the first time (awesome stuff – more detail below)
Here’s the scene in the “control center” – the inner sanctum where the VMware Integration Engineering team, led by Mornay and Dan, pulls it all together and works through the inevitable bumps of pulling something like this off. Notably, when I was there, it was near full seating (480 seats). Dan – we WILL accelerate past you, and we’ll be playing leapfrog :-)
If you want to understand more about that comment from Mornay about Clint in that 3rd video (and you REALLY should), read on past the break for more of the details, including:
- more on the VNX 7500 configurations used behind vSphere 5 and vCloud Director for much of the labs
- more on the Hyperic adapter and vCenter Ops integration that was created for the labs…
READ ON!
After VMworld 2010’s Hands-on Labs (supported by three NS-960s), which I discussed here, we sat down with the VMware Integration Engineering team. Their comment was that they had a hard time instrumenting the infrastructure to their satisfaction. This was the heat map (of the block infrastructure supporting the NAS) gathered over the 4 days back in 2010. You can see that the hottest part was the connectivity between the file blades and the block components (in red).
The VMware Integration Team (formerly known as GETO) was frustrated that they didn’t have more visibility, and they weren’t as familiar with the EMC tools as they were with their own.
Step 1) Replace the NS-960s with VNXes. The VNX has much more bandwidth between the block and NAS components, as well as more bandwidth overall. There have also been a ton of NFS latency optimizations since 2010 (which show up in the crazy low latencies observed under load).
Step 2) Let’s see if we can instrument it better using the tools the VMware folks would want to use – like Hyperic and vCenter Operations. Behind the scenes, Clint Kitson built this integration, which was used extensively at VMworld this year.
So – here’s Clint discussing the Hyperic plugin and vCenter Operations integration he created for the VMworld Hands-on-Labs.
And here’s a walkthrough of how vCenter Operations was used to diagnose a network latency issue that plagued the first day of the lab – the configuration of the physical infrastructure “pods” turned out to be the culprit. It was really cool to be able to use vCenter Operations to correlate issues across “infrastructure silos”.
This teardown picture tells a story unto itself – that’s a LOT of networking :-)
Again – my hat is off to the VMware Integration Engineering team – a great mission accomplished!
Great article!
I especially like the vC Ops demo. Clearly you are using the Enterprise version here. I wonder whether there is any chance for either you or someone from VMware who set up vC Ops for the labs to drill into the backend infrastructure. I imagine that a fair amount of server resources and storage was being used just to enable vC Ops to work in a deployment of this scale.
Posted by: AndrewCooke | September 07, 2011 at 09:14 AM
I heard my name there towards the end.
Posted by: Ron Davis | September 08, 2011 at 08:52 PM