Those of you that know me know that my deepest, darkest fear is becoming a talking head, a bureaucrat, or that most evil of corporate types – the politician.
In my work life as I move into more and more senior roles at EMC – some of that stuff comes with the territory, but god help me if it comes to define me. I find that maintaining the necessary pace for work (and a work life balance of a sort) requires a “sprint/rest… sprint/rest” pattern. This was coaching I got once from a great mentor and coach here at EMC – Frank Hauck.
For me – during “rest” periods, I get back to a zen-like state by playing with technology, learning new stuff, and getting my hands dirty building stuff. That and a little more exercise :-)
Over the Christmas holidays, after the kiddies opened their presents, I opened one of mine – or to be more precise – a present to myself :-) A little binge at NewEgg to refresh the home lab. After the break – I’ve documented the parts and process. For those of you interested in building your own home labs, you may get something out of this.
I’m fortunate to have multiple labs at my disposal.
- Like my fellow EMCers and EMC partner brothers and sisters (you can access it at http://portal.demoemc.com) – I use vLab – a massive on-demand lab environment hosted by EMC on Vblocks in Durham. We’re investing more there, and figuring out ways to enable some to have “authorship”. In my experience, this is best for learning, and demonstrating things.
- I have a lab at an EMC office where I have stuff that is too big to have at home in a sustainable way. Generally, I find myself using this relatively little, and it falls out of date. I’m thinking that I should really retire it, and continue to invest in EMC kit at EMC partners. In general BTW – that’s what I tell people when I get requests for “field kit” from EMCers.
- My home lab. At its peak – this was 12 ESX hosts with 8GB each, FC and 10GbE infrastructure and a whackload of arrays (EMC and non-EMC). The power consumption was through the roof, and I needed to get custom circuits to handle the amperage. I find that I tend to use the home lab more for learning and relaxation – because it’s right there those rare moments I’m at home, and I play after the kids are in bed, and it’s ok to have a martini in the “lab” :-)
The problem with the home lab was it was just way too much space/power/cooling. All of the datacenter kit generates a TON of noise and heat.
So – as I approach refreshing it – I’m aiming for a simpler config based on these guidelines
- 2 ESX hosts – with as much memory as I can pack in at a reasonable cost.
- 2 homebrew storage arrays with a focus on IOps/$ and low power draw. I want to play with ZFS, but also want something that can just be a tank as needed (and a backup target since I’m mucking around all the time – I’ll use an Iomega PX array).
- a single GigE Ethernet switch. Don’t really care about redundancy for this. It’s a home lab.
- a jump box host. Could be a VM, sure – but I just find this handy for all sorts of stuff.
- … all aiming for low noise, low heat, and being able to hang off a normal 10A home circuit without blowing breakers all the time (quick sanity math on that below).
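On that 10A point – here’s a minimal back-of-envelope sketch in Python. Every wattage figure in it is a guess on my part (an assumption, not a measurement); the point is just that the whole stack should sit comfortably under one normal circuit.

```python
# Rough power-budget sanity check for the new lab.
# Every draw figure below is an assumption/guess, not a measurement.
CIRCUIT_AMPS = 10
VOLTS = 120  # normal North American household circuit

estimated_draw_watts = {
    "esx-host-1": 200,
    "esx-host-2": 200,
    "storage-cots": 250,    # the spinning disks + SSDs push this up a bit
    "netgear-jgs524": 20,   # fanless GbE switch
    "jump-box-laptop": 45,
}

total = sum(estimated_draw_watts.values())
budget = CIRCUIT_AMPS * VOLTS

print(f"Estimated total draw: {total} W")
print(f"10A circuit budget:   {budget} W")
print(f"Headroom:             {budget - total} W ({100 * total / budget:.0f}% of budget used)")
```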
Now – I’m not aiming for the “lowest cost possible config” – I think that it’s certainly possible to shave a lot off all this, and there are lots of great posts out there on how others do it. I want it to be a rock, be fast, and be able to run some of the heavier EMC Virtual Appliances, as well as the full vCloud and vCenter Management suite, and also give me a sandbox to play and learn more about Hadoop.
Having gone through 3 generations of vSphere home labs – I’ve found that it’s always the disk controllers and the network interfaces that have caused me pain. Intel MBs tend to work best as they use their own MAC/PHY chips; almost everyone else uses Realtek MAC/PHY chips. Ever since vSphere 4.0 there has been pretty darn good support for Intel SATA controllers, so that’s usually a wash across MB manufacturers.
Here are my parts lists. I picked it all up from NewEgg.ca. I do NO advertising on this blog as a matter of principle, so this isn’t an ad for them – I just find them to have good stock, good delivery, good prices, and good replacement policies.
- ESX Hosts (remember, aiming for as much core/mem as I can get in a relatively low-cost system):
- Motherboard: Intel Extreme DX79SR – picked for the large number of onboard SATA and GbE ports + Intel MBs are solid.
- CPU: Intel Core i7 (Sandy Bridge generation) 3930K – picked for Sandy Bridge generation (Ivy Bridge remains more expensive, particularly at large core/cache counts, and early testing is showing some, but not a ton of performance improvements), but also for a 6 core config (want as much CPU for as little power as possible), and 12MB of L3 cache.
- 8 x 8GB: Corsair Vengeance DDR3 1866 – not the fastest, but no slouch.
- 1 x NIC: Intel EXPI9402PT 10/ 100/ 1000Mbps PCI-Express PRO/1000 PT Dual Port Server Adapter – picked because I know it works out of the box with the vSphere builds – and is also a rock. Quad ports are super expensive, but dual ported adapters have only a small premium over server-class single-ported adapters.
- Video Card: EVGA Nvidia GeForce 210 1GB – picked because it was less than $25 :-)
- Power supply: Antec 550W Basiq PS – picked because these systems are pretty low on power draw, and the Antec PS is solid.
- CPU FAN: Intel RTS2011AC – picked because it was low cost, and my experience with Intel heatsinks and fans has been top notch. I have to say, there are a lot of options (and when building gaming systems, sometimes I go off the beaten track, use Arctic Silver and crazy cooling configs), but when you just want a simple, works-great, quiet cooling option – man, Intel does it right, and does it cheap.
- Storage COTS system
- Intel MB: Intel Extreme DX79SR – picked for the large number of onboard SATA and GbE ports + Intel MBs are solid.
- CPU: Intel Core i7 (Sandy Bridge generation) 3820 – picked for Sandy Bridge generation, and more than capable with quad cores and 10MB of L3 cache. This is, after all, a storage system, not an ESX host. Good price too!
- 2 x 4GB: G.SKILL Sniper Series 8GB (2 x 4GB) 240-pin DDR3 1866 – picked because it was a deal – I’m less fussy about memory in the storage subsystem than in the ESX host (mostly because there’s simply less of it).
- 2 x NIC: Intel EXPI9402PT 10/ 100/ 1000Mbps PCI-Express PRO/1000 PT Dual Port Server Adapter – same logic as the ESX host NIC pick, but just got an extra to have more front-end connectivity for obvious reasons.
- Video Card: EVGA Nvidia GeForce 210 1GB – picked because it was less than $25 :-)
- 4 x Intel SSD: Intel 520 Series Cherryville SSDSC2CW480A3K5 2.5" 480GB SATA III MLC Internal Solid State Drive (SSD). Picked for huge IOps, reliability (both of the drive and the overall system), but most importantly, lowering the heat/power of the storage system.
- 4 x Seagate 2TB: Seagate Barracuda STBD2000101 2TB 7200 RPM SATA 3.5" Internal Hard Drive. Picked for capacity storage at some reasonable performance.
- Power supply: COOLER MASTER Silent Pro RS850 – picked because the storage COTS system has a higher power draw (the drives) than the ESX hosts. A very solid (and heavy – always a good sign with a power supply) power supply.
- CPU FAN: Intel RTS2011AC – picked for same rationale as the coolers on the ESX hosts.
- “Storage Tank”
- I have an Iomega/Lenovo PX6-300d – which is amazingly potent – and I use it in the home lab mostly as a backup target. It has 2 256GB SSDs, and 4 1TB SATA drives. I have two of these Iomega NAS systems at home – the other is an IX4 (now pretty ancient), and this PX6. The PX6 is in a different class. The Lifeline software is way more feature rich, the Atom-based CPU is WAY more powerful than the Marvell SoC in the IX4, and it shows. Handy, small – and hey, as a bonus, not only does it back up my home lab, but I use RSYNC to back up one Iomega device to the other (a rough sketch of that kind of job is right after this parts list) – which is important – the IX4 has my most precious photos and videos of the kiddies (which I also back up to the cloud).
- Netgear JGS524 24 Port Gigabit Ethernet Switch – picked because of a GREAT price ($211 – which is a great per-port price for a solid GbE switch), but also because it’s fanless.
- Jump box host:
- Acer Aspire E1-571 15.6" LED Notebook - Intel Core i3 i3-2370M 2.40 GHz, 6GB RAM, 500GB HDD. This was an unexpected pick for me. I’ve always used a micro system for my jumpboxes (the system I have as a gateway into the home lab – hardened, remotely accessible). In the past, laptops were just too expensive. The reverse is now true. You can get a more than capable laptop for CRAZY low cost (this one was $400). The upside is that it’s got a built-in monitor when you need to walk over to the physical system for some reason or another. This system is running Windows Server 2012, including Hyper-V. I had an older 128GB SSD kicking around, and threw it in there – yeah, for IOps, but more for reliability. If the jump box fails while I’m on the road – while I do back it up to the PX6 – I can’t access the lab until I get back home.
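Here’s the shape of the IX4-to-PX6 rsync job I mentioned above – a minimal sketch only. The mount points are made up, and it assumes the two Iomega shares are already mounted (and rsync installed) on whatever box runs it; the real thing is set up through the Iomega interface, this is just to show the idea.

```python
import subprocess

# Hypothetical mount points for the two Iomega shares - adjust to your own setup.
SOURCE = "/mnt/ix4/photos/"        # trailing slash = copy the contents of the dir
DEST = "/mnt/px6/backups/photos/"

# -a archive mode (recursive, preserves permissions/timestamps)
# --delete keeps the destination an exact mirror of the source
cmd = ["rsync", "-av", "--delete", SOURCE, DEST]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print("rsync failed:", result.stderr)
```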
Here are pictures of me unpacking all the kit…
… ah, Christmas morning (well, really Boxing Day :-) for Chad…
… here’s the Intel DX79SR motherboard I used for both the storage array and the ESX hosts. I have to say, having used a lot of Intel MBs over the years – this one is outfitted with a lot of bling. Not a bad thing – top notch parts and construction.
… here are the CPUs I used. On the left, the ESX CPU – i7 3930K (there was another one for the 2nd ESX host of course, just not shown in the picture), and on the right, the CPU used for the storage COTS system – an i7 3820. Both are LGA 2011. I have to say, the new socket form factor is solid. It’s interesting how the little things that get engineered in each progressive generation make a difference. My last home-brews were LGA 775 systems, and the new LGA 2011 package has a ton more pins – but the locking mechanism is a lot more elegant (and solid).
… here’s the memory used for the vSphere ESX hosts. I’m a fan of getting multi-DIMM kits – makes sure the memory timings all match. My experiences across memory manufacturers haven’t really varied too much. Frankly, I pick based on the deal. That said, Corsair and Kingston have the best “brand” at least in my own mind, having built a ton of systems. It’s INSANE how much memory you can get for relatively little these days. If you told me a few years ago that I would be getting 64GB of RAM for less than $400, I would have thought you were crazy. Will be interesting to see what the future holds – with things like Phase-Change Memory (BTW – yes, we have our first engineering prototypes of PCM in the EMC Flash engineering labs… will be a few years out, but cool!)
… and here’s the memory for the storage COTS system. It’s just 2 x 4GB, but hey, that’s as much as you have in a VNX 5300 or a FAS 3240 (though of course, this is non-ECC memory we’re talking about vs. the server-class memory in those types of systems). Should be more than enough to run a fast OpenFiler, Nexenta, or heck, just a Linux NFS/iSCSI server.
… I tend not to scrimp on the NICs. When you think of the billions of frames they forward – and the dependency on solid drivers (particularly in vSphere, where what you get is what you get if you’re not safely on the HCL) – well, a few extra bucks can save you a LOT of headaches. Also, I find often that I’m best served with more NIC interfaces. I hoped I could use the 2 onboard NICs on the Intel MBs (more on that later), so I figured that adding a dual ported Intel NIC for each ESX host would do the trick, and I would throw 2 dual ported NICs in the storage COTS systems.
… For the persistent storage in the COTS storage system – I picked up 4 of the Intel Cherryville MLC 480GB SSDs, and 4 of the 2TB Seagate 7200 RPM Barracuda drives. I’m always amazed at the progress of technology, but the flash stuff (both in my work, and at home) is really amazing. The SSDs weren’t cheap ($500 a pop), but can cook through ~20,000 write IOps and ~50,000 read IOps (with workers driving QDs of 32 – with QDs of 1, it’s more like 5,000-6,000 of each). That’s insane value. And when you look at what I do in this lab, it’s all about the IOps for the datastores underneath the vSphere hosts. The total of 8TB raw of SATA capacity is there for archive (more on this later – I only got 2 of them to work right).
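To put those numbers in rough perspective – a quick bit of arithmetic (assuming a 4KB I/O size, which is my assumption, not something from the spec sheet):

```python
# Translate the rough IOps figures into MB/s at an assumed 4KB I/O size.
IO_SIZE_KB = 4  # assumption - small-block random I/O

for label, iops in [("write @ QD32", 20_000),
                    ("read  @ QD32", 50_000),
                    ("QD1 (either)", 5_500)]:
    mb_per_s = iops * IO_SIZE_KB / 1024
    print(f"{label}: {iops:>6} IOps ~ {mb_per_s:.0f} MB/s")
```

Worth noting that even at a 4KB I/O size, the QD32 read number is more than a single GbE link can move (~110-120MB/s in practice) – another reason the storage box gets two dual-ported NICs on the front end.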
… Here are the cases (storage COTS system on the left, ESX host on the right). I just reuse cases when it’s not for a gaming system or a media server (where I care more). For these home-lab systems, I’m not picky. I think these are Antec (have had them for a while) – I don’t go SUPER cheap (I find myself cutting my hands on those super-cheap cases). I suppose I could have gotten a better case for the storage system – more drive accessibility and mount points would have been nice. No biggie.
Experiences over the last few days during build and configuration:
- Always remember to run a burn-in test on your hosts right after you have them booting. I tend to fancy MemTest, which I let run through the full test (plus a couple hours). Flaky memory is so hard to track down when your lab starts acting weird, and consumer DIMMs have caused me problems in the past. All the systems flew through the tests flawlessly.
- REMEMBER: Don’t go super cheap on your power-supplies. This will bite you in the rear. I’m fine with cheapo cases – but bad power-supplies will cause flakiness, and increase odds (this happens all the time) you’ll get blown caps on the MB over time.
- For the ESX hosts – it’s amazing how much you can get for so little. Sandy Bridge is a bit cheaper than Ivy Bridge, and the performance delta is light (to negative in some cases) for these compute workloads. Also, crazy how little 64GB of fast RAM can go for. Yes, there is faster RAM out there – but this is a price/performance thing. I’m not building a gaming machine here…
- I will get around to using vSphere auto-deploy at some point, but am just used to the USB-boot ESX install. It works, it’s simple for this purpose.
- vSphere 5.1a installed without a hitch on the hosts.
- Gotcha #1: I noticed during the vSphere install – only 3 NICs were recognized. Bummer – turns out that the second NIC on the Intel MB doesn’t seem to use the same MAC/PHY chip, and doesn’t work. GRR. Look how much attention I spent on NICs, and still… Well, good thing I bought the Intel Dual Port NICs in addition.
- For my first run at the Storage COTS system (I expect I’ll try multiple different variants over time), I picked Nexenta for the ZFS server. Why Nexenta? Well, I know a couple of people that have joined them recently. I’ve heard generally good things – and ZFS has some solid goodness. I wanted to give the use of SSDs as a separate intent log (the ZIL/SLOG) and as a read cache (L2ARC) a whirl, which is in the latest ZFS builds. That said, I’ve had good experiences in the past with OpenFiler too. I just wanted to give Nexenta a personal whirl. (There’s a sketch of the pool layout I’m aiming for right after this list.)
- Gotcha #2: The Nexenta install was not turn key. It would always hang while it was loading the boot kernel off the install image. Issue in the end was that I was installing off a USB external CD-ROM (a little google searching pointed to others with the same problems). I generally don’t put CD/DVD-ROMs in the hosts – after initial install, I never use them. I needed to pull one of the SATA drives (I was using all 8 SATA ports on the Intel MB) and boot and install directly off a SATA-attached CD-ROM.
- Gotcha #3: During the Nexenta install – it would never recognize all 8 drives in the system. Turns out that the Intel MB (note the earlier gotcha on the 2nd onboard NIC) is using a 3rd party SATA controller (Marvell) for SATA ports 7 and 8. Argh. I decided to just run off 2 of the 4 2TB SATA drives and maximize the SSDs.
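For the curious, here’s the pool layout I’m aiming for, as mentioned in the Nexenta bullet above – a conceptual sketch only. The device names are invented, the exact split of the 4 SSDs between log/cache/plain datastore duty is still something I’m playing with, and on Nexenta you’d normally do this through the web UI rather than hand-rolling zpool commands.

```python
import subprocess

# Hypothetical device names - these WILL differ on any real Nexenta/ZFS box.
SATA_MIRROR = ["c1t2d0", "c1t3d0"]   # the 2 x 2TB Seagates the Intel SATA ports can see
SSD_LOG = ["c1t4d0"]                 # one SSD as a separate intent log (SLOG)
SSD_CACHE = ["c1t5d0", "c1t6d0"]     # SSDs as read cache (L2ARC)

# zpool create <pool> mirror <disks> log <ssd> cache <ssds>
cmd = (["zpool", "create", "labpool", "mirror"] + SATA_MIRROR
       + ["log"] + SSD_LOG
       + ["cache"] + SSD_CACHE)

print("Would run:", " ".join(cmd))
# Only uncomment on a real box, once the device names are confirmed:
# subprocess.run(cmd, check=True)
```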
All in all – one of the easiest home lab builds I’ve ever had the experience of making. Highlights how progress continues – both in terms of price/performance, but also in terms of the quality of hardware and software. Here’s a pic of the whole kit and kaboodle running in the basement.
BTW – that coiled thing above the laptop is a thermocouple, and controls the power to that outlet underneath the laptop. If it gets too hot in there, it cuts the juice.
The previous home lab was huge, sprawled all over the place – and it burned a ton of power and generated a ton of heat. I can now tuck more cloud power into a tiny closet (making my wife and kids happier).
Still on my to-do list:
- Finish configuration and tidy the networking cables.
- Install the rest of the vCloud Suite (right now I only have vCenter and …)
- Play more with Hyper-V and SCVMM
- Do some system benchmarking (both of the ESX hosts and the Storage COTS system).
Will update this post as I do more…
Hope you enjoyed this post as much as I enjoyed building my 2013 lab! I’m off to the EMC leadership meeting and kickoff next week, nice to be able to remotely access the lab while I’m there – and feel like I’ve regained my “zen” nerd state :-)
Comments always welcome!