This is something I consider mandatory if you're going to take VMware as seriously as I think everyone should :-)
There have been numerous good sites on this (here: http://www.vmweekly.com/articles/cheap-esx-server-hardware/1/; and here: http://www.techhead.co.uk/building-a-low-cost-cheap-vmware-esx-test-server), but I started differently than most - rather than starting with something pre-packaged, I like to buy the parts and roll up my sleeves.
So, here are the shots of my 2 labs:
First - this is the view of the entry to the server closet, which is literally a closet with a "raised floor" - not for cabling and airflow, but rather so that when the basement inevitably floods, the gear is spared. It gets hot in there, so I put in an industrial thermocouple that cuts off the dedicated 20A circuit I pulled in - just in case my sophisticated HVAC system fails.

This is the view of my "production servers" - an Intel Q6600 based cluster (8GB RAM each) and a AMD x2 3800+ based cluster (4GB each) - cookbook instructions below. I use the two bigger Dell PowerConnect 5324s for iSCSI and LAN/NAS traffic - the little Netgear switch is for VMotion. I could have used VLANs, but I like the physical topology, and had the switches kicking around (or just say "screw IP best practices", but what's the fun in that). The little host in beside the Intel cluster is my VC host. You can save some bucks and run VC as a VM of course (handy in some DR use cases!). I'm glad that VMware made that a supported use case around the VC 2.0.1 timeframe (can't remember the exact release)

This is the view of my "DR cluster" - an Intel E4300 based cluster (8GB RAM each). These are cheap as dirt, but kinda sucks that they don't have VT. I also just use a cross-cabled Cat5e cable for Vmotion. I use this for when I'm just playing around with Site Recovery Manager (how do I do that without arrays? READ ON!!!). The poster has one of my favorite quotes:
"Whatever can be done, will be done. If not by incumbents, it will be done by emerging players. If not in a regulated industry, it will be done in a new industry born without regulation. Tehcnological change and it's effects are inevitable" - Andy Grove
Slightly less inspirational (but only slightly) are the rolled-up construction plans for my elaborate MAME-based arcade that I'll get started on any day now...
This is a quick shot of my sophisticated HVAC system. This rig generates a LOT of heat, particularly when I spin up the arrays. I bought a cheap 300CFM bathroom fan, tore apart the drywall, and vented it outside. The intake is baffled, so when I close the door, the whole thing is pretty quiet.
If this is making you eager to do the same thing - it's gotten ridiculously cheap, and ridiculously easy. I'm going to outline how to build two ESX servers - one where you're looking for the CHEAPEST thing you can build, and one that's a little more pricey, but a great "bang for the buck" ESX lab.
ESX 3.5 makes doing this a LOT cheaper - SATA drives/controllers are now supported, and so are Nvidia NICs, but there is a trick: technically only the Nforce Professional chipsets are supported, but they use the same MAC as the cheapo consumer parts - you just need to make sure the motherboard doesn't use a Realtek NIC (or buy one of the Intel NICs).
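If you want to sanity-check a box you already have kicking around, here's a rough little Python sketch (assuming you boot it from a Linux live CD with lspci and Python 3 available - neither of which is part of ESX itself) that lists the Ethernet controllers and flags the Realtek ones. The vendor-string matching is deliberately crude, so treat it as a starting point, not gospel:

```python
# Rough NIC chipset check, run from a Linux live CD on the candidate box
# before committing to an ESX 3.5 install. Assumes lspci is on the PATH
# and Python 3.7+; the vendor-string matching is illustrative only.
import subprocess

def list_ethernet_controllers():
    """Return the lspci output lines that describe Ethernet controllers."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if "Ethernet controller" in line]

for nic in list_ethernet_controllers():
    if "Realtek" in nic:
        verdict = "avoid - no ESX 3.5 driver"
    elif "nVidia" in nic or "NVIDIA" in nic or "Intel" in nic:
        verdict = "good candidate for ESX 3.5"
    else:
        verdict = "check the VMware HCL"
    print(nic)
    print("  -> " + verdict)
```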
General things to make sure you do
- Get a CPU that supports 64-bit guests - on Intel this generally means a CPU whose model number starts with "Q" rather than "E" (or just check the specs for VT support - see the quick check script after this list). Any Athlon 64 or Opteron works. Can you go cheap? Yeah, but on Intel that often costs you 64-bit guest support. If I'm going cheap, I personally go AMD.
- Get a motherboard that supports a minimum of 4 GB of RAM - 8GB is nice (all ESX servers are generally constrained by RAM)
- Get a decent (but still super-cheap) GigE switch - something that supports VLANs so you can create configs that work with fewer physical NICs - it's crazy, but you can get an 8-port switch with full support for 802.1Q, 802.1p, and everything else you could possibly need.
- Make sure you have a motherboard that has onboard VGA - you don't need a good graphics card, but you need something for initial config.
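Here's the quick check script mentioned above - a minimal Python sketch (again assuming a Linux live CD with Python 3 on the candidate box) that looks for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo. It won't tell you whether VT has been switched off in the BIOS, so poke around in there too:

```python
# Minimal hardware-virtualization check: look for the vmx (Intel VT) or
# svm (AMD-V) flag in /proc/cpuinfo. Run from a Linux live CD; Python 3.
def hw_virt_flag(cpuinfo_path="/proc/cpuinfo"):
    """Return a description of the hardware virtualization flag, or None."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT (vmx)"
                if "svm" in flags:
                    return "AMD-V (svm)"
    return None

result = hw_virt_flag()
print(result or "no vmx/svm flag found in /proc/cpuinfo - check the CPU specs")
```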
UPDATE (Jan 5th, 2009) - one of my colleagues sent me a new "record cheap" dual core 8GB config, and I've done a post on that HERE - you might want to start with that, as technology moves pretty fast - heck, some of the older stuff below you can't even BUY anymore :-)
AMD ESX configuration (as cheap as it gets, but you have everything you need) = $337
This config leverages the fact that ESX 3.5 supports Nvidia NICs - and there will only be one NIC for VMotion, network, and IP storage. Name of the game = how cheap can you go?
Intel ESX configuration (a super cheap quad core, 8GB, lotsa GbE powerhouse) = $695
This config leverages the fact that there are ridiculously cheap multi-core CPUs and RAM these days. The NICs on Intel motherboards are usually based on older Intel or Realtek chipsets (no driver support in VMware) - so you need to find some fancier (but still cheap) NICs. Name of the game here = how cheap can you build a powerhouse that can run 10 VMs at once without breaking a sweat?
OK - what now?
- You will need to buy two of whatever model you get - for VMotion, VMware HA, DRS, Storage VMotion, etc. (so AMD total cost = $674, Intel total cost = $1390)
- You will need ESX Server and VirtualCenter - within EMC, we have a VMware/EMC ELA (remember - VMware operates independently of EMC as the parent!). You can, of course, download the software from http://www.vmware.com/ - they have 60-day eval timeouts.
- What the heck to use for shared storage? Well, I have a CX300i, an EqualLogic PS100E, a StoreVault S500, and an Openfiler box - but I'm a freak, and have a very supportive wife. There is another simple option (that I actually use more than any of the others): use a Virtual Storage Appliance - these turn the DAS storage in the ESX server into iSCSI LUNs or NAS. EMC offers a free, unlimited, no-time-out Virtual Celerra, which just runs on ESX and is otherwise a fully functional Celerra - anyone who wants one, head over to this post...