This is something I consider mandatory if you're going to take VMware as seriously as I think everyone should :-)
There have been numerous good sites on this (here: http://www.vmweekly.com/articles/cheap-esx-server-hardware/1/; and here: http://www.techhead.co.uk/building-a-low-cost-cheap-vmware-esx-test-server), but I started differently than most - rather than starting with something pre-packaged, I like to buy the parts and roll up my sleeves.
So, here are the shots of my 2 labs:
First - this is the view of the entry to the server closet, which is literally a closet with a "raised floor" - not for cabling and airflow, but rather so that when the basement inevitably floods, the gear is spared. It gets hot, so I put in an industrial thermocouple that acts as a cut-off for the dedicated 20A circuit I pulled in - a shut-off just in case my sophisticated HVAC system fails.
This is the view of my "production servers" - an Intel Q6600 based cluster (8GB RAM each) and a AMD x2 3800+ based cluster (4GB each) - cookbook instructions below. I use the two bigger Dell PowerConnect 5324s for iSCSI and LAN/NAS traffic - the little Netgear switch is for VMotion. I could have used VLANs, but I like the physical topology, and had the switches kicking around (or just say "screw IP best practices", but what's the fun in that). The little host in beside the Intel cluster is my VC host. You can save some bucks and run VC as a VM of course (handy in some DR use cases!). I'm glad that VMware made that a supported use case around the VC 2.0.1 timeframe (can't remember the exact release)
This is the view of my "DR cluster" - an Intel E4300 based cluster (8GB RAM each). These are cheap as dirt, but kinda sucks that they don't have VT. I also just use a cross-cabled Cat5e cable for Vmotion. I use this for when I'm just playing around with Site Recovery Manager (how do I do that without arrays? READ ON!!!). The poster has one of my favorite quotes:
"Whatever can be done, will be done. If not by incumbents, it will be done by emerging players. If not in a regulated industry, it will be done in a new industry born without regulation. Tehcnological change and it's effects are inevitable" - Andy Grove
Slightly less inspirational (but only slightly) are the rolled-up construction plans for my elaborate MAME-based arcade that I'll get started on any day now.....
This is a quick shot of my sophisticated HVAC system. This rig generates a LOT of heat, particularly when I spin up the arrays. I bought a cheap 300CFM bathroom fan, then tore apart the drywall and vented it outside. The intake is baffled, so when I close the door, the whole thing is pretty quiet.
If this is making you eager to do the same thing - it's gotten ridiculously cheap, and ridiculously easy. I'm going to outline how to build two ESX servers - one where you're looking for the CHEAPEST thing you can build, and one that's a little more pricey, but a great "bang/buck" ESX lab.
ESX 3.5 makes doing this a LOT cheaper - SATA drives/controllers are now supported, and Nvidia NICs are supported in the ESX 3.5 bits. There is a trick, though - technically only the Nforce professional chipsets are supported, but they use the same MAC as the cheapo consumer stuff - you just need to make sure that the motherboard doesn't use a Realtek NIC (or buy one of the Intel NICs).
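Not sure which MAC a board you already own uses? A quick sanity check from any Linux live CD looks something like this (just a sketch - any distro's lspci will do):

    # list the onboard Ethernet controller - you want to see "nVidia"
    # here, not "Realtek"
    lspci | grep -i ethernet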
General things to make sure you do:
- Get a CPU that supports 64-bit guests - on Intel, this generally means a CPU whose model starts with "Q" rather than "E" (or just check the specs and look for VT support - there's a quick check sketched after this list). Any Athlon 64 or Opteron works. Can you go cheap? Yeah, but that often costs you 64-bit guest support on Intel. If you're going cheap, I personally go AMD.
- Get a motherboard that supports a minimum of 4GB of RAM - 8GB is nice (ESX servers are generally constrained by RAM).
- Get a decent (but still super-cheap) GigE switch - something that supports VLANs so you can create configs that work with fewer physical NICs (see the tagging example after this list) - it's crazy, but you can get an 8-port switch with full support for 802.1q, 802.1p, and everything else you could possibly need.
- Make sure the motherboard has onboard VGA - you don't need a good graphics card, but you need something for the initial config.
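To make the CPU and VLAN checks concrete, here's a rough sketch - the flags check runs from any Linux live CD, and the port-group line shows the kind of 802.1q tagging ESX lets you do once it's up (the VLAN ID and port group name are just examples):

    # vmx = Intel VT, svm = AMD-V; no output means no 64-bit guests
    egrep 'vmx|svm' /proc/cpuinfo

    # on ESX, tag a port group with a VLAN ID so one physical NIC can carry
    # several networks (VLAN 105 and the "iSCSI" port group are made up)
    esxcfg-vswitch vSwitch0 -p "iSCSI" -v 105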
UPDATE (Jan 5th, 2009) - one of my colleagues sent me a new "record cheap" dual core 8GB config, and I've done a post on that HERE - you might want to start with that, as technology moves pretty fast - heck, some of the older stuff below you can't even BUY anymore :-)
AMD ESX configuration (as cheap as it gets, but you have everything you need) = $337
This config leverages the fact that ESX 3.5 supports Nvidia NICs - and there will only be one NIC for VMotion, network, and IP storage. Name of the game = how cheap can you go
- Athlon x2 4000 retail - dual core (comes with the heatsink/fan) = $55 (http://www.newegg.com/Product/Product.aspx?Item=N82E16819103774)
- Generic ATX motherboard - based on the 430, 6100 or 6150 chipsets - just MAKE SURE it has the Nvidia NIC, not a Realtek NIC = $54 (http://www.newegg.com/Product/Product.aspx?Item=N82E16813157108) NOTE - THIS ONE HAS A REALTEK NIC, so you need to buy an additional Intel NIC. I use an old ASUS A8N-VM CSM Socket 939 motherboard, which has an Nvidia MAC and works great - but you need to find an older Athlon that fits that Socket 939 form factor.....
- cheap as dirt HDD = $49 for a 160GB drive (http://www.newegg.com/Product/Product.aspx?Item=N82E16822136075)
- cheap as dirt ATX case/PS = $23 (http://www.newegg.com/Product/Product.aspx?Item=N82E16811164073)
- big 2GB DDR2 memory sticks (you can start with 2, and add another 2 later) = $72 (2 x $36) http://www.newegg.com/Product/Product.aspx?Item=N82E16820141300
- cheap as dirt DVD/CD (to install the ISO) = $29 (http://www.newegg.com/Product/Product.aspx?Item=N82E16827106228)
- If you're not sure what MAC the NIC uses on the motherboard, or just want to be safe - add 1 Intel GbE NIC (these are a trick - you need specific ones for the Intel e1000 driver that comes with ESX 3.5 to work - hard to find, and DON'T buy the server MT versions - find the cheapo desktop GT PCI or PT PCIe versions - hundreds cheaper and they work fine) = $42 http://www.allstarshop.com/shop/product.asp?pid=16016&ad=pwatch
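Side note - once ESX is installed, one command will tell you whether the Intel card was picked up by the e1000 driver (a quick sketch):

    # list the physical NICs ESX recognizes, with driver, link state and speed
    esxcfg-nics -l

If the card doesn't show up in that list, ESX can't see it, no matter what the BIOS says.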
Intel ESX configuration (a super cheap quad core, 8GB, lotsa GbE powerhouse) = $695
This config leverages the fact that there are ridiculously cheap multi-core CPUs and RAM these days. The NICs on Intel motherboards are usually based on older Intel or Realtek chipsets (no driver support in VMware) - so you need to find some fancier (but still cheap) NICs. Name of the game here = how cheap can you build a powerhouse that can run 10 VMs at once without breaking a sweat?
- Lots of CPU - Intel Q6600 retail, quad core (comes with the heatsink/fan) = $270 (http://www.newegg.com/Product/Product.aspx?Item=N82E16819115017)
- Intel G33/P35 based motherboard (you want RAM slots and PCIe slots) = $54 (http://www.newegg.com/Product/Product.aspx?Item=N82E16813121099)
- Note - some people have struggled with the SATA controller on this board, others have had it work - one that has been proven to work consistently is the EVGA Nvidia 780i (http://www.newegg.com/Product/Product.aspx?Item=N82E16813188024&Tpk=EVGA%2b780i)
- cheap as dirt HDD = $49 for a 160GB drive (http://www.newegg.com/Product/Product.aspx?Item=N82E16822136075)
- cheap as dirt ATX case/PS = $23 (http://www.newegg.com/Product/Product.aspx?Item=N82E16811164073)
- Lots of RAM - big 2GB DDR2 memory sticks = $144 (4 x $36) http://www.newegg.com/Product/Product.aspx?Item=N82E16820141300
- cheap as dirt DVD/CD (to install the ISO) = $29 (http://www.newegg.com/Product/Product.aspx?Item=N82E16827106228)
- 3 Intel GbE NICs (these are a trick - you need specific ones for the Intel e1000 driver that comes with ESX 3.5 to work - hard to find, and DON'T buy the server MT versions - find the cheapo desktop GT PCI or PT PCIe versions - hundreds cheaper and work fine) = $126 http://www.allstarshop.com/shop/product.asp?pid=16016&ad=pwatch
OK - what now?
- You will need to buy two of whatever model you choose - for VMotion, VMware HA, DRS, Storage VMotion, etc. (so AMD total cost = $674, Intel total cost = $1390)
- You will need ESX Server and VirtualCenter - within EMC, we have a VMware/EMC ELA (remember - VMware operates independently of EMC as the parent!). You can, of course, download the software from http://www.vmware.com/ - the evals time out after 60 days.
- What the heck to use for shared storage? Well, I have a CX300i, an EqualLogic PS100E, a StoreVault S500, and an Openfiler box - but I'm a freak, and have a very supportive wife. There is a simpler option (one I actually use more than any of the others): use a Virtual Storage Appliance - these turn the DAS storage in the ESX server into iSCSI LUNs or NAS. EMC offers a free, unlimited, no-time-out Virtual Celerra, which just runs on ESX and is otherwise a fully functional Celerra - anyone who wants one, head over to this post...
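As an aside - once any of these VSAs is exporting NFS, pointing ESX at it is a one-liner. A rough sketch (the IP, export path, and datastore label are placeholders - substitute your own):

    # add the VSA's NFS export as a datastore labeled "vsa_nfs"
    esxcfg-nas -a -o 192.168.1.50 -s /mnt/nfs/datastore1 vsa_nfs
    # list NFS mounts to confirm
    esxcfg-nas -l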
WOW!!! I'm impressed and perplexed.... what do you use it for? You discussed all the "whats" and "hows" but not the "whys." Is it just for practice? Buy your supportive wife something special, "just for practice."
Posted by: Susan | June 07, 2008 at 08:26 PM
Thanks Susan - and I try to be as supportive of my wife as she is of me :-)
So what's the "Why" - answer - it's important (at least for me) to REALLY know what you're talking about. I manage large team in a large business, but fundamentally it's about expertise in the applied intersection of EMC and VMware technology. Having a home lab lets me do things that I wouldn't be able to otherwise.
For example - Storage VMotion right now is only officially supported in Fibre Channel configurations, but that didn't make sense to me (it is, after all, essentially: 1) ESX snapshot; 2) file copy; 3) reparent snapshot - a filesystem operation), so I wanted to see how it worked on iSCSI-based VMFS volumes and NFS datastores. I could ask someone to do it for me - but what's the fun in that! Using the home lab (which I can access from anywhere in the world), I can quickly see how it works.
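If you want to try the same experiment, the Remote CLI's svmotion tool is the way to drive it. A rough sketch (the VC server, credentials, datacenter, VM, and datastore names below are all made up):

    # interactive mode prompts for everything
    svmotion --interactive

    # or non-interactively - relocate the VM's home and disks to another datastore
    svmotion --url=https://vcenter/sdk --username=admin --password=secret \
             --datacenter=HomeLab --vm="[iscsi_ds] winxp/winxp.vmx:nfs_ds"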
Another key thing is that it enables me to be hands-on with EMC technologies and stay current - a hard task. Most of our software (and even a couple of arrays now - the Celerra, LifeLine, and arrays of the future) is available internally as VMs. I can just use VMware Converter to import them to the cluster, and play with just about everything EMC produces, from backup tools to next-generation management tools to arrays.
That's the why....
Oh - and one more thing - occasionally (it happens every few months), I start "jonesing" to build a new rig, and this is an outlet. I find something very satisfying and relaxing about building the stuff.
Posted by: Chad Sakac | June 08, 2008 at 09:54 AM
Love the setup. I have the initial components for a similar build-out, but on a smaller scale. Using two 4U Chenbro cases I have configured a similar setup, and I am working on getting a purpose-built Chenbro to throw disks at, then running FreeNAS in a VM to act as my home NAS/share.
Going with home desktop cases is cheaper, but I am not a fan for a long-term deployment. If I could convince the guys at Dell to send me an R900 like I run in my infrastructure, I would be a very happy kid. Without that kind of pull, I have to make my own.
Glad to see other folks are doing this at home - although your visitation stats must have gone off the chart when Twomey gave you a whole post. More than a little bit jealous.
Posted by: John Bergin | June 09, 2008 at 10:14 AM
Hi Chad,
Good Article!!
I am starting to build my ESX servers at home. I will download the virtual Celerra so I can play around with my ESX 3.5, but I'm just wondering - is it also worth spending the time to get Openfiler going as another virtual iSCSI flavor? Or should I make the effort to get an external array as well?
Thanks
Posted by: Tomas Mieres | June 14, 2008 at 04:58 PM
Chad,
Thanks for putting this together, and thanks for letting me finally pay proper tribute to your work on my blog at http://vmetc.com!
Posted by: Rich | June 15, 2008 at 02:52 PM
Rich, my pleasure - http://vmetc.com is on my watchlist, along with http://www.virtualization.info, Scott Lowe's always-useful blog at blog.scottlowe.org, and many others...
Posted by: Chad Sakac | June 15, 2008 at 07:01 PM
Hi Chad,
Great article! I am in the process of building a lab at home, so this article proved very timely for me. I was very interested in your mention of a Virtual Celerra - I would very much like to tinker with one of those. How do I go about getting my hands on one? Thanks!
Posted by: Drew | June 17, 2008 at 06:01 AM
Nice lab! I'm also interested in the Virtual Celerra.
Keep up the great postings.
Posted by: Arne Fokkema | June 17, 2008 at 06:56 AM
I'm interested in the virtual Celerra too. Where can I get the VM? Thanks.
Posted by: Ed Grigson | June 17, 2008 at 09:40 AM
Chad,
Excellent article. Glad to finally find the source of the shopping list. I'm very close to setting up my ESX 3.5 lab. Eventually I'll build two servers, so I'd like to go the AMD route because of cost. However, from reading around I get the feeling that, compared to the Intel system, there's a greater risk of configuration gotchas. Can you recommend a current, under-$100 motherboard from newegg.com that can still be purchased? The posted recommendation has been deactivated from Newegg. It needs to have its SATA, NICs, and video be ESX 3.5 compatible. Of the three, it seems that getting SATA to work off the motherboard is the trickiest. I need it to be capable of RAID0 across two or three disks; ESX and VMFS would need to be installed on the same set of disks. Also, the motherboard should be capable of going to at least 8GB. Thanks ahead for any suggestions.
Posted by: Jsmith | June 18, 2008 at 01:27 AM
Just moved into a new house - hope I can build a home lab like this, although the neighborhood will probably notice the power dip when I switch on 4 SANs at the same time :-) Looks amazing.
Posted by: Duncan | June 18, 2008 at 04:51 AM
I'd be interested in seeing the Virtual Celerra app. I've played with Xtravirt's SAN app and Openfiler, but it would be nice to see the EMC app, since we have been looking at the Celerra line as a replacement for our current SAN (an older Dell/EMC-branded FC box).
I set up a 2-box HP D530 "cluster" using the Xtravirt SAN VMs, and it's pretty impressive for what it's running on. The funny part was that ESX 3.5 installed on those boxes quicker and easier than Windows XP does!
Thanks!
Posted by: Ben | June 18, 2008 at 04:48 PM
Hi Chad,
Just discovered your blog and am enjoying it a lot. You mentioned the Celerra Virtual Appliance - is the offer for a copy still on? If so, can I please have one? :-)
Posted by: Ole André Schistad | June 19, 2008 at 01:06 PM
Hi!
Thanks for your post. One more interested party for the Virtual Celerra - can I also have one? :-)
Thank you very much.
Posted by: ab | June 20, 2008 at 07:16 AM
Hi There.
Fantastic article. I can't believe two things: that you had the time to build it all, and that you have an understanding enough family to let you get so deeply involved with a home rig.
Nothing by halves!!
Please count me in for a copy of the Virtual Celerra.
Thanks in advance
Paul Shannon
Posted by: Paul Shannon | June 20, 2008 at 07:54 PM
Great blog!
I am waiting on the housing market in the UK, and then I will be actively seeking my own datacentre (box room) :)
Count me in for a Celerra if possible.
Posted by: Daniel Eason | June 21, 2008 at 01:26 PM
Hi Chad,
Excellent work - this is exactly the inspiration/excuse I need to set up my ESX lab without the Mrs frowning at me :)
I've used Openfiler in the past, but I'm trying to keep the box count down, so the Virtual Celerra app sounds perfect - is it still available? If so, please could you fire a copy over?
Many thanks,
Stuart.
Posted by: Stuart Mycock | June 22, 2008 at 12:11 PM
Good setup.
I built a rig on the weekend:
Used a Gigabyte GA-P35-DS3R board - $139 from MSY.
Once you add a case, vid card, 4GB RAM, CPU etc., you'll be all up $499.
You could do it a bit cheaper, but that works.
Plus you need an Intel NIC, as the onboard one is not recognised (disable it in the BIOS).
The VI 3.5 build goes through OK, but you have to boot off a USB DVD/CD-ROM and not the onboard one - that's also not supported.
I used port 1 (orange connector) on the mobo for my SATA drive.
All boots up fine and works great.
For storage, on my core machine (not the VI3 box) I have VMware Workstation (thanks, VCP), where I downloaded FreeNAS and set up a couple of NFS shares. That worked just fine.
I then built a dedicated GbE-connected NAS box, again using FreeNAS, with a spare 80GB SATA drive shared out over NFS - the physical box is HEAPS better performance, but nothing like FC SAN. A CX3-40 is not in the budget, and my missus is not understanding.
Posted by: Grega | June 23, 2008 at 08:20 AM
Oh, FYI - the EMC Celerra appliance is available on Powerlink for all those people who have access...
Cheers
Posted by: Grega | June 23, 2008 at 08:29 AM
Thanks to all those posting comments. The Celerra sim is available now - see this post here:
http://virtualgeek.typepad.com/virtual_geek/2008/06/get-yer-celerra.html
Also - for those looking for a new motherboard (the original ones were a bit dated): I'm going to do a new shopping list eventually, but some stuff is already updated.
For Intel, I've gone from their G33-based motherboard to a newer P35 - no special reason, just that you can't find ATX form-factor G33 boards anymore. Here's the one I used:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813121314R
This is on sale right now for $65 USD. Nice: 8GB support, newest CPU support, stable, fast. Confirmed the SATA works out of the box with ESX 3.5.0.64067 and later. The onboard NIC doesn't work, so you need to get the Intel NIC I posted in the shopping list. The other bummer is that it doesn't have onboard video, but any cheap video card will work fine.
UPDATE: Chad here... The P35 Intel motherboard worked GREAT. No video is a bummer, and it has no PS/2 keyboard/mouse ports, which sucks in the sense that my KVM doesn't work - but that's irrelevant anyway :-)
Re: AMD motherboards, I can't say from personal experience (my old ASUS A8N-VM CSM continues to plug away - and you've GOT to love that they have "VM" in the product name :-) ). I would bet you $80 (the price of the board) that this board would work like a charm: http://www.newegg.com/Product/Product.aspx?Item=N82E16813138117
It has onboard video, so you're good there. Note that it has a Realtek MAC/PHY, so the onboard LAN won't work - you need the Intel card I spec'ed earlier. What's crazy is that it supports 16GB of RAM!!!! It would be madness to do that at this point (it would cost more than $1000 just for the RAM) - but you've GOT to know that RAM prices are going to drop.
It's amazing how cheap the AMD configs are.
UPDATE: Chad here... We haven't been able to get this BIOSTAR TForce TF720 motherboard's SATA controller recognized, so for now, take it off the list and use the older 680 motherboards we've gotten to work. I'm going to keep poking at it, but for now - stick with the older boards in the original config.
Posted by: Chad Sakac | June 24, 2008 at 11:23 AM
Chad,
After reading this article http://it.anandtech.com/IT/showdoc.aspx?i=3263 I figured it was best to go with a newer CPU - it seems every year the virtualization extensions get more efficient. So I opted for your Intel quad-core recommendation + EVGA nVidia 780i. After downloading ESX 3.5 Update 1 (build 82663) and installing it, on reboot the system reports "Mounting root failed". I've tried it with and without RAID, twiddled numerous BIOS settings, and upgraded the BIOS, but all to no avail. You mentioned that people have been successful with this motherboard and SATA drives. Would it be possible to post their configuration settings and setup tips? I would like to avoid buying any more hardware (i.e. SCSI/SATA controllers) or booting off a memory stick. Any help would be much appreciated!
Posted by: jsmith | June 27, 2008 at 12:58 AM
Jsmith - sometimes SATA controllers need a little extra love when you find yourself far, far from the HCL :-)
From an EMC'er using a config much like yours:
"1 ESX on an EVGA 780i, dual-booting Vista (which runs on a RAID5), running on a SATA drive native, 8G ram, q6600 quad core. This board has 2 nvidia nics.
1 ESX on a XFX 680i LT board, 8G ram, q6600 quad core, running on a SATA drive native. This board has one nvidia nic, and I added a PCIE Intel Gig nic.
I ran into the "mounting root failed" issue after install. If this happens, follow these instructions (stolen from http://www.vm-help.com/esx/esx3.5/SATA_mounting_root_failed.html):
1) Set up a single SATA drive with no RAID.
2) Install ESX.
3) On the first reboot, select the Service Console only (troubleshooting mode) boot option.
4) Let the host boot up, then log in.
5) Edit the file /etc/vmware/pciid/sata_nv.xml - change the last entry to use the device ID of your SATA controller (mine was 037f) and update the device name appropriately (mine was MCP55 SATA Controller; see note below).
6) After saving the file, run esxcfg-pciid.
7) Reboot, and voila!
To find your device ID, use lspci. You'll have to use your best judgment as to which one it is, but both of my motherboards (780i and 680i) were the same. Once you figure that out, visit this site to determine the device name:
http://pci-ids.ucw.cz/iii/?i=10de"
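Condensed into console commands, the whole fix looks roughly like this (the 037f device ID is from the 780i/680i boards above - substitute your own):

    # boot "Service Console only (troubleshooting mode)", log in, then:
    lspci -n | grep 10de               # 10de = nVidia; find your SATA controller's ID
    vi /etc/vmware/pciid/sata_nv.xml   # point the last entry at that ID and name
    esxcfg-pciid                       # regenerate the PCI ID tables ESX uses
    reboot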
Posted by: Chad Sakac | July 07, 2008 at 08:42 AM
Chad,
Any updates on the new shopping list that you mentioned in your 06.24.2008 post? I'd also like to know if you have received any feedback on the replacement AMD motherboard (http://www.newegg.com/Product/Product.aspx?Item=N82E16813138117) that you referenced. Any problems with this motherboard's SATA controller?
I look forward to your reply. Thanks for all of your contributions to the virtualization community. Best regards.
UPDATE: Chad here... We haven't been able to get this BIOSTAR TForce TF720 motherboard's SATA controller recognized, so for now, take it off the list and use the older 680 motherboards we've gotten to work. I'm going to keep poking at it.
Posted by: RichardB | July 21, 2008 at 12:07 PM
Richard, sorry it's taken a while to respond - no one has told me whether it has worked, but here's a proposal:
1) You buy it.
2) I will help you if you run into trouble.
3) If we can't get it to work, I PayPal you the $$ and you ship me the motherboard :-)
I'm THAT confident it will work :-)
Posted by: Chad Sakac | August 08, 2008 at 09:23 PM
Hey Chad,
If your proposal to Richard is universal, I will likely take you up on the offer this week. I sold my old ESX lab to a coworker who is preparing for his VCP exam, so I need to build 2 more servers. I like the motherboard you spec'd, as it has enough slots for me to put 4 NICs in for more flexibility. I would be putting 8GB of RAM in per server to start off.
Thanks for being a great resource. Are you going to be at the EMC booth during VMworld 2008?
Brad
Posted by: BLKJAK | August 18, 2008 at 02:32 PM
One question - is there a big performance difference between AMD and Intel dual/quad CPUs for a test environment? I plan on testing Exchange 2007 and System Center on my new ESX lab. Should I lean toward AMD or Intel?
Thanks,
Brad
Posted by: BLKJAK | August 18, 2008 at 02:39 PM
Hi Chad,
I agree completely with your comment:
"I find something very satisfying and relaxing about building the stuff."
Putting together a home lab where you can fiddle around with technology is a must, even if your job is more business focused.
The ESX lab in my basement should be nominated for the "cheapest lab" award, but it works all the same.
1) ESX Server
Dell PowerEdge 400SC - single 2.7GHz P4 (about 5 years old)
1.5GB RAM
2) Open-E SAN
Dell Inspiron 8200 laptop (about 6 years old). It only has 2x40GB drives and can't boot from USB, so I had to turn one drive into the Open-E boot drive.
BTW - built in a couple of days with stuff lying around on the floor.
Are you willing to sell your MAME arcade plans? ;)
Posted by: Pete | August 20, 2008 at 08:04 AM
Chad,
I purchased components to make two ESX rigs using the Biostar TF720 you suggested, and I have run into a few snags. The first issue is that the ESX install hangs at "running /sbin/loader". I am able to get around this by installing using the "esx noapic" command. From there, it doesn't see my SATA drive - I am currently trying to get around that one. I am trying to download the latest ESX build, but VMware is doing maintenance today and I can't get anywhere.
Brad
Posted by: BLKJAK | August 23, 2008 at 01:20 PM
Update:
System board used: BIOSTAR TF720 A2+, 8GB of DDR2 800 dual-channel RAM, AMD Phenom 9600 quad-core CPU, 3 Intel GbE NICs (1 PCIe and 2 PCI)
I have gotten the system to boot up and install the latest build of ESX to an IDE hard drive - no SATA so far. I had to flash the BIOS and then disable APIC in the BIOS. I also had to use the NOAPIC switch to load ESX, otherwise the install hangs when the Intel NICs load e1000.o. I am getting red messages on the console saying "AMD Family 10h stepping B2 is not supported" and "Using PIC, make sure that if 'noapic' is used, it is on purpose".
I don't think either is hindering performance much, and both may be fixed by future builds of ESX that support them. That may resolve the SATA issues as well, unless Chad can help me out.
Brad
Posted by: BLKJAK | August 24, 2008 at 10:30 PM
BLKJAK - I bet you just need to follow the "extra SATA lovin'" steps in the earlier comments:
"From an EMC'er using a config much like yours:
"1 ESX on an EVGA 780i, dual-booting Vista (which runs on a RAID5), running on a SATA drive native, 8G ram, q6600 quad core. This board has 2 nvidia nics.
1 ESX on a XFX 680i LT board, 8G ram, q6600 quad core, running on a SATA drive native. This board has one nvidia nic, and I added a PCIE Intel Gig nic.
I ran into the "mounting root failed" issue after install. If this happens, follow these instructions (stolen from http://www.vm-help.com/esx/esx3.5/SATA_mounting_root_failed.html):
1) Set up a single SATA drive with no RAID.
2) Install ESX.
3) On the first reboot, select the Service Console only (troubleshooting mode) boot option.
4) Let the host boot up, then log in.
5) Edit the file /etc/vmware/pciid/sata_nv.xml - change the last entry to use the device ID of your SATA controller (mine was 037f) and update the device name appropriately (mine was MCP55 SATA Controller; see note below).
6) After saving the file, run esxcfg-pciid.
7) Reboot, and voila!
To find your device ID, use lspci. You'll have to use your best judgment as to which one it is, but both of my motherboards (780i and 680i) were the same. Once you figure that out, visit this site to determine the device name: http://pci-ids.ucw.cz/iii/?i=10de
Let me know if that works. You shouldn't need a newer ESX build.
Posted by: Chad Sakac | August 24, 2008 at 11:20 PM
Thanks for the tips, Chad. Here is the issue... maybe it is my misunderstanding. If I attach an 80GB SATA drive to the controller and boot to the ESX CD-ROM for setup, ESX doesn't see a hard disk to install to. I can install to an IDE drive, but then there isn't a partition for VMFS. After an installation to IDE, the storage tab in the VI Client doesn't show the SATA controller or drive. I think this is a different scenario from what you suggest - or am I wrong?
Brad
Posted by: BLKJAK | August 25, 2008 at 12:31 PM
Chad,
From the ESX console I issued the lspci command. It does not list a SATA controller anywhere in the output. Thoughts?
Brad
Posted by: BLKJAK | August 25, 2008 at 07:02 PM
Brad - you are safe, my friend; my commitments, when stated publicly, are commitments - I'll cover your cost if we can't make this work :-)
OK, that said - can you try the MCP55 settings (037e and 037f)? Not sure if Nvidia updated the SATA controller from the 680 to the 720, but I doubt it.
I will be at VMworld - if we don't have this working by then, bring it if you can! Meet me in the booth or in my session - it's a joint VMware/Cisco/EMC session (KN EMC).
Posted by: Chad Sakac | August 25, 2008 at 08:33 PM
One new piece of information... I changed the SATA controller to use AHCI in the BIOS. Now I see 0ad4 showing up, which is an MCP78S GeForce 8200 AHCI controller - which is SATA, I believe. I am going to follow the instructions above to see what happens. :)
Posted by: BLKJAK | August 25, 2008 at 09:21 PM
Brad, you're almost there!
Posted by: Chad Sakac | August 25, 2008 at 09:26 PM
Chad, what is this?
"Class 0106: nVidia Corporation: Unknown device 0ad4 (rev a2)"
Brad
Posted by: BLKJAK | August 26, 2008 at 12:05 AM
Here's another discovery: if I set the BIOS to RAID, lspci shows this:
"00:09.0 RAID bus controller: nVidia Corporation: Unknown device 0ad8 (rev a2)"
0ad8 isn't in the list.
Posted by: BLKJAK | August 26, 2008 at 01:06 AM
More of the ongoing saga of the BIOSTAR TF720...
I was able to successfully install Kubuntu on box #1 on SATA. I opened a terminal session, ran "lspci", and printed out the results. The printout showed two interesting lines:
00:06.0 IDE interface: nVidia Corporation Unknown device 0759 (rev a1)
00:09.0 IDE interface: nVidia Corporation Unknown device 0ad0 (rev a2)
I then disabled the onboard IDE controller, rebooted, and ran lspci again. This time I only see:
00:09.0 IDE interface: nVidia Corporation Unknown device 0ad0 (rev a2)
I am assuming this is my SATA controller. I edited sata_nv.xml, changed the last entry to be MCP55 with the device ID 0ad0, ran esxcfg-pciid, and rebooted. ESX saw the controller, but no targets were seen. I then changed the name to MCP78S, but there was no love.
My concern is that ESX doesn't see the SATA controller on install... or it is not seeing my drive - I am not sure which. I am happy that Kubuntu sees the SATA components fine, which leads me to believe that the ESX kernel isn't new enough to see my hardware.
Posted by: BLKJAK | August 26, 2008 at 06:37 PM
I should have noted in the above post that I built my second ESX whitebox and ran the tests with ESX installed to an IDE hard drive. Thanks.
Posted by: BLKJAK | August 26, 2008 at 08:45 PM
Brad - are you going to be at VMworld? I'll take that board off your hands at your cost - I'm a man of my word! Then I'll plug away at it - I remain convinced there is a way....
Also, I do have future ESX builds....
Posted by: Chad Sakac | August 27, 2008 at 09:00 PM
I will indeed be at VMworld - I'm coming in on Sunday and heading to a Vizioncore welcome event :) I appreciate your offer, but these are two screaming machines for what I paid, and I am happy with them. I can run IDE for now.
I could test those ESX builds for you :)
Let me know when you will be at the EMC booth and I will stop by with my thumb drive!
Brad
Posted by: BLKJAK | August 27, 2008 at 09:21 PM
Very nice rig - I have a similar setup. And you can't beat Wonder Boy, Rygar or Bubble Bobble on MAME ;)
Just to note, www.etherboot.org has ROMs which can be flashed onto NICs (works a treat for the Intel Pro/1000s, $30-$50 on eBay) and enables boot-from-SAN (like an HBA). EMBoot has something similar, but it costs $100-plus, I believe. I set up an OpenSolaris x86 box with loads of 500GB SATA drives to behave like a SAN. ZFS and RAID-Z are interesting to play with, especially when trying to snapshot VMDKs (and raw OS images too). Have you any PXE setups? Would appreciate some feedback when you have time.
Thank you,
Keith
Posted by: Keith | August 27, 2008 at 11:27 PM
Hey Chad,
Fellow EMC (RSA) geek here with my new home config.
. Intel BOXDQ35JOE LGA 775 Intel Q35 Micro ATX motherboard
(bought because it supports vPro so I can do remote power on/off if needed)
. Intel Q6600 Quad-core Kentsfield 2.4GHz CPU
. ZEROtherm ZEN FZ120 120mm CPU Cooler
. 4GB Corsair memory
. 4 Seagate Barracuda 500GB 7200.11 32MB cache SATA drives
. 1 LSI Logic SATA hardware RAID card.
. 1 Corsair 520W power supply
. Cheap case (Gonna change that in a few months)
So far, so great. I converted my old Linux server to a VM, moved my VPN and ESVA spam filter VMs over, and installed a FreeNAS VM.
The system boots ESXi off a USB stick. VirtualCenter runs in a VM on a laptop running VMware Server 1.x.
I've retired two systems and am down to the laptop and the ESXi server. I need to put a Kill-A-Watt on the line to see how it's doing. :)
mike
RSA/EMC employee/technical marketing guy
Resident VMware geek at RSA
Collector of hardware
I used to debug/QA Scott Davis' code when we worked in the VMS group at DEC.
Posted by: Mike Foley | September 15, 2008 at 02:15 PM
It would be much simpler to use VMware Workstation and Openfiler to set up a VMware ESX infrastructure. There are loads of tutorials online, like Xtravirt's.
VMKing.
Posted by: VMKing | September 16, 2008 at 05:39 PM
Was there ever any successful resolution to using the Biostar TF720, or is this simply a victim of ESXi not having the proper SATA drivers for the MCP78S controller?
Sorry for dredging up an old post, but Google isn't very forthcoming with ESXi Biostar TF720 information!
Posted by: jhoefker | December 10, 2008 at 05:50 PM
Unbelievable - other sites never went into detail on http://pci-ids.ucw.cz/, which was crucial; they fiddled with sata_nv.xml without explaining how to add the correct label for the SATA controller (OK, I'm new to ESX and I can put 2 and 2 together, but still, give a bit more detail, right guys? :). You made my day, Virtual Geek, whoever you are. Everyone, follow this guy's advice to the T - I did and it worked for me. Just so I'm not a hypocrite, here's what it looks like at the end of the file (I'll let you tab it yourself). Mine is an ECS GeForce6100PM-M2, so my PCI ID from that link is:
http://pci-ids.ucw.cz/v2.2/pci.ids
03f6 MCP61 SATA Controller
1849 03f6 939NF6G-VSTA Board
Good luck.
sata_nv.xml should look like this at the end (the comment form stripped my angle brackets - see my follow-up below for the full entry):
Posted by: mr mel | February 12, 2009 at 03:07 AM
Oops, wrote too much, and the formatting cut off my angle brackets as well. Reconstructed, the end of sata_nv.xml should look like this:

<device id="03f6">
  <vmware label="scsi">
    <driver>sata_nv</driver>
  </vmware>
  <name>MCP61 SATA Controller</name>
</device>
</vendor>
</pcitable>
Posted by: mr mel | February 12, 2009 at 03:10 AM
You'd posted an updated "record cheap" dual-core 8GB config on Jan 5, '09, but that link isn't working anymore. I'd love to see the hardware list if you have it.
Thanks! Jim
Posted by: Jim Tomiser | April 15, 2009 at 01:57 AM
Chad, what success have you had with these "servers" running vSphere 4? Also, if you want to remotely rebuild the ESX hosts, what are you using in lieu of an ILO/RAC board - an IP KVM?
Posted by: Mike Laverick | April 16, 2009 at 06:24 AM
Hi Mike: today I successfully installed vSphere on my newest whitebox and confirmed it working. No issues at all on the following HW:
CPU: Intel Core 2 Quad Q6600 Kentsfield 2.4GHz LGA 775
Board: ASUS P5BV-M LGA 775 Intel 3200 Micro ATX Intel
Drive: Seagate ST3640323AS 640GB 7200 RPM SATA
Memory: CORSAIR 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2
CD-ROM: ASUS 22X DVD Burner Black SATA Model DRW-22B1S
The install went w/o a glitch.
Posted by: Paul Wegiel | May 05, 2009 at 09:43 PM