
January 04, 2013


Comments


Dave Silvestri

Very cool... Quite impressive in my mind. I think you'll be able to do a ton with that gear.

Don't know where you find the time :)

Chappy

Chad, looks awesome! Let us know how it turns out!

A couple of things. I think you might mean the DX79SR.

I checked the vSphere compatibility list for you (you probably already did that though). The second LAN chipset on the board is supported (Intel 82574L) but the main chipset is not :( (Intel 82579L). It should still be a solid lab to play in. Don't let that NIC thing get ya :). Although there is a quad-port Intel for only $255 on Newegg. HEHE!

Also let me know if you want a REAL switch. ;)

MichaelRyom

Cool... got any performance stats from the ZFS array?

Craig

Well Chad, I am glad your overall experience with Nexenta was good. Just remember that Nexenta is hardware independent; they do maintain a hardware compatibility list, and I am sure that had your hardware been on the certified list you would not have suffered any problems. Unlike EMC and NetApp, which only deliver their product bundled with hardware. Oh wait, NetApp think they provide a software-only solution... sh1t, it only runs in a VM and supports 5TB, that can't be right. It seems the financial market thinks NetApp is a better option than EMC at the moment. I am sure they haven't heard of Nexenta; they will soon.

Just like NetApp, Nexenta has a Unix kernel at its core, a sophisticated filesystem (ZFS versus WAFL) and a management application (NexentaStor versus Data ONTAP). All the functionality and capability of NetApp, but at a dramatically reduced cost compared to the NetApp street price. Yes, the street price, which is already at least 50% off list. Not to mention the difference in price with EMC.

Hope it goes well with the home lab.

Signed, an ex-EMC and ex-NetApp employee, loving my new role.

RickWalsworth

Chad, I really enjoyed the post. You are the definition of the ultimate uber geek, and your lab makes the rest of us wannabe geeks green with envy. Have fun at leadership.

Calle

So have you found a solution for the issue with eight drives?

Nicholas York

Awesome post as usual. I am in the process of doing the home lab update myself.

I went with ZFS too (primarily as an excuse to get some hands-on time with it), and from reading up on it, if you want to use all that SSD as L2ARC (cache) you're going to need a ton more RAM: roughly 1-2GB of ARC is used just for pointers per 100GB of L2ARC. So even if you mirror your SSDs, you're going to need almost all of your RAM just to hold the pointers.
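Rough arithmetic behind that rule of thumb (a sketch only; the ~180-byte per-record header size and the 8K average record size are assumptions, and the real overhead varies by ZFS build and workload):

```python
# Back-of-the-envelope estimate of the ARC RAM consumed by L2ARC headers.
# The ~180-byte per-record header and the 8K average record size are
# assumptions for illustration; actual overhead depends on the ZFS build
# and the workload's record sizes.

def l2arc_header_ram_gb(l2arc_gb, avg_record_kb=8, header_bytes=180):
    """Rough GB of ARC consumed by headers for a given L2ARC size."""
    records = (l2arc_gb * 1024 * 1024) / avg_record_kb  # number of cached records
    return records * header_bytes / (1024 ** 3)

for size_gb in (100, 200, 400):
    print(f"{size_gb} GB of L2ARC -> ~{l2arc_header_ram_gb(size_gb):.1f} GB of ARC for headers")
```

With larger average record sizes the overhead drops proportionally, which is why the quoted figure is a range rather than a fixed number.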

I went the "all-in-one" route, just to say I'd done it .. Put a IBM M1015 in IT mode (HBA mode) in my ESX server, spec'd out everything for VT-d and passed the controller through to my OpenIndiana ZFS box. Then present the NFS storage through an internal vSwitch back to ESX for datastores. Then duplicated that in the pair ESX server and running zfs replication between them. Practical? Maybe, maybe not .. But it was fun.

Mike Sheehy

Nice lab Chad! Can you elaborate a bit more on the software config? I'm curious whether you're running nested ESXi, etc. I'm in the process of upgrading my lab as well. It's somewhat older hardware: a Supermicro dual-socket 1366 w/48GB RAM and a Supermicro single-socket 1156 w/32GB RAM. I want to add another dual-socket system, probably another Supermicro 1366 w/48GB RAM (bought used, cheaper), then relegate the single-socket 1156 system to NexentaStor since it has an onboard LSI SAS2008 controller. I'm thinking of going with 6 x 2TB WD Reds and 2 x SSDs for ZIL and L2ARC cache.

Can you also provide more detail on your storage setup? Are the SSDs just for cache, or are they a separate volume that you use with Storage DRS within vSphere?

Thanks!

Steve Ballmer

Nice post and as always great site.

Tom Halligan

I couldn't agree more with the first sentence in your post.

I came to VCE from HP, and though HP is being consumed by people that resemble your comment, I feel VCE is starting to have the same problems.

There are so many people needed to complete even the simplest of workflows, and so many people that need to be included on phone calls, that nothing actually gets done. There are WAY too many people who can't actually DO anything. Everyone has to go back to their "guy" or talk to the professional services guys to see when we can get someone to address whatever issue we are facing. It is very frustrating.

My goal for 2013 was to build a home lab and this article will be invaluable.

I find it odd when I hear people say "I want this training" or "I want a dev box", etc., but they aren't willing to pay for it out of their own pocket. It seems crazy to spend all that money on undergrad and grad school and then think that spending $1,000 on a training class and/or certification, or $200 on an SSD for their machine, is too much.

To be successful in this ever-changing space that we work in, we need to continue to invest time and money in ourselves.

Webinars and certifications are great but you need experience to be able to talk with customers and coworkers about what works and what doesn't, no exceptions.

Thank you again for the post. I look forward to your posts in 2013.

Regards
Tom Halligan
Sr. vArchitect
VCE

Paul Braren

Hands-down, one of my favorite lab build write-ups. Ever. Minimal cruft, maximal value, with rationale explained each and every step of the way.

I still remember the sound of the crowd back at VMworld 2012 when you outlined your home datacenter for the hundreds of us in the room. I know I was not alone in thinking about all those watts, all that noise and heat. So glad you're turning the corner on this.

Of course, such endeavors are often hard and time-consuming, but also the most rewarding. Those are the very same reasons I started blogging about my bargain-basement build back in 2011.

Thanks for the best article I've read all year!

Daniel Klemz

I did a similar style build, but mine was using AMD FX-8120 CPUs. The Iomega PX6-300d is a fantastic little box to split between 4 spindles and 2 SSDs. I use the spindles for my bulk storage, and the SSDs for my initial VM deployments before I move them over.

Great write-up! It's nice to not have to do too much garbage to get vSphere running on whitebox hardware. I just wish that some of the sensors would be visible, but I'll take the cost savings.

Joshuagwyther

Awesome home lab. I just refreshed mine as well; similar in many ways. Moving to SSD shared storage was a game changer for sure. Let me know if you need any of the rest of the stack from VMware. I left my wires messy; with the new alien server cases it looks cool that way.

https://twitter.com/joshuagwyther/status/245663119991529472/photo/1

Peter Marelas

Hi Chad,

I am an EMCer and you have inspired me to spend my hard-earned cash to upgrade my home lab. However, I am guessing my pay cheque is not as big as yours :) so I am thinking of making a couple of compromises.

First off, I only want to deploy one server instead of three. The rationale is energy: my bills are already far too high, so I don't want to add to them. So I am thinking of a dual-socket motherboard with Xeon E5-2600 series CPUs. The Intel S2600COE looks good and it supports 128 GB of non-ECC memory. It has 4x GbE, 14 SATA ports and VGA onboard.

As a second server, I plan on running ESX under VMware Workstation on my desktop (I already do this). This will only run when I want to test something that needs two servers. Similarly, I don't want another server for shared storage, so I was thinking of running the shared storage out of a VM on the physical box.

What are your thoughts? Is this going to limit the effectiveness of the lab and my ability to master VMware or is this good enough?

Regards
Peter Marelas
Architect
EMC BRS

Charles A. Windom Sr.

Thanks for the post Chad. You do all us nerds proud. Time to update my 44TB home-made storage array; I want to put either 3TB or 4TB SATA 6Gb/s drives in the unit. Also time to build new servers and power down the Dell PE-2950 servers.

Charles Windom

Mirko

Hi,
Congrats on your new lab. I wish I had that much money to build my own home lab :)
Looking at your choices, I can't help but ask about a few that seem questionable, IMHO:
Why use the 3930K and 3820 instead of the Xeon E5-1620 and E5-1650? Same price, ECC support, server-grade CPUs. I assume you're not really looking to overclock these boxes, right?
No ECC DDR3 on the storage server is strange, even more so after you have invested $3,000 in SSDs and HDDs. Consumer gaming RAM is not the greatest choice for keeping your whole lab running: it can go bad at any time and silently trash your VMs/iSCSI targets. RAM is so cheap, and ECC carries only a reasonable premium. ZFS uses RAM heavily as cache.

