Home Lab for 2014

The first project from my 2014 list that I chose to tackle is the design and build of a home lab system. I've been using my company-issued Retina MacBook Pro with VirtualBox, but its 8GB RAM limit is slowing down progress significantly. Carefully planning resource allocation and shutting down all but the essential VMs wastes precious free time, and when resource contention becomes an issue, Mac OS X slows to a crawl, making research and hacking take even longer. I really need a dedicated external host or hosts.

For the past week I've been researching as many home lab designs as I can find. I even evaluated using Amazon Web Services or Rackspace to host my lab, but nested hypervisor limitations and lack of low-level visibility quickly eliminated public cloud hosting from contention. The plan is to load ESXi and host VMs that will accelerate my Linux, OpenStack, SDN, configuration management, and other learning goals. The two options I kept debating were either multiple small, energy-efficient nodes with average CPU and memory, or a single host that can support 64GB+ of RAM.

The small-footprint options I evaluated included Intel NUCs and several models of Shuttle XPCs, including the SH67H3 and XH61V. The shortcomings that kept me leaning towards a single high-performance host were the maximum RAM capacity of the NUCs/XH61V and SH67H3 (16GB and 32GB, respectively), the single NIC on the NUCs, spotty VT-x/VT-d/vPro support, and the Realtek NICs on the Shuttle XPCs. I also evaluated several LGA 1150 socket options like the SuperMicro X10SLH-F, but they are all limited to 32GB RAM. For those interested in learning more about these kits, here are just a few of the many posts I reviewed:

The more I researched these and similar models the more convinced I became that a single host with VMware HCL supported hardware that can be upgraded to 64GB RAM and beyond was the best route. By creating nested ESXi instances I can simulate Hadoop clusters, an OpenStack cloud, and other multi-node environments. The system I ordered was based on the posts below by @ErikBussink and @FrankDenneman:
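As a quick sketch of how the nesting works: on ESXi 5.1 and later, a guest VM can itself run ESXi if hardware-assisted virtualization is exposed to it. The exact entries below are an assumption based on that version and an Intel CPU; they would go in the nested ESXi VM's .vmx file (or be set via "Expose hardware assisted virtualization to the guest OS" in the Web Client):

```
# Fragment of a nested-ESXi VM's .vmx file (assumes ESXi 5.1+, Intel VT-x)
guestOS = "vmkernel5"    # identify the guest as ESXi 5.x
vhv.enable = "TRUE"      # pass hardware virtualization through to the guest
```

Each nested ESXi instance can then host its own (lightweight) VMs, which is what makes simulated multi-node Hadoop or OpenStack environments possible on one physical box.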

Bill of Materials

Component        Product Info               Ordered From
Motherboard      SuperMicro X9SRH-7TF       SuperBiiz
Processor        Intel Xeon E5-1650 v2      SuperBiiz
CPU Cooler       Noctua NH-U9DX i4          Newegg
Memory           Kingston 16GB Reg ECC      SuperBiiz
NIC              Intel I350-T2              Newegg
RAID Drive Bay   RAIDON iR2420-2S-S2        Amazon
Boot SSD         Kingston SS200 30GB        B&H
VM SSD           Crucial 240GB M500         B&H
Storage HDD      Seagate 2TB ST2000VN000    B&H
Case             Fractal Design Define R4   Overstock
Power Supply     Corsair RM550              Newegg

The rationale for the particular motherboard, processor, case, and power supply selected is detailed in the posts by Erik and Frank linked above.

I got the idea of using a pair of inexpensive 30GB SSDs as a boot drive in the RAIDON enclosure from the Napp-in-One installation manual. Since I only have one host I prefer this method to the common practice of booting from a USB flash drive. I realize that this enclosure is a single point of failure, but it works for this non-production lab environment.

The Intel Ethernet Server Adapter I350-T2 NIC will be used for vSphere management traffic, configured on a Standard vSwitch.
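A minimal sketch of that configuration from the ESXi shell might look like the following. The uplink name (vmnic2) and IP addressing are assumptions for illustration; the actual names depend on how ESXi enumerates the I350-T2's ports:

```
# Create a Standard vSwitch and attach one I350-T2 port as its uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2

# Add a management port group and a VMkernel interface on it
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Mgmt
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Mgmt
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.1.10 --netmask=255.255.255.0 --type=static
```

The same steps can of course be done through the vSphere Client; the CLI version is just easier to reproduce later.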

I fully intend to add more SSDs, but I haven't decided which storage solution to use. I'll be testing Napp-it, NexentaStor, and potentially a couple more options. Until I've had a chance to complete the evaluation I'll use the 240GB SSD as a VMFS datastore and the 2TB hard drive for bulk storage of ISOs, VMDKs, etc.
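For reference, turning the M500 into a VMFS datastore from the ESXi shell is roughly the following (a sketch: the device identifier is a placeholder, and the disk needs a partition created with partedUtil first):

```
# Find the M500's device identifier (an naa.* name)
esxcli storage core device list | grep -B1 M500

# Format the first partition as VMFS-5 with a volume label
vmkfstools -C vmfs5 -S SSD-240 /vmfs/devices/disks/<naa-id>:1
```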

I realize this is quite an investment for a home lab. If you are evaluating less expensive or multi-node options take a look at these home lab roundups:

For my purposes this design will work well, and as resource demands increase I can add more RAM. In future posts I'll cover the build process.