Virtualization @Home

By Blue2k on Monday 04 February 2013 04:46 - Comments (11)
Category: -, Views: 4,000

Being exposed to cloud-based web application development and server virtualization on a daily basis at work got me thinking about virtualization at home. Even though my spell-check has never heard of virtualization, it's becoming the standard approach to providing IT services within businesses. The idea that you can utilize idling hardware more efficiently by provisioning it with multiple services (or virtual servers) is so straightforward that it's surprising it didn't catch on earlier.

I've been running some network services on an old Dell XPS M1710 ever since its graphics card died. While it's more than capable of running Gentoo with Apache and some other services, running a virtual stack on it seemed like a bridge too far for the 5- or 6-year-old machine. Instead, I decided it was time to upgrade a number of things in our home network, including the 5-port gigabit switch and the old WRT54G router. The ultimate goal was a home network that would support a more serious server setup with a full set of virtual servers.

For the network I ended up with this configuration:
  • Netgear WNDR N7500 Router
  • TRENDnet 24-Port Gigabit GREENnet Switch
  • TRENDnet 5-Port Gigabit GREENnet Switch
The 24-port switch is rack-mountable and fits nicely into the cabinet I bought to house the server:
  • Tripp Lite SRW9U 9U Wall Mount Rack Enclosure Cabinet
  • Cyberpower CPS-1215RMS Rackmount PDU Power/Surge Strip

Being Dutch and all meant I wanted everything at a reasonable price. I spent a long time making sure the final price tag stayed very reasonable (which had me splitting the order between Newegg and Amazon). Because the cabinet is not a full-depth rack (and only has front posts), I needed a half-depth server case that would not be too heavy. Just in case, though, I applied the age-old trick of screwing an upside-down shelf inside the rack to support the 1U server. The server contains the following hardware:
  • ASUS RS100-E7/PI2 1U Server Barebone LGA 1155 Intel C204 DDR3 1600/1333/1066
  • Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz
  • Kingston 16GB (4 x 4GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333
  • 2x Seagate Barracuda 7200.14 ST3000DM001 3TB 7200 RPM 64MB Cache SATA 6.0Gb/s

With 16GB of memory and 8 virtual cores available, it is capable of hosting 10 virtual servers without issue. The case only has a couple of low-noise fans and is therefore very quiet (it actually makes less noise than my desktop machine). Even though the case is half depth, it can house two 3.5-inch hard drives and the included DVD-ROM drive. The current setup leaves plenty of room for another server and a 1U or 2U NAS in the future.

I'll discuss my choice of hypervisor software in a separate post, but suffice it to say that I finally landed on VMware ESXi. The above setup currently hosts 5 virtual servers: a Windows 2008 domain controller, an Ubuntu Server Samba file server, an Ubuntu Server instance providing VPN and SVN services, an Ubuntu Server instance for torrent downloads and a Solaris instance for experimentation. Memory usage sits at 5GB of the 16GB available, and CPU usage generally hovers around 100 MHz of the 13 GHz available. In the end the CPU appears to be somewhat overkill for home usage... :)

This whole setup is currently used to back up and store data, stream media to wireless devices and TVs around the house, and provide DNS, SVN, VPN and Apache services. In the future I intend to segment the network into multiple VLANs so that certain services and access can be quarantined.
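As a sketch of what that segmentation might look like on one of the Ubuntu guests, a tagged VLAN interface could be configured with something like the following /etc/network/interfaces fragment (the interface name, VLAN ID and addressing are made up for illustration, and the `vlan` package would need to be installed):

```
# /etc/network/interfaces fragment (hypothetical values)
# Puts this guest's service traffic on tagged VLAN 20
auto eth0.20
iface eth0.20 inet static
    address 192.168.20.2
    netmask 255.255.255.0
    vlan-raw-device eth0
```

The switch port facing the ESXi host would then need to trunk the tagged VLANs, with the hypervisor's port groups mapped to the matching VLAN IDs.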

So, was all the effort worth it? Not only is the new functionality used on a daily basis, but the fun I had building the server, configuring the instances and experimenting with different hypervisors was more than worth it. Even though it might be overkill for most people, for anyone interested in learning more about virtualization this is the most fun you can have on a reasonable budget. In total I spent about $1600 for the server, cabinet and networking equipment (including patch cables).

Here are some photos from the completed cabinet:



By Tweakers user yeehaw, Monday 04 February 2013 10:51

Interesting case; would it be possible to install SSDs in it? For instance with a 2.5" to 3.5" converter?

By Tweakers user X-DraGoN, Monday 04 February 2013 13:09

As I read your text, it seems that only 100 MHz of the total 13 GHz remains unused; it's fair to assume you mean it exactly the other way around.

I like your overall approach. I would have done the same thing, but alas, I ended up reusing the same old P4 1.7 GHz to host my little Linux server. Maybe someday I'll put everything into a rack and clean it up a bit.

By Tweakers user Blue2k, Monday 04 February 2013 18:35

SSDs would definitely fit, but I'm not sure how useful that would be, as it would severely limit the available storage. With 2x 3TB drives in RAID mirroring mode you effectively have 3TB. If you don't run them in RAID you have up to 6TB of space. With an SSD taking up a slot, you can't do RAID mirroring anymore to protect your data. This is an assumption, so correct me if I'm wrong, but the hypervisor preferably resides in RAM, so running it from an SSD would not increase overall operating speed. It would probably shorten the boot time, but I very rarely reboot the hypervisor anyway.

It's worth pointing out that I had a mighty difficult time finding a half-depth server chassis, and finding a NAS that is half depth and reasonably priced appears to be even more difficult. While a full-size server rack is just out of the question for me (my wife would probably divorce me), it does severely limit your options. Even with this short server chassis I had to move the two posts in the cabinet forward so the server wouldn't stick out the back.

I've clarified the CPU usage a bit by rephrasing that sentence, thanks.

[Comment edited on Monday 04 February 2013 18:38]

By Tweakers user yeehaw, Monday 04 February 2013 22:36

The reason I would go for SSDs is not the storage capacity, but the virtualization benefits.
With 2x 3TB of spinning storage you will hit I/O limits relatively fast. This is not an issue with SSDs, and I don't need a lot of storage; only the operating systems and software will be running on them.

If you have 10 virtual servers on a RAID1 array, it will feel very slow when doing a lot of things simultaneously.

The hypervisor is indeed fully loaded into RAM (at least with ESXi), so you could theoretically run it from USB without any issues.

Thanks for the information :)

By Wouter, Tuesday 05 February 2013 11:24

You can run ESXi from USB indeed (I'm running ESXi 5.1.0 from an 8GB Kingston USB drive).

"With 16GB of memory and 8 virtual cores available it is capable of hosting 10 virtual servers without issue."
How did you calculate this? For your load, or was it just the advertising for this server?

But it's a very nice server, and also nice that it's so quiet.
I'm using an older Q9400/8GB RAM ESXi server. The CPU has enough power for me; I mostly miss disk I/O and some more RAM.

So a 256GB SSD would be nice, and when machines need more storage, just add some extra from an HDD (or a mirror).

Just put the SSD somewhere in the free space of the server:
(I guess there is some space for a small SSD :) )

By Tweakers user analog_, Thursday 07 February 2013 02:25

You could just buy two SSDs, or make sure you regularly back up your VMs to, say, an external drive (hint: Veeam).

By Tweakers user Blue2k, Thursday 07 February 2013 20:06

'How did you calculate this? For your load, or was it just the advertising for this server?'

Since this is intended as a home server, I figured I'd assign 1GB of memory on average per VM (the Windows 2008 instance has 2GB assigned; some of the Linux instances have 512MB or 1GB assigned). Most VMs don't need a lot of disk space, except for the data-store VM (which has 1.75TB assigned right now). I wasn't worried about CPU usage, as 99.99% of the time the VMs are idling, and only on rare occasions would more than one VM require CPU power.
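That sizing is just back-of-envelope arithmetic, which works out roughly like this (the 2GB I reserve here for the hypervisor itself is a rough assumption, not a measured figure):

```shell
# Back-of-envelope memory check for the "10 VMs in 16GB" claim
total_mb=16384      # 16GB installed
reserved_mb=2048    # headroom for the hypervisor (assumption)
avg_vm_mb=1024      # ~1GB average per VM, per the mix above
echo $(( (total_mb - reserved_mb) / avg_vm_mb ))   # prints 14
```

Room for roughly fourteen 1GB VMs comfortably covers the ten I planned for, with headroom left over.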

I did consider using a VM as a Gentoo distcc node, which could possibly strain the CPU. But since I work on laptops most of the time, the WiFi latency basically destroyed any gains from using distcc. I think it works better over a low-latency wired connection, or with laptops/tablets that don't have much processing power to begin with (my laptops are all quad-core i7s).

Oh, and this is not a pre-built server; it's a barebone, and I selected the CPU, memory and hard drives myself. Any pre-built server with the same specifications was WAY more expensive (and generally bigger).

[Comment edited on Thursday 07 February 2013 20:09]

By Tweakers user Patriot, Friday 08 February 2013 03:25

Do you have any pics of the completed server? I'd like to see them.

By Tweakers user Blue2k, Friday 08 February 2013 04:21

Patriot wrote on Friday 08 February 2013 @ 03:25:
Do you have any pics of the completed server? I'd like to see them.
I've added some photos of the finished cabinet. Once the cabling is final I'll have to bundle and tie them up neatly...

[Comment edited on Friday 08 February 2013 04:23]

By Tweakers user dipje2, Friday 08 February 2013 17:48

Would something like the SilverStone 3.5" to 2x 2.5" bay converter work, so you could place four SSDs in the two 3.5" bays? I understand that for home NAS use storage space is more important, but looking at the cost and the noise results, this might be interesting as a development/testing box for my office :).

By banlin mithra, Friday 15 February 2013 13:00

Blue2k wrote on Friday 08 February 2013 @ 04:21:

I've added some photos of the finished cabinet. Once the cabling is final I'll have to bundle and tie them up neatly...
These pics are very useful. Thank you.

Comments are closed