Proxmox Lab: Intel NUC

Thanks for reading the first post in a new series I am putting together titled “Proxmox Lab”. In this series I will cover a variety of topics related to Proxmox and the hardware I have tested it on.

In this installment we will discuss a small-footprint, low-power build that you can carry in your pocket (well, if you wear cargo shorts with the big pockets on the side).

Around two years ago I purchased one of the earlier all-black Intel NUC systems and a 32GB Crucial mSATA disk to run Proxmox 3.1 or 3.2 (I forget which version it was at the time). Around the same time I was finishing the build, a client called and expressed an immediate need for a small PC that could hide behind a conference room wall-mounted TV. Just like that, my Intel NUC disappeared…

Months later I found enough free time to get a new NUC, this time the more modern (current as of this post) silver and black version. I went with the Core i3 variant, as I didn’t want to go Celeron and the i5 was out of stock. Armed with 8GB of low-voltage RAM (1.35V is required), I installed Proxmox to a 32GB Crucial mSATA drive and off I went. I used the local storage strictly for ISOs and the Proxmox system itself. This system ran excellently and never gave me so much as a hiccup. The combination of a super fast BIOS and the SSD boot volume meant the thing would boot or reboot so quickly that I had to double check that I had actually shut it down, which is quite a nice problem to have.

As so often happens here at the home office lab, I change hardware pretty frequently. Oftentimes it is due to client needs or desires; other times it’s simply that I see something new and shiny. Regardless of the reason, I rarely regret the money spent, as the investment always comes back many times over in the form of education and experience gained.

This i3 system was eventually replaced about a year later when someone I knew really wanted the i3 NUC, so I sent it packing to a new and better home. Since I had grown accustomed to the silence, low power and heat, and wonderfully small size of the NUC, I had to find something like it to replace the one I had just given up. After little debate, I ordered another Intel NUC, this time armed with a higher clock speed and the wondrous Core i5 badge. System memory was boosted to the full 16GB allowed by the board and off I went. Just like its Core i3 counterpart, this NUC performed flawlessly in all regards: stable, fast, and truly affordable.

If you are looking for a low-power (as in electricity consumption), high-performance, physically small, and beautiful package for your home lab test machine/hypervisor, be sure to take a serious look at the Intel NUC. Just imagine a shoe box full of Intel NUCs acting as a full-on Proxmox cluster! Aside from the physical memory constraints inherent to this platform, I have seriously considered putting a handful of these into client networks as small-footprint Proxmox clusters.
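If the shoe box cluster idea appeals to you, joining nodes together is refreshingly simple. Here is a minimal sketch using the pvecm tool that ships with Proxmox VE 2.0 and later; the cluster name and IP address below are placeholders:

# on the first node, create the cluster
pvecm create nuc-cluster

# on each additional node, join it using the first node's IP address
pvecm add 192.168.1.50

# check quorum and membership from any node
pvecm status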

Pros: Tiny system, low energy usage, high performance.
Cons: Usually a tad more expensive than an i3/i5 SFF desktop PC with the same specs. Requires mSATA and low-voltage memory, both of which you probably do not have lying around.

A final note: unless you are doing CPU-intensive tasks, which you probably are not, skip the i5 variant. While it works great, I noticed zero performance increase over my Core i3 NUC. Obviously this varies from workload to workload, so be sure you know what you need.

I hope this helps any prospective home lab enthusiasts out there, and be sure to stay tuned for my next build, which I just finished ordering…


Migrating from VMware ESXi to QEMU/KVM

For a myriad of reasons, I have been looking at alternatives to VMware ESXi for a few months. Virtualizing a few machines here and there has proven educational. Learning the ropes of qemu/kvm, libvirt, and virsh has been challenging at times, but overall a pleasure. Working with KVM is great, although it takes some getting used to when coming from a VMware/ESXi-centric environment.

Up to this point all of the virtual machines that I had worked with were new systems. After some research and a few backups of my current VMs running on one of my ESXi hosts, I decided to migrate a few production VMs. Here are the steps that I used to move virtual machines over from a licensed vSphere 4.1 installation to a Linux host running qemu/kvm.

For starters, be sure that you have full backups of any VMs that you plan on working with. With that out of the way, you are ready to start:

1. Remove all snapshots from the virtual machine across all virtual disks.

2. Uninstall VMware Tools and then perform a clean shutdown of the guest operating system.

3. Copy the virtual hard disk(s) over to the qemu/kvm host. The virtual disk is typically the largest file within a VM’s directory and will usually be named something like ‘guestname-flat.vmdk’.

4. On the qemu/kvm host, change to the directory containing the .vmdk file. Assuming you want to use qcow2 disk images, run the following command to convert the .vmdk: kvm-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

5. Create a new VM on the qemu/kvm host and choose the recently converted disk image as your existing drive/image. It is important that you create your new guest with the same or similar settings as it had before. I recommend cloning the MAC address over to the new guest for added simplicity with NIC detection, assignment, and third party software licensing.

6. Attempt to boot the system. Depending upon your guest's virtual disk settings and other factors, the system may hang during boot. If it does, edit your virtual machine and set the disk controller type to SCSI, assuming that was the controller type back on ESXi.
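To tie steps 4 through 6 together, here is a minimal sketch of the commands involved on the qemu/kvm host. The guest name, memory size, bridge name, MAC address, and image path are placeholders, and I am assuming the virt-install tool (from the virtinst/virt-manager packages) is available for creating the guest; on hosts without kvm-img, qemu-img works the same way:

# convert the flat .vmdk into a native qcow2 image (same as the kvm-img command above)
qemu-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

# sanity check the converted image
qemu-img info newguestname.qcow2

# define and boot a new guest around the existing image
# (name, RAM, bridge, and MAC are example values; bus=scsi matches the old ESXi controller)
virt-install --name newguestname --ram 2048 --vcpus 2 --import \
  --disk path=/var/lib/libvirt/images/newguestname.qcow2,bus=scsi \
  --network bridge=br0,mac=00:0c:29:12:34:56

If the guest hangs at boot with the SCSI bus, temporarily switching the --disk option to bus=ide is a quick way to confirm whether the controller type is the culprit.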

At this point your system should be up and running on the new host. I did find notes and suggestions that qemu/kvm can run vmdk files/disk images directly, but there seemed to be a handful of caveats, so I decided to convert the .vmdk files over to a native format.


HP ProLiant MicroServer Flexibility

I’ve been meaning to write up some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my opportunity when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40Ls. These units are listed as part of the ProLiant family of servers, which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual-core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC type card and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, so adding additional disks is no problem. I say “hot swap” because I am pretty sure the backplane/controller does not allow actual hot swapping in the true sense, YMMV. Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board sits on a simple and quite brilliant tray assembly that makes removal, upgrade, and reinsertion simple. Do yourself a favor when you purchase the system by maxing out the RAM at 8GB (DDR3/ECC) and adding the optional iLO/remote access card. For basic NAS and low-end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors; more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try to be a hero to your clients by touting this as an ultra affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy; if you’re trying to serve your clients properly, get them a truly redundant system with hardware RAID, dual PSUs, and things of that nature. You are doing your clients a disservice if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver-assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device or home virtualization lab, and it can fill any number of other roles you throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although this is not a comprehensive list, I can confirm successful installation of the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64), currently the Debian stable release
  • Debian Wheezy (i386/AMD64), currently the Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011
Whew! What a list, and it just scratches the surface of what you can run; those just happen to be the configurations I have tested with success.

My current configuration consists of the base system with 8GB of RAM, the iLO card, one 64GB SSD, and four 1TB RAID Edition drives. Debian stable (AMD64) runs on /, and the four 1TB RE drives are combined with Linux md RAID level 5 and mounted on /home. This box acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU: with a dual-core 1.5GHz processor, the system will usually run out of CPU before you hit any other bottleneck.
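For the curious, here is a minimal sketch of how such an md RAID 5 array can be built on Debian. The device names and mount point are examples only, so double check which disks you are handing to mdadm before running anything like this:

# install the md tools, then create a 4-disk RAID 5 array (device names are examples)
apt-get install mdadm
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# put a filesystem on the array and mount it at /home
mkfs.ext4 /dev/md0
mount /dev/md0 /home

# record the array so it is assembled automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

An /etc/fstab entry for /home rounds out the setup so the array is mounted at every boot.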

In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or a similar configuration, you will not be disappointed by this device.


Windows Server Licensing for Virtual Environments

I prefer Linux to Windows for a handful of reasons. One of the obvious benefits is licensing, and with all of the virtualizing I do in production and testing, it’s nice to never have to think about licensing. Meanwhile, back in the real world, most of my clients use Windows-based servers for their day-to-day tasks. The Windows Server OS is generally licensed per install/server, the notable exception being Datacenter Edition, which is licensed per CPU.

With consolidation ratios ever increasing, we are always on the lookout for bottlenecks in our systems. What about licensing? If you are running numerous Windows guests, are there ways to make smarter licensing moves? In a nutshell, yes.

Instead of trying to reinvent the wheel, I will steer you to a well-written and very informative article detailing some of the things you can do. It is well worth the read.


Xen vs. VMware ESXi

I use server virtualization to make money. With the new licensing model that VMware has announced with vSphere 5, it appears that a typical setup will now cost more. Times are tough! What is a sysadmin to do?

vSphere 4 will clearly remain viable for at least the near future. I have not taken the time to fully understand what v5 offers that is better. Our current environment is two ESXi 4.1 hosts managed by vCenter. Each host has 32GB of RAM, and guest RAM is probably oversubscribed, but not by much.

In the next week I plan to load the VMware tool that will provide an indication of what the new licensing will look like for our current environment. Should be interesting…

All that as it may be, I think it is time to seriously look at Xen virtualization. I loaded it up on decent hardware today (right before the power went out!), so more on the testing later.

Question: is anyone using DTC-Xen for ‘managing’ Xen guests?


09.26.2011 Update: I loaded VMware ESXi 5.0.0 over the weekend. Installation was as straightforward as expected. I quickly installed two MS Windows Server 2008 Enterprise 64-bit servers and an MS Windows 7 Enterprise 64-bit desktop, and will be digging into the new license model limitations this week. So far, for a single host, I can’t find a reason to load 5.0.

I did get Xen loaded on Debian Squeeze, then wrecked the install. I will be rebuilding it shortly for a comparison.

VMware Workstation 8 was released a few weeks ago. One nice feature is a much easier migration path from Workstation to vCenter and vSphere. VMware is claiming over 50 new features with this release.


VMware ESX, NIC Teaming, and VLAN Trunking with HP ProCurve

Found this great article and thought I would share it with RH readers:

Original text from

In an earlier article about VMware ESX, NIC teaming, and VLAN trunking, I described what the configuration should look like if one were using these features with Cisco switch hardware. It’s been quite a popular post, one I will probably need to update soon.

In this article, I’d like to discuss how to do the same thing, but using HP ProCurve switch hardware. The article is broken into three sections: using VLANs, using link aggregation (NIC teaming), and using both together.

Using VLAN Trunking

To my Cisco-oriented mind, VLANs with ProCurve switches are handled quite differently. Port-based VLANs, in which individual ports are assigned to one or more VLANs, allow a switch port to participate in a VLAN in either an untagged or a tagged fashion.

The difference here is simpler than it may seem: the untagged VLAN can be considered the “native VLAN” from the Cisco world, meaning that VLAN tags are not added to packets traversing that port. Putting a port in a VLAN in untagged mode is essentially equivalent to making that port an access port in the Cisco IOS world. Only one VLAN can be marked as untagged on a given port, which makes sense if you think about it.

Any port groups that should receive traffic from the untagged VLAN need to have VLAN ID 0 (no VLAN ID, in other words) assigned.

A tagged VLAN, on the other hand, adds the 802.1q VLAN tags to traffic moving through the port, like a VLAN trunk. If a user wants to use VST (virtual switch tagging) to host multiple VLANs on a single VMware ESX host, then the ProCurve ports need to have those VLANs marked as tagged. This will ensure that the VLAN tags are added to the packets and that VMware ESX can direct the traffic to the correct port group based on those VLAN tags.

In summary:

* Assign VLAN ID 0 to all port groups that need to receive traffic from the untagged VLAN (remember that a port can only be marked as untagged for a single VLAN). This correlates to the discussion about VMware ESX and the native VLAN, in which I reminded users that port groups intended to receive traffic for the native VLAN should not have a VLAN ID specified.
* Be sure that ports are marked as tagged for all other VLANs that VMware ESX should see. This will enable the use of VST and multiple port groups, each configured with an appropriate VLAN ID. (By the way, if users are unclear on VST vs. EST vs. VGT, see this article.)
* VLANs that VMware ESX should not see at all should be marked as “No” in the VLAN configuration of the ProCurve switch for those ports.
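To make that concrete, here is a minimal sketch of the ProCurve side of such a configuration. The VLAN IDs and port numbers are examples only, with ports 1-8 assumed to be the ESX host uplinks:

vlan 10
   untagged 1-8
   exit
vlan 20
   tagged 1-8
   exit
vlan 30
   tagged 1-8
   exit

With this in place, port groups carrying VLAN 10 traffic would be left with no VLAN ID, while port groups for VLANs 20 and 30 would be configured with those VLAN IDs for VST.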

Using Link Aggregation

There’s not a whole lot to this part. In the ProCurve configuration, users will mark the ports that should participate in link aggregation as part of a trunk (say, Trk1) and then set the trunk type. Here’s the only real gotcha: the trunk must be configured as type “Trunk” and not type “LACP”.

In this context, LACP refers to dynamic LACP, which allows the switch and the server to dynamically negotiate the number of links in the bundle. VMware ESX doesn’t support dynamic LACP, only static LACP. To do static LACP, users will need to set the trunk type to Trunk.
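On the ProCurve CLI, creating such a static trunk is a one-liner; the port numbers and trunk name below are examples:

trunk 21-22 trk1 trunk
show trunks

The first command bundles ports 21 and 22 into trunk trk1 with type Trunk (static, no LACP), and show trunks confirms the result.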

Then, as has been discussed elsewhere in great depth, configure the VMware ESX vSwitch’s load balancing policy to “Route based on ip hash”. Once that’s done, everything should work as expected. This blog entry gives the CLI command to set the vSwitch load balancing policy, which would be necessary if configuring vSwitch0. For all other vSwitches, the changes can be made via VirtualCenter.

That’s really all there is to making link aggregation work between an HP ProCurve switch and VMware ESX.

Using VLANs and Link Aggregation Together

This section exists only to point out that when a trunk is created, the VLAN configuration for the members of that trunk disappears, and the trunk must be configured directly for VLAN support. In fact, users will note that the member ports don’t even appear in the list of ports to be configured for VLANs; only the trunks themselves appear.

Key point to remember: apply your VLAN configurations after your trunking configuration, or else you’ll just have to do it all over again.
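In practice, that means re-applying the VLAN membership to the trunk itself once it exists, for example (continuing with the example VLAN IDs from above):

vlan 10 untagged trk1
vlan 20 tagged trk1
vlan 30 tagged trk1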

With this information, users should now be pretty well prepared to configure HP ProCurve switches in a VMware ESX environment. Feel free to post any questions, clarifications, or corrections in the comments below, and thanks for reading!

Running a “datacenter”

It has been way too long since my last post.

In my spare time, along with two other partners, I run a virtualization datacenter in a colocation facility. In addition to generating income, it also serves as an excellent test lab for open source products. It is my conviction that the small business market can seriously benefit from open source applications.

pfSense, for example, will match up to any mid-level commercial router/firewall for a fraction of the cost (the only expense being the hardware needed to run it). Virtualization reduces this expense further. Much of the server hardware in use today will run VMware ESXi (not open source, but no charge). VMware estimates that most desktop and server hardware is only in use 5-15% of the time. Server virtualization reduces physical space, cooling requirements, and energy consumption, all of which factor into total cost of ownership. VMware is certainly not the only server virtualization host available; check out offerings from Microsoft, Citrix, or Ubuntu. There are others as well.

Note: yes, ESXi is 100% free; there is no requirement to purchase support.

For data storage, we have successfully proven (many times over) that Linux (Ubuntu Server) NFS is a solid, valid option for shared storage. Production MS Windows and *nix-based virtual machines run flawlessly. MS Windows Server 2008 (all versions) runs particularly well in a virtual environment.
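For reference, the server side of that NFS setup is tiny. Here is a minimal sketch for Ubuntu Server; the export path and subnet are examples, and note that ESXi generally expects no_root_squash on the export:

# install the NFS server
apt-get install nfs-kernel-server

# publish the datastore directory (path and subnet are examples)
echo '/srv/vmstore 192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra

The export can then be added in vSphere as an NFS datastore pointing at the server’s IP and /srv/vmstore.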

Note: the MS Windows Server Enterprise license allows four instances of the server OS to run on a single licensed virtualization host, further reducing expense.

Virtualization is also excellent as a test platform. Windows or *nix servers and desktops can be spun up very rapidly on a single host. I will frequently load a server just to evaluate an application, with no fear of corrupting existing production servers. If I choose not to use the app, I simply delete the VM and the resources are recovered.

I am also successfully using an open source VoIP PBX. The various distributions based on Asterisk are very strong and offer all of the common phone system features found in key systems and even PBXs.

Today’s take-away: check out virtualization. Check out open source. The benefits will be clear. The savings will be immediate.

VMware Best Practices

I have come across a few useful KB articles from VMware recently and thought I would stick them here for future reference.

Installing ESX 4.1 and vCenter Server 4.1 best practices

If you are planning to deploy or have recently deployed vCenter Server 4.1 or ESX(i) 4.1, this KB article from VMware may be of some value to you: Installing ESX 4.1 and vCenter Server 4.1 best practices

Best practices for virtual machine snapshots in the VMware environment

Most of us work with snapshots and snapshot technology every day. In fact, I for one can no longer imagine doing some of the things I do without snapshots. I used to do everything without them, but technology is good at making us all lazy. Either way, this best practices article is worth a read.