Proxmox Lab: Intel NUC

Thanks for reading the first post in a new series I am putting together titled “Proxmox Lab”. In this series I will cover Proxmox itself and the various hardware I have run it on.

In this installment we will discuss a small-footprint, low-power build that you can carry in your pocket; well, if you wear cargo shorts with the big pockets on the side.

Around two years ago I purchased one of the earlier all-black Intel NUC systems and a 32GB Crucial mSATA disk to run Proxmox 3.1 or 3.2; I forget the version at the time. Anyhow, around the same time that I attempted to complete the build of the device, a client called up and expressed an immediate need for a small PC that could hide behind a conference room wall-mounted TV. Just like that, my Intel NUC disappeared…

Months later I was able to find enough free time to get a new NUC; this time it was the more modern (current as of the time of this post) silver and black version. I went with the Core i3 variant as I didn’t want to go Celeron and the i5 was out of stock. Armed with 8GB of low-voltage RAM (1.35V, as required), I installed Proxmox to a 32GB Crucial mSATA drive and off I went. I strictly used the local storage for ISOs and the Proxmox system itself. This system ran excellently and never gave me so much as a hiccup. The combination of the super-fast BIOS and the SSD boot volume meant that this thing would boot or reboot so fast that I had to double-check that it had actually shut down; quite a nice problem to have.

As so often happens here at the home office lab, I change hardware pretty frequently. Oftentimes it is due to client needs or desires; other times it’s simply that I see something new and shiny. Regardless of the reasons, I rarely regret the money spent, as the investment always comes back many times over in the form of education and experience gained.

This i3 system was eventually replaced about a year later when someone I knew really wanted the i3 NUC, so I sent it packing to a new and better home. Since I had grown accustomed to the silence, low power/heat, and wonderfully small size of the NUC, I had to find something like it to replace the one I had just gotten rid of. Well, after little debate, I ordered a new Intel NUC again, this time armed with a higher clock speed and the wondrous Core i5 badge. System memory was boosted to the full 16GB allowed by the board and off I went. Just like its Core i3 counterpart, this NUC performed flawlessly in all regards. Stable, fast, and truly affordable.

If you are looking for low power draw, high performance, and a physically small, attractive package for your home lab test machine/hypervisor, be sure to take a serious look at the Intel NUC. Just imagine a shoe box full of Intel NUCs acting as a full-on Proxmox cluster! Aside from the physical memory constraints inherent to this platform, I have seriously considered putting a handful of these into client networks as small-footprint Proxmox clusters.
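
For the curious, here is a rough sketch of how such a NUC cluster might be stitched together with Proxmox's own tooling; the cluster name and IP address below are placeholders, and this assumes the nodes can already reach one another over the network:

# On the first NUC, create the cluster
pvecm create nuc-cluster

# On each additional NUC, join it to the first node
pvecm add 192.168.1.10

# From any node, verify membership and quorum
pvecm nodes
pvecm status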

Pros: Tiny system, low energy usage, high performance.
Cons: Usually a tad more expensive than an i3/i5 SFF desktop PC with the same specs. Requires mSATA and low-voltage memory, both of which you probably do not have lying around.

A final note: unless you are doing CPU-intensive tasks, which you probably are not, skip the i5 variant. While it works great, I noticed zero performance increase over my Core i3 NUC. Obviously, this varies from workload to workload, so be sure you know what you need.

I hope this helps any prospective home lab enthusiasts out there. Be sure to stay tuned for my next build, which I just finished ordering…

–himuraken

Migrating from VMware ESXi to QEMU/KVM

For a myriad of reasons, I have been looking at alternatives to VMware ESXi for a few months. Virtualizing a few machines here and there has proven educational. Learning the ropes of qemu/kvm, libvirt, and virsh has been challenging at times, but overall a pleasure. Working with KVM is great, although it takes some getting used to coming from a VMware/ESXi-centric environment.

Up to this point all of the virtual machines that I had worked with were new systems. After some research and a few backups of my current VMs running on one of my ESXi hosts, I decided to migrate a few production VMs. Here are the steps that I used to move virtual machines over from a licensed vSphere 4.1 installation to a Linux host running qemu/kvm.

For starters, be sure that you have full backups of any VMs that you plan on working with. With that out of the way, you are ready to start:

1. Remove all snapshots from the virtual machine across all virtual disks.

2. Uninstall VMware Tools and then perform a clean shutdown of the guest operating system.

3. Copy the virtual hard disk(s) over to the qemu/kvm host. The virtual disk is typically the largest file within a VM’s directory and will usually be named something like ‘guestname-flat.vmdk’.

4. On the qemu/kvm host, change to the directory containing the .vmdk file. Assuming you are using qcow2 disk images, run the following command to convert the .vmdk (see the note after these steps): kvm-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

5. Create a new VM on the qemu/kvm host and choose the recently converted disk image as your existing drive/image (a sketch of this step follows the list). It is important that you create your new guest with the same or similar settings as it had before. I recommend cloning the MAC address over to the new guest for added simplicity with NIC detection, assignment, and third-party software licensing.

6. Attempt to boot the system. Depending upon your guest’s virtual disk settings and other factors, the system may hang during boot. If it does, edit your virtual machine and set the controller type to SCSI, assuming that was the controller type back on ESXi.
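
A note on step 4: kvm-img is essentially the distro-packaged build of qemu-img, so if your host only ships the latter, the equivalent conversion (using the same placeholder file names as above) is:

qemu-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2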
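
And for step 5, here is a minimal virt-install sketch of creating the guest around the converted image, assuming libvirt with a bridge named br0; the guest name, memory, vCPU count, disk path, and MAC address are placeholders you would match to the original ESXi guest:

virt-install \
  --name newguestname \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/newguestname.qcow2,format=qcow2,bus=scsi \
  --network bridge=br0,mac=00:0c:29:aa:bb:cc \
  --import \
  --graphics vnc

The --import flag skips the install phase and boots straight from the existing disk image, and bus=scsi matches the SCSI controller case mentioned in step 6.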

At this point your system should be up and running on the new host. I did find notes and suggestions that qemu/kvm can run vmdk files/disk images directly, but there seemed to be a handful of caveats, so I decided to convert the .vmdk files over to a native format.

–himuraken

HP ProLiant MicroServer Flexibility

I’ve been meaning to put down some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my opportunity when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40Ls. These units are listed as part of the ProLiant family of servers, which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual-core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC-type card, and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, so adding additional disks is no problem. I say “hot swap” because I am pretty sure the backplane/controller do not allow actual hot swapping in its true sense; YMMV.

Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board sits on a simple and quite brilliant tray assembly which makes removal, upgrade, and reinsertion easy. Do yourself a favor when you purchase the system by maxing out the RAM at 8GB (DDR3/ECC) and adding the optional iLO/remote access card. For basic NAS and low-end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors; more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try to be a hero to your clients by touting this as an ultra-affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy: if you’re trying to service your clients properly, get them a truly redundant system with hardware RAID, dual PSUs, and things of that nature. You are doing your clients a disservice if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver-assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device, home virtualization lab, or any number of other roles you can throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although not a comprehensive list, I can confirm successful installs of the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64) Currently Debian stable release
  • Debian Wheezy (i386/AMD64) Currently Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011

Whew! What a list, and that just touches the surface of what you can run; those just happen to be the configurations that I have tested with success. My current configuration consists of the base system running 8GB of RAM, the iLO card, 1x 64GB SSD, and 4x 1TB RAID edition drives. I’ve got Debian stable AMD64 running on / and the four 1TB RE drives in a Linux md RAID 5 array mounted on /home (a rough sketch of that setup follows). This acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU. Being a dual-core 1.5GHz part, the system will usually run out of CPU before you hit any other bottleneck.
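
For anyone recreating a similar layout, here is a minimal sketch of building that md RAID 5 array, assuming the four data drives show up as /dev/sdb through /dev/sde (check yours with fdisk -l or similar before running anything):

# Build a 4-disk RAID 5 array from the data drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on the array and mount it at /home
mkfs.ext4 /dev/md0
mount /dev/md0 /home

# Persist the array definition and the mount across reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /home ext4 defaults 0 2' >> /etc/fstab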

In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or a similar configuration, you will not be disappointed by this device.

–himuraken

Debian Squeeze & Broadcom b43, etc.

So you like Debian, and why wouldn’t you? It is great, after all. Unfortunately, many laptops come from the factory sporting Broadcom-based chipsets. So inevitably I complete a Debian install and Broadcom takes the wind out of my sails. I then trudge over to http://wiki.debian.org/wl#Squeeze and go through the paces. Why do I keep doing it by hand, over and over? Well, enough is enough; this isn’t a tricky script to write. So for your enjoyment, I have put it all together into a small bash script to simplify things for future installs. First, be sure to add the non-free repo to your /etc/apt/sources.list file.
Then create and run a .sh file containing:

#!/bin/bash
# Refresh package lists and pull in module-assistant and the wireless tools
aptitude update
aptitude install module-assistant wireless-tools

# Build and install the broadcom-sta (wl) kernel module
m-a a-i broadcom-sta

# Blacklist the conflicting in-kernel driver and rebuild the initramfs
echo blacklist brcm80211 >> /etc/modprobe.d/broadcom-sta-common.conf
update-initramfs -u -k $(uname -r)

# Unload the conflicting modules, load wl, and confirm the interface shows up
modprobe -r b44 b43 b43legacy ssb brcm80211
modprobe wl
iwconfig

Enjoy!

–himuraken

Got old-buntu? Ubuntu EOL 9.10 to 10.04 Upgrade Mini HowTo

So several months ago, I, like the rest of the world, was notified that end of life (EOL) for Ubuntu 9.10 Karmic Koala would be happening. From the news blurb/mailing list, wherever I found it, I walked away thinking that it was just the security updates that would cease to exist.

In preparation for the upgrade, I went ahead and cloned the 9.10 server and proceeded to upgrade the clone to Ubuntu 10.04 Lucid Lynx. This went off without a hitch from what I could tell, and I scheduled the upgrade of the production server with my last client running 9.10.

Without fail, life happens, things come up for clients, and the upgrade never happened. Fast forward to the present day: my client tried installing a package using apt-get and received a slew of errors. Looking into the issue a bit further, I found the repositories gone. Interestingly enough, when EOL occurs for an Ubuntu release, it really ends, and not just for the security patches.

So one is left wondering, “how can I sudo apt-get install update-manager-core && sudo do-release-upgrade when I can’t even do a simple sudo apt-get update?” Solution: an EOL upgrade. There are several different ways to go about this; the best are detailed here. At the time of this writing, the link is a little unclear about how to get from 9.10 to 10.04, so here is the quick and easy way:

1. Back up your current sources.list:
sudo mv /etc/apt/sources.list ~/sources.list

2. Create a new sources.list:
sudo vim /etc/apt/sources.list

3. Add/paste in the old-releases archive repositories, substituting your release codename (jaunty, karmic, etc.) for CODENAME:

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-security main restricted universe multiverse

# Optional
#deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-backports main restricted universe multiverse

4. Update repositories and install the update manager:
sudo apt-get update
sudo apt-get install update-manager-core

5. Initiate the upgrade:
sudo do-release-upgrade

6. Enjoy!
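
If you want a quick sanity check that the upgrade actually landed on 10.04, the release info is the easiest place to look:

lsb_release -a
# The Description line should now read Ubuntu 10.04.x LTS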

–himuraken

XEN vs. VMware ESXi

I use server virtualization to make money. With the new licensing model that VMware has announced with vSphere 5, it appears that a typical setup will now cost more. Times are tough! What is a sysadmin to do?

vSphere 4 will clearly remain viable for at least the near future. I have not taken the time to fully understand what v5 will offer that is better. Our current environment is two ESXi 4.1 hosts managed by vCenter. Each host has 32GB RAM, and the guest RAM is probably oversubscribed, but not by much.

In the next week I plan to load the VMware tool that will provide indications as to what the new licensing will look like for the current environment. Should be interesting…

All that as it is, I think it is time to seriously look at XEN virtualization. I loaded it up on decent hardware today (right before the power went out!), so more later on the testing.

Question: anyone using DTC-XEN for ‘managing’ XEN guests?

-habanero_joe

09.26.2011 Update: I loaded VMware ESXi 5.0.0 over the weekend. Installation was as straightforward as expected. I quickly installed two MS Windows Server 2008 Enterprise 64-bit servers and an MS Windows 7 Enterprise 64-bit desktop. I will be digging into the new license model limitations this week. So far, for a single host, I can’t find a reason to load 5.0.

I did get XEN loaded on Debian Squeeze, then wrecked the install. I will be rebuilding shortly for a comparison.
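
For anyone wanting to follow along with the rebuild, here is a rough sketch of getting a Xen dom0 onto Debian Squeeze. The package names are from the Squeeze era and worth verifying with apt-cache search xen before running anything; the dpkg-divert line simply promotes the Xen boot entry above the regular kernel in GRUB:

# Install the hypervisor, toolstack, xen-tools, and a dom0-capable kernel
apt-get install xen-hypervisor-4.0-amd64 xen-utils-4.0 xen-tools linux-image-2.6.32-5-xen-amd64

# Make GRUB boot the Xen entry by default, then reboot
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
update-grub
reboot

# After rebooting into the Xen kernel, confirm dom0 is running
xm list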

VMware Workstation 8 was released a few weeks ago. One nice feature is a much easier migration path from Workstation to vCenter and vSphere. VMware is claiming over 50 new features with this release.

-habanero_joe

Alternatives to MS Windows (desktop)

I have worked with and supported just about every version of the MS desktop operating systems. That is life in the corporate environment. No matter your opinion of Microsoft OSes, they get the job done for the business user. However, there are many less-than-desirable “features”, such as licensing costs, hardware requirements, resources used by the OS, and the list goes on…

Some of my early experience was as a sysadmin for an SCO UNIX network. I was impressed with the stability and reliability of the system. If I remember correctly, the server had 128MB RAM to support 100-plus users. The serial network certainly had no bells and whistles, but it was easy to maintain. Server uptimes were measured in months instead of the days typical of a Windows server. Adding a NIC and TCP/IP made the server very versatile and improved performance. As is common in the corporate environment, after a few years new software required Windows servers and the UNIX box was retired. I pretty much lost touch with *nix in general.

Fast forward five years…

At some point I began loading various Linux distros on older laptops to check them out. Red Hat, Fedora, SUSE, and all the other usual suspects got a look. CentOS seemed to be one of the more popular, well-supported distros. Along the way I met some more serious Linux users and began loading Ubuntu, starting with the 8.x releases. By then I had converted a personal laptop and the main home computer to Ubuntu. I stayed involved and moved along with updates until 10.04 LTS, and was very pleased with how it all worked. The variety of supported hardware is excellent (except maybe audio) and the stability was always very good.

Fast forward to Thursday, June 16…

My Lenovo laptop with an XP install was finally experiencing some OS corruption and general performance degradation. Time for a reload with Linux. Off to Ubuntu.com to download the 11.04 ISO. I loaded it up and the process was the same simple install I had come to expect from Ubuntu. I rebooted and logged in, and immediately noted the new UI. Complete crap was my first thought. I poked around for the rest of the day and my opinion got worse; it was not intuitive at all. I played around a couple more hours and tossed it in the bag for the night.

Fast forward to the next day…

Thoroughly disappointed with Ubuntu 11.04, I looked up Debian and did a little research. It seemed pretty solid, and why use a derivative when you can get the original? A couple of answers later (thank you himuraken!) I was installing Debian 6 (look for a future post on installing from ISO files and a USB drive).
Knowing that Ubuntu is based on Debian, I expected it to be familiar, and it was. Two days later, I am very happy with the decision to replace Ubuntu 11.04. I look forward to using it daily.

I strongly encourage anyone interested in Linux to check out Debian. You will not be disappointed.

http://en.wikipedia.org/wiki/Debian_GNU/Linux

– habanero_joe