Migrating from VMware ESXi to QEMU/KVM

For a myriad of reasons, I have been looking at alternatives to VMware ESXi for a few months. Virtualizing a few machines here and there has proven educational. Learning the ropes of qemu/kvm, libvirt, and virsh has been challenging at times, but overall a pleasure. KVM is great to work with, although it takes some getting used to when coming from a VMware/ESXi-centric environment.

Up to this point all of the virtual machines that I had worked with were new systems. After some research and a few backups of my current VMs running on one of my ESXi hosts, I decided to migrate a few production VMs. Here are the steps that I used to move virtual machines over from a licensed vSphere 4.1 installation to a Linux host running qemu/kvm.

For starters, be sure that you have full backups of any VMs that you plan on working with. With that out of the way, you are ready to start:

1. Remove all snapshots from the virtual machine across all virtual disks.

2. Uninstall VMware Tools and then perform a clean shutdown of the guest operating system.

3. Copy the virtual hard disk(s) over to the qemu/kvm host. The virtual disk is typically the largest file within a VM’s directory and will usually be named something like ‘guestname-flat.vmdk’.

4. On the qemu/kvm host, change to the directory containing the .vmdk file. Assuming you are using qcow2 disk images, run the following command to convert the .vmdk (see the sketch after this list): kvm-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

5. Create a new VM on the qemu/kvm host and choose the recently converted disk image as your existing drive/image. It is important that you create your new guest with the same or similar settings as it had before. I recommend cloning the MAC address over to the new guest for added simplicity with NIC detection, assignment, and third party software licensing.

6. Attempt to boot the system. Depending upon your guest's virtual disk settings and other factors, the system may hang during boot. If it does, edit your virtual machine and set the controller type to SCSI, assuming that was the controller type back on ESXi.
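
For reference, here is a minimal sketch of steps 4 and 5 on a typical Linux host. On many distributions kvm-img is simply a wrapper around qemu-img, so either works; the guest name, memory, CPU count, bridge name, and MAC address below are placeholders rather than values from my actual migration:

# Convert the flat VMDK to qcow2 and verify the result
qemu-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2
qemu-img info newguestname.qcow2

# Import the converted image as a new libvirt guest, reusing the old MAC address
virt-install --name newguestname --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/newguestname.qcow2,bus=scsi \
  --network bridge=br0,mac=00:50:56:aa:bb:cc \
  --import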

At this point your system should be up and running on the new host. I did find notes and suggestions that qemu/kvm can run .vmdk files/disk images directly, but there seemed to be a handful of caveats, so I decided to convert the VMDKs over to a native format.

–himuraken

HP ProLiant MicroServer Flexibility

I’ve been meaning to write up some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my chance when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40L’s. These units are listed as part of the ProLiant family of servers which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual-core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC-type card and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, so adding additional disks is no problem. I say “hot swap” because I am pretty sure that the backplane/controller do not allow actual hot swapping in the true sense, YMMV.

Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board sits on a simple and quite brilliant tray assembly which makes removal, upgrade, and reinsertion simple.

Do yourself a favor when you purchase the system: max out the RAM at 8GB (DDR3/ECC) and add the optional iLO/remote access card. For basic NAS and low-end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors; more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try to be a hero to your clients by touting this as an ultra-affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy: if you’re trying to serve your clients properly, get them a truly redundant system with hardware RAID, dual PSUs, and things of that nature. You are providing a disservice to your clients if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver-assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device, home virtualization lab, or any other number of roles you can throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although not a comprehensive list of installs, I can confirm successful installation on the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64), currently the Debian stable release
  • Debian Wheezy (i386/AMD64), currently the Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011
Whew! What a list, and that just touches the surface of what you can run; those just happen to be the configurations that I have tested with success. My current configuration consists of the base system running 8GB of RAM, the iLO card, a 1x64GB SSD, and 4x1TB RAID edition drives. I’ve got Debian stable AMD64 running on / and have the 4x1TB RE drives in a Linux md RAID level 5 array mounted on /home (a rough sketch of that layout follows below). This acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU: being a dual-core 1.5GHz part, the system will usually run out of CPU before you hit any other bottleneck.
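
For anyone curious about that /home volume, here is a rough sketch of how a 4-disk md RAID 5 array can be put together. This is not a record of my exact commands: the device names (/dev/sdb through /dev/sde), the filesystem, and the export subnet are assumptions, so adjust them for your own setup:

# Build a RAID 5 array from the four 1TB drives (device names are placeholders)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Create a filesystem and mount it as /home
mkfs.ext4 /dev/md0
mount /dev/md0 /home

# Persist the array and the mount across reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /home ext4 defaults 0 2' >> /etc/fstab

# Share /home over NFS for the lab (requires nfs-kernel-server; subnet is an example)
echo '/home 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra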

    In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or similar configuration, you will not be disappointed by this device.

    –himuraken

    Debian Squeeze & Broadcom b43 etc

So you like Debian, and why wouldn’t you, it is great after all. Unfortunately, many laptops come from the factory sporting Broadcom-based chipsets. So inevitably I complete a Debian install and Broadcom takes the wind out of my sails. I then trudge over to http://wiki.debian.org/wl#Squeeze and go through the paces. Why do I keep doing it over and over? Well, enough is enough; this isn’t a tricky script to write. So for your enjoyment, I have put it all together into a small bash script to simplify future installs. First, be sure to add the non-free repo to your /etc/apt/sources.list file.
Then create and run a .sh file containing:

#!/bin/bash
# Install the Broadcom wl driver on Debian Squeeze (requires the non-free repo)
aptitude update
aptitude install module-assistant wireless-tools
# Build and install the broadcom-sta (wl) module against the running kernel
m-a a-i broadcom-sta
# Blacklist the conflicting in-kernel driver and rebuild the initramfs
echo blacklist brcm80211 >> /etc/modprobe.d/broadcom-sta-common.conf
update-initramfs -u -k $(uname -r)
# Unload conflicting modules, load wl, and show the wireless interfaces
modprobe -r b44 b43 b43legacy ssb brcm80211
modprobe wl
iwconfig
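
For reference, the non-free entry in /etc/apt/sources.list looks something like the line below (the mirror is only an example), and the script itself just needs to be made executable and run as root; the broadcom-wl.sh filename is a placeholder:

# Example /etc/apt/sources.list entry with contrib and non-free enabled
deb http://ftp.us.debian.org/debian/ squeeze main contrib non-free

# Make the script executable and run it as root
chmod +x broadcom-wl.sh
./broadcom-wl.sh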

    Enjoy!

    –himuraken

    XEN vs. VMware ESXi

I use server virtualization to make money. With the new licensing model that VMware has announced with vSphere 5, it appears that a typical setup will now cost more. Times are tough! What is a sysadmin to do?
vSphere 4 will clearly remain viable for at least the near future. I have not taken the time to fully understand what v5 will offer that is better. Our current environment is two ESXi 4.1 hosts managed by vCenter. Each host has 32GB RAM, and guest RAM is probably oversubscribed, but not by much.
In the next week I plan to load the VMware tool that will provide an indication of what the new licensing will look like for the current environment. Should be interesting…
All that as it is, I think it is time to seriously look at XEN virtualization. I loaded it up on decent hardware today (right before the power went out!), so more later on the testing.
Question: anyone using DTC-XEN for ‘managing’ XEN guests?

    -habanero_joe

09.26.2011 Update: loaded VMware ESXi 5.0.0 over the weekend. Installation was as straightforward as expected. I quickly installed two MS Windows Server 2008 Enterprise 64-bit servers and an MS Windows 7 Enterprise 64-bit desktop. I will be digging into the new license model’s limitations this week. So far, for a single host, I can’t find a reason to load 5.0.

    I did get XEN loaded on Debian Squeeze, then wrecked the install. I will be rebuilding shortly for a comparison.

VMware Workstation 8 was released a few weeks ago. One nice feature is a much easier migration path from Workstation to vCenter and vSphere. VMware is claiming over 50 new features with this release.

    -habanero_joe

    ZFS Performance Testing: P4 Clone using WD Raptors

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 6x74GB drives. This system was literally thrown together with spare parts lying around the home office. This is the ultimate in home made, commodity parts, down and dirty NAS setups. It uses six (6) Western Digital Raptor drives attached to three different SATA II non-RAID controllers. For the uninitiated, Raptors are SATA drives that spin at 10K RPM. So here we go….

    *Note*
My intention was to run this with seven (7) disks so that it would be a WD Raptor vs. WD Blue test. Unfortunately, my seventh and last WD Raptor died during configuration. With that in mind, it is interesting nonetheless to compare the seven disk WD Blue results with the six disk WD Raptor results.

    Test rig:

    • Custom Build
    • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
    • 2GB RAM
    • Onboard SATA II
    • 6x74GB SATA II 10000 RPM – Six (6) independent drives with no RAID. Model: Western Digital Raptor
    • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

    GNU dd:
    Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

    dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

    Results:
    Results are listed as configuration, write, then read.

• ZFS stripe pool utilizing six (6) SATA disks
  • 150 MB/s
  • 239 MB/s

• ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
  • 302 MB/s
  • 515 MB/s

• ZFS raidz pool utilizing six (6) SATA disks
  • 99 MB/s
  • 165 MB/s

• ZFS raidz pool utilizing six (6) SATA disks with dataset compression set to “On”
  • 299 MB/s
  • 516 MB/s

• ZFS raidz2 pool utilizing six (6) SATA disks
  • 76 MB/s
  • 164 MB/s

• ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
  • 301 MB/s
  • 514 MB/s
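
For context, the pool layouts above can be created along these lines from the FreeNAS shell. This is only a sketch: the pool name (tank) and the FreeBSD device names (ad4, ad6, and so on) are assumptions rather than the actual devices in this box, and each pool would be destroyed before creating the next layout:

# Striped pool across the six disks (device names are placeholders)
zpool create tank ad4 ad6 ad8 ad10 ad12 ad14

# Single- and double-parity variants of the same six disks
# (run zpool destroy tank before switching layouts)
zpool create tank raidz ad4 ad6 ad8 ad10 ad12 ad14
zpool create tank raidz2 ad4 ad6 ad8 ad10 ad12 ad14

# Turn compression on for the "compression On" runs
zfs set compression=on tank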

        Notes, Thoughts & Mentionables:
        There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves it. Possibly more influential on the performance scores, and likely so, is the storage controllers: the motherboard only has four ports, so a used/cheap/old SATA II PCI controller was used to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s audible sound was unchanged. While running the compression-on tests, the CPU was of course under heavy load, and it also prompted the CPU cooler to spin at a higher (more audible) rate. Keep in mind that these tests read from /dev/zero, which compresses almost completely, so the compression-on numbers say more about the CPU than about the disks. Guess those old P4’s still cook 🙂

        –himuraken

    ZFS Performance Testing: Intel P4 Clone using WD Blue Drives

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 7x500GB drives. This system was literally thrown together with spare parts lying around the home office. This is the ultimate in home made, commodity parts, down and dirty NAS setups. It uses seven (7) Western Digital WD5000AAKS (known as WD Blue) drives attached to three different SATA II non-RAID controllers. Historically, these drives have been used in low cost and all around cheap builds, up until the WD Green drives came out. Essentially: better than Green, worse than Black, but a good mix of price/GB. So here we go….

    Test rig:

    • Custom Build
    • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
    • 2GB RAM
    • Onboard SATA II
    • 7x500GB SATA II 7200 RPM – Seven (7) independent drives with no RAID. Model: Western Digital WD5000AAKS
    • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

    GNU dd:
    Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

    dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
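
To compare the compression on and off runs without rebuilding the pool each time, the dataset property can simply be toggled between tests. A minimal sketch, assuming a pool named tank mounted at /mnt/tank (both assumptions):

# Run the write/read test with compression off
zfs set compression=off tank
dd if=/dev/zero of=/mnt/tank/foo bs=2M count=10000
dd if=/mnt/tank/foo of=/dev/null bs=2M

# Toggle compression on and repeat
zfs set compression=on tank
dd if=/dev/zero of=/mnt/tank/foo bs=2M count=10000
dd if=/mnt/tank/foo of=/dev/null bs=2M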

    Results:
    Results are listed as configuration, write, then read.

• ZFS stripe pool utilizing seven (7) SATA disks
  • 130 MB/s
  • 228 MB/s

• ZFS stripe pool utilizing seven (7) SATA disks with dataset compression set to “On”
  • 301 MB/s
  • 508 MB/s

• ZFS raidz pool utilizing seven (7) SATA disks
  • 81 MB/s
  • 149 MB/s

• ZFS raidz pool utilizing seven (7) SATA disks with dataset compression set to “On”
  • 302 MB/s
  • 512 MB/s

• ZFS raidz2 pool utilizing seven (7) SATA disks
  • 66 MB/s
  • 144 MB/s

• ZFS raidz2 pool utilizing seven (7) SATA disks with dataset compression set to “On”
  • 298 MB/s
  • 515 MB/s

        Notes, Thoughts & Mentionables:
        There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves it. Possibly more influential on the performance scores, and likely so, is the storage controllers: the motherboard only has four ports, so a used/cheap/old SATA II PCI controller was used to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s audible sound was unchanged. While running the compression-on tests, the CPU was of course under heavy load, and it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4’s still cook 🙂

        –himuraken

    ZFS Performance Testing: AMD Dual Core w/ 6GB DDR2 & 5x2TB SATA in raidz

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, I have an MSI motherboard with an AMD dual-core CPU, 6GB of DDR2, and 5x2TB drives. With this build we went with the HITACHI Deskstar 7K3000 HDS723020BLA642 drive, which is currently available on NewEgg for $119.99 plus shipping. These drives have been strong performers and are slowly making me forget the “DeathStar” era, but only time will tell… These are in fact SATA III drives, but the onboard controller that we tested through only supports SATA II. So here we go….

    Test rig:

    • Custom Build
    • AMD Athlon Phenom II Dual Core
    • 6GB DDR2 RAM
    • Onboard SATA II
    • 5x2TB SATA II 7200 RPM – Five (5) independent drives with no RAID. Model: HITACHI Deskstar 7K3000
    • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

    GNU dd:
    Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

    dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

    Results:
    Results are listed as configuration, write, then read.

    • ZFS raidz pool utilizing five (5) SATA disks
      • 232 MB/s
      • 336 MB/s

    • ZFS raidz pool utilizing five (5) SATA disks with dataset compression set to “On”
      • 455 MB/s
      • 582 MB/s
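
If you want to sanity-check what compression is doing during a run like this, ZFS will report it directly. A quick sketch, again assuming the pool is named tank:

# Show the achieved compression ratio (dd's zeros compress almost completely)
zfs get compressratio tank

# Confirm the pool layout and health
zpool status tank
zpool list tank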

      Notes, Thoughts & Mentionables:
      There are a few things worth mentioning about this system:
      Due to time restrictions, I was only able to test on the raidz vdev type. I look forward to testing again with varying vdev types/configs if and when possible.

      –himuraken

    Initial Thoughts: Netgate Hamakua

In this post I would like to share some of my initial thoughts on the Netgate Hamakua. We were looking for a 1U half-depth rack mount system to run pfSense 1.2.3 on. Although we haven’t mentioned it much here on the blog, we love working with pfSense. pfSense is a fork of the m0n0wall project and is based on FreeBSD. I have pfSense running on everything from discarded eMachines workstations to multi-thousand dollar rack mount servers, and everything in between.

We have embedded pfSense running on a number of ALIX-based Netgate m1n1wall 2D3’s, and it is an excellent combination of low power and stable performance. When it came time to migrate from a VM-based install to hardware in our rack, we looked to Netgate. We went with the rack-mount version of the Hamakua and purchased the optional VGA break-out cable. The Hamakua has room for a 2.5″ HDD, which is an excellent option if you need that sort of thing. It is important to note that the embedded installation of pfSense does not output any data to the VGA port, so if you are running embedded you will see the initial POST/BIOS phase of the boot and then that’s it. This is because the embedded install is targeted mainly at low power devices that use serial for display output.

From what I have been able to gather from books, forums, and #pfsense on Freenode, it is obvious that key developers of the pfSense project test on this hardware extensively. And for good reason, it’s a great platform: 1U, 16W power consumption, 1GHz CPU, and 5 NICs/interfaces. You can find great documentation on the pfSense site covering embedded and full installations for this unit. Long story short: they use it and develop on it, so it will be around for a while.

We are anticipating an upgrade from our current DS3 connectivity to 1Gbps and wanted something that could make at least some use of the new line. For this reason we did some basic performance testing using our 2900 ZFS test box and another similarly spec’d server. While running large data transfers between two individual 1Gbps interfaces, we were able to max the system out at roughly 250Mbps of throughput. This is right in line with the sizing guide in the pfSense book and appears to be a limitation of the 1GHz processor. Be sure to take a look at the pfSense book for sizing and throughput requirements; it is quite helpful in this regard, among others.
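
For anyone who wants to reproduce that kind of test, a simple way to push traffic through the firewall is iperf between a host on each interface. A minimal sketch, with the server address as a placeholder:

# On the host behind the first interface, start an iperf server
iperf -s

# On the host behind the second interface, run a 60-second test toward it
iperf -c 192.168.1.10 -t 60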

One thing worth mentioning is the heat this thing generates. During heavy testing and average daily use, a large amount of heat was being displaced. The top of the unit is basically a heatsink, and it does its job well. Because of this, it will certainly be going on the top of our rack due to ventilation concerns. I believe the design is pretty solid and would most likely take the abuse without batting an eye, but I didn’t want to risk burning this one out.

    To conclude, if you need a rack mount system that will run pfSense, is well supported by the community, and you don’t need to cross the 250Mbps barrier, this may be the unit for you. This is the second model of device that we have purchased from Netgate, and as always we weren’t disappointed. If you need something a bit less performant and easier on the budget, be sure to check out the Netgate m1n1wall 2D3/2D13. It has 3x100Mbps ports and gets the job done well.

    –Himuraken