Migrating from VMware ESXi to QEMU/KVM

For a myriad of reasons, I have been looking at alternatives to VMware ESXi for a few months. Virtualizing a few machines here and there has proven educational. Learning the ropes of qemu/kvm, libvirt, and virsh has been challenging at times, but overall a pleasure. Working with kvm is great, although it takes some getting used to coming from a VMware/ESXi-centric environment.

Up to this point all of the virtual machines that I had worked with were new systems. After some research and a few backups of my current VMs running on one of my ESXi hosts, I decided to migrate a few production VMs. Here are the steps that I used to move virtual machines over from a licensed vSphere 4.1 installation to a Linux host running qemu/kvm.

For starters, be sure that you have full backups of any VMs that you plan on working with. With that out of the way, you are ready to start:

1. Remove all snapshots from the virtual machine across all virtual disks.

2. Uninstall VMware Tools and then perform a clean shutdown of the guest operating system.

3. Copy the virtual hard disk(s) over to the qemu/kvm host. The virtual disk is typically the largest file within a VM’s directory and will usually be named something like ‘guestname-flat.vmdk’.

4. On the qemu/kvm host, change to the directory containing the .vmdk file. Assuming you are using qcow2 disk images, run the following command to convert the .vmdk: kvm-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

5. Create a new VM on the qemu/kvm host and choose the recently converted disk image as your existing drive/image. It is important that you create your new guest with the same or similar settings as it had before. I recommend carrying the MAC address over to the new guest for added simplicity with NIC detection, assignment, and third-party software licensing (see the sketch after this list).

6. Attempt to boot the system. Depending upon your guest’s virtual disk settings and other factors, the system may hang during boot. If so, edit your virtual machine and set the controller type to SCSI, assuming that was the controller type back on ESXi.
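
For reference, here is roughly what steps 4 and 5 look like from the shell on the qemu/kvm host. Treat this as a sketch rather than gospel: kvm-img is simply Ubuntu’s name for qemu-img, and the guest name, memory, CPU count, bridge name, and MAC address below are placeholders that you will need to adjust for your own environment.

# Convert the VMware flat disk to qcow2 (kvm-img and qemu-img are interchangeable here)
qemu-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

# Sanity check the converted image
qemu-img info newguestname.qcow2

# Import the converted disk as a new libvirt guest, reusing the old MAC address
virt-install --name newguestname --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/newguestname.qcow2,format=qcow2 \
  --network bridge=br0,mac=00:50:56:aa:bb:cc \
  --import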

At this point your system should be up and running on the new host. I did find notes and suggestions that qemu/kvm can run .vmdk files/disk images directly, but there seemed to be a handful of caveats, so I decided to convert the vmdks over to a native format.

–himuraken

HP ProLiant MicroServer Flexibility

I’ve been meaning to put down some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my chance when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40L’s. These units are listed as part of the ProLiant family of servers which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC type card, and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, so adding disks is no problem. I say “hot swap” because I am pretty sure the backplane/controller does not allow actual hot swapping in the true sense, YMMV. Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board sits on a simple and quite brilliant tray assembly which makes removal, upgrade, and reinsertion simple.

Do yourself a favor when you purchase the system by maxing out the RAM at 8GB (DDR3/ECC) and adding the optional iLO/remote access card. For basic NAS and low-end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors; more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try to be a hero to your clients by touting this as an ultra-affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy: if you’re trying to service your clients properly, get them a truly redundant system with hardware RAID, dual PSUs, and things of that nature. You are providing a disservice to your clients if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver-assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device, home virtualization lab, or any number of other roles you can throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although not a comprehensive list of installs, I can confirm successful installation on the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64), currently the Debian stable release
  • Debian Wheezy (i386/AMD64), currently the Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011

Whew! What a list, and that just touches the surface of what you can run. Those just happen to be the configurations that I have tested with success.

My current configuration consists of the base system running 8GB of RAM, the iLO card, 1x 64GB SSD, and 4x 1TB RAID edition drives. I’ve got Debian stable AMD64 running on / and the 4x 1TB RE drives in a Linux md RAID 5 array mounted on /home (a sketch of the array setup follows below). This acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU. Being a dual core 1.5GHz part, the system will usually run out of CPU before you hit any other bottleneck.
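
For anyone curious about the /home array, building it with mdadm is quick. The following is only a sketch; the device names (/dev/sdb through /dev/sde) and the ext4 filesystem are assumptions based on my layout and will differ on other systems.

# Create a 4-disk RAID 5 array from the 1TB RE drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on the array and mount it at /home
mkfs.ext4 /dev/md0
mount /dev/md0 /home

# Record the array in mdadm.conf and keep an eye on the initial sync
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat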

In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or something similar, you will not be disappointed by this device.

    –himuraken

    XEN vs. VMware ESXi

I use server virtualization to make money. With the new licensing model that VMware has announced with vSphere 5, it appears that a typical setup will now cost more. Times are tough! What is a sysadmin to do?

vSphere 4 will clearly remain viable for at least the near future. I have not taken the time to fully understand what v5 will offer that is better. Our current environment is two ESXi 4.1 hosts managed by vCenter. Each host has 32GB RAM and the guest RAM is probably oversubscribed, but not by much.

In the next week I plan to load the VMware tool that will provide indications as to what the new licensing will look like for the current environment. Should be interesting…

All that as it is, I think it is time to seriously look at XEN virtualization. I loaded it up on decent hardware today (right before the power went out!), so more later on the testing.

Question: anyone using DTC-XEN for ‘managing’ XEN guests?

    -habanero_joe

09.26.2011 Update: loaded VMware ESXi 5.0.0 over the weekend. Installation is as straightforward as expected. Quickly installed two MS Windows Server 2008 Enterprise 64-bit servers and an MS Windows 7 Enterprise 64-bit desktop. Will be digging into the new license model limitations this week. So far, for a single host, I can’t find a reason to load 5.0.

    I did get XEN loaded on Debian Squeeze, then wrecked the install. I will be rebuilding shortly for a comparison.

VMware Workstation 8 was released a few weeks ago. One nice feature is a much easier migration path from Workstation to vCenter and vSphere. VMware is claiming over 50 new features with this release.

    -habanero_joe

    VMware ESX, NIC Teaming, and VLAN Trunking with HP ProCurve

    Found this great article and thought I would share it with RH readers:

    Original text from http://blog.scottlowe.org/

    In an earlier article about VMware ESX, NIC teaming, and VLAN trunking, I described what the configuration should look like if one were using these features with Cisco switch hardware. It’s been a quite popular post, one I will probably need to update soon.

In this article, I’d like to discuss how to do the same thing, but using HP ProCurve switch hardware. The article is broken into three sections: using VLANs, using link aggregation (NIC teaming), and using both together.

Using VLAN Trunking

To my Cisco-oriented mind, VLANs are handled quite differently on ProCurve switches. Port-based VLANs, in which individual ports are assigned to one or more VLANs, allow a switch port to participate in that VLAN in either an untagged or a tagged fashion.

The difference here is really simpler than it may seem: the untagged VLAN can be considered the “native VLAN” from the Cisco world, meaning that the VLAN tags are not added to packets traversing that port. Putting a port in a VLAN in untagged mode is essentially equivalent to making that port an access port in the Cisco IOS world. A port can only be marked as untagged for a single VLAN, which makes sense if you think about it.

    Any port groups that should receive traffic from the untagged VLAN need to have VLAN ID 0 (no VLAN ID, in other words) assigned.

    A tagged VLAN, on the other hand, adds the 802.1q VLAN tags to traffic moving through the port, like a VLAN trunk. If a user wants to use VST (virtual switch tagging) to host multiple VLANs on a single VMware ESX host, then the ProCurve ports need to have those VLANs marked as tagged. This will ensure that the VLAN tags are added to the packets and that VMware ESX can direct the traffic to the correct port group based on those VLAN tags.

    In summary:

    * Assign VLAN ID 0 to all port groups that need to receive traffic from the untagged VLAN (remember that a port can only be marked as untagged for a single VLAN). This correlates to the discussion about VMware ESX and the native VLAN, in which I reminded users that port groups intended to receive traffic for the native VLAN should not have a VLAN ID specified.
    * Be sure that ports are marked as tagged for all other VLANs that VMware ESX should see. This will enable the use of VST and multiple port groups, each configured with an appropriate VLAN ID. (By the way, if users are unclear on VST vs. EST vs. VGT, see this article.)
    * VLANs that VMware ESX should not see at all should be marked as “No” in the VLAN configuration of the ProCurve switch for those ports.
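
To put the summary above into ProCurve terms, a rough example follows, entered from the switch’s configuration context. Assume the ESX host uplinks are ports 1-4, VLAN 1 is the untagged/native VLAN, and VLANs 10 and 20 are the tagged VLANs used for VST; the port and VLAN numbers are illustrative only, not taken from any particular environment.

vlan 1
   untagged 1-4
   exit
vlan 10
   tagged 1-4
   exit
vlan 20
   tagged 1-4
   exit

On the ESX side, the port groups for VLANs 10 and 20 would carry VLAN IDs 10 and 20, while the port group for the untagged VLAN gets no VLAN ID at all.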

    Using Link Aggregation

    There’s not a whole lot to this part. In the ProCurve configuration, users will mark the ports that should participate in link aggregation as part of a trunk (say, Trk1) and then set the trunk type. Here’s the only real gotcha: the trunk must be configured as type “Trunk” and not type “LACP”.

    In this context, LACP refers to dynamic LACP, which allows the switch and the server to dynamically negotiate the number of links in the bundle. VMware ESX doesn’t support dynamic LACP, only static LACP. To do static LACP, users will need to set the trunk type to Trunk.

    Then, as has been discussed elsewhere in great depth, configure the VMware ESX vSwitch’s load balancing policy to “Route based on ip hash”. Once that’s done, everything should work as expected. This blog entry gives the CLI command to set the vSwitch load balancing policy, which would be necessary if configuring vSwitch0. For all other vSwitches, the changes can be made via VirtualCenter.
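
As a rough example of both halves, assuming the ESX host’s uplinks are ProCurve ports 1 and 2: the switch side is a single configuration command, and on ESXi the vSwitch policy can also be set with vim-cmd from the console or an SSH session. The port numbers and vSwitch name here are placeholders, so adjust them for your environment.

trunk 1-2 trk1 trunk

vim-cmd hostsvc/net/vswitch_setpolicy --nicteaming-policy=loadbalance_ip vSwitch0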

That’s really all there is to making link aggregation work between an HP ProCurve switch and VMware ESX.

Using VLANs and Link Aggregation Together

    This section exists only to point out that when a trunk is created, the VLAN configuration for the members of that trunk disappears, and the trunk must be configured directly for VLAN support. In fact, users will note that the member ports don’t even appear in the list of ports to be configured for VLANs; only the trunks themselves appear.

    Key point to remember: apply your VLAN configurations after your trunking configuration, or else you’ll just have to do it all over again.
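
For example, carrying forward the VLAN 10 and Trk1 names used above, re-applying the VLAN to the trunk looks roughly like this (the names are illustrative only):

vlan 10
   tagged Trk1
   exit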

    With this information, users should now be pretty well prepared to configure HP ProCurve switches in a VMware ESX environment. Feel free to post any questions, clarifications, or corrections in the comments below, and thanks for reading!

    VMware Best Practices

    I have come across a few useful KB articles from VMware recently and thought I would stick them here for future reference.


    Installing ESX 4.1 and vCenter Server 4.1 best practices

    If you are planning to or have recently deployed vCenter Server 4.1 or ESX(i) 4.1, this KB article from VMware may be of some value to you: Installing ESX 4.1 and vCenter Server 4.1 best practices


    Best practices for virtual machine snapshots in the VMware environment

Most of us work with snapshots and snapshot tech every day. In fact, I for one cannot imagine doing some of the things I do without snapshots any longer. I used to do everything without snapshots, but technology is good at making us all lazy. Either way, this best practices article is worth a read.

    –Himuraken

    VMUG: Tech Data in Clearwater, FL.

Attended my second VMware User Group (VMUG) meeting today. The first meeting that I attended was held at the Southwest Florida Water Management District’s office a couple of months back. Today’s meeting was sponsored by Tech Data and held at their Clearwater, FL headquarters.

First up was VMware, which gave a presentation on upcoming features in View 4.5. Numerous alpha/beta screenshots were displayed, showcasing improvements and new features. We were asked as a group not to share the exact features, as this is still a work in progress, so I will leave it at that.

    Next up was IBM with a more hardware CAPEX/OPEX oriented presentation. Some of the hardware that was discussed was really impressive, things like 2TB of system RAM, blade enclosures, and 1.8″ SSD “flash packs” that cut costs and push IOPS through the roof.

The final presenter, QLogic, was pretty interesting. The presentation included 8Gb FC, Fibre Channel over Ethernet, and 10Gb copper.

    After the presentations everyone broke up into groups and went on tours of the Tech Data Solutions Center and then into discussions with each of the three presenters.

    See you at the next VMUG.

    –Himuraken

    Hyper9 GuessMyOS

This falls under the category of Fun Apps. I recently installed GuessMyOS by Hyper9 (who make several virtualization management apps). This is a plugin for the VI3 and vSphere 4 clients. GuessMyOS replaces the generic VM icons with OS-specific icons for Linux and Windows. Personally I like seeing the penguin. Note that this is tied to the client, so it must be enabled for each client instance.

    Hyper9

    – habanero_joe

    3ware 9650se and ESXi 4.0

    Of course the first thing that I did tonight after receiving my 3ware 9650se was to install it in my ESXi 4.0 server and get it going. There are a few caveats that I expected and a few that I didn’t. Hopefully this post will help a few of you out there.

    Here are the steps that I performed:
    1. Physically installed card and drives.
    2. Built the logical unit / volume on the card.
    3. Booted ESXi and noticed the card / array not showing up.

I expected #3 from my pre-sales madness research; yeah, I’m that bad about pre-sales. There is a driver for ESX 4 that enables the hypervisor to be installed onto the array. For the rest of us with ESXi 4.0, however, the array can only be used as a datastore. This isn’t an issue anyway, since the critical files are on the array, not on the single disk or USB device that we install the ESXi OS onto.

    This led me to 3ware’s support site to find the exact article regarding this. You can find the article titled “I need support for VMware ESX/ESXi 4.0 and ESX/ESXi 4.0 update 1 for 9650SE and 9690SA. Is a certified driver available?” here. Basically, the top half of the document applies to ESX while the lower portion is dedicated to ESXi.

First off, the fact that 3ware has this and other great articles is excellent; they seem like they "get it". On the other hand, I found something difficult to do, and not for technical reasons. They instruct you to find the file "offline-bundle.zip" on the included driver CD. I searched all over the provided driver CD and had no luck locating it; I couldn’t help but think that there ought to have been a download link as well***. After locating the file I proceeded down the list of the well-written how-to. Unfortunately for me, after running the perl vihostupdate.pl -server x.x.x.x -username root -password "" -b c:\offline-bundle.zip -i command, I received zero feedback from the CLI. I restarted the ESXi server per the document’s recommendations and upon reboot, no RAID array. Hrmm, I didn’t get any errors or feedback. After shortening the name of the original file to offline-bundle.zip and re-running the command, I did get positive feedback in the form of this message: The update completed successfully, but the system needs to be rebooted for the changes to be effective. Delicious! Now we are working the way we ought to. Restart the ESXi server and enjoy.

*** – Clarification and special note: At the bottom of the page you will see a download link for a file named vmware-esx-drivers-scsi-3w-9xxx_400.2.26.08.035vm40-1.0.4.00000.179560.iso. This ISO image contains the offline-bundle folder. The file that you need for the upgrade/upload is named AMCC_2.26.08.035vm40-offline_bundle-179560.zip. The process would not succeed until I renamed AMCC_2.26.08.035vm40-offline_bundle-179560.zip to offline-bundle.zip.
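
For anyone following along at home, the working sequence boils down to something like the following, run from a Windows box with the vSphere CLI installed. This is a sketch: adjust the server address and file paths for your environment, and the final --query is simply there to confirm that the bulletin actually installed.

ren AMCC_2.26.08.035vm40-offline_bundle-179560.zip offline-bundle.zip
perl vihostupdate.pl -server x.x.x.x -username root -password "" -b c:\offline-bundle.zip -i
perl vihostupdate.pl -server x.x.x.x -username root -password "" --query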

    Happy virtualizing!

    –Himuraken