Converting VMware Workstation images to VMware ESXi 3.5

After getting my Dell PowerEdge loaded up with ESXi 3.5, I decided to try out one of the Ubuntu JeOS loads from the VMware Virtual Appliance Marketplace. After unpacking the 7-Zip archive I was left with the usual VMware file types such as .vmx and .vmdk, but ESXi only gives me the option to import a virtual appliance from a .ovf file. After a little research I determined that a conversion would be required.

I found two different ways to get this done: from the command line, or using VMware vCenter Converter.

Use SCP to copy your Workstation files over to the ESX host. The destination is your datastore, which lives at /vmfs/volumes/YourDataStore.
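Assuming SSH access is enabled on the host, and using a hypothetical host name (substitute your own datastore and file names), the copy might look something like this:

scp sourcefile.vmdk sourcefile.vmx root@esxhost:/vmfs/volumes/YourDataStore/

Next, ssh into the ESX host and run the following command: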

vmkfstools -i sourcefile.vmdk /vmfs/volumes/YourDataStore/destinationfile.vmdk

Once that process is complete you can point ESX at that file and start it up. Make sure you delete the source files so you aren’t wasting space on the ESX server.
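Keeping the same hypothetical names as above, that cleanup amounts to removing the original Workstation disk, being careful not to delete the freshly converted one:

rm /vmfs/volumes/YourDataStore/sourcefile.vmdk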

The second, and possibly easier, way to do this is to download and install vCenter Converter from VMware. Once you have the application installed, simply follow the steps in the conversion wizard. It takes you from start to finish, including the transfer of files to the ESX server itself.

–Himuraken

Device or resource busy when using mdadm

After my Buffalo NAS physically failed, I decided to go back to the good ole’ home-brew file server / NAS. So I stripped the 250GB drives out of the NAS and set out to set up Linux software RAID, also known as md RAID. I went this route because my hardware RAID controller was currently in use by another system, and I wanted a little weekend project.

After loading Ubuntu 8.10 onto a standalone drive (/dev/sda), I went ahead and prepped the system for the RAID array. After getting the disks installed and formatted, I dropped to the command line and ran sudo apt-get install mdadm. Once that was complete I attempted to create a RAID 10 array using the following command:

sudo mdadm --verbose --create /dev/md0 --level raid10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

which failed and gave the following errors:
mdadm: Cannot open /dev/sdb1: Device or resource busy
mdadm: Cannot open /dev/sdc1: Device or resource busy
mdadm: Cannot open /dev/sdd1: Device or resource busy
mdadm: Cannot open /dev/sde1: Device or resource busy
mdadm: create aborted

After scouring Google with every search term I could think of, I could not find anything pointing to the cause of the problem. I made sure the disks were not mounted, verified they were not in use, made sure that swapoff had been run on each disk, and generally tried everything. After playing around with different things it dawned on me: my onboard RAID controller might be interfering. Most people in the Linux world refer to onboard RAID as FakeRAID or FRAID. It turned out that the Ubuntu installer had detected the onboard Nvidia RAID controller and installed the dmraid package, which had claimed the disks through device-mapper and was holding them busy, hence the errors above. If you must use FakeRAID, dmraid may be useful to you; if you want true Linux software RAID, you need mdadm.
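If you want to confirm that dmraid is the culprit before changing anything, a quick check with the standard dmraid and device-mapper tools looks like this:

sudo dmraid -r
sudo dmsetup ls

dmraid -r lists any FakeRAID member disks it has discovered, and dmsetup ls shows the device-mapper mappings that are keeping the underlying disks busy.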

To resolve this, go into the BIOS for your FakeRAID controller and disable / remove / delete any arrays that may be configured. Afterwards, reboot, go into your motherboard BIOS, and disable the onboard RAID controller. Make sure the ports are still enabled; you just want the RAID functionality disabled. Lastly, go to the command line and run sudo apt-get remove dmraid. This will remove the dmraid package and update your boot image so that it no longer includes the dmraid software.
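Once dmraid is gone and the box has been rebooted, the checks from above should come back empty, assuming nothing else on the system (LVM or dm-crypt, for example) is using device-mapper:

sudo dmsetup ls
No devices found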

Once you have completed the above, rerun your mdadm command and you should be off and running in no time.
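For completeness, here is a sketch of the rerun plus a couple of sanity checks; the device names match the command above, and /etc/mdadm/mdadm.conf is the config path used by Ubuntu's mdadm package:

sudo mdadm --verbose --create /dev/md0 --level raid10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

Watching /proc/mdstat shows the initial sync progress, and appending the scan output to mdadm.conf lets the array assemble automatically at boot.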

–Himuraken