Got old-buntu? Ubuntu EOL 9.10 to 10.04 Upgrade Mini HowTo

So several months ago I, like the rest of the world, was notified that end of life (EOL) for Ubuntu 9.10 Karmic Koala would be happening. From the news blurb/mailing list, wherever I found it, I walked away thinking that security updates were all that would cease to exist.

In preparation for the upgrade, I went ahead and cloned the 9.10 server and proceeded to upgrade the clone to Ubuntu 10.04 Lucid Lynx. This went off without a hitch from what I could tell, and I scheduled the upgrade of the production server with my last client running 9.10.

Without fail, life happens, clients have things come up, and the upgrade never happened. Fast forward to the present day, and my client tried installing a package using apt-get and received a slew of errors. Looking into the issue a bit further, I found the repositories gone. Interestingly enough, when EOL occurs for an Ubuntu release, it really ends, and not just for the security patches.

So one is left wondering, “how can I sudo apt-get install update-manager-core && sudo do-release-upgrade when I can’t even do a simple sudo apt-get update?” Solution: an EOL upgrade. There are several different ways to go about this; the best are detailed here. At the time of this writing, that link is a little unclear about how to get from 9.10 to 10.04, so here is the quick and easy way:

1. Back up your current sources.list:
sudo mv /etc/apt/sources.list ~/sources.list

2. Create a new sources.list:
sudo vim /etc/apt/sources.list

3. Add/paste in the archive (old-releases) repositories, substituting CODENAME with your release codename (jaunty, karmic, etc.); a filled-in karmic example follows below:

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-security main restricted universe multiverse

# Optional
#deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-backports main restricted universe multiverse
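
For example, since 9.10 is Karmic, the required lines end up looking like this:

deb http://old-releases.ubuntu.com/ubuntu/ karmic main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ karmic-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ karmic-security main restricted universe multiverse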

4. Update the repositories and install update-manager-core:
sudo apt-get update
sudo apt-get install update-manager-core

5. Initiate the upgrade:
sudo do-release-upgrade

6. Enjoy!

–Himuraken

Windows Server Licensing for Virtual Environments

I prefer Linux to Windows for a handful of reasons. One of the obvious benefits is licensing, and with all of the virtualizing I do in production and testing, it's nice to never have to think about licensing. Meanwhile, back in the real world, most of my clients use Windows based servers for their day to day tasks. The Windows OS is generally licensed per install/server; the notable exception being Datacenter Edition, which is licensed per CPU.

With consolidation ratios ever increasing, we are always on the lookout for bottlenecks in systems. What about licensing? If you are running numerous Windows guests, are there ways to make smarter licensing moves? In a nutshell, yes.

Instead of trying to reinvent the wheel, I will steer you to a well written and very informative article detailing some of the things you can do. This is well worth the read.

–himuraken

Cheap Network Attached Storage Is Expensive.

The bottom line: you get what you pay for…

In my limited IT experience (~20 years), I have seen it proven over and over that low-cost hardware costs a lot in the not so long run. This post is about NAS, but the premise is not limited to that particular type of device. In the course of various consulting engagements, I have come across many devices that at first glance seem to be a great value. Until you really get them into production…

Before I start making sweeping generalizations, let me be clear: many manufacturers have a wide breadth of products, many of which are truly commercial-grade and enterprise-ready. This is not what the common small business or non-profit chooses the first time around. For example, Iomega delivers several wonderful, relatively low-cost devices that work very well as disk-to-disk backup targets. The ix4-200 comes to mind.

The real problem comes from the low-end hardware in the low-cost NAS devices. This includes home-PC-quality hard drives, incorrectly sized power supplies, improper cooling, a lack of standard storage protocols, no redundant drives, no hot-spare, and generally poor management interfaces. A “good” device will provide a non-proprietary way to access the data if the rest of the network (servers/PCs) fails.

When purchasing any type of network storage (or rolling your own), carefully consider all of the components before making a purchase. Look at the MTBF (mean time between failures) or CDL (component design life) of the drives. The difference between a true RAID-quality drive and a PC drive can be staggering. Remember that your NAS runs 24/7, 365 days a year. That is 8,760 hours per year. Often the better equipment is only a few dollars more and will save you considerably in the long run.

How much is your company’s data worth to you? Industry statistics have shown that 60% of companies that lose their data shut down within six months. Over 90% of companies that lost their data center for ten days or more file for bankruptcy within one year. Every week there are over 14,000 hard drive crashes in the United States. Don’t let your company become a statistic…

– habanero_Joe

ZFS Performance Testing: P4 Clone using WD Raptors

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time around, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 6x74GB drives. This system was literally thrown together with spare parts lying around the home office. This is the ultimate in home made, commodity parts, down and dirty NAS setups. The six (6) Western Digital Raptor drives are attached to three different SATA II non-RAID controllers. For the uninitiated, Raptors are SATA drives that spin at 10K RPM. So here we go….

*Note*
My intention was to run this with seven (7) disks so that it would be a WD Raptor vs WD Blue test. Unfortunately, my seventh and last WD Raptor died during configuration. With that in mind, it is interesting nonetheless to compare the 7-disk WD Blue results with the 6-disk WD Raptor results.

Test rig:

  • Custom Build
  • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
  • 2GB RAM
  • Onboard SATA II
  • 6x74GB SATA II 10000 RPM – Six (6) independent drives with no RAID. Model: Western Digital Raptor
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
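
Each configuration below means rebuilding the pool from the CLI between runs. Here is a minimal sketch of the commands involved, assuming a pool named tank and drives showing up as ad4 through ad14 (both the pool name and the device names are assumptions and will vary by controller); FreeNAS exposes the same operations through its web GUI:

# Striped pool across all six Raptors
zpool create tank ad4 ad6 ad8 ad10 ad12 ad14

# Single- and double-parity variants for the raidz/raidz2 runs
# zpool create tank raidz ad4 ad6 ad8 ad10 ad12 ad14
# zpool create tank raidz2 ad4 ad6 ad8 ad10 ad12 ad14

# Turn dataset compression on for the “On” runs
zfs set compression=on tank

# Destroy the pool before building the next configuration
zpool destroy tank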

Results:
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 150 MB/s
    • 239 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 302 MB/s
    • 515 MB/s

  • ZFS raidz pool utilizing six (6) SATA disks
    • 99 MB/s
    • 165 MB/s

  • ZFS raidz pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 299 MB/s
    • 516 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 76 MB/s
    • 164 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 301 MB/s
    • 514 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. Possibly even more influential on the performance numbers are the storage controllers: the motherboard only has four ports, so a used/cheap/old SATA II PCI controller was used to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system's audible sound was unchanged. While running the compression-on tests, the CPU was of course showing a heavy load, but it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4s still cook 🙂

–himuraken

ZFS Performance Testing: Intel P4 Clone using WD Blue Drives

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time around, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 7x500GB drives. This system was literally thrown together with spare parts lying around the home office. This is the ultimate in home made, commodity parts, down and dirty NAS setups. The seven (7) Western Digital WD5000AAKS (known as WD Blue) drives are attached to three different SATA II non-RAID controllers. Historically, these drives have been used in low-cost and all-around cheap builds, up until the WD Green drives came out. Essentially, better than Green, worse than Black, but a good mix of price/GB. So here we go….

Test rig:

  • Custom Build
  • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
  • 2GB RAM
  • Onboard SATA II
  • 7x500GB SATA II 7200 RPM – Seven (7) independent drives with no RAID. Model: Western Digital WD5000AAKS
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
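
If you want to repeat the write/read pair a few times without babysitting the console, a quick wrapper like the one below works; dd prints its own transfer rate after each pass (the loop count and test file name are arbitrary):

# Run the same ~20GB write-then-read test three times in a row
for i in 1 2 3; do
    dd if=/dev/zero of=foo bs=2M count=10000
    dd if=foo of=/dev/null bs=2M
    rm foo
done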

Results:
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing seven (7) SATA disks
    • 130 MB/s
    • 228 MB/s

  • ZFS stripe pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 301 MB/s
    • 508 MB/s

  • ZFS raidz pool utilizing seven (7) SATA disks
    • 81 MB/s
    • 149 MB/s

  • ZFS raidz pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 302 MB/s
    • 512 MB/s

  • ZFS raidz2 pool utilizing seven (7) SATA disks
    • 66 MB/s
    • 144 MB/s

  • ZFS raidz2 pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 298 MB/s
    • 515 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. Possibly even more influential on the performance numbers are the storage controllers: the motherboard only has four ports, so a used/cheap/old SATA II PCI controller was used to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system's audible sound was unchanged. While running the compression-on tests, the CPU was of course showing a heavy load, but it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4s still cook 🙂

–himuraken

ZFS Performance Testing: AMD Dual Core w/ 6GB DDR2 & 5x2TB SATA in raidz

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time around, I have an MSI motherboard with an AMD dual core CPU, 6GB of DDR2, and 5x2TB drives. With this build we went with the HITACHI Deskstar 7K3000 HDS723020BLA642 drive, which is currently available on NewEgg for $119.99 plus shipping. These drives have been strong performers and are slowly making me forget the “DeathStar” era, but only time will tell… These are in fact SATA III drives, but the onboard controller that we tested through only supports SATA II. So here we go….

Test rig:

  • Custom Build
  • AMD Athlon Phenom II Dual Core
  • 6GB DDR2 RAM
  • Onboard SATA II
  • 5x2TB SATA II 7200 RPM – Five (5) independent drives with no RAID. Model: HITACHI Deskstar 7K3000
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
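
For the compression runs, the dataset property just gets flipped before repeating the same dd pair; the pool name tank below is an assumption. Keep in mind that /dev/zero compresses almost perfectly, so those numbers say more about the CPU than about the disks:

zfs set compression=on tank      # enable for the “On” runs
zfs get compression tank         # confirm the current setting
zfs get compressratio tank       # see how well the written data compressed
zfs set compression=off tank     # back to the baseline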

Results:
Results are listed as configuration, write, then read.

  • ZFS raidz pool utilizing five (5) SATA disks
    • 232 MB/s
    • 336 MB/s

  • ZFS raidz pool utilizing five (5) SATA disks with dataset compression set to “On”
    • 455 MB/s
    • 582 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
Due to time restrictions, I was only able to test the raidz vdev type. I look forward to testing again with varying vdev types/configs if and when possible.

–himuraken

.NK2 file locations

The location of the NK2 AutoComplete file created by Outlook might be different from one computer to another, depending on the operating system and the version of Outlook.

  • For Outlook 2003/2007 with Windows 2000, Windows XP, or Windows Server 2003:
    The location of the nk2 file is C:\Documents and Settings\[User Profile]\Application Data\Microsoft\Outlook
    The name of the NK2 file is identical to the Outlook profile name, with .nk2 extension.
  • For Outlook 2003/2007 with Windows Vista, Windows 7, or Windows Server 2008:
    The location of the nk2 file is C:\Users\[User Profile]\AppData\Roaming\Microsoft\Outlook
    The name of the NK2 file is identical to the Outlook profile name, with .nk2 extension.
  • For Outlook 2010 with Windows Vista, Windows 7, or Windows Server 2008:
    The nk2 file is located in C:\Users\[User Profile]\AppData\Local\Microsoft\Outlook\RoamCache
    The name of the NK2 file is in the following format: Stream_Autocomplete_X_AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.dat The X is the file index (usually 0) and AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA is a random 16-byte key in hexadecimal format.
  • For Outlook 2010 with Windows XP:
    The nk2 file is located in C:\Documents and Settings\[User Profile]\Local Settings\Application Data\Microsoft\Outlook\RoamCache
    The name of the NK2 file is in the following format: Stream_Autocomplete_X_AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.dat The X is the file index (usually 0) and AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA is a random 16-byte key in hexadecimal format.
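
If you are not sure which profile name or file index you are dealing with, the files are easy to locate from a command prompt; on Windows Vista/7 the environment variables below resolve to the per-user paths listed above:

dir /s /b "%APPDATA%\Microsoft\Outlook\*.nk2"
dir /s /b "%LOCALAPPDATA%\Microsoft\Outlook\RoamCache\Stream_Autocomplete*.dat"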

Tethering WiFi Only Galaxy Tab To Motorola Droid

Bought my significant other a Galaxy Tab for an upcoming trip and decided to get her up and running with WiFi tethering. I tether my laptops to my Droids frequently and love it. Long story short, it wasn’t as straightforward as I had hoped. After several hours of frustrating work trying to get a WiFi-only Samsung Galaxy Tab connected to an ad-hoc network, I finally found a solution. The problem is that I needed to connect the tablet to one of our rooted Droid 1 phones, which are running wifi tether. This is quite difficult (thanks Google) to do because Android filters out and hides ad-hoc networks. After trying many things, including switching out the wpa_supplicant for a different one via Root Explorer, I have found the following to work well. Keep in mind that simply allowing the OS to connect to ad-hoc networks would be ideal.

Be sure to have a working setup of wifi tether running on your phone if that is where you are sourcing the ad-hoc connection.

Step 1: Root your tablet using Z4root and reboot as required.
Step 2: Install ZT-180 on the tablet from the Android Market.
Step 3: Configure the ZT-180 application to connect to the SSID of your wifi tether application.
Step 4: Switch to ad-hoc mode within ZT-180 and enjoy 🙂

It appears to me that the ZT app acts as a proxy between the tablet and the ad-hoc peer.

Truly, shame on Google here; there are so many user forums filled to the brim with requests and issues. I can only begin to think of the number of people who have had to void warranties and/or forgo support by rooting their devices to do something the iPad allows by default.

–himuraken