How To Leave GoDaddy

Whether you are participating in MoveYourDomainDay or just want to get away from the terrible user interface that GoDaddy uses, there are a few good things to know.

1. Make sure your whois info has a proper email address listed. DO NOT change anything else or you risk locking up that domain for an additional 30-60 days.

2. Unlock your domains with the GoDaddy DomainManager.

3. Send authorization codes via email to the administrative contact by choosing Send By Email under the Domain Info area of the DomainManager.
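Before requesting the transfer, it can't hurt to sanity-check what contact info the registry actually has on file from any machine with a whois client. A quick sketch (example.com stands in for your domain):

```shell
# Requires network access and the whois client; example.com is a placeholder.
# Output format varies by registrar, so grep loosely for contact/email lines.
whois example.com | grep -iE 'admin|email'
```

If the email shown is stale or hidden behind a privacy service, fix that first, then proceed with the unlock and authorization code steps.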

If you are switching over to NameCheap, they offer great instructions on their site here. Currently, to entice additional business and to fight SOPA, NameCheap and Gandi.net are donating a portion of each domain transfer to the EFF.

–himuraken

How To Test Inbound & Outbound Faxes

Ever needed to test your ability to send or receive faxes? Usually, no one is around to send you a test, or you’d prefer not to bother a client with testing your equipment. HP has a little-known service that you can use to test faxing in both directions for free. Simply send a one-page, text-only fax to 1-888-hpfaxme (1-888-473-2963) and wait a few minutes. After a short while, you should receive a fax back from HP.

The official HP page for this service can be found by clicking this link.

Got old-buntu? Ubuntu EOL 9.10 to 10.04 Upgrade Mini HowTo

So several months ago I, like the rest of the world, was notified that end of life (EOL) for Ubuntu 9.10 Karmic Koala would be happening. From the news blurb/mailing list, wherever I found it, I walked away thinking that security updates would simply cease to exist.

In preparation for the upgrade, I went ahead and cloned the 9.10 server and proceeded to upgrade the clone to Ubuntu 10.04 Lucid Lynx. This went off without a hitch from what I could tell, and I scheduled the upgrade of the production server with my last client running 9.10.

Without fail, life happens, clients have things come up, and the upgrade never happened. Fast forward to the present day, and my client tried installing a package using apt-get and received a slew of errors. Looking into the issue a bit further, I found the repositories gone. Interestingly enough, when EOL occurs for an Ubuntu release, it really ends, and not just the security patches.

So one is left wondering, “how can I sudo apt-get install update-manager-core && sudo do-release-upgrade when I can’t even do a simple sudo apt-get update?” Solution: an EOL upgrade. There are several different ways to go about this, the best of which are detailed here. At the time of this writing, the link is a little unclear about how to get from 9.10 to 10.04, so here is the quick and easy way:

1. Backup your current sources.list:
sudo mv /etc/apt/sources.list ~/sources.list

2. Create a new sources.list:
sudo vim /etc/apt/sources.list

3. Add/paste in the archive release repositories, substituting your release codename (jaunty, karmic, etc.) for CODENAME:

## EOL upgrade sources.list
# Required
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-security main restricted universe multiverse

# Optional
#deb http://old-releases.ubuntu.com/ubuntu/ CODENAME-backports main restricted universe multiverse

4. Update the repositories and install the update manager:
sudo apt-get update
sudo apt-get install update-manager-core

5. Initiate the upgrade:
sudo do-release-upgrade

6. Enjoy!
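If you prefer a single shot, steps 2 and 3 can be collapsed into a here-document. This is just a sketch (karmic shown as the codename), and it writes the file locally first so you can review it before moving it into /etc/apt:

```shell
# Generate an EOL sources.list for review; CODENAME=karmic is an example
CODENAME=karmic
cat > sources.list.eol <<EOF
deb http://old-releases.ubuntu.com/ubuntu/ ${CODENAME} main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ ${CODENAME}-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ ${CODENAME}-security main restricted universe multiverse
EOF
grep -c '^deb' sources.list.eol   # prints 3
```

Once you have eyeballed the file, sudo cp sources.list.eol /etc/apt/sources.list and continue with step 4.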

–Himuraken

Windows Server Licensing for Virtual Environments

I prefer Linux to Windows for a handful of reasons. One of the obvious benefits is licensing, and with all of the virtualizing I do in production and testing, it’s nice to never have to think about licensing. Meanwhile, back in the real world, most of my clients use Windows based servers for their day to day tasks. The Windows OS license is generally licensed per install/server; the notable exception being Data Center Edition, which is licensed per CPU.

With consolidation ratios ever increasing, we are always on the lookout for bottlenecks in systems. What about licensing? If you are running numerous Windows guests, are there ways to make smarter licensing moves? In a nutshell, yes.
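As a back-of-the-napkin illustration of why per-CPU licensing can win as consolidation ratios climb (the prices below are made-up placeholders, not Microsoft’s actual figures):

```shell
# Hypothetical numbers only: 10 Windows guests on a 2-socket host
GUESTS=10; SOCKETS=2
PER_INSTANCE=800    # placeholder per-install license cost
PER_CPU=3000        # placeholder per-CPU Data Center license cost
echo "Per-instance total: $(( GUESTS * PER_INSTANCE ))"   # 8000
echo "Per-CPU total: $(( SOCKETS * PER_CPU ))"            # 6000
```

With made-up numbers like these, per-CPU licensing breaks even somewhere around 7–8 guests and only gets better from there; the real break-even point depends entirely on your actual pricing.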

Instead of trying to reinvent the wheel, I will steer you to a well written and very informative article detailing some of the things you can do. It is well worth the read.

–himuraken

ZFS Performance Testing: P4 Clone using WD Raptors

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time around, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 6x74GB drives. This system was literally thrown together with spare parts lying around the home office. This is the ultimate in home made, commodity parts, down and dirty NAS setups. It uses six (6) Western Digital Raptor drives attached to three different SATA II non-RAID controllers. For the uninitiated, Raptors are SATA drives that spin at 10K RPM. So here we go….

*Note*
My intention was to run this with seven (7) disks so that it would be a WD Raptor vs WD Blue test. Unfortunately, my seventh and last WD Raptor died during configuration. With that in mind, it is interesting nonetheless to compare the 7-disk WD Blue results with the 6-disk WD Raptor results.

Test rig:

  • Custom Build
  • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
  • 2GB RAM
  • Onboard SATA II
  • 6x74GB SATA II 10000 RPM – Six (6) independent drives with no RAID. Model: Western Digital Raptor
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
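For context, the pools under test were built roughly along these lines. This is a sketch only: FreeNAS creates the pools through its web GUI, and the adaN device names here are illustrative, not the actual devices on this box:

```shell
# Striped pool from six disks (device names are placeholders)
zpool create tank ada0 ada1 ada2 ada3 ada4 ada5

# Or a raidz / raidz2 pool instead:
# zpool create tank raidz  ada0 ada1 ada2 ada3 ada4 ada5
# zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

# Toggle dataset compression for the "compression on" runs:
zfs set compression=on tank
```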

Results:
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 150 MB/s
    • 239 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 302 MB/s
    • 515 MB/s

  • ZFS raidz pool utilizing six (6) SATA disks
    • 99 MB/s
    • 165 MB/s

  • ZFS raidz pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 299 MB/s
    • 516 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 76 MB/s
    • 164 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 301 MB/s
    • 514 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. Possibly more influential to the performance scores here, and quite likely so, are the storage controllers. The motherboard only has four ports, so used/cheap/old SATA II PCI controllers were used to gain the additional ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s noise level was unchanged. While running the compression-on tests, the CPU was of course showing a heavy load, but it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4s still cook 🙂

–himuraken

ZFS Performance Testing: Intel P4 Clone using WD Blue Drives

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time around, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 7x500GB drives. This system was literally thrown together with spare parts lying around the home office. This is the ultimate in home made, commodity parts, down and dirty NAS setups. It uses seven (7) Western Digital WD5000AAKS (known as WD Blue) drives attached to three different SATA II non-RAID controllers. Historically, these drives have been used in low cost and all around cheap builds, up until the WD Green drives came out. Essentially, better than Green, worse than Black, but a good mix of price/GB. So here we go….

Test rig:

  • Custom Build
  • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
  • 2GB RAM
  • Onboard SATA II
  • 7x500GB SATA II 7200 RPM – Seven (7) independent drives with no RAID. Model: Western Digital WD5000AAKS
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing seven (7) SATA disks
    • 130 MB/s
    • 228 MB/s

  • ZFS stripe pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 301 MB/s
    • 508 MB/s

  • ZFS raidz pool utilizing seven (7) SATA disks
    • 81 MB/s
    • 149 MB/s

  • ZFS raidz pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 302 MB/s
    • 512 MB/s

  • ZFS raidz2 pool utilizing seven (7) SATA disks
    • 66 MB/s
    • 144 MB/s

  • ZFS raidz2 pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 298 MB/s
    • 515 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. Possibly more influential to the performance scores here, and quite likely so, are the storage controllers. The motherboard only has four ports, so used/cheap/old SATA II PCI controllers were used to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s noise level was unchanged. While running the compression-on tests, the CPU was of course showing a heavy load, but it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4s still cook 🙂

–himuraken

ZFS Performance Testing: AMD Dual Core w/ 6GB DDR2 & 5x2TB SATA in raidz

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time around, I have an MSI motherboard with an AMD dual core CPU, 6GB of DDR2, and 5x2TB drives. With this build we went with the HITACHI Deskstar 7K3000 HDS723020BLA642 drive, which is currently available on NewEgg for $119.99 plus shipping. These drives have been strong performers and are slowly making me forget the “DeathStar” era, but only time will tell… These are in fact SATA III drives, but the onboard controller that we tested through only supports SATA II. So here we go….

Test rig:

  • Custom Build
  • AMD Athlon Phenom II Dual Core
  • 6GB DDR2 RAM
  • Onboard SATA II
  • 5x2TB SATA II 7200 RPM – Five (5) independent drives with no RAID. Model: HITACHI Deskstar 7K3000
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

  • ZFS raidz pool utilizing five (5) SATA disks
    • 232 MB/s
    • 336 MB/s

  • ZFS raidz pool utilizing five (5) SATA disks with dataset compression set to “On”
    • 455 MB/s
    • 582 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
Due to time restrictions, I was only able to test the raidz vdev type. I look forward to testing again with varying vdev types/configs if and when possible.

–himuraken

Initial Thoughts: Netgate Hamakua

In this post I would like to share some of my initial thoughts on the Netgate Hamakua. We were looking for a 1U half depth rack mount system to run pfSense 1.2.3 on. Although we haven’t mentioned it much here on the blog, we love working with pfSense. pfSense is a fork of the m0n0wall project and is based on FreeBSD. I have pfSense running on everything from discarded eMachines workstations to multi-thousand dollar rack mount servers, and everything in between.

We have pfSense embedded running on a number of ALIX based Netgate m1n1wall 2D3’s, and it is an excellent combination of low power consumption and stable performance. When it came time to migrate from a VM based install to hardware in our rack, we looked to Netgate. We went with the rack-mount version of the Hamakua and purchased the optional VGA break-out cable. The Hamakua has room for a 2.5″ HDD, which is an excellent option if you need that sort of thing. It is important to note that the embedded installation of pfSense does not output any data to the VGA port. So if you are running embedded, you will see the initial POST/BIOS phase of the boot and then that’s it. This is due to the fact that the embedded install is targeted mainly at lower power devices that use serial for display output.

From what I have been able to gather from books, forums, and #pfsense on Freenode, it is obvious that key developers of the pfSense project test on this hardware extensively. And for good reason, it’s a great platform: 1U, 16W power consumption, 1GHz CPU, and 5 NICs/interfaces. You can find great documentation on the pfSense site regarding embedded and full installations for this unit. Long story short: they use it and develop on it, so it will be around for a while.

We are anticipating an upgrade from our current DS3 connectivity to 1Gbps and wanted something that makes at least some use of the new line. For this reason we did some basic performance testing using our 2900 ZFS test box and another similarly spec’d server. While running large data transfers between two individual 1Gbps interfaces, we were able to max the system out at roughly 250Mbps throughput. This is right in line with the sizing guide in the pfSense book. It appears to be a limitation of the 1GHz processor. Be sure to take a look at the pfSense book for sizing and throughput requirements; it is quite helpful in this regard, among others.
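The throughput testing itself was of the usual sort; a sketch with iperf, using illustrative addresses rather than our actual setup:

```shell
# On a host behind the firewall, start an iperf server:
iperf -s

# On a host on the far side, push TCP traffic through the pfSense box
# (192.168.1.10 is a placeholder for the server's address):
iperf -c 192.168.1.10 -t 60 -i 10
```

Running a couple of these in parallel across different interface pairs is a quick way to find where the box tops out.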

One thing that is worth mentioning is the heat that this thing generates. During heavy testing and average daily use, a large amount of heat was given off. The top of the unit is basically a heatsink, and it does its job well. Because of this, it will certainly be going on the top of our rack due to ventilation concerns. I believe that the design is pretty solid and would most likely take the abuse without batting an eye, but I didn’t want to risk burning this one out.

To conclude, if you need a rack mount system that will run pfSense, is well supported by the community, and you don’t need to cross the 250Mbps barrier, this may be the unit for you. This is the second model of device that we have purchased from Netgate, and as always we weren’t disappointed. If you need something a bit less performant and easier on the budget, be sure to check out the Netgate m1n1wall 2D3/2D13. It has 3x100Mbps ports and gets the job done well.

–Himuraken