FreeNAS on the Lenovo TS440

With a recent build fresh on the brain, I figured I’d share some thoughts on the hardware I used in the hope it helps others. When I googled for information ahead of time, I found only sparse details.

Recently, I set out to replace my current small office FreeNAS box. From a performance standpoint the box looked great on paper: an AMD 8-core @ 3.4GHz, 32GB of “good” memory, an expensive Seasonic power supply, and 16 drives attached to a pair of Dell PERC H200 controllers, all packed into a high-end Lian Li full tower. The tower had a SAS backplane and the 5.25″ bays had two 4-disk SATA enclosures installed. The towering behemoth worked like a champ for quite some time. Day in and day out, the trusty homebrew served up NFS exports to ESXi, Proxmox, and numerous other LAN hosts ranging from Raspberry Pis to FOG imaging VMs and things of that nature.

LIFE WAS GOOD AND IGNORANCE IS BLISS.

Once or twice while physically away from the box, meaning out of town, I received alerts from an external monitoring service that some of my VMs were down. Of course this only happens when you are away, and only to systems that DON’T have a hardware level remote access solution like IPMI, Intel vPro, HP iLO, or Dell DRAC. But I digress; surely it is OK for your entire FreeNAS box to just mysteriously power off. It wasn’t a UPS failure, just an old fashioned “who knows”. Take all that plus the frustration of not being able to power the box on remotely and you begin to see why the homebrew had to go.

Some cursory searches online and a quick check with the fine folk over in #freenas got me thinking about custom vs prebuilt boxes. After comparing prices of various boards and form factors I determined that the Lenovo TS series of tower servers might be a good fit. Several people on #freenas, and the internet in general, had info on the TS140, which is the smaller and cheaper of the two, but I wanted at least 8 drive bays. The TS140 looks nice if you only need 1-4 cabled drives; hot swap isn’t an option on the little guy.

Armed with what seemed like proper info at the time, I ordered the 4-bay variant of the TS440 since it was on sale for a meager $299.99 with free shipping. My plan was to test the system as it came and then add the secondary drive cage and backplane for a grand total of 8 hard drives. As it turns out, my plan was ill-conceived, as I could not locate any vendor selling the hardware I needed. I reached out to a well known IT and Lenovo vendor to get the info I needed. Much to my dismay, I was informed that Lenovo does not sell the parts needed to take the 4-bay all the way up to 8 drives. This detail is quite frustrating since the documentation I found stated that the system can be used in a 4 or 8 bay config. That is technically true, but only if you order the right SKU/Lenovo specific part number in the first place.

I am happy to report that the TS440 with the XUX SKU is humming along happily now. The XUX model comes with the drive cage, backplane, and add-on controller necessary to run 8 drives. The RAID controller included in the system happily recognized my 4TB SATA disks and 3TB SAS disks. The controller supports RAID levels 0/1/10 out of the box, but as long as you don’t manually configure a RAID array it exports the disks as JBOD, which is perfect for ZFS. An option that I also decided to go with was the second power supply; the TS440 comes with a single hot swap supply and a spacer/blank in the second slot.
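For reference, once the controller hands the disks over as plain JBOD devices, building a pool from the FreeNAS shell only takes a couple of commands. This is just a rough sketch rather than my exact setup: the pool name and the da0 through da7 device names are assumptions and will differ with your controller and drive mix (the WebGUI accomplishes the same thing).

# Hypothetical 8-disk raidz2 pool with compression; adjust device names to match your system
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zfs set compression=lz4 tank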

Hope this helps you with your small office NAS builds if you are considering a Lenovo TS440.

–himuraken

HP ProLiant MicroServer Flexibility

I’ve been meaning to put down some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my opportunity when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40Ls. These units are listed as part of the ProLiant family of servers, which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC type card, and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, making adding additional disks no problem. I say “hot swap” because I am pretty sure that the backplane/controller do not allow actual hot swapping in the true sense, YMMV. Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board sits on a simple and quite brilliant tray assembly which makes removal, upgrade, and reinsertion simple. Do yourself a favor when you purchase the system by maxing out the RAM at 8GB (DDR3/ECC) and adding the optional iLO/remote access card. For basic NAS and low end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors; more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try to be a hero to your clients by touting this as an ultra affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy; if you’re trying to service your clients properly, get them a truly redundant system with hardware RAID, dual PSUs, and things of that nature. You are providing a disservice to your clients if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device, home virtualization lab, or any other number of roles you can throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although not a comprehensive list, I can confirm successful installation of the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64) Currently Debian stable release
  • Debian Wheezy (i386/AMD64) Currently Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011
Whew! What a list, and that just touches the surface of what you can run; those just happen to be the configurations that I have tested with success. My current configuration consists of the base system running 8GB of RAM, the iLO card, 1x64GB SSD, and 4x1TB RAID edition drives. I’ve got Debian stable AMD64 running on / and the 4x1TB RE drives in a Linux md RAID 5 array mounted on /home. This acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU: being a dual core 1.5GHz part, it will usually run out of CPU before you hit any other bottleneck.
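For anyone curious how that /home array goes together, here is a minimal sketch of the md RAID 5 and NFS export steps on Debian. The device names, filesystem, and export subnet are assumptions for illustration, not my exact configuration:

# Hypothetical device names; verify with fdisk -l before running
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
echo '/dev/md0  /home  ext4  defaults  0  2' >> /etc/fstab
mount /home
# Export /home over NFS to the local subnet
echo '/home 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra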

In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or a similar configuration, you will not be disappointed by this device.

–himuraken

ZFS Performance Testing: P4 Clone using WD Raptors

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 6x74GB drives. This system was literally thrown together with spare parts laying around the home office: the ultimate in home made, commodity parts, down and dirty NAS setups. The six (6) Western Digital Raptor drives are attached to three different SATA II non-RAID controllers. For the uninitiated, Raptors are SATA drives that spin at 10K RPM. So here we go….

*Note*
My intention was to run this with seven (7) disks so that it would be a WD Raptor vs WD Blue test. Unfortunately, my seventh and last WD Raptor died during configuration. With that in mind, it is interesting nonetheless to compare the 7 disk WD Blue results with the 6 disk WD Raptor results.

Test rig:

• Custom Build
• Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
• 2GB RAM
• Onboard SATA II
• 6x74GB SATA II 10000 RPM – Six (6) independent drives with no RAID. Model: Western Digital Raptor
• FreeNAS 0.7.2 Sabanda (revision 5543) – ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
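For context, the pool layouts listed in the results boil down to variations on the commands below, whether built through the FreeNAS WebGUI or at the shell. The pool name and device names are assumptions; FreeNAS 0.7.2 may label the drives differently on your hardware:

# One layout at a time; destroy and recreate the pool between runs
zpool create tank ada0 ada1 ada2 ada3 ada4 ada5          # stripe
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5    # raidz
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5   # raidz2
zfs set compression=on tank                              # for the compression runs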

Results:
Results are listed as configuration, write, then read.

• ZFS stripe pool utilizing six (6) SATA disks
  • Write: 150 MB/s
  • Read: 239 MB/s

• ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
  • Write: 302 MB/s
  • Read: 515 MB/s

• ZFS raidz pool utilizing six (6) SATA disks
  • Write: 99 MB/s
  • Read: 165 MB/s

• ZFS raidz pool utilizing six (6) SATA disks with dataset compression set to “On”
  • Write: 299 MB/s
  • Read: 516 MB/s

• ZFS raidz2 pool utilizing six (6) SATA disks
  • Write: 76 MB/s
  • Read: 164 MB/s

• ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
  • Write: 301 MB/s
  • Read: 514 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. Likely even more influential on the performance numbers is the storage controller situation: the motherboard only has four ports, so a used/cheap/old SATA II PCI controller was added to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s audible noise was unchanged. While running the compression-on tests, the CPU was of course under heavy load, and it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4’s still cook 🙂
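On a related note, ZFS will report how well the test data compressed; since dd is writing straight zeroes, the file compresses almost completely, which is why the compression runs stress the CPU rather than the disks. The pool name below is an assumption:

zfs get compressratio tank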

–himuraken

ZFS Performance Testing: Intel P4 Clone using WD Blue Drives

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 7x500GB drives. This system was literally thrown together with spare parts laying around the home office: the ultimate in home made, commodity parts, down and dirty NAS setups. The seven (7) Western Digital WD5000AAKS (known as WD Blue) drives are attached to three different SATA II non-RAID controllers. Historically, these drives have been used in low cost and all around cheap builds, at least up until the WD Green drives came out. Essentially, better than Green, worse than Black, but a good mix of price/GB. So here we go….

Test rig:

• Custom Build
• Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
• 2GB RAM
• Onboard SATA II
• 7x500GB SATA II 7200 RPM – Seven (7) independent drives with no RAID. Model: Western Digital WD5000AAKS
• FreeNAS 0.7.2 Sabanda (revision 5543) – ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

• ZFS stripe pool utilizing seven (7) SATA disks
  • Write: 130 MB/s
  • Read: 228 MB/s

• ZFS stripe pool utilizing seven (7) SATA disks with dataset compression set to “On”
  • Write: 301 MB/s
  • Read: 508 MB/s

• ZFS raidz pool utilizing seven (7) SATA disks
  • Write: 81 MB/s
  • Read: 149 MB/s

• ZFS raidz pool utilizing seven (7) SATA disks with dataset compression set to “On”
  • Write: 302 MB/s
  • Read: 512 MB/s

• ZFS raidz2 pool utilizing seven (7) SATA disks
  • Write: 66 MB/s
  • Read: 144 MB/s

• ZFS raidz2 pool utilizing seven (7) SATA disks with dataset compression set to “On”
  • Write: 298 MB/s
  • Read: 515 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. Likely even more influential on the performance numbers is the storage controller situation: the motherboard only has four ports, so a used/cheap/old SATA II PCI controller was added to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s audible noise was unchanged. While running the compression-on tests, the CPU was of course under heavy load, and it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4’s still cook 🙂

–himuraken

ZFS Performance Testing: AMD Dual Core w/ 6GB DDR2 & 5x2TB SATA in raidz

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, I have an MSI motherboard with an AMD dual core CPU, 6GB of DDR2, and 5x2TB drives. With this build we went with the HITACHI Deskstar 7K3000 HDS723020BLA642 drive, which is currently available on NewEgg for $119.99 plus shipping. These drives have been strong performers and are slowly making me forget the “DeathStar” era, but only time will tell… These are in fact SATA III drives, but the onboard controller that we tested through only supports SATA II. So here we go….

Test rig:

• Custom Build
• AMD Athlon Phenom II Dual Core
• 6GB DDR2 RAM
• Onboard SATA II
• 5x2TB SATA II 7200 RPM – Five (5) independent drives with no RAID. Model: HITACHI Deskstar 7K3000
• FreeNAS 0.7.2 Sabanda (revision 5543) – ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

• ZFS raidz pool utilizing five (5) SATA disks
  • Write: 232 MB/s
  • Read: 336 MB/s

• ZFS raidz pool utilizing five (5) SATA disks with dataset compression set to “On”
  • Write: 455 MB/s
  • Read: 582 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
Due to time restrictions, I was only able to test the raidz vdev type. I look forward to testing again with varying vdev types/configs if and when possible.

–himuraken

Initial Thoughts: Netgate Hamakua

In this post I would like to share some of my initial thoughts on the Netgate Hamakua. We were looking for a 1U half-depth rack mount system to run pfSense 1.2.3 on. Although we haven’t mentioned it much here on the blog, we love working with pfSense. pfSense is a fork of the m0n0wall project and is based on FreeBSD. I have pfSense running on everything from discarded eMachines workstations to multi-thousand dollar rack mount servers, and everything in between.

We have pfSense embedded running on a number of ALIX based Netgate m1n1wall 2D3’s and it is an excellent combination of low power draw and stable performance. When it came time to migrate from a VM based install to hardware in our rack we looked to Netgate. We went with the rack-mount version of the Hamakua and purchased the optional VGA break-out cable. The Hamakua has room for a 2.5″ HDD, which is an excellent option if you need that sort of thing. It is important to note that the embedded installation of pfSense does not output any data to the VGA port, so if you are running embedded you will see the initial POST/BIOS phase of the boot and then that’s it. This is because the embedded install targets lower power devices that use a serial console for display output.
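If you do go the embedded route, the console is reachable over the serial port instead. A quick sketch from a Linux or BSD workstation, where the adapter device name and baud rate are assumptions (check the pfSense docs for your particular image):

# Hypothetical USB serial adapter; embedded pfSense consoles typically run at 9600 8N1
screen /dev/ttyUSB0 9600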

From what I have been able to gather from books, forums, and #pfsense on Freenode, it is obvious that key developers of the pfSense project test on this hardware extensively. And for good reason, it’s a great platform: 1U, 16W power consumption, 1GHz CPU, and 5 NICs/interfaces. You can find great documentation on the pfSense site regarding embedded and full installations for this unit. Long story short, they use it and develop on it, so it will be around for a while.

We are anticipating an upgrade from our current DS3 connectivity to 1Gbps and wanted something that would make at least some use of the new line. For this reason we did some basic performance testing using our 2900 ZFS test box and another similarly spec’d server. While running large data transfers between two individual 1Gbps interfaces we were able to max the system out at roughly 250Mbps of throughput. This is right in line with the sizing guide in the pfSense book and appears to be a limitation of the 1GHz processor. Be sure to take a look at the pfSense book for sizing and throughput requirements; it is quite helpful in this regard, among others.
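For what it’s worth, a throughput test like that is easy to reproduce with iperf between hosts on opposite sides of the firewall; the address below is an assumption:

# On the host behind the firewall
iperf -s
# On the host on the other side, run a 60 second test through the firewall
iperf -c 192.168.2.10 -t 60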

One thing that is worth mentioning is the heat that this thing generates. During heavy testing and average daily use, a large amount of heat was being displaced. The top of the unit is basically a heatsink and it does its job well. Because of this, it will certainly be going on the top of our rack due to ventilation concerns. I believe that this design is pretty solid and it would most likely take the abuse without batting an eye, but I didn’t want to risk burning this one out.

To conclude, if you need a rack mount system that will run pfSense, is well supported by the community, and you don’t need to cross the 250Mbps barrier, this may be the unit for you. This is the second model of device that we have purchased from Netgate, and as always we weren’t disappointed. If you need something a bit less performant and easier on the budget, be sure to check out the Netgate m1n1wall 2D3/2D13. It has 3x100Mbps ports and gets the job done well.

–Himuraken