FreeNAS on the Lenovo TS440

With a recent build fresh on the brain, I figured I’d share some thoughts on the hardware I used in the hope that it helps others. I searched for information ahead of time and found only sparse details.

Recently, I set out to replace my current small office FreeNAS box. From a performance standpoint the box looked great on paper: an AMD 8-core @ 3.4GHz, 32GB of “good” memory, an expensive Seasonic power supply, and 16 drives attached to a pair of Dell PERC H200 controllers, all packed into a high-end Lian Li full tower. The tower had a SAS backplane, and the 5.25″ bays had two 4-disk SATA enclosures installed. The towering behemoth worked like a champ for quite some time. Day in and day out, the trusty homebrew served up NFS exports to ESXi, Proxmox, and numerous other LAN hosts ranging from Raspberry Pis to FOG imaging VMs and things of that nature.

LIFE WAS GOOD AND IGNORANCE IS BLISS.

Once or twice while physically away from the box, meaning out of town, I received alerts from an external monitoring service that some of my VMs were down. Of course this only happens when you are away, and only to systems that DON’T have a hardware-level remote access solution like IPMI, Intel vPro, HP iLO, or Dell DRAC. But I digress; surely it is OK for your entire FreeNAS box to just mysteriously power off. Not a UPS failure, just an old-fashioned “who knows”. Take all that plus the frustration of not being able to power the box on remotely and you begin to see why the homebrew had to go.

Some cursory searches online and a quick check with the fine folks over in #freenas got me thinking about custom vs. prebuilt boxes. After comparing prices of various boards and form factors, I determined that the Lenovo TS series of tower servers might be a good fit. Several people on #freenas and the internet in general had info on the TS140, which is the smaller and cheaper of the two, but I wanted at least 8 drive bays. The TS140 looks nice if you only need 1-4 cabled drives; hot swap isn’t an option on the little guy.

Armed with what seemed like proper info at the time, I ordered the 4-bay variant of the TS440 since it was on sale for a meager $299.99 with free shipping. My plan was to test the system as it came and then add the secondary drive cage and backplane for a grand total of 8 hard drives. As it turns out, my plan was ill-conceived, as I could not locate any vendor selling the hardware I needed. I reached out to a well known IT and Lenovo vendor to get the info I needed. Much to my dismay, I was informed that Lenovo does not sell the parts needed to take the 4-bay model all the way up to 8 drives. This detail is quite frustrating since the documentation I found stated that the system can be used in a 4- or 8-bay config. That is technically true, but only if you order the right SKU/Lenovo-specific part number in the first place.

I am happy to report that the TS440 with the XUX SKU is humming along happily now. The XUX model comes with the drive cage, backplane, and add-on controller necessary to run 8 drives. The RAID controller included in the system recognized my 4TB SATA disks and 3TB SAS disks without issue. The controller supports RAID levels 0/1/10 out of the box but defaults to exporting disks as JBOD as long as you don’t manually set them up in a RAID array, which is perfect for ZFS; a quick sanity check from the FreeNAS shell is sketched below. An option that I also decided to go with was the second power supply. The TS440 comes with a single hot swap supply and a spacer/blank in the second slot.
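If you want to verify that the controller really is handing the disks straight through, a minimal sketch from the FreeNAS shell might look like the following. The pool name and device names (da0 through da7) are assumptions for illustration; the names on an actual TS440 will likely differ.

camcontrol devlist                                        # each drive should show up as its own device
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7  # example pool across all eight bays
zpool status tank                                         # confirm every disk is an individual pool member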

Hope this helps you with your small office NAS builds if you are considering a Lenovo TS440.

–himuraken

HP ProLiant MicroServer Flexibility

I’ve been meaning to write up some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my chance when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40Ls. These units are listed as part of the ProLiant family of servers, which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC type card, and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, making adding additional disks no problem. I say “hot swap” because I am pretty sure the backplane/controller do not allow actual hot swapping in the true sense, YMMV. Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board sits on a simple and quite brilliant tray assembly which makes removal, upgrade, and reinsertion simple. Do yourself a favor when you purchase the system by maxing out the RAM at 8GB (DDR3/ECC) and adding the optional iLO/remote access card. For basic NAS and low-end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors; more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try to be a hero to your clients by touting this as an ultra affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy; if you’re trying to service your clients properly, get them a truly redundant system with hardware RAID, dual PSUs, and things of that nature. You are providing a disservice to your clients if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver-assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device, home virtualization lab, or any number of other roles you can throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although not a comprehensive list of installs, I can confirm successful installation on the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64) Currently Debian stable release
  • Debian Wheezy (i386/AMD64) Currently Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011
Whew! What a list, and that just scratches the surface of what you can run; those just happen to be the configurations that I have tested with success. My current configuration consists of the base system running 8GB of RAM, the iLO card, 1x64GB SSD, and 4x1TB RAID edition drives. I’ve got Debian stable AMD64 running on / and have the 4x1TB RE drives in a Linux md RAID level 5 array mounted on /home (a rough sketch of that setup follows below). This acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU. Being a 1.5GHz dual core, the system will usually run out of CPU before you hit any other bottleneck.
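For reference, here is a minimal sketch of the md RAID 5 layout described above as it might be set up on Debian; the device names (/dev/sdb through /dev/sde) and filesystem choice are assumptions for illustration, not the exact commands used on my box.

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf            # persist the array definition
echo '/dev/md0 /home ext4 defaults 0 2' >> /etc/fstab     # mount it as /home at boot
mount /home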

In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or similar configuration, you will not be disappointed by this device.

–himuraken

ZFS Performance Testing: P4 Clone using WD Raptors

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 6x74GB drives. This system was literally thrown together with spare parts laying around the home office. This is the ultimate in home-made, commodity-parts, down-and-dirty NAS setups. The six (6) Western Digital Raptor drives are attached to three different SATA II non-RAID controllers. For the uninitiated, Raptors are SATA drives that spin at 10K RPM. So here we go….

*Note*
My intention was to run this with seven (7) disks so that it would be a WD Raptor vs WD Blue test. Unfortunately, my seventh and last WD Raptor died during configuration. With that in mind, it is interesting nonetheless to compare the seven-disk WD Blue results with the six-disk WD Raptor results.

Test rig:

  • Custom Build
  • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
  • 2GB RAM
  • Onboard SATA II
  • 6x74GB SATA II 10000 RPM – Six (6) independent drives with no RAID. Model: Western Digital Raptor
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
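For context, the pool layouts referenced in the results below were of the following general form. This is only a sketch: the pool name and device names (ad4, ad6, and so on) are placeholders, each layout was created and destroyed separately between runs, and on FreeNAS 0.7 this was typically done through the web GUI rather than the shell.

zpool create tank ad4 ad6 ad8 ad10 ad12 ad14              # stripe pool, no redundancy
zpool create tank raidz ad4 ad6 ad8 ad10 ad12 ad14        # single-parity raidz variant
zpool create tank raidz2 ad4 ad6 ad8 ad10 ad12 ad14       # double-parity raidz2 variant
zfs set compression=on tank                               # enabled only for the “On” runs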

Results:
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 150 MB/s
    • 239 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 302 MB/s
    • 515 MB/s

  • ZFS raidz pool utilizing six (6) SATA disks
    • 99 MB/s
    • 165 MB/s

  • ZFS raidz pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 299 MB/s
    • 516 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 76 MB/s
    • 164 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 301 MB/s
    • 514 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. The storage controllers could be, and likely are, even more influential on the performance scores here. The motherboard only has four ports, so a used/cheap/old SATA II PCI controller was added to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s audible sound was unchanged. While running the compression-on tests, the CPU was of course showing a heavy load, but it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4s still cook 🙂

–himuraken

ZFS Performance Testing: Intel P4 Clone using WD Blue Drives

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, we are testing an Intel motherboard with a Pentium 4 CPU, 2GB of RAM, and 7x500GB drives. This system was literally thrown together with spare parts laying around the home office. This is the ultimate in home-made, commodity-parts, down-and-dirty NAS setups. The seven (7) Western Digital WD5000AAKS (known as WD Blue) drives are attached to three different SATA II non-RAID controllers. Historically, these drives were used in low-cost and all-around cheap builds up until the WD Green drives came out. Essentially, they are better than Green, worse than Black, but a good mix of price/GB. So here we go….

Test rig:

  • Custom Build
  • Intel Pentium 4 @ 3.0GHz Hyper Threading Enabled
  • 2GB RAM
  • Onboard SATA II
  • 7x500GB SATA II 7200 RPM – Seven (7) independent drives with no RAID. Model: Western Digital WD5000AAKS
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing seven (7) SATA disks
    • 130 MB/s
    • 228 MB/s

  • ZFS stripe pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 301 MB/s
    • 508 MB/s

  • ZFS raidz pool utilizing seven (7) SATA disks
    • 81 MB/s
    • 149 MB/s

  • ZFS raidz pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 302 MB/s
    • 512 MB/s

  • ZFS raidz2 pool utilizing seven (7) SATA disks
    • 66 MB/s
    • 144 MB/s

  • ZFS raidz2 pool utilizing seven (7) SATA disks with dataset compression set to “On”
    • 298 MB/s
    • 515 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
This is a truly down and dirty, quick and ugly build using used parts. As such, you get what you pay for, and the performance data here proves that. The storage controllers could be, and likely are, even more influential on the performance scores here. The motherboard only has four ports, so a used/cheap/old SATA II PCI controller was added to gain the additional three ports.

As always, the tests involving compression provide interesting insight into the limitations of various processors. While running the compression-off tests, the CPU load was relatively low and the system’s audible sound was unchanged. While running the compression-on tests, the CPU was of course showing a heavy load, but it also prompted the CPU cooler to spin at a higher (more audible) rate. Guess those old P4s still cook 🙂

–himuraken

ZFS Performance Testing: AMD Dual Core w/ 6GB DDR2 & 5x2TB SATA in raidz

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. This time round, I have an MSI motherboard with an AMD dual core CPU, 6GB of DDR2, and 5x2TB drives. With this build we went with the HITACHI Deskstar 7K3000 HDS723020BLA642 drive, which is currently available on NewEgg for $119.99 plus shipping. These drives have been strong performers and are slowly making me forget the “DeathStar” era, but only time will tell… These are in fact SATA III drives, but the onboard controller that we tested through only supports SATA II. So here we go….

Test rig:

  • Custom Build
  • AMD Athlon Phenom II Dual Core
  • 6GB DDR2 RAM
  • Onboard SATA II
  • 5x2TB SATA II 7200 RPM – Five (5) independent drives with no RAID. Model: HITACHI Deskstar 7K3000
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

  • ZFS raidz pool utilizing five (5) SATA disks
    • 232 MB/s
    • 336 MB/s

  • ZFS raidz pool utilizing five (5) SATA disks with dataset compression set to “On”
    • 455 MB/s
    • 582 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
Due to time restrictions, I was only able to test the raidz vdev type. I look forward to testing again with varying vdev types/configs if and when possible.

–himuraken

ZFS Performance Testing: Dell PowerEdge 2950 III

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. That system has come and gone to a better place in the proverbial cloud. This go round, I have a similar server with a different ZFS configuration. Let’s dive into the system and the tests.

Test rig:

  • Dell PowerEdge 2950
  • Xeon Quad Core 1.6GHz
  • 8GB RAM
  • PERC5 – Total of 5 logical drives with read ahead and write back enabled.
  • 2x160GB SATA II 7200 RPM – Hardware RAID1
  • 4x2TB SATA II 7200 RPM – Four (4) hardware RAID0s, one per disk (the controller does not support JBOD mode); see the sketch after this list.
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13
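Since the PERC5 cannot pass disks through as JBOD, each data disk is wrapped in its own single-drive RAID0 logical volume and ZFS is pointed at those volumes. As a rough illustration only, the two-disk pools in the results below would be built along these lines; the device names (mfid1, mfid2) are assumptions and depend entirely on how the controller’s logical drives enumerate on the system.

zpool create tank mfid1 mfid2              # two-device stripe pool
zpool create tank mirror mfid1 mfid2       # two-device mirror pool
zfs set compression=on tank                # for the “On” runs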

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

  • Reference run: 2x160GB 7200 RPM SATA II RAID1
    • 85.6 MB/s
    • 92.5 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks
    • 221 MB/s
    • 206 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1074 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks
    • 116 MB/s
    • 145 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1069 MB/s
Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
On the hard disk side of things, the hardware RAID1 was made up of Western Digital Blue disks, while the other four (4) disks are Western Digital 2TB Green drives. If you have done your homework, you already know that the WD EARS disks use 4K sectors and mask them as 512-byte sectors so that OSes don’t complain. If disks are not properly formatted and/or sector-aligned with this in mind, performance takes a tremendous hit (an example of forcing 4K alignment is sketched below). The reason for such inexpensive disks for this build is simple: this server is configured as a backup destination, and as such, size is more important than the reliability that a SAS solution would provide.
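On FreeBSD-based builds of this era, one common workaround was the gnop trick: create a temporary 4K-sector provider on top of a disk so the pool is created with an ashift of 12, then drop the temporary device. This is a sketch under assumed device and pool names, not what was done on this particular box, and whether it helps depends on the ZFS version in play.

gnop create -S 4096 /dev/mfid1             # fake a 4K sector size on one member
zpool create tank mirror /dev/mfid1.nop /dev/mfid2
zpool export tank
gnop destroy /dev/mfid1.nop                # the temporary provider is no longer needed
zpool import tank
zdb tank | grep ashift                     # expect ashift: 12 for a 4K-aligned pool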

The compression test results were, to say the least, quite interesting. It should be noted that the stripe and mirror pools performed quite similarly. Further testing of these results will be required, but it seems that the maximum score of 1074 MB/s was limited only by the CPU. During the read test, all four cores of the quad-core CPU were maxed (a quick way to watch for this is sketched below). This becomes even more interesting when you compare the results of this two-disk stripe pool with my previous findings on the six-disk stripe pool running the same test. The earlier test rig scored much lower, and it would appear to be the difference in CPUs that made such a strong difference.
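If you want to check whether a run like this is CPU-bound rather than disk-bound, a rough approach is to start the dd write in the background and watch CPU and per-disk load from a second shell. The target path below is a placeholder; top and gstat are stock FreeBSD tools.

dd if=/dev/zero of=/mnt/tank/foo bs=2M count=10000 &      # kick off the write test
top -S                                                    # CPU usage, including kernel/ZFS threads
gstat                                                     # per-disk %busy; pegged CPU with idle disks points at a CPU bottleneck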

–himuraken

ZFS Performance Testing: Dell PowerEdge 2900

***Update***
This started as a simple post to share a few results from various levels of zfs/array testing. Be sure to check back from time to time as I add additional configuration results.

***Begin Original Text***
I have been playing around with ZFS on various operating systems lately and have been trying to compare performance. I figured that sharing some of my results would give others something to compare with. Plus, I am on borrowed time with this unit; it is big, loud, and taking up free space and spare time in the home office.

Test rig:

  • Dell PowerEdge 2900
  • Xeon Dual Core 3.0GHz (HT Enabled – OS showing 4 cores)
  • 14GB RAM
  • PERC5 – Total of 7 logical drives with read ahead and write back enabled.
  • 2x146GB SAS 15K RPM – Hardware RAID1 for OS
  • 6x1TB SATA II 7200 RPM – Six (6) SATA II 7200 RPM disks for testing. (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results: Each disk configured as a separate RAID0 array on controller.
Results are listed as configuration, write, then read.

  • ZFS raidz1 pool utilizing six (6) SATA disks
    • 133 MB/s
    • 311 MB/s

  • ZFS raidz1 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 359 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 180 MB/s
    • 286 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 361 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks
    • 190 MB/s
    • 263 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 429 MB/s
    • 381 MB/s
Results: Each disk configured as a member of a single RAID0 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 353.4 MB/s
    • 473.0 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 420.8 MB/s
    • 340.9 MB/s

Results: Each disk configured as a member of a single RAID5 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 322.0 MB/s
    • 325.9 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 438.8 MB/s
    • 371.8 MB/s

Results: Each disk configured as a member of a single RAID10 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 251.2 MB/s
    • 304.3 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 430.7 MB/s
    • 360.9 MB/s
Notes, Thoughts & Mentionables:
It is worth noting that the results of the datasets with compression on can be a bit misleading. This is due to the source we are using with dd: /dev/zero. Feeding a string of zeroes into a compression algorithm is probably the best-case scenario for compression. In real-world conditions, data being read or written that is compressible would experience an increase in performance, while non-compressible data would likely suffer a penalty. A sketch of how you might repeat the test with incompressible data follows below.
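One way to get a less flattering number is to stage a chunk of random, effectively incompressible data first and then time the copy onto the compressed dataset, so the speed of /dev/urandom itself does not skew the result. The paths and size here are placeholders.

dd if=/dev/urandom of=/mnt/scratch/random.bin bs=2M count=1000   # stage ~2GB of incompressible data elsewhere
dd if=/mnt/scratch/random.bin of=/mnt/tank/random.bin bs=2M      # write it to the compressed dataset
dd if=/mnt/tank/random.bin of=/dev/null bs=2M                    # read it back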

I am hoping to conduct the same tests on the exact same hardware in the near future. I will be switching the six (6) SATA disks over to varying hardware RAID levels and comparing them again.

***Update***
In a follow-up post to this one, I concluded that compression read and write performance on this particular test rig was being limited by the CPU. I am hoping to swap out the current Intel Xeon 3.0GHz dual core for a quad core for additional comparison.

–himuraken