In my previous post on ZFS performance testing, I ran through various tests on a particular test system I had running at the time. That system has since gone to a better place in the proverbial cloud. This go-round, I have a similar server with a different ZFS configuration. Let's dive into the system and the tests.
- Dell PowerEdge 2950
- Xeon Quad Core 1.6GHz
- 8GB RAM
- PERC5 – Total of 5 logical drives with read ahead and write back enabled.
- 2x160GB SATAII 7200 RPM – Hardware RAID1
- 4x2TB SATAII 7200 RPM – Four (4) single-drive hardware RAID0 arrays (the controller does not support JBOD mode)
- FreeNAS 0.7.2 Sabanda (revision 5543), ZFS v13
Tests were performed from the CLI using good ol' GNU dd. The following command was used to first write, and then read back:
dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
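Note that bs=2M count=10000 works out to roughly 20 GB per run, comfortably more than the system's 8 GB of RAM, which presumably helps keep caching from flattering the numbers. A scaled-down, runnable sketch of the same write-then-read pattern (file path and sizes are illustrative only):

```shell
# Same write-then-read pattern at 1/1000 scale (20 MiB instead of ~20 GB).
dd if=/dev/zero of=/tmp/foo bs=2M count=10 2>/dev/null
dd if=/tmp/foo of=/dev/null bs=2M 2>/dev/null

# Confirm how much data went through: 10 * 2 MiB.
size=$(wc -c < /tmp/foo)
echo "wrote and read back $size bytes"
rm -f /tmp/foo
```

GNU dd prints the elapsed time and throughput on stderr after each run, which is where the MB/s figures below come from.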
Results are listed as configuration, write, then read.
- Reference run: 2x160GB 7200 RPM SATAII RAID1
- 85.6 MB/s
- 92.5 MB/s
- ZFS stripe pool utilizing two (2) SATA disks
- 221 MB/s
- 206 MB/s
- ZFS stripe pool utilizing two (2) SATA disks with dataset compression set to “On”
- 631 MB/s
- 1074 MB/s
- ZFS mirror pool utilizing two (2) SATA disks
- 116 MB/s
- 145 MB/s
- ZFS mirror pool utilizing two (2) SATA disks with dataset compression set to “On”
- 631 MB/s
- 1069 MB/s
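For reference, pools like those above can be created with commands along these lines. The device names are hypothetical, and the actual setup was done through FreeNAS rather than by hand:

```shell
# Hypothetical device names (ad4, ad6); FreeNAS will likely differ.
# Two-disk stripe pool:
zpool create tank /dev/ad4 /dev/ad6

# Or a two-disk mirror pool instead:
# zpool create tank mirror /dev/ad4 /dev/ad6

# Turn on dataset compression ("on" meant lzjb in ZFS v13):
zfs set compression=on tank
```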
Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system:
On the hard disk side of things, the hardware RAID1 was made up of Western Digital Blue disks, while the other four (4) disks are Western Digital 2TB Green drives. If you have done your homework, you already know that the WD EARS disks use 4K sectors and mask this as 512-byte sectors so that operating systems don't complain. If disks are not properly formatted and/or sector-aligned with this in mind, performance takes a tremendous hit. The reason for such inexpensive disks in this build is simple: this server is configured as a backup destination, and as such, capacity matters more than the reliability a SAS solution would provide.
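For what it's worth, one common workaround on FreeBSD-based systems of that era was the gnop trick, which forces ZFS to create the pool with 4 KiB alignment (ashift=12). A sketch, with a hypothetical device name:

```shell
# Create a transparent 4 KiB-sector passthrough on top of the real disk
# (the device name ad4 is hypothetical):
gnop create -S 4096 /dev/ad4

# Build the pool on the .nop device so ZFS picks ashift=12:
zpool create tank /dev/ad4.nop

# The alignment is recorded in the pool metadata, so the gnop layer
# can be discarded afterwards:
zpool export tank
gnop destroy /dev/ad4.nop
zpool import tank
```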
The compression test results were, to say the least, quite interesting. It should be noted that the stripe and mirror pools performed quite similarly. Further testing will be required, but it seems the maximum score of 1074 MB/s was limited only by the CPU: during the read test, all four cores of the quad-core CPU were maxed out. This becomes even more interesting when you compare the results of this two-disk stripe pool against my previous findings on a six-disk stripe pool running the same test. The earlier test rig scored much lower, and the difference in CPUs appears to account for the gap.
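One caveat when reading the compression numbers: a stream from /dev/zero is about as compressible as data gets, so those results represent a best case rather than what typical backup data would achieve. A quick illustration of just how well zeros compress, using gzip as a rough stand-in for dataset compression:

```shell
# Write 10 MiB of zeros, then compress it; all-zero data shrinks
# to a tiny fraction of its original size.
dd if=/dev/zero of=/tmp/zeros bs=1M count=10 2>/dev/null
gzip -c /tmp/zeros > /tmp/zeros.gz
orig=$(wc -c < /tmp/zeros)
comp=$(wc -c < /tmp/zeros.gz)
echo "original: $orig bytes, gzipped: $comp bytes"
rm -f /tmp/zeros /tmp/zeros.gz
```

With input like that, the disks barely matter and the benchmark mostly measures how fast the CPU can chew through the compression routine, which fits the observation that all four cores were pegged.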