Dell PowerEdge 13th Gen Fan Noise

I recently had the opportunity to assist a client with installing their new Dell PowerEdge R730XD. Quite the beefy server config: 2x 10-core CPUs, 128GB of RAM, 12x 4TB NL-SAS drives, you know, all the goodies. This machine is slated to replace an aging T610 that has seen better days performance-wise.

I went ahead and put an Intel 10GbE card in the server since all of the other hosts in the server room, including both backup boxes, are 10GbE enabled and connected to our new Netgear 10GbE switch. Keep in mind this was an industry-standard PCIe 10GbE card, and a particularly good one at that: the Intel X540-T2. After installing VMware ESXi, and later Windows Server 2012 R2, users were complaining about the loud “jet sounding” noise coming from the server room. After logging into the Dell iDRAC Enterprise card I immediately noticed that the fans were running around 92%, which was roughly 15K RPM or thereabouts. This was regardless of operating system, mind you, so I couldn’t even blame Windows OR VMware this time.

After looking around online at various forums I realized that the system was running the fans near max speed/volume due to the presence of a non-certified PCIe card installed in the system. For all intents and purposes, non-certified means you didn’t pay through the nose to acquire the identical hardware from Dell. Essentially, since the Intel card doesn’t carry the Dell-specific code/firmware to report back that “all is well over here in PCIe/temperature land”, the system defaults to running the fans in jet-engine mode. For posterity’s sake and to clarify, this will happen with pretty much any non-Dell card that is inserted. In researching the issue I found numerous folks who had installed actively cooled GPUs, old-school 4x 1Gbps network cards, you name it, and all of them ended up with the same high-speed fan noise.

Well, no big deal, all you have to do is go into the Dell BIOS and modify a setting or two so that the system doesn’t run the fans at full steam when a card is inserted, right? Wrong! That would be the logical assumption and design choice to make, so of course they didn’t make it that easy. Read on below to understand how I finally got this system to quiet down. The info below is compiled from many sources and some of my own figuring out; I just thought it would be helpful to have it all in one place.

Step 1: Enable IPMI
For this step, enter your Dell server’s setup/config screen and get to the remote access configuration/iDRAC setup. In the iDRAC setup you need to do all of the standard stuff like assigning an IP and setting user credentials, etc., but you MUST also set “Enable IPMI over LAN” to yes. This setting is crucial to completing the steps below successfully.
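If you happen to have the racadm utility available, the same switch can, as far as I can tell, be flipped without rebooting into setup. The attribute name below is how it appears to be exposed on iDRAC8-era firmware; naming differs between iDRAC generations, so check it with a get before you set anything:

racadm get iDRAC.IPMILan.Enable
racadm set iDRAC.IPMILan.Enable 1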

Step 2: Get IPMI tools
Linux users can use their preferred package manager to obtain ipmitool, while Windows users will need to grab the Dell OpenManage BMC Utility and get it installed.

Next, open up a command prompt and navigate to the directory the BMC utility installed to; on my system this was: C:\Program Files (x86)\Dell\SysMgt\bmc\

From there you will see several files; the program that we are using here is ipmitool.exe. Go ahead and run ipmitool.exe without any switches/arguments just to make sure it’s installed and working.
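Once ipmitool.exe runs locally, a quick way to prove the iDRAC is actually answering over the network is to ask it for something harmless, such as the BMC info. Substitute your own iDRAC IP and credentials just like in the commands in step 3 (note the lanplus interface, which is covered there):

ipmitool -I lanplus -H ipaddress -U root -P password mc info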

Step 3: Adjust the cooling response
The third and final step is essentially ‘the fix’. This is where you can check the status, and then disable or enable the system’s cooling response to third-party cards installed on the PCIe bus. This part was a little frustrating at first because I was working in the right direction and was just about there, but the commands weren’t being sent or interpreted the way they should have been.

You must use the lanplus interface instead of lan, but it is important to note that lanplus does NOT work unless you’ve enabled the “Enable IPMI over LAN” setting that I mentioned back in step 1. The non-intuitive part was that although I was running the right command aside from lan vs. lanplus, I really didn’t get any clear feedback as to why the command wouldn’t “take”.

Anyhow, here is the base command which you need to acquaint yourself with:

ipmitool -I lanplus -H ipaddress -U root -P password raw

Obviously you will need to substitute your own iDRAC IP, user, and password. After that, just tack on one of the three commands below.

Disable Third-Party PCIe Card Default Cooling Response:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00

Enable Third-Party PCIe Card Default Cooling Response:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00

To check the current third party PCIe card default cooling setting:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00

This response means disabled:
16 05 00 00 00 05 00 01 00 00

This response means enabled:
16 05 00 00 00 05 00 00 00 00

After disabling the third-party cooling response, my system went from the previously mentioned 15K RPM mark down to a user-verified sane noise level/speed of around 6K RPM.
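If you want to confirm the change without walking back into the server room, the same IPMI session can read the fan sensors. Something along these lines should list each fan and its current RPM:

ipmitool -I lanplus -H ipaddress -U root -P password sdr type Fan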

A key takeaway and disappointment for me is that in this day and age of widespread standards and simplicity, things are becoming increasingly proprietary and complex.

–himuraken

ZFS Performance Testing: Dell PowerEdge 2950 III

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. That system has come and gone to a better place in the proverbial cloud. This go-round, I have a similar server with a different ZFS configuration. Let’s dive into the system and tests.

Test rig:

  • Dell PowerEdge 2950
  • Xeon Quad Core 1.6GHz
  • 8GB RAM
  • PERC5 – Total of 5 logical drives with read ahead and write back enabled.
  • 2x160GB SATAII 7200 RPM – Hardware RAID1
  • 4x2TB SATAII 7200 RPM – Four (4) Hardware RAID0s (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
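In case it helps anyone reproduce the layouts tested below, pool creation looks roughly like this on a FreeBSD-based build such as FreeNAS; the pool name “tank” and the da1/da2 device names are examples only and will depend on how the PERC presents its logical drives:

# Two-disk stripe (no redundancy)
zpool create tank da1 da2
# Two-disk mirror
zpool create tank mirror da1 da2
# Turn dataset compression on for the pool’s root dataset
zfs set compression=on tank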

Results:
Results are listed as configuration, write, then read.

  • Reference run: 2x160GB 7200 RPM SATAII RAID1
    • 85.6 MB/s
    • 92.5 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks
    • 221 MB/s
    • 206 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1074 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks
    • 116 MB/s
    • 145 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1069 MB/s
Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system. On the hard disk side of things, the hardware RAID1 was made up of Western Digital Blue disks while the other four (4) disks are Western Digital 2TB Green drives. If you have done your homework, you already know that the WD EARS disks use 4K sectors and mask this as 512-byte sectors so that OSes don’t complain. If the disks are not properly formatted and/or sector aligned with this in mind, performance takes a tremendous hit. The reason for such inexpensive disks for this build is simple: this server is configured as a backup destination and, as such, size is more important than the reliability that a SAS solution would provide.
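For what it’s worth, the workaround usually mentioned for getting 4K alignment on FreeBSD-based systems of this era is the gnop trick, which temporarily wraps the disks in 4K-sector providers so the pool is created with the larger sector size. I have not re-run these benchmarks with it, and I can’t promise the ZFS v13 bits in FreeNAS 0.7.2 behave exactly like newer releases, so treat this as a sketch (device and pool names are examples):

# Wrap the drives in 4K-sector nop providers before creating the pool
gnop create -S 4096 da1
gnop create -S 4096 da2
zpool create tank mirror da1.nop da2.nop
# The wrappers are only needed at creation time; ZFS finds the real disks on import
zpool export tank
gnop destroy da1.nop da2.nop
zpool import tank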

Compression test results were, to say the least, quite interesting. It should be noted that the stripe and mirror pools performed quite similarly. Further testing will be required, but it seems that the maximum score of 1074 MB/s was limited only by the CPU. During the read test all four cores of the quad-core CPU were maxed. This becomes even more interesting when you compare the results of this two-disk stripe pool with my previous findings on the six-disk stripe pool running the same test. The earlier test rig scored much lower, and it would appear that the difference in CPUs is what made such a strong difference.

–himuraken

ZFS Performance Testing: Dell PowerEdge 2900

***Update***
This started as a simple post to share a few results from various levels of ZFS/array testing. Be sure to check back from time to time as I add additional configuration results.

***Begin Original Text***
I have been playing around with ZFS on various operating systems lately and have been trying to compare performance. I figured that sharing some of my results would give others something to compare with. Plus, I am on borrowed time with this unit; it is big, loud, and taking up free space and spare time in the home office.

Test rig:

  • Dell PowerEdge 2900
  • Xeon Dual Core 3.0GHz (HT Enabled-OS showing 4 cores)
  • 14GB RAM
  • PERC5 – Total of 7 logical drives with read ahead and write back enabled.
  • 2x146GB SAS 15K RPM – Hardware RAID1 for OS
  • 6x1TB SATAII 7200 RPM – Six (6) SATAII 7200 RPM Disks for testing. (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
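For reference, the raidz pools tested below would be built along these lines; the pool and device names are examples only:

# Six-disk raidz1 (single parity)
zpool create tank raidz1 da1 da2 da3 da4 da5 da6
# Six-disk raidz2 (double parity)
zpool create tank raidz2 da1 da2 da3 da4 da5 da6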

Results: Each disk configured as a separate RAID0 array on controller.
Results are listed as configuration, write, then read.

  • ZFS raidz1 pool utilizing six (6) SATA disks
    • 133 MB/s
    • 311 MB/s

  • ZFS raidz1 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 359 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 180 MB/s
    • 286 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 361 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks
    • 190 MB/s
    • 263 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 429 MB/s
    • 381 MB/s
Results: Each disk configured as a member of a single RAID0 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 353.4 MB/s
    • 473.0 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 420.8 MB/s
    • 340.9 MB/s

Results: Each disk configured as a member of a single RAID5 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 322.0 MB/s
    • 325.9 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 438.8 MB/s
    • 371.8 MB/s

Results: Each disk configured as a member of a single RAID10 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 251.2 MB/s
    • 304.3 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 430.7 MB/s
    • 360.9 MB/s

Notes, Thoughts & Mentionables:
It is worth noting that the results of the datasets with compression on can be a bit misleading. This is due to the source we are using with dd: /dev/zero. Feeding a string of zeroes into a compression algorithm is probably the best-case scenario when it comes to compression. In real-world conditions, compressible data being read or written would see an increase in performance, while non-compressible data would likely suffer a penalty. A rough way to repeat the test with incompressible data is sketched below.
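One way to take /dev/zero out of the equation is to generate a chunk of random data on the pool and read that back instead. This is only a sketch; the path is an example, and /dev/random is far slower than /dev/zero, so the write number will partly reflect the random source rather than the pool:

# Generate ~20GB of incompressible data directly on the pool (slow, but only done once);
# at 20GB the file is larger than this box's 14GB of RAM, so the read pass below
# should not be served entirely from the ARC
dd if=/dev/random of=/tank/rand.bin bs=2M count=10000
# Read it back
dd if=/tank/rand.bin of=/dev/null bs=2M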

I am hoping to conduct the same tests on the exact same hardware in the near future. I will be switching the six (6) SATA disks over to varying hardware RAID levels and comparing them again.

***Update***
In a follow-up post to this one, I concluded that compression read and write performance on this particular test rig was being limited by the CPU. I am hoping to swap out the current Intel Xeon 3.0GHz dual-core for a quad-core for additional comparison.

–himuraken

VMware View Launch Tour

Woke up nice and early this morning in lovely Tampa, FL for the VMware View Launch Tour. I have been playing around with various virtual technologies for quite a while now. What I haven’t used at all is any form of VDI, or Virtual Desktop Infrastructure. In my ever-evolving quest to learn all that I can, I signed up for the View Launch Tour so that I could learn more. The event was informative overall, even though the presentations from VMware and Dell contained the usual sales pitches. VMware focused on View, ThinApp, and related technologies while Dell gave a full demo of the Dell EqualLogic SAN. This is the third or fourth time that I have seen a demo of the EqualLogic and I am always impressed with the simplicity of the system. Anyways, desktop virtualization is a rapidly expanding sector in the virtual world and is well worth a look.

You can find additional information on the tour here: VMware View Launch Tour

–himuraken

New Dell PowerEdge R610 on the way.

We recently purchased Kaseya and needed a better server to put it on. After deciding to set up Kaseya as a VM on an ESX host and doing a little capacity planning, I determined that a new server would be needed. After getting the corporate overlords to approve, I ordered up a new Dell PowerEdge R610 with 12GB of RAM and six HDDs. I’m thinking RAID 10 for this box, but a little research will make the final call.

–Himuraken

Dell Control Point Connection Manager and Sprint Mobile Broadband

Many of my clients are getting the newer E-series Dell Latitudes. The laptops seem pretty decent, but they all come with the Dell ControlPoint software. The ControlPoint software aims to centralize the management of the system’s settings.

While I am unfamiliar with the overall usefulness of the ControlPoint software, I do know that the ControlPoint Connection Manager is terrible. Just about every end user that I have worked with on these newer laptops asks me to uninstall ControlPoint.

Generally speaking, all that you need to do to get a decently running system is to uninstall the Connection Manager portion of ControlPoint. Just open up Add/Remove Programs and uninstall Connection Manager. After the uninstall and reboot, you will notice that the WWAN card no longer works, as you no longer have an app to control the device. The link below is for the 5720 model of WWAN cards, which are very common in these Dell systems. The download installs the Sprint Mobile Broadband utility.

Link is here.

–Himuraken