Nexus S OTA Update to 2.3.4

Just dropping a quick post as a time stamp. I woke up this morning to an alert on my Nexus S indicating that Gingerbread 2.3.4 was available.

I haven’t had any time to dig in or read the change logs yet, but I did notice that Gtalk now has front-facing camera / video support.

***Update*** Looks like the Gmail app now has Priority Inbox features.

–himuraken

Initial Thoughts: Netgate Hamakua

In this post I would like to share some of my initial thoughts on the Netgate Hamakua. We were looking for a 1U half-depth rack mount system to run pfSense 1.2.3 on. Although we haven’t mentioned it much here on the blog, we love working with pfSense. pfSense is a fork of the m0n0wall project and is based on FreeBSD. I have pfSense running on everything from discarded eMachines workstations to multi-thousand dollar rack mount servers, and everything in between.

We have embedded pfSense running on a number of ALIX-based Netgate m1n1wall 2D3s and it is an excellent combination of low power consumption and stable performance. When it came time to migrate from a VM-based install to hardware in our rack we looked to Netgate. We went with the rack-mount version of the Hamakua and purchased the optional VGA break-out cable. The Hamakua has room for a 2.5″ HDD, which is an excellent option if you need that sort of thing. It is important to note that the embedded installation of pfSense does not output any data to the VGA port. So if you are running embedded you will see the initial POST / BIOS phase of the boot and then that’s it. This is because the embedded install is targeted mainly at lower-power devices that use a serial console for display output.

From what I have been able to gather from books, forums, and #pfsense on Freenode, it is obvious that key developers of the pfSense project test on this hardware extensively. And for good reason; it’s a great platform: 1U, 16W power consumption, 1GHz CPU, and 5 NICs/interfaces. You can find great documentation on the pfSense site regarding embedded and full installations for this unit. Long story short: they use it and develop on it, so it will be around for a while.

We are anticipating an upgrade from our current DS3 connectivity to 1Gbps and wanted something that would make at least some use of the new line. For this reason we did some basic performance testing using our 2900 ZFS test box and another similarly spec’d server. While running large data transfers between two individual 1Gbps interfaces we were able to max the system out at roughly 250Mbps of throughput. This is right in line with the sizing guide in the pfSense book and appears to be a limitation of the 1GHz processor. Be sure to take a look at the pfSense book for sizing and throughput guidance; it is quite helpful in this regard, among others.
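For what it’s worth, this sort of throughput test can be sketched out with iperf running between a host on each side of the firewall; the address and options below are purely illustrative:

iperf -s                            # on the receiving host
iperf -c 192.168.2.10 -t 60 -P 4    # on the sending host: 60-second test, 4 parallel streams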

One thing that is worth mentioning is the heat that this thing generates. During heavy testing and average daily use, it gives off a large amount of heat. The top of the unit is basically a heatsink and it does its job well. Because of this, it will certainly be going at the top of our rack due to ventilation concerns. I believe that this design is pretty solid and it would most likely take the abuse without batting an eye, but I didn’t want to risk burning this one out.

To conclude: if you need a rack mount system that runs pfSense and is well supported by the community, and you don’t need to cross the 250Mbps barrier, this may be the unit for you. This is the second model of device that we have purchased from Netgate, and as always we weren’t disappointed. If you need something a bit less performant and easier on the budget, be sure to check out the Netgate m1n1wall 2D3/2D13. It has 3x100Mbps ports and gets the job done well.

–Himuraken

VMware ESX, NIC Teaming, and VLAN Trunking with HP ProCurve

Found this great article and thought I would share it with RH readers:

Original text from http://blog.scottlowe.org/

In an earlier article about VMware ESX, NIC teaming, and VLAN trunking, I described what the configuration should look like if one were using these features with Cisco switch hardware. It’s been quite a popular post, one I will probably need to update soon.

In this article, I’d like to discuss how to do the same thing, but using HP ProCurve switch hardware. The article is broken into three sections: using VLANs, using link aggregation (NIC teaming), and using both together.
Using VLAN Trunking

To my Cisco-oriented mind, VLANs are handled quite differently on ProCurve switches. Port-based VLANs, in which individual ports are assigned to one or more VLANs, allow a switch port to participate in each of those VLANs in either an untagged or a tagged fashion.

The difference here is really simpler than it may seem: the untagged VLAN can be considered the “native VLAN” from the Cisco world, meaning that the VLAN tags are not added to packets traversing that port. Putting a port in a VLAN in untagged mode is essentially equivalent to making that port an access port in the Cisco IOS world. Only one VLAN can be marked as untagged on a given port, which makes sense if you think about it.

Any port groups that should receive traffic from the untagged VLAN need to have VLAN ID 0 (no VLAN ID, in other words) assigned.

A tagged VLAN, on the other hand, adds the 802.1q VLAN tags to traffic moving through the port, like a VLAN trunk. If a user wants to use VST (virtual switch tagging) to host multiple VLANs on a single VMware ESX host, then the ProCurve ports need to have those VLANs marked as tagged. This will ensure that the VLAN tags are added to the packets and that VMware ESX can direct the traffic to the correct port group based on those VLAN tags.

In summary:

* Assign VLAN ID 0 to all port groups that need to receive traffic from the untagged VLAN (remember that a port can only be marked as untagged for a single VLAN). This correlates to the discussion about VMware ESX and the native VLAN, in which I reminded users that port groups intended to receive traffic for the native VLAN should not have a VLAN ID specified.
* Be sure that ports are marked as tagged for all other VLANs that VMware ESX should see. This will enable the use of VST and multiple port groups, each configured with an appropriate VLAN ID. (By the way, if users are unclear on VST vs. EST vs. VGT, see this article.)
* VLANs that VMware ESX should not see at all should be marked as “No” in the VLAN configuration of the ProCurve switch for those ports.
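To make this concrete, a rough sketch of the ProCurve side of a VST setup like the one described above might look like the following (the VLAN IDs and port numbers are purely illustrative):

vlan 10
   tagged 1-2
   exit
vlan 20
   tagged 1-2
   exit

Ports 1-2 stay untagged in whichever VLAN carries the native traffic (the default VLAN unless you have changed it), the VMware ESX port groups for VLANs 10 and 20 get those VLAN IDs, and the port group for the untagged traffic gets no VLAN ID.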

Using Link Aggregation

There’s not a whole lot to this part. In the ProCurve configuration, users will mark the ports that should participate in link aggregation as part of a trunk (say, Trk1) and then set the trunk type. Here’s the only real gotcha: the trunk must be configured as type “Trunk” and not type “LACP”.

In this context, LACP refers to dynamic LACP, which allows the switch and the server to dynamically negotiate the number of links in the bundle. VMware ESX doesn’t support dynamic LACP, only static LACP. To do static LACP, users will need to set the trunk type to Trunk.
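On the ProCurve CLI, creating that sort of static trunk is a one-liner along these lines (port numbers illustrative):

trunk 19-20 trk1 trunk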

Then, as has been discussed elsewhere in great depth, configure the VMware ESX vSwitch’s load balancing policy to “Route based on ip hash”. Once that’s done, everything should work as expected. This blog entry gives the CLI command to set the vSwitch load balancing policy, which would be necessary if configuring vSwitch0. For all other vSwitches, the changes can be made via VirtualCenter.
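For reference, the service console command commonly cited for setting this policy on vSwitch0 looks like the following; treat it as an assumption to verify against your ESX version rather than a definitive recipe:

vmware-vim-cmd hostsvc/net/vswitch_setpolicy --nicteaming-policy=loadbalance_ip vSwitch0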

That’s really all there is to making link aggregation work between an HP ProCurve switch and VMware ESX.
Using VLANs and Link Aggregation Together

This section exists only to point out that when a trunk is created, the VLAN configuration for the members of that trunk disappears, and the trunk must be configured directly for VLAN support. In fact, users will note that the member ports don’t even appear in the list of ports to be configured for VLANs; only the trunks themselves appear.

Key point to remember: apply your VLAN configurations after your trunking configuration, or else you’ll just have to do it all over again.
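In ProCurve terms this simply means the tagged/untagged assignments reference the trunk name rather than its member ports, along these lines (illustrative IDs again):

vlan 10
   tagged Trk1
   exit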

With this information, users should now be pretty well prepared to configure HP ProCurve switches in a VMware ESX environment. Feel free to post any questions, clarifications, or corrections in the comments below, and thanks for reading!

ZFS Performance Testing: Dell PowerEdge 2950 III

In my previous post on ZFS performance testing I ran through various tests on a particular test system that I had running at the time. That system has come and gone to a better place in the proverbial cloud. This go-round, I have a similar server with a different ZFS configuration. Let’s dive into the system and the tests.

Test rig:

  • Dell PowerEdge 2950
  • Xeon Quad Core 1.6GHz
  • 8GB RAM
  • PERC5 – Total of 5 logical drives with read ahead and write back enabled.
  • 2x160GB SATAII 7200 RPM – Hardware RAID1
  • 4x2TB SATAII 7200 RPM – Four (4) hardware RAID0s (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
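For reference, pool layouts like the ones tested below can be created from the CLI roughly as follows (pool and device names are illustrative):

zpool create tank da1 da2           # two-disk stripe
zpool create tank mirror da1 da2    # two-disk mirror
zfs set compression=on tank         # dataset compression set to “On”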

Results:
Results are listed as configuration, write, then read.

  • Reference run: 2x160GB 7200 RPM SATAII RAID1
    • 85.6 MB/s
    • 92.5 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks
    • 221 MB/s
    • 206 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1074 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks
    • 116 MB/s
    • 145 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1069 MB/s
Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system. On the hard disk side of things, the hardware RAID1 was made up of Western Digital Blue disks, while the other four (4) disks are Western Digital 2TB Green drives. If you have done your homework, you already know that the WD EARS disks use 4K sectors and mask them as 512-byte sectors so that OSes don’t complain. If disks are not properly formatted and/or sector aligned with this in mind, performance takes a tremendous hit. The reason for such inexpensive disks for this build is simple: this server is configured as a backup destination, and as such, capacity is more important than the reliability that a SAS solution would provide.
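As an aside, a common way to force proper 4K alignment for a ZFS pool on FreeBSD-based systems is the gnop trick; a minimal sketch, assuming a single illustrative disk at da1, looks like this:

gnop create -S 4096 da1     # present da1 with a 4096-byte sector size
zpool create tank da1.nop   # a pool built on the 4K provider picks up the correct alignment

Once the pool is created it can be exported, the .nop device destroyed, and the pool imported again on the raw disk.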

Compression test results were, to say the least, quite interesting. It should be noted that the stripe and mirror pools performed quite similarly. Further testing will be required, but it seems that the maximum score of 1074 MB/s was limited only by the CPU: during the read test, all four cores of the quad core CPU were maxed. This becomes even more interesting when you compare the results of this two-disk stripe pool with my previous findings on the six-disk stripe pool running the same test. The earlier test rig scored much lower, and it would appear that the difference in CPUs is what made such a large difference.

–himuraken

ZFS Performance Testing: Dell PowerEdge 2900

***Update***
This started as a simple post to share a few results from various levels of ZFS/array testing. Be sure to check back from time to time as I add additional configuration results.

***Begin Original Text***
I have been playing around with ZFS on various operating systems lately and have been trying to compare performance. I figured that sharing some of my results would give others something to compare with. Plus, I am on borrowed time with this unit: it is big, loud, and taking up free space and spare time in the home office.

Test rig:

  • Dell PowerEdge 2900
  • Xeon Dual Core 3.0GHz (HT Enabled-OS showing 4 cores)
  • 14GB RAM
  • PERC5 – Total of 7 logical drives with read ahead and write back enabled.
  • 2x146GB SAS 15K RPM – Hardware RAID1 for OS
  • 6x1TB SATAII 7200 RPM – Six (6) disks for testing (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
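For reference, the pool layouts tested below can be created from the CLI roughly as follows (pool and device names are illustrative):

zpool create tank raidz1 da1 da2 da3 da4 da5 da6    # single-parity raidz
zpool create tank raidz2 da1 da2 da3 da4 da5 da6    # double-parity raidz
zpool create tank da1 da2 da3 da4 da5 da6           # six-disk stripe
zfs set compression=on tank                         # dataset compression set to “On”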

Results: Each disk configured as a separate RAID0 array on controller.
Results are listed as configuration, write, then read.

  • ZFS raidz1 pool utilizing six (6) SATA disks
    • 133 MB/s
    • 311 MB/s

  • ZFS raidz1 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 359 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 180 MB/s
    • 286 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 361 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks
    • 190 MB/s
    • 263 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 429 MB/s
    • 381 MB/s
Results: Each disk configured as a member of a single RAID0 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 353.4 MB/s
    • 473.0 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 420.8 MB/s
    • 340.9 MB/s

Results: Each disk configured as a member of a single RAID5 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 322.0 MB/s
    • 325.9 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 438.8 MB/s
    • 371.8 MB/s

Results: Each disk configured as a member of a single RAID10 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 251.2 MB/s
    • 304.3 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 430.7 MB/s
    • 360.9 MB/s
Notes, Thoughts & Mentionables:
It is worth noting that the results of the datasets with compression on can be a bit misleading. This is due to the source we are using with dd: /dev/zero. Feeding a string of zeroes into a compression algorithm is probably the best-case scenario when it comes to compression. In real-world conditions, compressible data being read or written would see an increase in performance, while non-compressible data would likely suffer a penalty.
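A simple way to sanity-check the compression numbers is to rerun the same test with a far less compressible source, along these lines (keep in mind that /dev/urandom itself can become the bottleneck on some systems):

dd if=/dev/urandom of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M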

I am hoping to conduct the same tests on the exact same hardware in the near future. I will be switching the six (6) SATA disks over to varying hardware RAID levels and comparing them again.

***Update***
In a follow-up post to this one, I concluded that compression read and write performance on this particular test rig was being limited by the CPU. I am hoping to swap out the current Intel Xeon 3.0GHz dual core for a quad core for additional comparison.

–himuraken

Ubuntu Maverick Meerkat 10.10 Netbook Performance Issues

Being that it is my job and my nature to keep systems running, I generally don’t upgrade OSes quickly. Doing so introduces change, which in turn breaks things. After performing an install of the latest version of Ubuntu Netbook 10.10, I found the performance to be quite poor. After a quick Google search I found that I was far from the only one with the issue. You can follow more on that here. It is worth mentioning that I had this installed on an HP Mini 311, which is one of the faster netbooks available at the time of this writing. I went back to 10.04 and it is now a useful system again.

–Himuraken

VMware Best Practices

I have come across a few useful KB articles from VMware recently and thought I would stick them here for future reference.


Installing ESX 4.1 and vCenter Server 4.1 best practices

If you are planning to or have recently deployed vCenter Server 4.1 or ESX(i) 4.1, this KB article from VMware may be of some value to you: Installing ESX 4.1 and vCenter Server 4.1 best practices


Best practices for virtual machine snapshots in the VMware environment

Most of us work with snapshots and snapshot tech every day. In fact, I for one cannot imagine doing some of the things I do without snapshots any longer. I used to do everything without snapshots, but technology is good at making us all lazy. Either way, this best practices article is worth a read.

–Himuraken

Sharepoint error: Cannot connect to the configuration database

Started seeing this on a few clients’ servers after a recent patch management cycle. It appears to be KB934525 that takes the blame here. After attempting a repair install on one of the systems, I eventually found this buried at the bottom of a page that references KB934525. Your Application event log will most likely be full of these:

Unknown SQL Exception 33002 occured. Additional error information from SQL Server is included below.

Access to module dbo.proc_getObjectsByClass is blocked because the signature is not valid.

The long and short of it is this: Try running the SharePoint Products and Technologies Configuration Wizard and accept the defaults. If that doesn’t resolve the issue, read on for more…

Post-installation information
After you apply this hotfix package, you must run the SharePoint Products and Technologies Configuration Wizard.

To do this, click Start, point to All Programs, point to Administrative Tools, and then click SharePoint Products and Technologies Configuration Wizard.

Important Because of a problem with the hotfix installation, you must not run the SharePoint Products and Technologies Configuration Wizard to complete the installation if you are running a Windows SharePoint Services stand-alone installation that uses the Windows Internal Database Engine. Instead, you must use the Psconfig.exe command-line utility. To do this, follow these steps:

1. Click Start, click Run, type cmd in the Open box, and then click OK.
2. Change to the following directory:
system drive\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Bin
3. Run the following command:
psconfig -cmd upgrade -inplace b2b
If you ran the SharePoint Products and Technologies Configuration Wizard, you may experience a long delay, and the wizard may not be completed successfully.

Alternatively, the configuration process may be unsuccessful, and you may receive the following error messages:

Configuration Failed
An exception of type System.Data.SQLClient.SQLException was thrown. Additional exception information access to module dbo.proc_MSS_GetConfigurationProperty is blocked because the signature is not valid.
Additionally, the following event may be logged in the Application log:

Event ID: 5586
Source: Windows SharePoint Services 3
Unknown SQL Exception 33002 occured.
Additional error information from SQL Server is included below.
Access to module dbo.proc_MSS_GetConfigurationProperty is blocked because the signature is not valid.

If you experience these issues, use the Psconfig.exe command to manually complete the installation of the hotfix.

–himuraken