Tethering WiFi Only Galaxy Tab To Motorola Droid

Bought my significant other a Galaxy Tab for an upcoming trip and decided to get her up and running with WiFi tethering. I tether my laptops to my Droids frequently and love it. Long story short, it wasn’t as straightforward as I had hoped. After several hours of frustrating work trying to get a WiFi-only Samsung Galaxy Tab connected to an ad-hoc network, I finally found a solution. The problem is that I needed to connect the tablet to one of our rooted Droid 1 phones running wifi tether. This is quite difficult (thanks, Google) because Android filters out and hides ad-hoc networks. After trying many things, including swapping out the wpa_supplicant for a different one via Root Explorer, I found the following to work well. Keep in mind that simply allowing the OS to connect to ad-hoc networks would be ideal.

Be sure to have a working setup of wifi tether running on your phone if that is where you are sourcing the ad-hoc connection.

Step 1: Root your tablet using Z4root and reboot as required.
Step 2: Install ZT-180 on the tablet from the Android Market.
Step 3: Configure the ZT-180 application to connect to the SSID of your wifi tether application.
Step 4: Switch to ad-hoc mode within ZT-180 and enjoy πŸ™‚

It appears to me that the ZT app acts as a proxy between the tablet and the ad-hoc peer.
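For anyone who would rather keep chasing the wpa_supplicant route, the general idea is a supplicant that honors an IBSS (ad-hoc) network block in /data/misc/wifi/wpa_supplicant.conf. The SSID and security settings below are placeholders only and would need to match whatever wifi tether is broadcasting; this is a sketch, not the config that worked for me:

network={
    ssid="AndroidTether"    # placeholder: must match the SSID set in wifi tether
    mode=1                  # 1 = IBSS / ad-hoc
    key_mgmt=NONE           # open network; add wep_key0 / wep_tx_keyidx lines if WEP is enabled
}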

Truly, shame on Google here; there are so many user forums filled to the brim with requests and issues about this. I can only begin to imagine the number of people who have had to void warranties and/or forgo support by rooting their devices to do something the iPad allows by default.

–himuraken

Nexus S OTA Update to 2.3.4

Just dropping a quick post as a time stamp. I woke up this morning to an alert on my Nexus S indicating that Gingerbread 2.3.4 was available.

I haven’t had any time to dig in or read the change logs yet, but I did notice that Gtalk now has front-facing camera / video support.

***Update*** Looks like the Gmail app now has Priority Inbox features.

–himuraken

Initial Thoughts: Netgate Hamakua

In this post I would like to share some of my initial thoughts on the Netgate Hamakua. We were looking for a 1U half-depth rack mount system to run pfSense 1.2.3 on. Although we haven’t mentioned it much here on the blog, we love working with pfSense. pfSense is a fork of the m0n0wall project and is based on FreeBSD. I have pfSense running on everything from discarded eMachines workstations to multi-thousand dollar rack mount servers, and everything in between.

We have pfSense embedded running on a number of ALIX-based Netgate m1n1wall 2D3s, and it is an excellent combination of low power draw and stable performance. When it came time to migrate from a VM-based install to hardware in our rack, we looked to Netgate. We went with the rack-mount version of the Hamakua and purchased the optional VGA break-out cable. The Hamakua has room for a 2.5″ HDD, which is an excellent option if you need that sort of thing. It is important to note that the embedded installation of pfSense does not output any data to the VGA port, so if you are running embedded you will see the initial POST / BIOS phase of the boot and then that’s it. This is because the embedded install is targeted mainly at lower power devices that use a serial console for display output.
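If you do go embedded, a null modem cable and any terminal program will get you to the serial console; the device names and baud rate below are assumptions (older embedded images commonly default to 9600, but check the docs for your build):

screen /dev/ttyUSB0 9600    # Linux example with a USB serial adapter
cu -l /dev/cuau0 -s 9600    # or from a FreeBSD box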

From what I have been able to gather from books, forums, and #pfsense on Freenode, it is obvious that key developers of the pfSense project test on this hardware extensively. And for good reason, it’s a great platform: 1U, 16W power consumption, 1GHz CPU, and 5 NICs/interfaces. You can find great documentation on the pfSense site regarding embedded and full installations for this unit. Long story short, they use it and develop on it, so it will be around for a while.

We are anticipating an upgrade from our current DS3 connectivity to 1Gbps and wanted something that would make at least some use of the new line. For this reason we did some basic performance testing using our 2900 ZFS test box and another similarly spec’d server. While running large data transfers between two individual 1Gbps interfaces, we were able to max the system out at roughly 250Mbps of throughput. This is right in line with the sizing guide in the pfSense book and appears to be a limitation of the 1GHz processor. Be sure to take a look at the pfSense book for sizing and throughput requirements; it is quite helpful in this regard, among others.
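For anyone wanting to reproduce a rough version of this test, iperf between two hosts sitting on opposite interfaces of the firewall is one way to do it; the address is a placeholder and this is not the exact method used above:

iperf -s                            # on the host behind interface A
iperf -c 192.168.2.10 -t 60 -P 4    # on the host behind interface B: 60 second run, 4 parallel streams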

One thing that is worth mentioning is the heat this thing generates. During heavy testing and average daily use, a large amount of heat was being displaced. The top of the unit is basically a heatsink, and it does its job well. Because of this, it will certainly be going at the top of our rack due to ventilation concerns. I believe the design is pretty solid and would most likely take the abuse without batting an eye, but I didn’t want to risk burning this one out.

To conclude: if you need a rack mount system that runs pfSense, is well supported by the community, and doesn’t need to cross the 250Mbps barrier, this may be the unit for you. This is the second model of device that we have purchased from Netgate, and as always we weren’t disappointed. If you need something a bit less performant and easier on the budget, be sure to check out the Netgate m1n1wall 2D3/2D13. It has 3x100Mbps ports and gets the job done well.

–Himuraken

ZFS Performance Testing: Dell PowerEdge 2950 III

In my previous post on ZFS performance testing, I ran through various tests on a particular test system that I had running at the time. That system has come and gone to a better place in the proverbial cloud. This time around, I have a similar server with a different ZFS configuration. Let’s dive into the system and the tests.

Test rig:

  • Dell PowerEdge 2950
  • Xeon Quad Core 1.6GHz
  • 8GB RAM
  • PERC5 – Total of 5 logical drives with read ahead and write back enabled.
  • 2x160GB SATAII 7200 RPM – Hardware RAID1
  • 4x2TB SATAII 7200 RPM – Four (4) single-disk hardware RAID0 arrays (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M

Results:
Results are listed as configuration, write, then read.

  • Reference run: 2x160GB 7200 RPM SATAII RAID1
    • 85.6 MB/s
    • 92.5 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks
    • 221 MB/s
    • 206 MB/s

  • ZFS stripe pool utilizing two (2) SATA disks with dataset compression set to “On”
    • 631 MB/s
    • 1074 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks
    • 116 MB/s
    • 145 MB/s

  • ZFS mirror pool utilizing two (2) SATA disks with dataset compression set to β€œOn”
    • 631 MB/s
    • 1069 MB/s

Notes, Thoughts & Mentionables:
There are a few things worth mentioning about this system. On the hard disk side of things, the hardware RAID1 is made up of Western Digital Blue disks, while the other four (4) disks are Western Digital 2TB Green drives. If you have done your homework, you already know that the WD EARS disks use 4K sectors and mask them as 512-byte sectors so that OSes don’t complain. If disks are not properly formatted and/or sector aligned with this in mind, performance takes a tremendous hit. The reason for such inexpensive disks in this build is simple: this server is configured as a backup destination, and as such, size is more important than the reliability that a SAS solution would provide.

Compression test results were, to say the least, quite interesting. It should be noted that the stripe and mirror pools performed quite similarly. Further testing of these results will be required, but it seems that the maximum score of 1074 MB/s was limited only by the CPU: during the read test, all four cores of the quad core CPU were maxed. This becomes even more interesting when you compare the results of this two disk stripe pool with my previous findings on the six disk stripe pool running the same test. The earlier test rig scored much lower, and it would appear that the difference in CPUs is what made such a strong difference.
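
For reference, the two-disk pools above would be created roughly like this from the FreeNAS/FreeBSD command line; the pool name (tank) and device names (da1, da2) are assumptions rather than the exact commands used on this box:

zpool create tank da1 da2           # two-disk stripe pool
zpool create tank mirror da1 da2    # or: two-disk mirror pool
zfs set compression=on tank         # for the datasets with compression set to "On"
zpool iostat tank 1                 # watch pool throughput during the dd run; plain top shows the cores maxing out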

–himuraken

ZFS Performance Testing: Dell PowerEdge 2900

***Update***
This started as a simple post to share a few results from various levels of ZFS/array testing. Be sure to check back from time to time as I add additional configuration results.

***Begin Original Text***
I have been playing around with ZFS on various operating systems lately and have been trying to compare performance. I figured that sharing some of my results would give others something to compare with. Plus, I am on borrowed time with this unit: it is big, loud, and taking up free space and spare time in the home office.

Test rig:

  • Dell PowerEdge 2900
  • Xeon Dual Core 3.0GHz (HT enabled – OS shows 4 cores)
  • 14GB RAM
  • PERC5 – Total of 7 logical drives with read ahead and write back enabled.
  • 2x146GB SAS 15K RPM – Hardware RAID1 for OS
  • 6x1TB SATAII 7200 RPM – Six (6) disks for testing (Controller does not support JBOD mode)
  • FreeNAS 0.7.2 Sabanda (revision 5543)-ZFS v13

GNU dd:
Tests performed from the CLI using good ole’ GNU dd. The following command was used to first write, and then read back:

dd if=/dev/zero of=foo bs=2M count=10000 ; dd if=foo of=/dev/null bs=2M
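
For reference, pools like the ones tested below would be created roughly as follows from the FreeBSD/FreeNAS command line; the pool name (tank) and device names (da1 through da6) are assumptions rather than the exact commands used here, and the three zpool lines are alternatives, not a sequence:

zpool create tank raidz1 da1 da2 da3 da4 da5 da6    # single-parity raidz1 across the six disks
zpool create tank raidz2 da1 da2 da3 da4 da5 da6    # or: double-parity raidz2
zpool create tank da1 da2 da3 da4 da5 da6           # or: plain stripe (no redundancy)
zfs set compression=on tank                         # for the datasets with compression set to "On"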

Results: Each disk configured as a separate RAID0 array on controller.
Results are listed as configuration, write, then read.

  • ZFS raidz1 pool utilizing six (6) SATA disks
    • 133 MB/s
    • 311 MB/s

  • ZFS raidz1 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 359 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks
    • 180 MB/s
    • 286 MB/s

  • ZFS raidz2 pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 414 MB/s
    • 361 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks
    • 190 MB/s
    • 263 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 429 MB/s
    • 381 MB/s

Results: Each disk configured as a member of a single RAID0 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 353.4 MB/s
    • 473.0 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 420.8 MB/s
    • 340.9 MB/s

Results: Each disk configured as a member of a single RAID5 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 322.0 MB/s
    • 325.9 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 438.8 MB/s
    • 371.8 MB/s

Results: Each disk configured as a member of a single RAID10 array.
Results are listed as configuration, write, then read.

  • ZFS stripe pool utilizing six (6) SATA disks
    • 251.2 MB/s
    • 304.3 MB/s

  • ZFS stripe pool utilizing six (6) SATA disks with dataset compression set to “On”
    • 430.7 MB/s
    • 360.9 MB/s

Notes, Thoughts & Mentionables:
It is worth noting that the results of the datasets with compression on can be a bit misleading. This is due to the source we are using with dd: /dev/zero. Feeding a string of zeroes into a compression algorithm is probably the best case scenario when it comes to compression. In real world conditions, compressible data being read or written would see an increase in performance, while non-compressible data would likely suffer a penalty.

I am hoping to conduct the same tests on the exact same hardware in the near future. I will be switching the six (6) SATA disks over to varying hardware RAID levels and comparing them again.

***Update***
In a follow-up post to this one, I concluded that compression read and write performance on this particular test rig was being limited by the CPU. I am hoping to swap out the current Intel Xeon 3.0GHz dual core for a quad core for additional comparison.
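
For a rough sense of how non-compressible data would behave, the same test can be rerun with a random source instead of /dev/zero. This is only a sketch, not something that was run here; /dev/urandom is itself CPU-bound, so it is fairer to generate the file once (the /mnt/other path is a placeholder for storage outside the pool under test) and then copy it onto the compressed dataset:

dd if=/dev/urandom of=/mnt/other/random.bin bs=2M count=10000               # generate ~20GB of incompressible data once
dd if=/mnt/other/random.bin of=foo bs=2M ; dd if=foo of=/dev/null bs=2M     # write it to the test dataset, then read it back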

–himuraken

Ubuntu Maverick Meerkat 10.10 Netbook Performance Issues

Being that it is my job and my nature to keep systems running, I generally don’t upgrade OSes quickly. Doing so introduces change, which in turn breaks things. After performing an install of the latest version of Ubuntu Netbook 10.10, I found the performance to be quite poor. After a quick Google search, I found that I was far from the only one with the issue; you can follow more on that here. It is worth mentioning that I had this installed on an HP Mini 311, which is one of the faster netbooks available at the time of this writing. I went back to 10.04 and it is now a useful system again.

–Himuraken

VMUG: Tech Data in Clearwater, FL.

Attended my second VMware User Group (VMUG) meeting today. The first meeting that I attended was held at the Southwest Florida Water Management District’s office a couple of months back. Today’s meeting was sponsored by Tech Data and held at their Clearwater, FL headquarters.

First up was VMware, which gave a presentation on upcoming features in View 4.5. Numerous alpha/beta screenshots were shown, showcasing improvements and new features. We were asked as a group not to share the exact features since this is still a work in progress, so I will end it there.

Next up was IBM with a more hardware- and CAPEX/OPEX-oriented presentation. Some of the hardware that was discussed was really impressive: things like 2TB of system RAM, blade enclosures, and 1.8″ SSD “flash packs” that cut costs and push IOPS through the roof.

The final presenter, Qlogic, was pretty interesting. The presentation covered 8Gb FC, Fibre Channel over Ethernet, and 10Gb copper.

After the presentations everyone broke up into groups and went on tours of the Tech Data Solutions Center and then into discussions with each of the three presenters.

See you at the next VMUG.

–Himuraken

VDI. Who Will Win?

The VDI marketplace has been heating up. VMware’s recent launch of View 4 seems to have sparked more interest overall.
Microsoft has further cemented its partnership with Citrix to provide virtual desktops. Microsoft has also simplified licensing (and reduced pricing) for running desktop OSes in a virtual environment.
Recent promotions by Citrix (and Microsoft) are geared towards taking customers away from VMware. Citrix, which has a large installed base in the remote access arena, should do well with the XenApp to XenDesktop trade-in.
Red Hat has just announced that it will offer desktop virtualization based on KVM.
I expect many companies may wait until their next desktop refresh cycle to implement VDI. Moving to low-cost thin and zero clients certainly makes sense from an administration perspective.
This is an exciting time for anyone involved in virtualization!

– habanero_joe