Project Fi Network Switching & Dial Codes

After a day or two of spotty outbound service at the house on two different Project Fi accounts, using both my Nexus 6 and my Nexus 6P, I decided to reach out to support. After some troubleshooting we determined that it was a T-Mobile specific issue that was affecting the ability to make outbound calls. Forcing the phone onto Sprint, US Cellular, or just using WiFi calling worked flawlessly. At the end of the day it was just a local tower issue, as driving to a different location worked fine.

At the end of the call I compiled the notes I had taken and asked the support person to fill in the blanks. I am sure there are more codes, but here are the ones we used; just punch these into your dialer app on any Project Fi phone and off you go…

*#*#344636#*#* – Code to determine current network.
*#*#34777#*#* – Code to switch over to Sprint’s network.
*#*#34866#*#* – Code to switch over to T-Mobile’s network.
*#*#34872#*#* – Code to switch over to U.S. Cellular’s network.
*#*#342886#*#* – Code to go to Auto (auto switching enabled).

The support rep mentioned that after 2 hours or so the phone should default back to auto mode.

Hope this helps you in your travels, or just as a workaround for spotty/temporary service issues.

–himuraken

Dell PowerEdge 13th Gen Fan Noise

I recently came across the opportunity to assist a client with installing their new Dell PowerEdge R730XD. Quite the beefy server config: 2x 10-core CPUs, 128GB of RAM, 12x 4TB NL-SAS, you know, all the goodies. This machine is slated to replace an aging T610 that has seen better days performance-wise.

I went ahead and put an Intel 10GbE card in the server since all other hosts in the server room, including both backup boxes, are 10GbE enabled and are connected to our new Netgear 10GbE switch. Keep in mind this was an industry standard PCIe 10GbE card, and a particularly good one: the Intel X540-T2. After installing VMware ESXi, and later, Windows Server 2012 R2, users were complaining about the loud “jet sounding” noise coming from the server room. After logging into the Dell iDRAC Enterprise card I immediately noticed that the fans were running around 92%, which was roughly 15K RPM. This was regardless of operating system, mind you, so I couldn’t even blame Windows OR VMware this time.

After looking around online at various forums I realized that the system was running the fans near max speed/volume due to the presence of a non-certified PCIe card installed in the system. For all intents and purposes, non-certified means you didn’t pay through the nose to acquire the identical hardware from Dell. Essentially, since the Intel card doesn’t carry the Dell specific code/firmware to report back that “all is well over here in PCIe/temperature land”, the system defaults to running the fans in jet engine mode. For posterity’s sake and to clarify, this will happen with pretty much any non-Dell card that is inserted. In researching the issue I found numerous folks who installed actively cooled GPUs, old school 4x 1Gbps network cards, you name it: same high speed fan noise.

Well, no big deal, all you have to do is go into the Dell BIOS and modify a setting or two so that the system doesn’t run the fans at full steam when a card is inserted, right? Wrong! That would be the logical assumption and design choice to make, so of course they didn’t make it that easy. Read on below to understand how I finally got this system to quiet down. The info below is compiled from many sources and some of my own figuring out; I just thought it would be helpful to have it all in one place.

Step 1: Enable IPMI
For this step enter your Dell server’s setup/config screen and get to the remote access configuration/iDRAC setup. In the iDRAC setup you need to do all of the standard stuff like assigning an IP and setting user credentials, but you MUST also set “Enable IPMI over LAN” to yes. This setting is crucial to completing the steps below successfully.
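If you would rather not reboot into the setup screen, the same switch can usually be flipped remotely with Dell’s racadm utility. Treat the following as a hedged sketch: it assumes an iDRAC7/8 era controller and that the attribute name matches your firmware (verify with racadm help if in doubt), and you will obviously substitute your own iDRAC IP and credentials.

# Sketch only: enable IPMI over LAN remotely via racadm (iDRAC7/8 syntax assumed)
racadm -r 192.168.1.120 -u root -p password set iDRAC.IPMILan.Enable 1

# Confirm the setting took
racadm -r 192.168.1.120 -u root -p password get iDRAC.IPMILan.Enable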

Step 2: Get IPMI tools
Linux users can use their preferred package/distribution method to obtain ipmitool while Windows users will need to grab the Dell OpenManage BMC Utility and get it installed.
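On a Debian or Ubuntu box, for example, the install is a one-liner; package names may differ slightly on other distributions, so treat this as a sketch:

# Install ipmitool from the distribution repositories and confirm it runs
sudo apt-get update
sudo apt-get install ipmitool
ipmitool -V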

Next, open up a command prompt and navigate to the directory the BMC utility installed to; on my system this was: C:\Program Files (x86)\Dell\SysMgt\bmc\

From there you will see several files; the program that we are using here is ipmitool.exe. Go ahead and run ipmitool.exe without any switches/arguments just to make sure it’s installed and working.

Step 3: Toggle the third-party PCIe cooling response
The third and final step is essentially ‘the fix’. This is where you can check the status, and then disable or enable the system’s cooling response to third party cards installed on the PCIe bus. This part was a little frustrating at first because I was working in the right direction and was just about there, but the commands weren’t being sent or interpreted the way they should have been.

You must use the lanplus option instead of lan, but it is important to note that lanplus does NOT work unless you’ve enabled the “Enable IPMI over LAN” setting that I mentioned back in step 1. The non-intuitive part was that although I was running the right command aside from lan vs lanplus, I really didn’t get any clear feedback as to why the command wouldn’t “take”.

Anyhow, here is the base command which you need to acquaint yourself with:

ipmitool -I lanplus -H ipaddress -U root -P password raw

Obviously you will need to substitute your own iDRAC IP, user, and password. After that, just tack on one of the three commands below.

Disable Third-Party PCIe Card Default Cooling Response:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00

Enable Third-Party PCIe Card Default Cooling Response:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00

To check the current third party PCIe card default cooling setting:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00

This response means disabled:
16 05 00 00 00 05 00 01 00 00

This response means enabled:
16 05 00 00 00 05 00 00 00 00
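If you end up toggling this more than once, a small wrapper keeps the raw bytes out of your shell history. This is just a convenience sketch around the exact commands above; the IDRAC_* values are placeholders for your own iDRAC details:

#!/bin/bash
# Convenience wrapper around the raw ipmitool commands above.
# IDRAC_HOST, IDRAC_USER, and IDRAC_PASS are placeholders -- substitute your own.
IDRAC_HOST="192.168.1.120"
IDRAC_USER="root"
IDRAC_PASS="password"

IPMI="ipmitool -I lanplus -H $IDRAC_HOST -U $IDRAC_USER -P $IDRAC_PASS raw"

case "$1" in
  status)
    # Reply ending in ...01... means disabled, ...00... means enabled (see responses above)
    $IPMI 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00
    ;;
  disable)
    $IPMI 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00
    ;;
  enable)
    $IPMI 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00
    ;;
  *)
    echo "Usage: $0 {status|disable|enable}"
    ;;
esac

Save it as something like fans.sh and run ./fans.sh status (or disable/enable), then compare the reply against the two responses listed above.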

After disabling the third party cooling response my system went from the previously mentioned 15K RPM mark down to a user verified sane noise level/speed of around 6K RPM.

A key takeaway and disappointment for me is that in this day and age of widespread standards and simplicity, things are becoming increasingly proprietary and complex.

–himuraken

Proxmox Lab: Intel NUC

Thanks for reading this first post in a new series I am putting together titled “Proxmox Lab”. In this blog series I will be covering various topics related to Proxmox and the hardware I have tested it on.

In this installment we will discuss a small footprint, low power build that you can carry in your pocket, well, if you wear cargo shorts with the big pockets on the side.

Around two years ago I purchased one of the earlier all-black Intel NUC systems and a 32GB Crucial mSATA disk to run Proxmox 3.1 or 3.2, I forget which version it was at the time. Anyhow, around the same time that I attempted to complete the build of the device, a client called up and expressed an immediate need for a small PC that could hide behind a conference room wall mount TV. Just like that my Intel NUC disappeared…

Months later I was able to find enough free time to get a new NUC, this time the more modern (current as of the time of this post) silver and black version. I went with the Core i3 variant as I didn’t want to go Celeron and the i5 was out of stock. Armed with 8GB of low voltage RAM (1.35V is required), I installed Proxmox to a 32GB Crucial mSATA drive and off I went. I strictly used the local storage for ISOs and the Proxmox system itself. This system ran excellently and never gave me so much as a hiccup. The combination of a super fast BIOS and the SSD boot volume meant that this thing would boot or reboot so fast that I had to double check that I had actually shut it down, quite a nice problem to have.

As so often happens here at the home office lab, I change hardware pretty frequently. Often times it is due to client needs or desires, other times it’s simply that I see something new and shiny. Regardless of the reasons, I rarely regret the money spent as the investment always comes back many times over in the form of education and experience gained.

This i3 system was eventually replaced about a year later when someone I knew really wanted the i3 NUC, so I sent it packing to a new and better home. Since I had grown accustomed to the silence, low power/heat, and wonderfully small size of the NUC, I had to find something like it to replace the one I had just gotten rid of. Well, after little debate, I ordered a new Intel NUC, this time armed with a higher clock speed and the wondrous Core i5 badge. System memory was boosted to the full 16GB allowed by the board and off I went. Just like its Core i3 counterpart, this NUC performed flawlessly in all regards. Stable, fast, and truly affordable.

If you are looking for a low power (as in electricity consumption), high performance, physically small and beautiful package for your home lab test machine/hypervisor, be sure to take a serious look at the Intel NUC. Just imagine a shoe box full of Intel NUCs acting as a full-on Proxmox cluster! Aside from the physical memory constraints inherent to this platform, I have seriously considered putting a handful of these into client networks as small footprint Proxmox clusters.

Pros: Tiny system, low energy usage, high performance.
Cons: Usually a tad more expensive than a comparable i3/i5 SFF desktop PC with the same specs. Requires mSATA and low voltage memory, both of which you probably do not have laying around.

A final note: unless you are doing CPU intensive tasks, which you probably are not, skip the i5 variant. While it works great, I noticed zero performance increase over my Core i3 NUC. Obviously, this varies from workload to workload so be sure you know what you need.

I hope this helps any prospective home lab enthusiasts out there, and be sure to stay tuned for my next build, which I just finished ordering…

–himuraken

FreeNAS on the Lenovo TS440

With a recent build fresh on the brain I figured I’d share some thoughts on the hardware used, as I hope it helps others. I googled around ahead of time and found only sparse information.

Recently, I set out to replace my current small office FreeNAS box. From a performance standpoint the box looked great on paper: an AMD 8-core @ 3.4GHz, 32GB of “good” memory, an expensive Seasonic power supply, and 16 drives attached to a pair of Dell PERC H200 controllers, all packed into a high end Lian Li full tower. The tower had a SAS backplane and the 5.25″ bays had two 4-disk SATA enclosures installed. The towering behemoth worked like a champ for quite some time. Day in and day out, the trusty homebrew served up NFS exports to ESXi, Proxmox, and numerous other LAN hosts ranging from Raspberry Pis to FOG imaging VMs and things of that nature.

LIFE WAS GOOD AND IGNORANCE IS BLISS.

Once or twice while physically away from the box, meaning out of town, I received alerts from an external monitoring service that some of my VMs were down. Of course this only happens when you are away, and only to systems that DON’T have a hardware level remote access solution like IPMI, Intel vPro, HP iLO, or Dell DRAC. But I digress, for surely it is OK for your entire FreeNAS box to just mysteriously power off. Not a UPS failure, just an old fashioned “who knows”. Take all that plus the frustration of not being able to power the box on remotely and you begin to see why the homebrew had to go.

Some cursory searches online and a quick check with the fine folks over in #freenas got me thinking about custom vs prebuilt boxes. After comparing prices of various boards and form factors I determined that the Lenovo TS series of tower servers might be a good fit. Several people on #freenas and the internet in general had info on the TS140, which is the smaller and cheaper of the two, but I wanted at least 8 drive bays. The TS140 looks nice if you only need 1-4 cabled drives; hot swap isn’t an option on the little guy.

Armed with what seemed like proper info at the time, I ordered the 4-bay variant of the TS440 since it was on sale for a meager $299.99 with free shipping. My plan was to test the system as it came and then add the secondary drive cage and backplane for a grand total of 8 hard drives. As it turns out, my plan was ill-conceived, as I could not locate any vendor selling the hardware I needed. I reached out to a well known IT and Lenovo vendor to get the info I needed. Much to my dismay, I was informed that Lenovo does not sell the parts needed to take the 4-bay all the way up to 8 drives. This detail is quite frustrating since the documentation I found stated that the system can be used with a 4 or 8 bay config. That is technically true, but only if you order the right SKU/Lenovo specific part number in the first place.

I am happy to report that the TS440 with XUX SKU is humming along happily now. The XUX model comes with the drive cage, backplane, and add-on controller necessary to run 8 drives. The RAID controller included in the system happily recognized my 4TB SATA disks and 3TB SAS disks. The controller supports RAID levels 0/1/10 out of the box but defaults to exporting disks as JBOD as long as you don’t manually set them up in a RAID array, perfect for ZFS. An option that I also decided to go with was the second power supply. The TS440 comes with a single hot swap supply and a spacer/blank slot for the second.
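As a point of reference for the JBOD/ZFS comment above, here is a hedged sketch of checking the disks from a shell and building a pool the plain FreeBSD way; on FreeNAS itself you would normally create the pool through the web UI, and the da0..da7 device names and the pool name are assumptions, so check your own camcontrol output first.

# Confirm the controller is presenting each disk individually (JBOD)
camcontrol devlist

# Example 8-disk raidz2 pool (tolerates two drive failures); 'tank' and da0..da7 are placeholders
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool status tank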

Hope this helps you with your small office NAS builds if you are considering a Lenovo TS440.

–himuraken

Migrating from VMware ESXi to QEMU/KVM

For a myriad of reasons, I have been looking at alternatives to VMware ESXi for a few months. Virtualizing a few machines here and there has proven educational. Learning the ropes of working with qemu/kvm, libvirt, and virsh has been challenging at times, but overall a pleasure. Working with kvm is great, although it takes some getting used to coming from a VMware/ESXi centric environment.

Up to this point all of the virtual machines that I had worked with were new systems. After some research and a few backups of my current VMs running on one of my ESXi hosts, I decided to migrate a few production VMs. Here are the steps that I used to move virtual machines over from a licensed vSphere 4.1 installation to a Linux host running qemu/kvm.

For starters, be sure that you have full backups of any VMs that you plan on working with. With that out of the way, you are ready to start:

1. Remove all snapshots from the virtual machine across all virtual disks.

2. Uninstall VMware Tools and then perform a clean shutdown of the guest operating system.

3. Copy the virtual hard disk(s) over to the qemu/kvm host. The virtual disk is typically the largest file within a VM’s directory and will usually be named something like ‘guestname-flat.vmdk’.

4. On the qemu/kvm host, change to the directory containing the .vmdk file. Assuming you are using qcow2 disk images, run the following command to convert the .vmdk (see the sketch after these steps for a fuller example): kvm-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2

5. Create a new VM on the qemu/kvm host and choose the recently converted disk image as your existing drive/image. It is important that you create your new guest with the same or similar settings as it had before. I recommend cloning the MAC address over to the new guest for added simplicity with NIC detection, assignment, and third party software licensing.

6. Attempt to boot the system. Depending upon your guest’s virtual disk settings and other factors, the system may hang during boot. If it does, edit your virtual machine and set the controller type to SCSI, assuming that was the controller type back on ESXi.

At this point your system should be up and running on the new host. I did find notes and suggestions that qemu/kvm can run vmdk files/disk images directly, but there seemed to be a handful of caveats, so I decided to convert the vmdks over to a native format.
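For anyone scripting this, here is a minimal sketch of steps 3 through 5 on a Debian-style KVM host. It assumes SSH is enabled on the ESXi host, that images live under /var/lib/libvirt/images, and that qemu-img is available (newer distributions ship it in place of kvm-img); the host name, guest name, sizes, bridge, and MAC address are all placeholders, and virt-install is just one way to do step 5 (virt-manager works equally well).

# Step 3: pull the flat vmdk off the ESXi datastore (SSH must be enabled on the host)
scp root@esxi-host:/vmfs/volumes/datastore1/guestname/guestname-flat.vmdk /var/lib/libvirt/images/

# Step 4: convert to qcow2 and sanity-check the result
cd /var/lib/libvirt/images
qemu-img convert -O qcow2 guestname-flat.vmdk newguestname.qcow2
qemu-img info newguestname.qcow2

# Step 5: define a new guest around the converted image; carry the old MAC over
# so NIC detection and third party licensing behave as before
virt-install --import --name newguestname \
  --ram 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/newguestname.qcow2,format=qcow2 \
  --network bridge=br0,mac=00:0c:29:12:34:56 \
  --noautoconsole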

–himuraken

HP ProLiant MicroServer Flexibility

I’ve been meaning to put down some of my thoughts on the HP MicroServer N40L for quite some time and just haven’t made the time to do it, so here goes.

Long ago I was searching for a reason to purchase and play with HP’s MicroServers and got my chance when a client asked for an affordable backup device. I jumped at the chance and ordered one of the N40L’s. These units are listed as part of the ProLiant family of servers which sounded promising, but being the skeptic that I am, I didn’t expect much for the seemingly measly $350 price tag.

The unit comes with an AMD dual core CPU, 2GB of RAM, a 250GB HDD, and a 1Gbps NIC. The system has a mini-PCIe slot for a remote access/iLO/DRAC type card, and a second standard PCIe slot. Although the system ships with only a single drive, all four bays have “hot swap” trays/carriers, so adding additional disks is no problem. I say “hot swap” because I am pretty sure that the backplane/controller do not allow actual hot swapping in its true sense, YMMV. Another note on the hardware: the motherboard can be easily removed from the system by disconnecting a few cables and backing out two thumb screws. The board is on a simple and quite brilliant tray assembly which makes removal, upgrade, and insertion simple. Do yourself a favor when you purchase the system by maxing out the RAM at 8GB (DDR3/ECC) and adding the optional iLO/remote access card. For basic NAS and low end Linux server duties the 2GB will work fine and you will have no regrets, but going to 8GB really opens the doors, more on that next.

Before I jump into exactly what it can do, it is worth mentioning what YOU should not do with it. For instance, don’t try and be a hero to your clients by touting this as an ultra affordable server solution. I have read of several people putting SBS on this box and then using it as the primary file and mail server for 20+ users. Don’t be a dummy, if you’re trying to service your clients properly get them a truly redundant system with hardware RAID, dual PSU’s and things of that nature. You are providing a disservice to your clients if you use this in a place it should not be used. Responsibility rant over…

With the remote access card, 8GB of RAM, and a couple of SATA drives, you are ready to play. This is the little server that could, and it shows. The thing runs VMware ESXi 5, Linux, Windows, FreeBSD (FreeNAS), and many other things. An important thing to remember is that the included disk controller uses fake RAID/driver assisted RAID, so don’t expect RAID support outside of Windows. With that limitation in mind, this makes an ideal small business backup device, home virtualization lab, or whatever other role you care to throw at it.

Fast forward to today and the device has served me and many others quite nicely. Although not a comprehensive list of installs, I can confirm successful installation on the following operating systems:

  • Debian Lenny (i386/AMD64)
  • Debian Squeeze (i386/AMD64), currently the Debian stable release
  • Debian Wheezy (i386/AMD64), currently the Debian testing release
  • Ubuntu 10.04 (i386/AMD64)
  • FreeNAS 0.7 (i386)
  • FreeNAS 8 (i386/AMD64)
  • VMware ESXi 4.1
  • VMware ESXi 5.0
  • Windows Server 2008 R2
  • Windows Small Business Server 2011
Whew! What a list, and that just touches the surface of what you can run. Those just happen to be the configurations that I have tested with success.

My current configuration consists of the base system running 8GB of RAM, the iLO card, 1x 64GB SSD, and 4x 1TB RAID edition drives. I’ve got Debian stable AMD64 running on / and have the 4x 1TB RE drives in a Linux md RAID 5 array mounted on /home. This acts as my internal NFS server and virtualization lab. The system runs VM guests well through KVM, although you will have to watch the CPU. Being a dual core 1.5GHz, the system will usually run out of CPU before you hit any other bottlenecks.
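For reference, the md RAID 5 /home volume described above boils down to something like the following; the /dev/sd[b-e] device names and the ext4 choice are assumptions, so check lsblk before copying anything:

# Build a 4-disk RAID 5 array from the 1TB RE drives (device names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Filesystem and mount point
mkfs.ext4 /dev/md0
mount /dev/md0 /home

# Persist the array definition and the mount across reboots
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
echo '/dev/md0 /home ext4 defaults 0 2' >> /etc/fstab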

In conclusion, if you need a flexible and affordable storage device for most small business or home needs, a cheap virtualization lab in a box, or similar configuration, you will not be disappointed by this device.

–himuraken

Debian Squeeze & Broadcom b43 etc

So you like Debian, and why wouldn’t you, it is great after all. Unfortunately, many laptops come from the factory sporting Broadcom-based wireless chipsets. So inevitably I complete a Debian install and Broadcom takes the wind out of my sails. I then trudge over to http://wiki.debian.org/wl#Squeeze and go through the paces. Why do I keep doing it over and over? Well, enough is enough; this isn’t a tricky script to write. So for your enjoyment, I have put it all together into a small bash script to simplify things for future installs. First, be sure to add the non-free repo to your /etc/apt/sources.list file.
Then create and run a .sh file containing:

#!/bin/bash
# Refresh package lists and pull in module-assistant plus the wireless tools
aptitude update
aptitude install module-assistant wireless-tools
# Build and install the Broadcom STA (wl) kernel module for the running kernel
m-a a-i broadcom-sta
# Keep the conflicting in-tree driver from loading at boot
echo blacklist brcm80211 >> /etc/modprobe.d/broadcom-sta-common.conf
update-initramfs -u -k $(uname -r)
# Unload the open-source drivers and load wl for the current session
modprobe -r b44 b43 b43legacy ssb brcm80211
modprobe wl
# Confirm the wireless interface shows up
iwconfig
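For completeness, here is a hedged example of the surrounding steps; the sources.list line is a typical Squeeze non-free entry (adjust the mirror to taste), and the script filename is arbitrary:

# Add contrib/non-free to your package sources (example mirror)
echo 'deb http://ftp.us.debian.org/debian squeeze main contrib non-free' >> /etc/apt/sources.list

# Save the script above as broadcom-wl.sh, then run it as root
chmod +x broadcom-wl.sh
./broadcom-wl.sh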

Enjoy!

–himuraken