Project Fi Network Switching & Dial Codes

After a day or two of spotty outbound service at the house on two different Project Fi accounts, using both my Nexus 6 and my Nexus 6P, I decided to reach out to support. After some troubleshooting we were able to determine that it was a T-Mobile specific issue that was affecting the ability to make outbound calls. Forcing the phone onto Sprint, US Cellular, or just using WiFi calling was flawless. At the end of the day it was just a local tower issue, as driving to a different location worked fine.

At the end of the call I compiled my notes and asked the support person to fill in the blanks. I am sure there are more codes, but here are the ones we used; just punch these into your dialer app on any Project Fi phone and off you go…

*#*#344636#*#* – Code to determine current network.
*#*#34777#*#* – Code to switch over to Sprint’s network.
*#*#34866#*#* – Code to switch over to T-Mobile’s network.
*#*#34872#*#* – Code to switch over to U.S. Cellular’s network.
*#*#342886#*#* – Code to go to Auto (auto switching enabled).

The support rep mentioned that after two hours or so the phone should default back to auto mode.

Hope this helps you in your travels, or just as a workaround for spotty/temporary service issues.

–himuraken

Dell PowerEdge 13th Gen Fan Noise

I recently came across the opportunity to assist a client with installing their new Dell PowerEdge R730XD. Quite the beefy server config: 2x 10-core CPUs, 128GB of RAM, 12x 4TB NL-SAS drives, you know, all the goodies. This machine is slated to replace an aging T610 that has seen better days performance-wise.

I went ahead and put an Intel 10GbE card in the server since all other hosts in the server room, including both backup boxes, are 10GbE enabled and are connected to our new Netgear 10GbE switch. Keep in mind this was an industry standard PCIe 10GbE card, and a particularly good one: the Intel X540-T2. After installing VMware ESXi, and later, Windows Server 2012 R2, users were complaining about the loud “jet sounding” noise coming from the server room. After logging into the Dell iDRAC Enterprise card I immediately noticed that the fans were running around 92%, which was roughly 15K RPM or thereabouts. This was regardless of operating system, mind you, so I couldn’t even blame Windows OR VMware this time.

After looking around online at various forums I realized that the system was running the fans near max speed/volume due to the presence of a non-certified PCIe card installed in the system. For all intents and purposes, non-certified means you didn’t pay through the nose to acquire the identical hardware from Dell. Essentially, since the Intel card doesn’t carry the Dell specific code/firmware to report back that “all is well over here in PCIe/temperature land”, the system defaults to running the fans in jet engine mode. For posterity’s sake, and to clarify, this will happen with pretty much any non-Dell card that is inserted. In researching the issue I found numerous folks who had installed actively cooled GPUs, old school 4x1Gbps network cards, you name it: high speed fan noise.

Well, no big deal, all you have to do is go into the Dell BIOS and modify a setting or two so that the system doesn’t run the fans at full steam when a card is inserted, right? Wrong! That would be the logical assumption and design choice to make, so you know they didn’t make it that easy. Read on below to understand how I finally got this system to quiet down. The info below is compiled from many sources and some of my own figuring out; I just thought it would be helpful to have it all in one place.

Step 1: Enable IPMI
For this step, enter your Dell server’s setup/config screen and get to the remote access configuration/iDRAC setup. In the iDRAC setup you need to do all of the standard stuff like assigning an IP and setting user credentials, but you MUST also set “Enable IPMI over LAN” to yes. This setting is crucial to completing the steps below successfully.
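
As an aside, if the box is already up and you would rather not reboot into setup, the racadm utility from Dell’s OpenManage tools can flip the same switch remotely. I have only done this against an iDRAC8, so treat the attribute name as a sketch to verify against your firmware:

racadm -r ipaddress -u root -p password set iDRAC.IPMILan.Enable 1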

Step 2: Get IPMI tools
Linux users can use their preferred package/distribution method to obtain ipmitool while Windows users will need to grab the Dell OpenManage BMC Utility and get it installed.
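
On a Debian/Ubuntu box, for example, it is just a package away (substitute yum or your distro’s equivalent elsewhere):

sudo apt-get install ipmitool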

Next, open up a command prompt and navigate to the directory the BMC utility installed to; on my system this was: C:\Program Files (x86)\Dell\SysMgt\bmc\

From there you will see several files; the program that we are using here is ipmitool.exe. Go ahead and run ipmitool.exe without any switches/arguments just to make sure it’s installed and working.
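
In other words, something like this from the command prompt (the path is simply where the installer dropped it on my system):

cd "C:\Program Files (x86)\Dell\SysMgt\bmc"
ipmitool.exe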

Step 3: The fix
The third and final step is essentially ‘the fix’. This is where you can check the status, and then disable or enable the system’s cooling response to third party cards that are installed on the PCIe bus. This part was a little frustrating at first because I was working in the right direction and was just about there, but the commands weren’t being sent or interpreted the way they should have been.

You must use the lanplus option instead of lan, but it is important to note that lanplus does NOT work unless you’ve enabled the “Enable IPMI over LAN” setting that I mentioned back in step 1. The non-intuitive part was that although I was running the right command aside from lan vs lanplus, I really didn’t get any clear feedback as to why the command wouldn’t “take”.

Anyhow, here is the base command which you need to acquaint yourself with:

ipmitool -I lanplus -H ipaddress -U root -P password raw

Obviously you will need to substitute your own iDRAC IP, user, and password. After that, just tack on one of the three commands below.

Disable Third-Party PCIe Card Default Cooling Response:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00

Enable Third-Party PCIe Card Default Cooling Response:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00

To check the current third party PCIe card default cooling setting:
ipmitool -I lanplus -H ipaddress -U root -P password raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00

This response means disabled:
16 05 00 00 00 05 00 01 00 00

This response means enabled:
16 05 00 00 00 05 00 00 00 00
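
If you find yourself checking more than one box, a tiny wrapper saves squinting at the hex. This is just a sketch of my own, assuming a POSIX shell with ipmitool on the PATH; substitute your own iDRAC details for the placeholders:

#!/bin/sh
# Query the third-party PCIe cooling response on a Dell iDRAC and decode it.
IDRAC_IP="ipaddress"     # placeholder: your iDRAC IP
IDRAC_USER="root"        # placeholder: your iDRAC user
IDRAC_PASS="password"    # placeholder: your iDRAC password

RESPONSE=$(ipmitool -I lanplus -H "$IDRAC_IP" -U "$IDRAC_USER" -P "$IDRAC_PASS" \
  raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00)

# Per the byte strings above: response ending in ...01 00 00 means disabled,
# ...00 00 00 means enabled.
case "$RESPONSE" in
  *"05 00 01 00 00"*) echo "Third-party cooling response: disabled (fans behave)" ;;
  *"05 00 00 00 00"*) echo "Third-party cooling response: enabled (jet engine mode)" ;;
  *) echo "Unexpected response: $RESPONSE" ;;
esac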

After disabling the third party cooling response my system went from the previously mentioned 15K RPM mark down to a user verified sane noise level/speed of around 6K RPM.
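
If you want to double check the fan speeds without clicking around the iDRAC web interface, ipmitool can read the fan sensors directly. This was just my own sanity check rather than anything Dell documents for this fix:

ipmitool -I lanplus -H ipaddress -U root -P password sdr type Fan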

A key takeaway and disappointment for me is that in this day and age of widespread standards and simplicity, things are becoming increasingly proprietary and complex.

–himuraken

Proxmox Lab: Intel NUC

Thanks for reading this first post in a new series I am putting together titled “Proxmox Lab”. In this blog series I will be covering various topics related to Proxmox and the hardware I have tested it on.

In this installment we will discuss a small footprint, low power build that you can carry in your pocket, well, if you wear cargo shorts with the big pockets on the side.

Around two years ago I purchased one of the earlier all-black Intel NUC systems and a 32GB Crucial mSATA disk to run Proxmox 3.1 or 3.2, I forget the version at the time. Anyhow, around the same time that I attempted to complete the build, a client called up and expressed an immediate need for a small PC that could hide behind a conference room wall mount TV. Just like that, my Intel NUC disappeared…

Months later I was able to find enough free time to get a new NUC; this time it was the more modern, current as of the time of this post, silver and black version. I went with the Core i3 variant as I didn’t want to go Celeron and the i5 was out of stock. Armed with 8GB of low voltage RAM (1.35V is required), I installed Proxmox to a 32GB Crucial mSATA drive and off I went. I strictly used the local storage for ISOs and the Proxmox system itself. This system ran excellently and never gave me so much as a hiccup. The combination of a super fast BIOS and the SSD boot volume meant that this thing would boot or reboot so fast that I had to double check that I had actually shut it down, quite a nice problem to have.

As so often happens here at the home office lab, I change hardware pretty frequently. Often times it is due to client needs or desires, other times it’s simply that I see something new and shiny. Regardless of the reasons, I rarely regret the money spent as the investment always comes back many times over in the form of education and experience gained.

This i3 system was eventually replaced about a year later when someone I knew really wanted the i3 NUC, so I sent it packing to a new and better home. Since I had grown accustomed to the silence, low power/heat, and wonderfully small size of the NUC, I had to find something like it to replace the one I had just gotten rid of. Well, after little debate, I ordered a new Intel NUC again, this time armed with a higher clock speed and the wondrous Core i5 badge. System memory was boosted to the full 16GB allowed by the board and off I went. Just like its Core i3 counterpart, this NUC performed flawlessly in all regards. Stable, fast, and truly affordable.

If you are looking for low power (as in electricity consumption), high performance, and a physically small and beautiful package for your home lab test machine/hypervisor, be sure to take a serious look at the Intel NUC. Just imagine a shoe box full of Intel NUCs acting as a full-on Proxmox cluster! Aside from the physical memory constraints inherent to this platform, I have seriously considered putting a handful of these into client networks as small footprint Proxmox clusters.
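
If you ever do fill that shoe box, clustering the NUCs is only a couple of commands with the pvecm tool that ships with Proxmox. A rough sketch; the cluster name and IP below are made up:

On the first NUC:
pvecm create nuc-cluster

On each additional NUC, pointing at the first node’s IP:
pvecm add 192.168.1.50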

Pros: Tiny system, low energy usage, high performance.
Cons: Usually a tad more expensive than a comparable i3/i5 SFF desktop PC with the same specs. Requires mSATA and low voltage memory, both of which you probably do not have lying around.

A final note: unless you are doing CPU-intensive tasks, which you probably are not, skip the i5 variant. While it works great, I noticed zero performance increase over my Core i3 NUC. Obviously, this varies from workload to workload, so be sure you know what you need.

I hope this helps any prospective home lab enthusiasts out there, and be sure to stay tuned for my next build, which I just finished ordering…

–himuraken

Power Consumption Table

This page is reserved for keeping track of power usage of commonly used systems and components. Eventually this should be combined into a sortable table, but this will do for now.

Dell PowerEdge T430
PSU: 495W
CPU(s):
System powered off: 8W
Idle 0 drive(s): 108W
Idle 1x2TB Constellation SAS drive(s): 119W
Idle 4x2TB Constellation SAS drive(s): 147W
Idle 8x2TB Constellation SAS drive(s): 181W

Lenovo TS440 8Bay (XUX sku)
PSU: 450W
CPU(s): 1x Intel Xeon E3-1225 v3 (3.20GHz)
System powered off: 4W
Idle 0 drive(s): 54W
Idle 1x1TB Constellation SAS drive(s): 63W
Idle 4x1TB Constellation SAS drive(s): 87W
Idle 8x1TB Constellation SAS drive(s): 115W

FreeNAS on the Lenovo TS440

With a recent build fresh on the brain, I figured I’d share some thoughts on the hardware used, as I hope it helps others. I googled and tried to find information ahead of time but found only sparse info.

Recently, I set out to replace my current small office FreeNAS box. From a performance standpoint the box looked great on paper: an AMD 8-core @ 3.4GHz, 32GB of “good” memory, an expensive Seasonic power supply, and 16 drives attached to a pair of Dell PERC H200 controllers, all packed into a high end Lian Li full tower. The tower had a SAS backplane and the 5.25″ bays had two 4-disk SATA enclosures installed. The towering behemoth worked like a champ for quite some time. Day in and day out, the trusty homebrew served up NFS exports to ESXi, Proxmox, and numerous other LAN hosts ranging from Raspberry Pis to FOG imaging VMs and things of that nature.

LIFE WAS GOOD AND IGNORANCE IS BLISS.

Once or twice while physically away from the box, meaning out of town, I received alerts from an external monitoring service that some of my VMs were down. Of course this only happens when you are away, and only to systems that DON’T have a hardware-level remote access solution like IPMI, Intel vPro, HP iLO, or Dell DRAC. But I digress, for surely it is OK for your entire FreeNAS box to just mysteriously power off. Not a UPS failure, just an old fashioned “who knows”. Take all that plus the frustration of not being able to power the box on remotely and you begin to see why the homebrew had to go.
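
For reference, that remote power control is exactly what a proper BMC with IPMI over LAN gives you; something along these lines, with a placeholder address and credentials:

ipmitool -I lanplus -H bmc-ipaddress -U admin -P password chassis power status
ipmitool -I lanplus -H bmc-ipaddress -U admin -P password chassis power on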

Some cursory searches online and a quick check with the fine folk over in #freenas got me thinking about custom vs prebuilt boxes. After comparing prices of various boards and form factors I determined that the Lenovo TS series of tower servers might be a good fit. Several people on #freenas and the internet in general had info on the TS140, which is the smaller and cheaper of the two, but I wanted at least 8 drive bays. The TS140 looks nice if you only need 1-4 cabled drives; hot swap isn’t an option on the little guy.

Armed with what seemed like proper info at the time, I ordered the 4-bay variant of the TS440 since it was on sale for a meager $299.99 with free shipping. My plan was to test the system as it came and then add the secondary drive cage and backplane for a grand total of 8 hard drives. As it turns out, my plan was ill-conceived, as I could not locate any vendor selling the hardware I needed. I reached out to a well known IT and Lenovo vendor to get the info I needed. Much to my dismay, I was informed that Lenovo does not sell the parts needed to take the 4-bay all the way up to 8 drives. This detail is quite frustrating since the documentation I found stated that the system can be used with a 4- or 8-bay config. That is technically true, but only if you order the right SKU/Lenovo specific part number in the first place.

I am happy to report that the TS440 with the XUX SKU is humming along happily now. The XUX model comes with the drive cage, backplane, and add-on controller necessary to run 8 drives. The RAID controller included in the system recognized my 4TB SATA disks and 3TB SAS disks without complaint. The controller supports RAID levels 0/1/10 out of the box but defaults to exporting disks as JBOD as long as you don’t manually set them up in a RAID array, perfect for ZFS. An option that I also decided to go with was the second power supply: the TS440 comes with a single hot swap supply and a spacer/blank slot for the second.
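
Just to illustrate why the JBOD default matters: with the disks passed straight through, ZFS gets to handle redundancy itself. In FreeNAS you would normally build the pool from the GUI, but under the hood it boils down to something like this sketch (the pool name and device names are placeholders and will differ on your system):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7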

Hope this helps you with your small office NAS builds if you are considering a Lenovo TS440.

–himuraken

The iPhone 5

The smartphone market just got hotter. The Apple iPhone 5 ($200-$400) didn’t quite wow us, but it does bring some cool features that are groundbreaking for Apple: a 4-inch widescreen Retina display, a brand new A6 processor, an aluminum and glass enclosure, 4G LTE and dual-channel 802.11n 5GHz networking, an improved 8 megapixel iSight camera with panorama mode, a FaceTime HD camera, and 16, 32, or 64GB of storage.