ARM vs. Intel Atom comparison

Van Smith wrote an awesome article comparing current ARM processors with their low-power x86 counterparts, such as the Intel Atom.

Here’s the conclusion of his performance benchmark tests:

The ARM Cortex-A8 achieves surprisingly competitive performance across many integer-based benchmarks while consuming power at levels far below the most energy miserly x86 CPU, the Intel Atom. In fact, the ARM Cortex-A8 matched or even beat the Intel Atom N450 across a significant number of our integer-based tests, especially when compensating for the Atom’s 25 percent clock speed advantage.

However, the ARM Cortex-A8 sample that we tested in the form of the Freescale i.MX515 lived in an ecosystem that was not competitive with the x86 rivals in this comparison. The video subsystem is very limited. Memory support is a very slow 32-bit, DDR2-200MHz.

Languishing across all of the JavaScript benchmarks, the ARM Cortex-A8 was only one-third to one-half as fast as the x86 competition. However, this might partially be a result of the very slow memory subsystem that burdened the ARM core.

More troubling is the unacceptably poor double-precision floating-point throughput of the ARM Cortex-A8. While floating-point performance isn’t important to all tasks and is certainly not as important as integer performance, it cannot be ignored if ARM wants its products to successfully migrate upwards into traditional x86-dominated market spaces.

Sending alerts to your Linux desktop when things go wrong

I run GNOME on my work desktop, and even with our various monitoring solutions I still use some custom notification tools to get alerted when specific issues occur. One of these tools is notify-send (part of libnotify), which allows you to create a visible notification inside your desktop workspace. This tool has several useful options, which are displayed when you run notify-send with the “-?” option:

$ /usr/bin/notify-send -?
  /usr/bin/notify-send [OPTION...]  [BODY] - create a notification

Help Options:
  -?, --help                        Show help options

Application Options:
  -u, --urgency=LEVEL               Specifies the urgency level (low, normal, critical).
  -t, --expire-time=TIME            Specifies the timeout in milliseconds at which to expire the notification.
  -i, --icon=ICON[,ICON...]         Specifies an icon filename or stock icon to display.
  -c, --category=TYPE[,TYPE...]     Specifies the notification category.
  -h, --hint=TYPE:NAME:VALUE        Specifies basic extra data to pass. Valid types are int, double, string and byte.
  -v, --version                     Version of the package.

To use this tool to send an alert when a fault is detected, I typically wrap some conditional logic to parse the output of one or more commands:


system_check=$(check_server_x)   # check_server_x is a hypothetical health check that prints 1 on failure

if [ "${system_check}" = "1" ]; then
    /usr/bin/notify-send -t 10000 -u critical "Problem with server X"
fi

The code above runs the command embedded inside $(), and captures its output in the variable system_check. If the value of the output is 1, notify-send is invoked to send a notification with the string “Problem with server X” to my desktop. The real value of notify-send comes when you combine it with the clusterit tools, and generate notifications based on the result of running a command across ALL of your servers. Nice!
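Here is a rough sketch of that combination. The health check (load average), the threshold, and the host output are all made up, and `dsh` is stubbed out as a shell function that fakes clusterit's "host : output" format, so the snippet runs standalone; on a real system you would delete the stub and uncomment notify-send:

```shell
# Stand-in for clusterit's dsh (normally on PATH and driven by $CLUSTER);
# it fakes two hosts' /proc/loadavg output in dsh's "host : output" format.
dsh() { printf 'foo1 : 0.12 0.06 0.01\nfoo2 : 9.87 7.65 5.43\n'; }

# Flag any host whose 1-minute load average exceeds 5 (arbitrary threshold).
busy=$(dsh cat /proc/loadavg | awk '$3+0 > 5 {print $1}')

if [ -n "${busy}" ]; then
    # /usr/bin/notify-send -t 10000 -u critical "High load on: ${busy}"
    echo "ALERT: High load on: ${busy}"
fi
```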

Sending commands to multiple Linux and Solaris machines through a single graphical shell window

I have been a long time user of clusterssh, which is a fantastic tool for sending commands to multiple hosts. Inside each terminal window you can type a command, or you can use the master console to send a command to all of the windows you opened. The clusterit suite comes with a similar tool, dvt. Dvt uses the CLUSTER variable just like dsh and company, and has a master console that allows you to send commands to each server in that list. To use dvt, you will first need to create a list of hosts you want to connect to:

$ cat /home/matty/cluster/nodes

Once you have a list of hosts, you will need to export this list through the CLUSTER variable:

$ export CLUSTER=/home/matty/cluster/nodes

After these two items have been completed, you can run dvt to create an xterm on each server. You can then type in each terminal window, or use the master window to send the same command to all of the windows you have opened. This is another awesome tool that every SysAdmin should have in their tool belt, and clusterit is easy to install!

Windows Server 2008 is a rather nice product

I’ve been crazy busy over the past few months. In addition to preparing for my RHCE exam in July, I have also been studying for the Microsoft Windows Server 2008 MCITP certification. This is a huge change for me, and I wouldn’t have thought in a million years that I would be so focused on learning everything there is to know about Windows. But the reality is almost EVERY company out there runs Microsoft software, and to truly solve problems you need to know what each OS and application is capable of.

The more I mess around with Windows 2008, Active Directory, Windows failover clustering and the various applications that run on top of Windows Server, the more I’m starting to like it. While I don’t expect to become a full time Windows administrator (vSphere, Solaris, Linux, AIX and storage will continue to be #1 on my list of things to learn about), I have definitely found a new appreciation for Windows and hope to use it more in the future. If you are a heavy Windows Server user, please let me know what you like and dislike about it. I’ll share my list in a follow up post.

Managing 100s of Linux and Solaris machines with clusterit

I use numerous tools to perform my SysAdmin duties. One of my favorite tools is clusterit, which is a suite of programs that allows you to run commands across one or more machines in parallel. To begin using the awesomeness that is clusterit, you will first need to download and install the software. This is as easy as:

$ wget

$ tar xfvz clusterit*.gz

$ cd clusterit* && ./configure --prefix=/usr/local/clusterit && make && make install

Once the software is installed, you should have a set of binaries and manual pages in /usr/local/clusterit. To use the various tools in the clusterit/bin directory, you will first need to create one or more cluster files. Each cluster file contains a list of hosts you want to manage as a group, and each host is separated by a newline. Here is an example:

$ cat servers
foo1
foo2
foo3
foo4
foo5

The cluster file listed above contains 5 servers named foo1 – foo5. To tell clusterit you want to use this list of hosts, you will need to export the file via the $CLUSTER environment variable:

$ export CLUSTER=/home/matty/clusters/servers

Once you specify the list of hosts you want to use in the $CLUSTER variable, you can start using the various tools. One of the handiest tools is dsh, which allows you to run commands across the hosts in parallel:

$ dsh uptime

foo1  :   2:17pm  up 8 day(s), 23:37,  1 user,  load average: 0.06, 0.06, 0.06
foo2  :   2:17pm  up 8 day(s), 23:56,  0 users,  load average: 0.03, 0.03, 0.02
foo3  :   2:17pm  up 7 day(s), 23:32,  1 user,  load average: 0.27, 2.04, 3.21
foo4  :   2:17pm  up 7 day(s), 23:33,  1 user,  load average: 3.98, 2.07, 0.96
foo5  :   2:17pm  up  5:06,  0 users,  load average: 0.08, 0.09, 0.09

In the example above I ran the uptime command across all the servers listed in the file referenced by the CLUSTER variable! You can also do more complex activities through dsh:

$ dsh 'if uname -a | grep SunOS >/dev/null; then echo Solaris; fi'
foo1 : Solaris
foo2 : Solaris
foo3 : Solaris
foo4 : Solaris
foo5 : Solaris

This example uses dsh to run uname across a batch of servers, and prints the string Solaris if the keyword “SunOS” is found in the uname output. Clusterit also comes with a distributed scp command called pcp, which you can use to copy a file to a number of hosts in parallel:

$ pcp /etc/services /tmp

services                   100%  616KB 616.2KB/s   00:00    
services                   100%  616KB 616.2KB/s   00:00    
services                   100%  616KB 616.2KB/s   00:00    
services                   100%  616KB 616.2KB/s   00:00    
services                   100%  616KB 616.2KB/s   00:00    

$ openssl md5 /etc/services
MD5(/etc/services)= 14801984e8caa4ea3efb44358de3bb91

$ dsh openssl md5 /tmp/services
foo1 : MD5(/tmp/services)= 14801984e8caa4ea3efb44358de3bb91
foo2 : MD5(/tmp/services)= 14801984e8caa4ea3efb44358de3bb91
foo3 : MD5(/tmp/services)= 14801984e8caa4ea3efb44358de3bb91
foo4 : MD5(/tmp/services)= 14801984e8caa4ea3efb44358de3bb91
foo5 : MD5(/tmp/services)= 14801984e8caa4ea3efb44358de3bb91

In this example I am using pcp to copy the file /etc/services to each host, and then using dsh to create a checksum of the file that was copied. Clusterit also comes with a distributed top (dtop), a distributed df (pdf), as well as a number of job control tools! If you are currently performing management operations using the old for loop:

for host in `cat hosts`
do
    ssh ${host} 'run_some_command'
done

You really owe it to yourself to set up clusterit. You will be glad you did!

How to ensure you can boot if your initrd image has problems

I was playing around with some new kernel bits a few weeks back, and needed to update my initrd image. Having encountered various situations where a box wouldn’t boot due to a botched initrd file, I have become overly protective of this file. Now each time I have to perform an update, I will first create a backup of the file:

$ cp /boot/initrd- /boot/initrd-
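The same backup idiom, sketched with a made-up kernel version (2.6.30.9 is hypothetical) and a scratch directory standing in for /boot, so it can be run safely anywhere:

```shell
kver=2.6.30.9                       # hypothetical version string
boot=$(mktemp -d)                   # scratch directory standing in for /boot
echo "fake initrd" > "${boot}/initrd-${kver}.img"

# Keep a pristine copy of the working image before regenerating it.
cp "${boot}/initrd-${kver}.img" "${boot}/initrd-${kver}.img.bak"
```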

Once I have a working backup, I like to add a menu.lst entry that allows me to restore to a known working state:

title Fedora 11 (
     root (hd0,0)
     kernel /vmlinuz- ro root=LABEL=/
     initrd /initrd-

If my changes cause the machine to fail to boot, I can pick the backup menu entry and I’m off and running. If you don’t want to pollute your menu.lst, you can also specify the initrd manually from the grub command line. Backups are key, and not having to boot into rescue mode is huge. :)
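For reference, a sketch of that manual restore from the grub (legacy) command line, reached by pressing “c” at the boot menu. The file names below are hypothetical, since they depend on your kernel version and backup naming:

```
grub> root (hd0,0)
grub> kernel /vmlinuz-2.6.30.9 ro root=LABEL=/
grub> initrd /initrd-2.6.30.9.img.bak
grub> boot
```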