A couple of quick and easy ways to display JSON data on the Linux command line

I interact with RESTful services daily and periodically need to review the JSON objects exposed through one or more endpoints. There are several Linux utilities that can take a JSON object and print the object in an easily readable form. The pygmentize utility (available in the python-pygments package) can be fed a JSON object via a file or STDIN:

$ curl http://bind:8080/json 2>/dev/null | pygmentize |more

In the output above I’m retrieving a JSON object from the Bind statistics server and feeding it to pygmentize via STDIN. Pygmentize takes the object it is given and produces a nicely formatted, syntax-highlighted JSON object on STDOUT.
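
If pygmentize isn’t available, Python’s bundled json.tool module gives a similar quick pretty-print (no syntax highlighting, but it ships with every Python install):

```shell
# Pretty-print JSON from STDIN using Python's standard library json.tool.
echo '{"version": "9.11.2", "boot-time": "2017-09-09T11:56:04.442Z"}' \
    | python3 -m json.tool
```

The same works with `curl ... | python3 -m json.tool` anywhere you would have used pygmentize.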

Pygmentize is super handy, but the real powerhouse of the JSON command line processors is jq. This amazing utility has numerous features which allow you to retrieve keys, values, objects and arrays, and to apply complex filters and operations to these elements. In its simplest form jq will take a JSON object and produce pretty output:

$ curl http://bind:8080/json 2>/dev/null | jq '.' |more
{
  "json-stats-version": "1.2",
  "boot-time": "2017-09-09T11:56:04.442Z",
  "config-time": "2017-09-09T11:56:04.520Z",
  "current-time": "2017-09-09T12:24:43.982Z",
  "version": "9.11.2",

To see the real power of jq, we need to look at how operations and filters are applied to a JSON object. The Bind statistics server produces a JSON object similar to the following (this was heavily edited to conserve space):

    "NS", 1,

Let’s say you wanted to view the number of A, NS, PTR and MX records queried. We can use a jq filter to grab the qtypes object and pass it through a second filter to retrieve the values of the A, NS, PTR and MX keys:

$ curl http://bind:8080/json 2>/dev/null \
          | jq -r '.qtypes| "\(.A) \(.NS) \(.PTR) \(.MX)"'
7023 1 153 1

In this example I am using string interpolation to turn the values of A, NS, PTR and MX into a string which is then printed on STDOUT. Jq also has a number of useful math operations which can be applied to the values in a JSON object. To sum the totals of the various failure response codes in the rcodes object we can use the addition operation:

$ curl http://bind:8080/json 2>/dev/null \
          | jq -r '.rcodes| .NXDOMAIN + .SERVFAIL \
                  + .REFUSED + .FORMERR'

In this example I am retrieving the values of the NXDOMAIN, SERVFAIL, REFUSED and FORMERR keys and summing them with the addition operator. If you are new to jq or JSON, I would highly suggest reading the jq manual and Introducing JSON. These are excellent resources!
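
Beyond plain `+` arithmetic, jq also ships a built-in add filter that sums every value in an object or array at once, which saves typing when you want a total across all keys. A quick sketch (the counter values here are made up for illustration):

```shell
# Sketch: jq's built-in 'add' filter sums every value in the object,
# so there is no need to name each rcode key by hand.
echo '{"NOERROR": 100, "NXDOMAIN": 5, "SERVFAIL": 2, "REFUSED": 1}' \
    | jq 'add'    # prints 108
```

The same filter works against the live stats: `curl http://bind:8080/json 2>/dev/null | jq '.rcodes | add'`.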

Getting more out of your Linux servers with moreutils

I accidentally came across the moreutils package a few years back, and the day I did my life changed forever. This package contains some killer utilities and fills some nice gaps in the *NIX tool chain. Here is a list of the binaries in this package (descriptions taken from the man page of each utility):

chronic - runs a command quietly unless it fails
combine - combine sets of lines from two files using boolean operations
errno - look up errno names and descriptions
ifdata - get network interface info without parsing ifconfig output
ifne - run a command if the standard input is not empty
isutf8 - check whether files are valid UTF-8
lckdo - run a program with a lock held
mispipe - pipe two commands, returning the exit status of the first
pee - tee standard input to pipes
sponge - soak up standard input and write to a file
ts - timestamp input
vidir - edit directory
vipe - edit pipe
zrun - automatically uncompress arguments to command

I’m especially fond of errno, chronic and pee, but my favorite utilities have to be ifne and ts. Ifne runs the command it is given only if standard input is not empty. One handy use is e-mailing someone when a monitoring program spits out an error:

$ hardware_monitor | ifne mail -s "Problem detected with the hardware on `/bin/hostname`" admins

The ts utility is just as useful. Say you have a program that randomly spits out lines of gik and you want to know when each line of gik occurred. To get a timestamp you can pipe the program’s output to ts:

$ gik_generator | ts
Nov 02 09:55:11 The world needs more cow bell!
Nov 02 09:55:12 The world needs more cow bell!
Nov 02 09:55:13 The world needs more cow bell!
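
If moreutils isn’t installed everywhere you work, a rough stand-in for ts can be cobbled together with awk by shelling out to date for every line (a sketch; the real ts is more efficient and supports custom formats):

```shell
# Sketch: prefix each line of input with a timestamp, roughly like ts.
# awk runs 'date' once per line, which is slow but works everywhere.
printf 'line one\nline two\n' | awk '{
    cmd = "date +\"%b %d %H:%M:%S\""
    cmd | getline stamp
    close(cmd)
    print stamp, $0
}'
```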

I love coming across tools that make troubleshooting and shell scripting more enjoyable. Now we just need more cow bell!

Viewing Linux tape drive statistics with tapestat

A while back I wrote a blog entry showing how to get tape drive statistics with systemtap. That script wasn’t very reliable, and I would frequently see it crash after collecting just a few samples. Thanks to the work of some amazing Linux kernel engineers, I no longer have to touch systemtap. Recent Linux kernels expose a number of incredibly useful tape statistics through the /sys file system:

$ pwd

$ ls -l
total 0
-r--r--r-- 1 root root 4096 Oct 10 16:15 in_flight
-r--r--r-- 1 root root 4096 Oct 10 16:15 io_ns
-r--r--r-- 1 root root 4096 Oct 10 16:15 other_cnt
-r--r--r-- 1 root root 4096 Oct 10 15:30 read_byte_cnt
-r--r--r-- 1 root root 4096 Oct 10 15:30 read_cnt
-r--r--r-- 1 root root 4096 Oct 10 16:15 read_ns
-r--r--r-- 1 root root 4096 Oct 10 16:15 resid_cnt
-r--r--r-- 1 root root 4096 Oct 10 15:30 write_byte_cnt
-r--r--r-- 1 root root 4096 Oct 10 15:30 write_cnt
-r--r--r-- 1 root root 4096 Oct 10 16:15 write_ns
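
The counters are plain numbers (bytes, operations, nanoseconds), so two samples and a little shell arithmetic are enough for a quick throughput estimate. A minimal sketch, assuming the samples were read from write_byte_cnt (the helper name and the values are mine):

```shell
# Sketch: convert two samples of a byte counter into an MB/s rate.
# Usage: mb_per_s BEFORE AFTER SECONDS
mb_per_s() {
    echo $(( ($2 - $1) / $3 / 1024 / 1024 ))
}

# On a real system the samples would come from the stats files, e.g.:
#   before=$(cat write_byte_cnt); sleep 5; after=$(cat write_byte_cnt)
# Made-up values below: 500 MiB written over 5 seconds.
mb_per_s 0 524288000 5    # prints 100
```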

There is also a tapestat utility in the sysstat package that can be used to summarize these statistics:

$ tapestat -z 1

Linux 2.6.32-642.1.1.el6.x86_64 (wolfie)        10/10/2016      _x86_64_        (24 CPU)

Tape:     r/s     w/s   kB_read/s   kB_wrtn/s %Rd %Wr %Oa    Rs/s    Ot/s
st0         0     370           0       94899   0  22  22       0       0
st1         0     367           0       93971   0  18  19       0       0
st2         0     315           0       80885   0  19  19       0       0
st3         0      27           0        6979   0   1   1       0       0

Tape:     r/s     w/s   kB_read/s   kB_wrtn/s %Rd %Wr %Oa    Rs/s    Ot/s
st0         0     648           0      165888   0  30  30       0       0
st2         0     362           0       92928   0  17  17       0       0

This is a useful addition and I no longer have to worry about systemtap croaking when I’m tracking down issues.

Sudo insults — what a fun feature!

I think humor plays a big role in life, especially the life of a SysAdmin. This weekend I was cleaning up some sudoers files and came across a reference to the “insults” option in the documentation. Here is what the manual says:

“insults  If set, sudo will insult users when they enter an incorrect password. This flag is off by default.”

This of course piqued my curiosity, and the description in the online documentation got me wondering what kind of insults sudo would spit out. To test this feature out I compiled sudo with the complete set of insults:

$ ./configure --prefix=/usr/local --with-insults --with-all-insults

$ gmake

$ gmake install

To enable insults I added “Defaults insults” to my sudoers file. This resulted in me laughing myself silly:

$ sudo /bin/false

Take a stress pill and think things over.
This mission is too important for me to allow you to jeopardize it.
I feel much better now.
sudo: 3 incorrect password attempts

$ sudo /bin/false

Have you considered trying to match wits with a rutabaga?
You speak an infinite deal of nothing
You speak an infinite deal of nothing
sudo: 3 incorrect password attempts

Life without laughter is pretty much a useless life. You can quote me on that one! ;)

Summarizing system call activity on Linux systems

Linux has a gazillion debugging utilities available. One of my favorite tools for debugging problems is strace, which allows you to observe the system calls a process is making in real time. Strace also has a “-c” option to summarize system call activity:

$ strace -c -p 28009

Process 28009 attached
Process 28009 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.17    0.001278           2       643           select
 14.80    0.000242           1       322           read
  7.03    0.000115           0       322           write
------ ----------- ----------- --------- --------- ----------------
100.00    0.001635                  1287           total

The output contains the percentage of time spent in each system call, the total time in seconds, the microseconds per call, the total number of calls, a count of errors, and the name of the system call that was made. This output has numerous uses and is a great way to observe how a process is interacting with the kernel. Good stuff!
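
When the summary is dominated by noise, strace’s “-e” option can narrow what gets counted. A sketch, restricting the summary to file-related system calls with ls as the victim (trace=file is a standard strace filter class):

```shell
# Sketch: summarize only the file-handling system calls made by ls.
# -c produces the summary table, -e trace=file restricts the counting.
# Note: strace writes the summary table to stderr.
strace -c -e trace=file ls /tmp > /dev/null
```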

Breaking the telnet addiction with netcat

After many years of use it’s become almost second nature to type ‘telnet <HOST> <PORT>’ when I need to see if a system has TCP port <PORT> open. Newer systems no longer install telnet by default:

$ telnet google.com 80
-bash: telnet: command not found

I can’t think of a valid reason to keep telnet around (though I’m sure there are use cases), and netcat and tcpdump are a billion times better for debugging TCP issues. So I need to apply new microcode to my brain to perform a ‘s/telnet/nc -v/g’ each time I need to test if a TCP port is open:

$ nc -v google.com 80
Connection to google.com 80 port [tcp/http] succeeded!
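
For scripts, nc’s -z (scan without sending data) and -w (timeout) flags make tidy port checks; and in a real pinch, bash can open a TCP connection itself through its /dev/tcp pseudo-device. A sketch, with port_open as my name for the helper:

```shell
# Sketch: return success if HOST:PORT accepts a TCP connection.
# Uses bash's /dev/tcp pseudo-device, so no external tools are required.
port_open() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open google.com 80; then
    echo "google.com:80 is open"
else
    echo "google.com:80 is closed"
fi
```

The nc equivalent would be `nc -z -w 3 google.com 80`.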

Anyone else have a telnet attachment they just can’t break? :)