There have been a number of folks in the SysAdmin blogosphere posting about iotop, which is a Linux utility for viewing I/O activity in a top-like display. To see just how useful iotop was, I installed it and ran it with the “-o” option (only report processes that are actively doing I/O):
$ iotop -o
Total DISK READ: 11.85 K/s | Total DISK WRITE: 118.81 M/s
PID USER DISK READ DISK WRITE SWAPIN IO> COMMAND
783 root 7.90 K/s 620.04 K/s 0.00 % 1.37 % [kjournald]
6199 matty 3.95 K/s 118.21 M/s 0.00 % 0.07 % dd if /dev/zero of /tmp/foo
This is cool stuff, and I am stoked that I can now easily view I/O activity on my Linux and Solaris hosts. Niiiice!
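Under the hood, iotop gets its numbers from the kernel's per-process I/O accounting (it talks to the taskstats interface, but the same counters are also exposed in /proc/&lt;pid&gt;/io on kernels with I/O accounting enabled). As a quick sanity check, you can read the raw counters for a process directly — a sketch, assuming a Linux host with /proc mounted:

```shell
#!/bin/sh
# Read the raw per-process I/O counters that iotop aggregates.
# $$ is the current shell; substitute any PID you own.
cat /proc/$$/io
```

The read_bytes and write_bytes fields are the numbers that end up in iotop's DISK READ and DISK WRITE columns.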
I recently learned about the ChrootDirectory directive in OpenSSH 5.2, and wanted to play around with it to see what it was capable of. To begin, I created a couple of users that would be chroot’ed to their home directories when they logged into the server with sftp. Once the users were created, I added the following configuration stanza to my sshd_config file to chroot these users when they logged in with their sftp client:
Subsystem sftp internal-sftp
Match User u1,u2,u3
ChrootDirectory /home/%u
X11Forwarding no
AllowTcpForwarding no
ForceCommand internal-sftp
Once these directives were added, I started up the daemon in debug mode:
$ /usr/local/sbin/sshd -ddd -f /usr/local/etc/sshd_config
Debug mode will cause the daemon to log verbosely to stdout, which is extremely useful for locating problems with new configuration directives. Now that the daemon was running, I tried to login with the user u1:
$ sftp -oPort=222 u1@192.168.1.15
Connecting to 192.168.1.15...
u1@192.168.1.15's password:
Read from remote host 192.168.1.15: Connection reset by peer
Connection closed
The first attempt was a no go, but luckily verbose logging made debugging this issue a snap:
debug3: mm_get_keystate: Getting compression state
debug3: mm_get_keystate: Getting Network I/O buffers
debug3: mm_share_sync: Share sync
debug3: mm_share_sync: Share sync end
debug3: safely_chroot: checking '/'
debug3: safely_chroot: checking '/home/'
debug3: safely_chroot: checking '/home/u1'
bad ownership or modes for chroot directory "/home/u1"
After changing /home/u1 to be owned by root (sshd requires the chroot directory and every path component above it to be root-owned and not group- or world-writable), I was able to login and poke around:
$ sftp -oPort=222 u1@192.168.1.15
Connecting to 192.168.1.15...
u1@192.168.1.15's password:
sftp> pwd
Remote working directory: /
sftp> ls -l
drwxr-xr-x 2 1001 1001 4096 Mar 15 15:03 uploads
sftp> cd uploads
sftp> ls -l
-rw-r--r-- 1 1001 1001 39655552 Mar 15 15:04 techtalk1.mp3
sftp> put techtalk2*
Uploading techtalk2.mp3 to /uploads/techtalk2.mp3
techtalk2.mp3 3% 3776KB 2.3MB/s 00:39 ETA
sftp> ls -l
-rw-r--r-- 1 1001 1001 5046272 Mar 15 15:11 techtalk2.mp3
-rw-r--r-- 1 1001 1001 39655552 Mar 15 15:04 techtalk1.mp3
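The layout sshd insists on boils down to a root-owned, non-writable chroot directory with a user-writable subdirectory inside it. Here is a sketch using a scratch directory in place of /home/u1 so the commands run without root (on the real host you would also chown the directories as shown in the comments):

```shell
#!/bin/sh
# sshd's safely_chroot() check requires the chroot directory to be
# root-owned and not group- or world-writable; the user's writable
# area (uploads) lives in a subdirectory inside the jail.
base=$(mktemp -d)                 # stand-in for /home
mkdir -p "$base/u1/uploads"
chmod 755 "$base/u1"              # chroot dir: no group/world write bits
chmod 775 "$base/u1/uploads"      # the user's writable upload area
# On a real host, additionally:
#   chown root:root /home/u1
#   chown u1:u1 /home/u1/uploads
ls -ld "$base/u1" "$base/u1/uploads"
```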
This is super useful, though building chroot jails for normal SSH sessions will require a bit more work (i.e., you need to populate the chroot directory with all the config files and binaries needed to run a typical shell session). Makejail can make this a WHOLE lot easier, and I am about to submit a patch to the makejail developers to allow it to work on Solaris hosts. OpenSSH rocks!
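For a rough idea of what makejail automates, here is a minimal hand-rolled sketch of populating a jail: copy the binary in, then copy every shared library ldd reports it needs. This assumes a dynamically linked /bin/sh and typical Linux ldd output; it is nowhere near a complete jail (no devices, no config files), just the flavor of the work involved:

```shell
#!/bin/sh
# Populate a throwaway jail with /bin/sh and its shared libraries.
jail=$(mktemp -d)
mkdir -p "$jail/bin"
cp /bin/sh "$jail/bin/"
# ldd prints one library per line; grab the path field and mirror
# its directory structure under the jail.
for lib in $(ldd /bin/sh | awk '/\//{ print $(NF-1) }'); do
    mkdir -p "$jail$(dirname "$lib")"
    cp "$lib" "$jail$(dirname "$lib")/"
done
find "$jail" -type f
```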
There are a bunch of utilities available to monitor bandwidth utilization on Linux hosts, and I’ve touched on a few in previous posts. I recently came across bwm-ng while perusing the Debian package repository, and decided to try it out. When bwm-ng is executed without any arguments, it provides a relatively simple curses interface with throughput statistics for each interface in the system:
$ bwm-ng
bwm-ng v0.6 (probing every 0.500s), press 'h' for help
input: /proc/net/dev type: rate
| iface Rx Tx Total
==============================================================================
lo: 0.00 KB/s 0.00 KB/s 0.00 KB/s
eth0: 2275.89 KB/s 57.56 KB/s 2333.45 KB/s
------------------------------------------------------------------------------
total: 2275.89 KB/s 57.56 KB/s 2333.45 KB/s
But the simplicity of the tool stops there, since there are a SLEW of options to control the output format, and whether or not sampled data is written to a file. This is a nifty utility, but I think I will stick with iftop.
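Since bwm-ng’s default input is /proc/net/dev (visible in the header above), the same raw counters are easy to pull into your own scripts. A quick awk sketch — Linux-specific, and note these are cumulative byte counts rather than the rates bwm-ng computes:

```shell
#!/bin/sh
# /proc/net/dev: two header lines, then one line per interface with
# eight receive columns followed by the transmit columns, so the
# receive byte count is field 3 and the transmit byte count field 11.
awk -F'[: ]+' 'NR > 2 { printf "%-8s rx_bytes=%s tx_bytes=%s\n", $2, $3, $11 }' /proc/net/dev
```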
I periodically need to take input data from various utilities and convert it to columnar data. There are a million ways to do this, but I have come to rely on the paste utility to perform this task:
$ ls
1 11 13 15 17 19 20 4 6 9
10 12 14 16 18 2 3 5 78
$ ls | paste - - -
1 10 11
12 13 14
15 16 17
18 19 2
20 3 4
5 6 78
9
In the output above, paste will take the input given to it and print the data in 3 columns (you can add more hyphens to get more columns of data). If anyone has some interesting little tidbits such as this, feel free to add them to the comments section. Thanks!
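One variation I find handy: with two hyphens, paste consumes input lines in pairs, which turns alternating key/value output into a tab-separated table (the sample strings below are made up):

```shell
#!/bin/sh
# Two '-' arguments: line 1 fills column 1, line 2 fills column 2,
# then paste starts a new row, joining the columns with a tab.
printf 'name\nmatty\nshell\nbash\n' | paste - -
```

This prints "name" and "matty" on the first row and "shell" and "bash" on the second.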
While poking around the Internet, I came across a link to project middleman. The project provides an easy way for administrators to parallelize tasks inside shell scripts, and is described rather nicely in the README file that comes with the source code:
“The philosophy behind mdm is that users should benefit from their multi-core systems without making drastic changes to their shell scripts. With mdm, you annotate your scripts to specify which commands might benefit from parallelization, and then you run it under the supervision of the mdm system. At runtime, the mdm system dynamically discovers parallelization opportunities and run the annotated commands in parallel as appropriate.”
And when they mention annotating a shell script, it really is as simple as placing the “mdm-run” binary in front of tasks that can be parallelized (you can also define an I/O profile if tasks will interfere with each other’s I/O streams):
$ mdm-run convert2ogg *.mp3
This is pretty sweet, and I need to play around with this a bit more on my quad core desktop. Rock on!
I just came across Parallelizing Jobs with xargs, which describes how to use the xargs “-P” option to parallelize tasks:
$ ls *.mp3 | xargs -P 8 -n 1 convert2ogg
The “-P” argument will cause xargs to run up to 8 convert2ogg processes at a time, and the “-n” option ensures that each process is handed exactly one argument from the ls output. This is sweet, and I can DEFINITELY see myself using this super useful argument in the future!
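The same pattern is easy to try with echo standing in for convert2ogg (convert2ogg is the conversion script from the example above, so this is just a sketch; note that output order is not guaranteed once jobs run in parallel):

```shell
#!/bin/sh
# Up to 4 echo processes run concurrently, each handed one filename.
printf '%s\n' a.mp3 b.mp3 c.mp3 | xargs -P 4 -n 1 echo converting
```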