Blog O' Matty


Gathering file and directory statistics with DTrace

This article was posted by Matty on 2006-04-02 19:38:00 -0400

After learning about fsstat, I decided to create a similar program that used DTrace (fsstat uses libkstat). After reading through syscall.h and reviewing the system calls available in the syscall provider, I created dfsstat.pl:

$ dfsstat.pl 5

process      open  close  read  write  stat  creat  link  unlink  symlink  mkdir  rmdir
cat           148    148    74     37   111      0     0       0        0      0      0
dfsstat.pl      0      0     6      0     0      0     0       0        0      0      0
dtrace          0      0     0      7     0      0     0       0        0      0      0
ln            111     74     0      0   185      0     0       0       37      0      0
mkdir         148    111     0      0   148      0     0       0        0     37      0
mv            111     74     0      0   111      0     0       0        0      0      0
rm            370    259     0      0   370      0     0     222        0      0     37
test.sh         0    222     0     37     0     74     0       0        0      0      0
touch         111    222     0      0   407    148     0       0        0      0      0
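The script itself isn't listed here, but the general approach can be sketched in a few lines of D. This is a hedged approximation of the idea, not dfsstat.pl's actual source: use the syscall provider to count file-related system calls per process, and print and clear the aggregation on each interval. It requires root on a DTrace-capable Solaris host:

```shell
# Hedged sketch (not dfsstat.pl's actual source): count file-related
# system calls per process with the DTrace syscall provider, dumping
# and clearing the aggregation every 5 seconds.
dtrace -q -n '
syscall::open*:entry, syscall::close:entry, syscall::read:entry,
syscall::write:entry, syscall::stat*:entry, syscall::creat*:entry,
syscall::link:entry, syscall::unlink:entry, syscall::symlink:entry,
syscall::mkdir:entry, syscall::rmdir:entry
{
        /* key the aggregation by process name and system call */
        @ops[execname, probefunc] = count();
}

tick-5sec
{
        printa("%-16s %-10s %@8d\n", @ops);
        trunc(@ops);
}'
```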

Hope folks find this useful.

Solaris fsstat command

This article was posted by Matty on 2006-04-01 12:40:00 -0400

I came across a reference to the fsstat command on the opensolaris observability list, and decided to BFU to build 36 to check it out. This nifty utility will print file operations per file system, which is incredibly useful:

$ fsstat -n -F 5

 lookup  creat  remov  link  renam  mkdir  rmdir  rddir  symlnk  rdlnk
  95.4K    173     31     0      1      0      0    230       3  2.30K  ufs
    646      0      0     0      0      0      0    126       0    133  proc
      0      0      0     0      0      0      0      0       0      0  zfs
  2.37K  1.96K  1.75K     1     72      4      0      0       0      0  tmpfs
      0      0      0     0      0      0      0      0       0      0  mntfs

$ fsstat -i -F 5

  read   read  write  write  rddir  rddir  rwlock  rwulock
   ops  bytes    ops  bytes    ops  bytes     ops      ops
 32.7K  59.0M  1.48K  2.89M    230   294K   34.4K    34.4K  ufs
 1.33K  59.4K      0      0    128  36.4K   1.45K    1.45K  proc
     0      0      0      0      0      0       0        0  zfs
 23.4K  23.4M  26.1K  22.6M      0      0   49.5K    49.5K  tmpfs
    21  2.39K      0      0      0      0      21       21  mntfs

The formatting is kinda funky, but it looks like some additional work is being done to improve fsstat. This is a super useful utility, and I hope it makes Solaris 10 Update 2!

Generating weekly patch reports with Solaris

This article was posted by Matty on 2006-04-01 12:28:00 -0400

If you manage servers running Solaris 9 and 10, you may have noticed that Sun added the smpatch utility to assist with the cumbersome job of patching. I try my best to apply all critical and security patches to the systems I support, and have come to rely on the following cron job to notify me when new patches are available for my systems:

0 0 * * 0 /usr/sbin/smpatch analyze | /usr/bin/mailx -s "Patch list for `/usr/bin/hostname`" matty

The smpatch “analyze” option will retrieve the list of available patches from Sun, and compare those with the patches currently applied to the system. If smpatch detects that a new patch is available, it will print the patchid and patch description to standard output, which I pipe to mailx. This results in an email similar to the following showing up in my inbox:
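As a small variant (my own tweak, not part of the original cron job), the mail can be suppressed when analyze finds nothing, so the weekly report only shows up when patches are actually pending:

```shell
#!/bin/sh
# Hypothetical variant of the weekly cron job: only send mail when
# "smpatch analyze" actually reports pending patches, avoiding empty
# weekly emails. Paths and the recipient follow the cron entry above.
PATCHES=`/usr/sbin/smpatch analyze 2>/dev/null`

if [ -n "$PATCHES" ]; then
        echo "$PATCHES" | /usr/bin/mailx -s "Patch list for `/usr/bin/hostname`" matty
fi
```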

From: Super-User
To: matty
Subject: Patch list for tigger
Date: Sat, 1 Apr 2006 11:14:23 -0500 (EST)

118375-07 SunOS 5.10: nfs Patch
122243-01 SunOS 5.10: patch forthdebug
122242-01 SunOS 5.10: patch cmlb
122241-01 SunOS 5.10: patch dad
118346-03 SunOS 5.10: libnsl Patch
122027-02 SunOS 5.10: bge Driver Patch
121118-06 SunOS 5.10_sparc, Sun Update Connection Client, System Edition 1.0.4
119278-07 CDE 1.6: dtlogin patch
122206-01 GNOME 2.6.0: On-screen Keyboard Patch
120460-07 GNOME 2.6.0: Gnome libs Patch
122204-01 GNOME 2.6.0: configuration framework Patch
122210-01 GNOME 2.6.0: GNOME Media Player Patch
119368-04 GNOME 2.6.0: Printing Technology Patch
122208-01 GNOME 2.6.0: Removable Media Patch
120286-02 GNOME 2.6.0: Gnome text editor Patch
119906-04 Gnome 2.6.0: Virtual File System Framework patch
119538-04 GNOME 2.6.0: Window Manager Patch

I use the patch details to decide if I need to patch the system, and to document which areas of the system were patched. If something goes horribly awry after ‘smpatch update’ runs, I can look back through my patch emails to see which subsystems and applications were impacted.

ZFS snapshots

This article was posted by Matty on 2006-04-01 11:09:00 -0400

File system snapshots have made their way into pretty much all of the major file systems (e.g., UFS, VxFS, WAFL, ZFS), and allow storage administrators to create point-in-time consistent copies of a file system. A snapshot is not a block-by-block copy; it typically only grows when a block in the live file system changes. Since snapshots contain a consistent point-in-time copy of the data in a file system, they are ideal for backups and instantaneous data recovery, and can assist with disaster recovery. Snapshot support is available with the Zettabyte File System (ZFS), and can be accessed with the zfs utility’s “snapshot” option:

$ zfs snapshot snaptest@Thursday

In this example, a snapshot of the file system named “snaptest” will be created with the name “Thursday.” To view the list of snapshots that have been created, the zfs “list” option can be used:

$ zfs list

NAME                USED  AVAIL  REFER  MOUNTPOINT
snaptest           18.0G  32.1G  18.0G  /snaptest
snaptest@Thursday      0      -  18.0G  -

Once a snapshot is created, you can access the snapshot’s contents by changing to the .zfs directory in the file system that was snapshotted:

$ cd /snaptest/.zfs/snapshot/Thursday

$ ls -l

total 37786139
-rw------T 1 root root 1073741824 Mar 31 20:57 file1
-rw------T 1 root root 1073741824 Mar 31 21:00 file2
-rw------T 1 root root 1073741824 Mar 31 21:01 file3
-rw------T 1 root root 1073741824 Mar 31 21:01 file4

To recover a file from a snapshot, change into the snapshot’s subdirectory under the .zfs directory and invoke your favorite file-level utility to salvage the file:

$ cd /snaptest/.zfs/snapshot/Thursday

$ cp file1 /tmp

Once you are finished using the contents of a snapshot, you can use the zfs utility’s “destroy” option to remove the snapshot:

$ zfs destroy snaptest@Thursday

$ zfs list

NAME       USED  AVAIL  REFER  MOUNTPOINT
snaptest  1.50G  48.6G  1.50G  /snaptest
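The snapshot and destroy options combine naturally into a simple rotation job. Here's a minimal sketch (the dataset name and weekday-based naming are my assumptions, matching the “Thursday” example above), suitable for running daily from cron:

```shell
#!/bin/sh
# Hedged sketch: keep one snapshot per weekday for a dataset by
# replacing that weekday's snapshot on each run. The dataset name
# is an assumption; adjust to taste.
DATASET=snaptest
DAY=`date +%A`          # e.g. "Thursday"

# Destroy last week's snapshot with today's name, if one exists.
zfs list -t snapshot "$DATASET@$DAY" > /dev/null 2>&1 && \
    zfs destroy "$DATASET@$DAY"

zfs snapshot "$DATASET@$DAY"
```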

Snapshots have been around for quite some time (I first used snapshots on NetApp filers), and it’s cool to see that ZFS supports them.

Pine spell checking

This article was posted by Matty on 2006-04-01 10:28:00 -0400

I periodically use pine to check email, and frequently use the Ctrl-T command sequence to invoke the pine spell checker. When pine checks the spelling of a message, it hops from word-to-word in a goofy random fashion. While reading some pine documentation last week, I finally found out why:

“When you first use the standard Unix spell checker, it may appear that it is randomly jumping all around your message - actually, the spell checker processes your message one word at a time, in alphabetical order. Other spell checkers such as ispell for Unix operate differently and offer more features, such as creating a personal “dictionary” of words.”