Resizing Veritas volumes with vxresize

We were getting close to running out of space on one of our database volumes last week, and I needed to add some additional storage to ensure that things kept running smoothly. The admin who originally created the VxVM database volume used only half of each of the five disks associated with the volume and file system that were filling up, which meant I had roughly 18GB of free space available on each device to work with:

$ vxdg free | egrep '(D01|D02|D03|D04|D05)'

GROUP        DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
datadg       D01          c2t0d0s2     c2t0d0       35547981   35547981  -
datadg       D02          c2t1d0s2     c2t1d0       35547981   35547981  -
datadg       D03          c2t2d0s2     c2t2d0       35547981   35547981  -
datadg       D04          c2t3d0s2     c2t3d0       35547981   35547981  -
datadg       D05          c2t4d0s2     c2t4d0       35547981   35547981  -

Now there are a number of ways to resize volumes and file systems with VxVM and VxFS. You can use vxassist to grow or shrink a volume, and then use the fsadm utility to extend the file system. You can also perform both of these operations in a single step with vxresize, which takes the disk group the volume is a part of, the name of the volume to resize, and a size parameter: a leading “+” grows the volume by the value that follows, a leading “-” shrinks it by the value that follows, and a value with no sign sets the volume to that absolute size. Since vxresize was the most efficient method available, I fired it up and told it to extend the volume named datavol01 by 35547981 blocks:

$ /etc/vx/bin/vxresize -g datadg -F vxfs datavol01 +35547981

Instead of specifying blocks, you can also use unit suffixes such as “m” to denote megabytes and “g” to denote gigabytes. As with all operations that change the structure of storage, you should test any resizing operations on non-production systems prior to changing production systems.
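To make the sign conventions concrete, here is a sketch of the three forms, reusing the disk group and volume names from above; the 2g/100g sizes are purely illustrative, and shrinking assumes the VxFS file system supports online shrink:

```shell
# Grow datavol01 BY 2 gigabytes (leading "+" means "grow by this much"):
#   /etc/vx/bin/vxresize -g datadg -F vxfs datavol01 +2g
#
# Shrink datavol01 BY 2 gigabytes (leading "-" means "shrink by this much"):
#   /etc/vx/bin/vxresize -g datadg -F vxfs datavol01 -2g
#
# Set datavol01 TO an absolute size of 100 gigabytes (no sign):
#   /etc/vx/bin/vxresize -g datadg -F vxfs datavol01 100g
```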

Concert review: Shadows Fall, Lacuna Coil and Stone Sour

As you can see from my concert review archive, I am a big fan of hard rock. So when Jagermeister announced that they were sending Shadows Fall, Lacuna Coil and Stone Sour out on the road together, I knew I had to make one of the shows. My opportunity came earlier this week, when I got the chance to see each of these bands live for the first time.

Shadows Fall came on stage shortly after a local opening band finished, and played their hearts out. I am not a huge fan of Shadows Fall’s music, but I enjoyed hearing them jam! Once they finished their set list, Lacuna Coil came out to perform theirs. This was the first time I have seen Lacuna Coil perform, and I was amazed at how good Cristina Scabbia (the band’s lead singer) sounded live. The band played a relatively short set list, but they sounded great (“Swamped” was incredible!).

After another short intermission, Stone Sour took the stage. I have listened to a bunch of their music on CD, but had no idea what to expect from them live. Well, after they ripped through their first song, I knew I was in for a night of hard rocking musical bliss! Stone Sour has to be one of the most crowd-friendly bands I have ever seen perform, and you could tell they were there to play a killer show for those who attended. The band played a good mix of tunes from both their old and new albums, including my personal favorite “Bother,” as well as covers of Lynyrd Skynyrd’s “Sweet Home Alabama” and Chris Isaak’s “Wicked Game.”

This was a night that I will forever remember, and I had an awesome time rocking out with the folks I met at the show, and each of the bands that were part of the Jagermeister music tour! I have a number of shows on the docket for May, June and July, and plan to blog about each concert I attend. Shibby!

When SSH permissions bite!

Last week I set up several Linux and Solaris hosts to use key based authentication. For some reason two of the hosts continued to prompt me for a password, even though the server and client were configured correctly to use DSA keys (I was using the same config on all of the servers, so I knew it worked). When I traced the sshd daemon on one of the hosts that was misbehaving, I saw the following just before the password prompt was displayed:

$ strace -f -p `pgrep sshd`
<.....>
stat("/home/matty/.ssh/authorized_keys", {st_mode=S_IFREG|0664, st_size=1026, ...}) = 0
open("/home/matty/.ssh/authorized_keys", O_RDONLY) = 4
lstat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat("/home/matty", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
lstat("/home/matty/.ssh", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
lstat("/home/matty/.ssh/authorized_keys", {st_mode=S_IFREG|0664, st_size=1026, ...}) = 0
lstat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat("/home/matty", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
fstat(4, {st_mode=S_IFREG|0664, st_size=1026, ...}) = 0

The strace output made me realize that $HOME/.ssh might not be set to 0700, or the authorized_keys file might not be set to 0600. It turns out the permissions on both entries were set incorrectly, and after adjusting the permissions (which got borked by an incorrect umask entry in /etc/profile), everything worked as expected. As a side note, I am curious why the SSH daemon doesn’t log the permission errors when run with multiple debug flags. This would make a fantastic RFE! :)
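The fix itself is just two chmod commands. Here is a sketch that sets the permissions sshd’s StrictModes check expects; a scratch directory is used so the commands are safe to run as-is, but on a real host you would point the chmod lines at your actual $HOME/.ssh:

```shell
# Build a throwaway stand-in for a home directory's .ssh tree.
demo="$(mktemp -d)"
mkdir "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"

# Tighten permissions the way sshd wants them:
chmod 700 "$demo/.ssh"                    # directory: owner-only access
chmod 600 "$demo/.ssh/authorized_keys"    # key file: owner read/write only
```

On the real hosts, a sane umask in /etc/profile keeps this from regressing the next time the files are recreated.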

Adding swap to Linux hosts

I recently ran out of swap space on one of my production application servers, and needed to add some additional swap on the fly. Since I didn’t have a spare slice free on the server, I created a 1GB file on my / file system with dd, and then used the mkswap and swapon utilities to create a swap device out of that file:

$ dd if=/dev/zero of=/swap1.swp bs=1024 count=1048576

$ mkswap /swap1.swp

$ swapon /swap1.swp

To verify the new swap device was available, I dumped /proc/swaps:

$ cat /proc/swaps

Filename                                Type            Size    Used    Priority
/dev/hda2                               partition       522104  160     -1
/swap1.swp                              file            1048568 0       -2

Sizing swap is easy to do, but when a server changes roles, previous swap estimates no longer apply. I am planning to kickstart the server with a different disk layout, which will allow me to allocate a properly sized block device for swap. In the interim, this met our needs.
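If you want a swap file like this to survive a reboot, an /etc/fstab entry is one option. This is a sketch based on the file name above; mount options and priorities vary by distro:

```shell
# Lock the file down first so only root can read it:
#   chmod 600 /swap1.swp
#
# Then add an entry like this to /etc/fstab so it is activated at boot:
#   /swap1.swp   none   swap   sw   0 0
```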

Running commands across multiple servers with clusterssh

I periodically need to perform repetitive maintenance operations (e.g., patching systems) on groups of servers, which typically requires me to run a similar set of commands on multiple hosts. To make my life easier, I use the super useful clusterssh utility to interactively run commands across a group of servers.

Cluster SSH is super easy to configure, and uses the concept of a “cluster” to define a group of similar nodes. To get up and running with clusterssh, you will first need to run cssh with the “-u” option to generate a configuration file (this step is optional, but creating a config file reduces start up time):

$ cssh -u > $HOME/.csshrc

Once the configuration file is created, you can open the file in a text editor, and change the settings to fit your administration preferences. Some of the settings that can be changed are ssh connection parameters, the placement of xterms on your screen, the placement of the master command window, window titles, etc. In addition to modifying look and feel type items, you also need to add one or more clusters, which are logical groupings of machines. On one of my desktops, I have a “backup” cluster that I use to connect to our master and media servers, a “web” cluster to connect to our web servers, an “app” cluster to connect to our application servers, and a “db” cluster to connect to our database servers. To define a new cluster, you add a group name followed by an equal sign and one or more hosts, and then register that grouping using the “clusters” keyword. Here are the entries from my .csshrc file:

clusters = backup web app db
backup = nbmed01 nbmed02 nbmed03 nbmed04 nbmas01 nbmas02
web = web01 web02 web03 web04 web05 web06 web07 web08
app = app01 app02
db = db01 db02

After the configuration settings are adjusted and one or more clusters are defined, you can run the cssh utility with the name of the cluster to connect to:

$ cssh web

This will open up one xterm window per server, and create an SSH connection to each. If you have RSA or DSA keys set up, each xterm window will display a prompt on its server. In addition to opening one xterm window per server, cssh will also create a command window that sends input to all of the servers you are connected to (you can also type into individual xterms, which is useful for running commands on just one system). This makes patching, hosts file updates (when you can’t completely depend on DNS) and such super super easy! This is some serious bling yizos! Jeah!
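For reference, the two invocation forms side by side (hostnames are the illustrative ones from the config above):

```shell
# Connect to every host in a cluster defined in ~/.csshrc:
#   cssh web
#
# Or list hosts ad hoc, with no config file at all:
#   cssh web01 web02 app01
```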

Making rc scripts chkconfig aware

After adding a new rc script to /etc/init.d/ on a RHEL 4 box last week, I was greeted with the following error when I ran chkconfig to create the /etc/rc[0-6].d symbolic links:

$ /sbin/chkconfig llc2 on
service llc2 does not support chkconfig

After a bit of poking around, it looks like chkconfig looks for a line similar to the following in each run control script:

# chkconfig: 345 99 50

The values after the “chkconfig:” statement contain the runlevels to enable the script at, the value to use after the “S” in the start scripts, and the value to use after the “K” in the kill scripts. So 345 would cause the start script to be executed at run levels 3, 4 and 5, the start script would be named S99llc2, and the kill script would be named K50llc2. Shibby!
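Here is a sketch of a minimal chkconfig-aware init script; the service name and echo messages are illustrative, and note that chkconfig also expects a “description:” comment alongside the “chkconfig:” line, so one is included:

```shell
# Write a throwaway init script to /tmp for demonstration purposes;
# on a real host this would live in /etc/init.d/llc2.
cat > /tmp/llc2.init <<'EOF'
#!/bin/sh
# chkconfig: 345 99 50
# description: starts and stops the llc2 service
case "$1" in
  start) echo "Starting llc2" ;;
  stop)  echo "Stopping llc2" ;;
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac
EOF
chmod 755 /tmp/llc2.init
```

Once a script with a header like this is installed in /etc/init.d, the `chkconfig llc2 on` invocation from above should create the S99llc2 and K50llc2 links instead of complaining.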