Remotely mounting directories through SSH

I manage a fair number of Linux hosts, and have recently been looking for ways to securely mount remote directories on my servers for administrative purposes. NFS and Samba don’t have a terribly good security track record, so I don’t like to use either of these solutions unless truly warranted. Rsync over SSH is pretty sweet, but it’s not quite as transparent as I would like it to be. Since all of my hosts support SSH, I started to wonder if someone had developed a solution to transparently move files between two systems using SSH. After a bit of digging, I came across the super cool sshfs fuse module, which does just that!

Sshfs allows you to “mount” a remote directory over the SSH protocol, and it provides transparent access to files stored on a remote server. To use this nifty module with Fedora, you first need to install the fuse-sshfs package:

$ yum install fuse-sshfs

Once the fuse kernel module and userland utilities are installed, the sshfs utility can be used to mount a remote directory on a local mount point. In the following example, the sshfs utility is used to mount the directory $HOME/backup from the server giddieup onto the local directory /home/matty/backup:

$ sshfs -C -o reconnect,idmap=user giddieup:backup/ /home/matty/backup
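When you are finished with the mount point, the directory can be detached with the fusermount utility that ships with fuse:

$ fusermount -u /home/matty/backup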

Once the sshfs command completes, you can add and remove files in the locally mounted directory (/home/matty/backup in this case), and these changes will be automatically propagated to the remote server. The first time I ran sshfs I received the error “Operation not permitted.” After digging into this further, I noticed that the fusermount and sshfs utilities were not setuid root out of the box. To address this problem, I changed the group ownership of both utilities to fuse, added myself to the fuse group, and set the mode of both executables to 4750 (0750 plus the setuid bit). The OpenSolaris community is currently porting FUSE to Solaris, and I am looking forward to eventually being able to use SSHFS on my Solaris hosts!
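For reference, the permission changes boiled down to something like the following (a sketch of the steps described above; adjust the paths if your distribution installs the binaries somewhere other than /usr/bin, and note that group membership changes don’t take effect until you log in again):

$ chgrp fuse /usr/bin/fusermount /usr/bin/sshfs
$ chmod 4750 /usr/bin/fusermount /usr/bin/sshfs
$ usermod -a -G fuse matty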

Running processes in fixed time intervals

While messing around with Sun Cluster 3.2, I came across hatimerun. This nifty program can be used to run a program within a fixed amount of time, and kill the program if it runs longer than the time allotted to it. If hatimerun kills a program, it will return a status code of 99. If the program completes within its allotted time, hatimerun will return a status code of 0. To use hatimerun, you need to pass a time interval to the “-t” option, as well as a program to run in that time interval:

$ hatimerun -t 10 /bin/sleep 8

$ echo $?
0

$ hatimerun -t 10 /bin/sleep 12

$ echo $?
99

If anyone knows of a general purpose tool for doing this (preferably something that ships with Solaris or Red Hat Enterprise Linux), I would appreciate it if you could leave a comment with further details.
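In the meantime, here is a rough approximation in Bourne shell (a sketch I threw together, not a packaged tool; “timed-run” is my own name for it, and it mimics hatimerun’s exit codes by treating death from any signal as a timeout):

#!/bin/sh
# timed-run: run a command, kill it if it exceeds the allotted time.
# Usage: timed-run SECONDS COMMAND [ARGS ...]

SECS=$1; shift

"$@" &                                     # launch the command in the background
CMD=$!

( sleep "$SECS"; kill "$CMD" ) >/dev/null 2>&1 &   # watchdog kills it after SECS seconds
WATCHDOG=$!

wait "$CMD"                                # wait for the command to finish (or be killed)
STATUS=$?
kill "$WATCHDOG" >/dev/null 2>&1           # tear down the watchdog if it is still running

[ "$STATUS" -gt 128 ] && exit 99           # died from a signal: report 99 like hatimerun
exit "$STATUS"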

First thoughts of Sun Cluster 3.2

Over the past few weeks, I have been heads down studying for the Sun Cluster 3.2 beta exam. I finally took the certification test this week, and am hopeful that I passed (I am pretty sure I did). Prior to studying for this exam, my last experience with Sun’s clustering technology was Sun Cluster 2.2. I was not a big fan of it, since it caused a number of outages at my previous employer, and lacked a number of features that were available in Veritas Cluster Server.

When I learned about the Sun Cluster 3.2 exam a few weeks ago, I thought I would give Sun’s clustering technology a second chance (I was very hesitant to spend time with it, but the folks on the Sun cluster oasis psyched me up to work with it). I have only worked with Sun cluster 3.2 for three weeks, but my view of Sun’s clustering technology has completely changed. Sun cluster 3.2 is an incredible product, and has some amazingly cool features (and it’s free if you don’t need support!!). Here are a few of my favorites:

– You can create scalable services that run on one or more nodes at the same time. This allows you to turn a pool of web servers (which can run in global or non-global zones) into a large mesh of load-balanced servers.

– Sun cluster 3.2 comes with data service agents (these are the entities responsible for starting, stopping and monitoring a given application) for a number of commercial (e.g., Oracle, Oracle RAC, etc.) and opensource (e.g., BIND, NFS, Apache, Samba, etc.) applications.

– ZFS pool and non-global zone failover are integrated natively into the product, so you can deploy highly available zones, or use a zpool with one of your HA services.

– Sun cluster 3.2 uses a single global device namespace to represent devices, which means the underlying device names can be different on each host.

– Global file systems are supported, which allows you to mount a file system on one node, and access it from any node in the cluster through the cluster interconnects.

– Sun cluster 3.2 comes with a new full-featured command line that actually makes sense (this is the third time Sun has changed the Sun cluster command set, which has annoyed more than one administrator. I think they finally got it right, so hopefully it won’t change again!).

– There is a thorough set of documents and manual pages that describe the cluster agent API, and how to use it to easily (and I do mean easily) create agents for applications that don’t have a bundled agent.

– Resource and resource group dependencies can be created, and affinities can be used to control where in the cluster a resource group is brought online (see the sketch after this list).

– Sun cluster manager (the web-based administrative portal for SC 3.2) has a really nice layout, and allows you to view and manage pretty much every facet of the cluster over an HTTPS connection.

– The Sun cluster oasis is run by the folks who developed the code that went into Sun cluster 3.2. Not only have the developers and architects posted numerous useful examples, they answer comments left by cluster administrators.
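To give a flavor of the new command line, here is a rough sketch of setting up an affinity and a dependency (db-rg/app-rg and db-rs/app-rs are hypothetical resource group and resource names I made up for illustration, not values from an actual cluster):

# Keep app-rg on the same node as db-rg (a strong positive affinity):
$ clresourcegroup set -p RG_affinities=++db-rg app-rg

# Don't bring the application resource online until the database resource is up:
$ clresource set -p Resource_dependencies=db-rs app-rs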

While Sun cluster 3.2 has some cool features, there are still a few downsides (at least I think they are):

– To take a node out of cluster mode, you have to reboot the server with the “-x” option. This is a royal pain for machines that take a looooong time to boot, and leaves your cluster at risk for longer periods of time (there really needs to be a cluster -n NODENAME stop-cluster option added to stop the cluster framework on one or more nodes without a reboot).

– Log files are distributed throughout the file system, and are not centrally located. If you need to look at cluster framework messages, you need to look through /var/adm/messages. If on the other hand you want to monitor the Apache or Oracle data service logs, you need to wander to another location. If you need to review the commandlog, there is yet another place to check. Maybe Sun could investigate using a single location for log files (like VCS does).

– The Sun cluster 3.2 documentation set is riddled with typographical errors, contains a number of examples that don’t match what is being described, has documents that seem to contradict each other, and repeats information in dozens and dozens of places. There is also the issue of docs.sun.com taking days to load documents (why can’t Sun build a scalable site for documentation?).

– While the global device namespace is nifty, it seems silly that you have to allocate a 512MB+ slice (or slices if you need to mirror the slice to make it highly available) on each node to contain the file system used for global devices (i.e., /globaldevices).

– Sun cluster only supports Solaris, which makes it hard for shops to standardize on a single clustering package.

Since Sun Cluster 3.2 is still relatively new, there isn’t a whole lot of data out there to gauge how reliable and stable it is. VCS is a great clustering framework, and if SC 3.2 is as stable as the folks on the cluster oasis claim it is, I think it will definitely give VCS a run for its money on Solaris hosts (hopefully the Sun folks will investigate porting Sun cluster to Linux).

Redhat Linux FTP client annoyance

The netkit-ftp client that ships with Red Hat Enterprise Linux comes with a verbose option, which will, among other things, instruct the client to print the number of bytes transferred after each file is successfully sent. These messages look similar to the following:

85811076 bytes sent in 1.3e+02 seconds (6.7e+02 Kbytes/s)

I had several enormous files (each > 2GB) I needed to move to another server, and noticed that the netkit-ftp client wasn’t printing status messages after the files were transferred. To see what was causing the issue, I started reading through the netkit-ftp source code. After a few minutes of poking around ftp.c, I came across this gem:

void
sendrequest(const char *cmd, char *local, char *remote, int printnames)
{
    volatile long bytes = 0;    /* signed long: wraps past 2GB on 32-bit platforms */

    /* ... declarations and connection setup elided ... */

    while ((c = read(fileno(fin), buf, sizeof (buf))) > 0) {
        printf("Bytes (%ld) is incremented by %d\n", bytes, c);
        bytes += c;
        for (bufp = buf; c > 0; c -= d, bufp += d)
            if ((d = write(fileno(dout), bufp, c)) <= 0)
                break;
    ......
}

I reckon the folks who developed this code never transferred files larger than 2^31 bytes (2GB) on 32-bit platforms, where a signed long overflows. After changing bytes (and the code that uses bytes) to use the unsigned long long data type, everything worked as expected. I digs me some opensource!
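The change was roughly the following (my reconstruction of the fix, not an official netkit-ftp patch; note that the printf format specifier has to be widened along with the counter):

-   volatile long bytes = 0;
+   volatile unsigned long long bytes = 0;
...
-       printf("Bytes (%ld) is incremented by %d\n", bytes, c);
+       printf("Bytes (%llu) is incremented by %d\n", bytes, c);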

Speeding up Oracle disk I/O on RHEL4 systems

While poking around the web last week, I came across a good paper from Red Hat that describes how to utilize asynchronous and direct I/O with Oracle. I have been using the Oracle filesystemio_options=“setall” initialization parameter on a few RHEL 4 database servers to efficiently use memory, and had no idea that it provided the throughput numbers listed in Figure 2. Nice!
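If you want to try this on one of your own database servers, the parameter is static, so it has to be set in the spfile and picked up with an instance bounce (a minimal sketch, run as a privileged database user; test on a non-production instance first):

$ sqlplus / as sysdba

SQL> alter system set filesystemio_options=setall scope=spfile;

Restart the instance for the new value to take effect.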