Over the past few weeks, I have been heads down studying for the Sun Cluster 3.2 beta exam. I finally took the certification test this week, and am hopeful that I passed (I am pretty sure I did). Prior to studying for this exam, my last experience with Sun’s clustering technology was Sun Cluster 2.2. I was not a big fan of it, since it caused a number of outages at my previous employer, and lacked a number of features that were available in Veritas Cluster Server.
When I learned about the Sun Cluster 3.2 exam a few weeks ago, I thought I would give Sun’s clustering technology a second chance (I was very hesitant to spend time with it, but the folks on the Sun cluster oasis psyched me up to work with it). I have only worked with Sun cluster 3.2 for three weeks, but my view of Sun’s clustering technology has completely changed. Sun cluster 3.2 is an incredible product, and has some amazingly cool features (and it’s free if you don’t need support!!). Here are a few of my favorites:
You can create scalable services that run on one or more nodes at the same time. This allows you to turn a pool of web servers (which can run in global or non-global zones) into a large mesh of load-balanced servers.
Sun cluster 3.2 comes with data service agents (these are the entities responsible for starting, stopping and monitoring a given application) for a number of commercial (e.g., Oracle, Oracle RAC, etc.) and opensource (e.g., BIND, NFS, Apache, Samba, etc.) applications.
ZFS pool and non-global zone fail over are integrated natively into the product, so you can deploy highly available zones, or use a zpool with one of your HA services.
Sun cluster 3.2 uses a single global device namespace to represent devices, which means the underlying device names can be different on each host.
Global file systems are supported, which allows you to mount a file system on one node, and access it from any node in the cluster through the cluster interconnects.
Sun cluster 3.2 comes with a new full-featured command line that actually makes sense (this is the third time Sun has changed the Sun cluster command set, which has annoyed more than one administrator. I think they finally got it right, so hopefully it won’t change again!). A quick taste of the new commands appears just after this list.
There is a thorough set of documents and manual pages that describe the cluster agent API, and how to use it to easily (and I do mean easily) create agents for applications that don’t have a bundled agent.
Resource and resource group dependencies can be created, and affinities can be used to control where in the cluster a resource group is brought online.
Sun cluster manager (the web-based administrative portal for SC 3.2) has a really nice layout, and allows you to view and manage pretty much every facet of the cluster over an HTTPS connection.
The Sun cluster oasis is run by the folks who developed the code that went into Sun cluster 3.2. Not only have the developers and architects posted numerous useful examples, they answer comments left by cluster administrators.
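To give a quick taste of the new command set mentioned above, here is a rough sketch of how you might create a failover resource group that imports a ZFS pool through the HAStoragePlus resource type. The group name data-rg, resource name data-hasp-rs, and pool name datapool are made up for illustration, and you should double check the clresourcetype(1CL), clresourcegroup(1CL) and clresource(1CL) man pages before running anything similar:

$ clresourcetype register SUNW.HAStoragePlus
$ clresourcegroup create data-rg
$ clresource create -g data-rg -t SUNW.HAStoragePlus -p Zpools=datapool data-hasp-rs
$ clresourcegroup online -M data-rg

The last command brings the group online in a managed state, and once it is up, moving the group (and the pool it imports) to another node should be a single clresourcegroup switch invocation.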
While Sun cluster 3.2 has some cool features, there are still a few downsides (at least I think so):
To take a node out of cluster mode, you have to reboot the server with the “-x” option. This is a royal pain for machines that take a looooong time to boot, and leaves your cluster at risk for longer periods of time (there really needs to be a cluster -n NODENAME stop-cluster option added to stop the cluster framework on one or more nodes without a reboot).
Log files are distributed throughout the file system, and are not centrally located. If you need to look at cluster framework messages, you need to look through /var/adm/messages. If on the other hand you want to monitor the Apache or Oracle data service logs, you need to wander to another location. If you need to review the commandlog, there is yet another place to check. Maybe Sun could investigate using a single location for log files (like VCS does).
The Sun cluster 3.2 documentation set is riddled with typographical errors, contains a number of examples that don’t match what is being described, documents seem to contradict each other, and information is repeated in dozens and dozens of places. There is also the issue of docs.sun.com taking days to load documents (why can’t Sun build a scalable site for documentation?).
While the global device namespace is nifty, it seems silly that you have to allocate a 512MB+ slice (or slices if you need to mirror the slice to make it highly available) on each node to contain the file system used for global devices (i.e., /globaldevices).
Sun cluster only supports Solaris, which makes it hard for shops to standardize on a single clustering package.
Since Sun Cluster 3.2 is still relatively new, there isn’t a whole lot of data out there to gauge how reliable and stable it is. VCS is a great clustering framework, and if SC 3.2 is as stable as the folks on the cluster oasis claim it is, I think it will definitely give VCS a run for its money on Solaris hosts (hopefully the Sun folks will investigate porting Sun cluster to Linux).
The netkit-ftp client that ships with Red Hat Enterprise Linux comes with a verbose option, which will among other things instruct the client to print the number of bytes transferred after each file is successfully sent. These messages look similar to the following:
85811076 bytes sent in 1.3e+02 seconds (6.7e+02 Kbytes/s)
I had several enormous files (each > 2GB) I needed to move to another server, and noticed that the netkit-ftp client wasn’t printing status messages after the files were transferred. To see what was causing the issue, I started reading through the netkit-ftp source code. After a few minutes of poking around ftp.c, I came across this gem:
void
sendrequest(const char *cmd, char *local, char *remote, int printnames)
{
        volatile long bytes = 0;
        while ((c = read(fileno(fin), buf, sizeof (buf))) > 0) {
                printf("Bytes (%ld) is incremented by %d\n", bytes, c);
                bytes += c;
                for (bufp = buf; c > 0; c -= d, bufp += d)
                        if ((d = write(fileno(dout), bufp, c)) <= 0)
                                break;
                ......
        }
I reckon the folks who developed this code never transferred files larger than 2^31 - 1 bytes (roughly 2GB) on 32-bit platforms, since that is where a signed long counter overflows. After changing bytes (and the code that uses bytes) to use the unsigned long long data type, everything worked as expected. I digs me some opensource!
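For the curious, here is a small standalone C program (my own illustration, not actual netkit-ftp code) that shows the difference between the two counter types; build it with -m32 on a 64-bit box (or natively on a 32-bit one) to see the signed long counter fall over:

#include <stdio.h>

int
main(void)
{
        /*
         * Simulate transferring a 3GB file in 1MB chunks.  On an ILP32
         * platform a signed long holds at most 2147483647 (2^31 - 1), so
         * the first counter overflows (technically undefined behavior,
         * which wraps negative on common compilers), while the 64-bit
         * unsigned counter keeps going.
         */
        long bytes = 0;                     /* counter type used by the original code */
        unsigned long long bytes_fixed = 0; /* counter type used by the fix */
        int c = 1024 * 1024;                /* pretend each read() returned 1MB */
        long long total = 3LL * 1024 * 1024 * 1024;
        long long sent;

        for (sent = 0; sent < total; sent += c) {
                bytes += c;
                bytes_fixed += c;
        }

        printf("signed long counter:        %ld\n", bytes);
        printf("unsigned long long counter: %llu\n", bytes_fixed);
        return 0;
}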
This is the news of the year. Spinal Tap is reuniting to fight global warming! I hope they will turn it up to 11!!!
While poking around the web last week, I came across a good paper from Red Hat that describes how to utilize asynchronous and direct I/O with Oracle. I have been using the Oracle filesystemio_options="SetAll" initialization parameter on a few RHEL 4 database servers to efficiently use memory, and had no idea that it provided the throughput numbers listed in Figure 2. Nice!
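For reference, on instances that use a plain parameter file the setting is a single line in the pfile. The path below assumes the default $ORACLE_HOME/dbs location; spfile-managed instances would set it with ALTER SYSTEM ... SCOPE=SPFILE instead, and since the parameter is static the instance has to be bounced either way:

$ grep -i filesystemio_options $ORACLE_HOME/dbs/init${ORACLE_SID}.ora
filesystemio_options=SetAll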
We were getting close to running out of space on one of our database volumes last week, and I needed to add some additional storage to ensure that things kept running smoothly. The admin who originally created the VxVM database volume only used half of each of the five disks associated with the volume / file system that was at capacity, which meant I had roughly 18GB of free space available on each device to work with:
$ vxdg free | egrep '(D01|D02|D03|D04|D05)'
GROUP    DISK   DEVICE     TAG       OFFSET     LENGTH     FLAGS
datadg   D01    c2t0d0s2   c2t0d0    35547981   35547981   -
datadg   D02    c2t1d0s2   c2t1d0    35547981   35547981   -
datadg   D03    c2t2d0s2   c2t2d0    35547981   35547981   -
datadg   D04    c2t3d0s2   c2t3d0    35547981   35547981   -
datadg   D05    c2t4d0s2   c2t4d0    35547981   35547981   -
datadg   D06    c2t5d0s2   c2t5d0    35547981   35547981   -
There are a number of ways to resize volumes and file systems with VxVM and VxFS. You can use vxassist to grow or shrink a volume, and then use the fsadm utility to extend the file system. You can also perform both of these operations in one step with vxresize, which takes the disk group the volume is a part of, the name of the volume to resize, and a size parameter: prefixing the size with a “+” grows the volume by that amount, and prefixing it with a “-” shrinks it by that amount. Since vxresize resizes the volume and the file system in a single step, I fired it up and told it to extend the volume named datavol01 by 35547981 blocks:
$ /etc/vx/bin/vxresize -g datadg -F vxfs datavol01 +35547981
Instead of specifying blocks, you can also use unit specifiers such as “m” to denote megabytes, and “g” to denote gigabytes. As with all operations that change the structure of storage, you should test any resizing operations on non-production systems prior to changing production systems.
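For comparison, the two-step vxassist / fsadm approach mentioned above would look roughly like the following. The mount point /u01 is made up for illustration, and fsadm’s “-b” option takes the new total size of the file system in sectors (not an increment), so you would add 35547981 to the current volume size and double check the vxassist(1M) and fsadm_vxfs(1M) man pages before running it:

$ vxassist -g datadg growby datavol01 35547981
$ fsadm -F vxfs -b <current size + 35547981> -r /dev/vx/rdsk/datadg/datavol01 /u01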