Over the past few weeks, I have been heads down studying for the Sun Cluster 3.2 beta exam. I finally took the certification test this week, and am hopeful that I passed (I am pretty sure I did). Prior to studying for this exam, my last experience with Sun’s clustering technology was Sun Cluster 2.2. I was not a big fan of it, since it caused a number of outages at my previous employer, and lacked a number of features that were available in Veritas Cluster Server.
When I learned about the Sun Cluster 3.2 exam a few weeks ago, I thought I would give Sun’s clustering technology a second chance (I was very hesitant to spend time with it, but the folks on the Sun cluster oasis psyched me up to work with it). I have only worked with Sun Cluster 3.2 for three weeks, but my view of Sun’s clustering technology has completely changed. Sun Cluster 3.2 is an incredible product, and has some amazingly cool features (and it’s free if you don’t need support!!). Here are a few of my favorites:
You can create scalable services that run on one or more nodes at the same time. This allows you to turn a pool of web servers (which can run in global or non-global zones) into a large mesh of load-balanced servers (there is a rough example of this after the list).
Sun Cluster 3.2 comes with data service agents (these are the entities responsible for starting, stopping and monitoring a given application) for a number of commercial (e.g., Oracle and Oracle RAC) and open source (e.g., BIND, NFS, Apache and Samba) applications.
ZFS pool and non-global zone failover are integrated natively into the product, so you can deploy highly available zones, or use a zpool with one of your HA services (also sketched below).
Sun Cluster 3.2 uses a single global device namespace to represent devices, which means the underlying device names can be different on each host.
Global file systems are supported, which allows you to mount a file system on one node and access it from any node in the cluster through the cluster interconnects (example below).
Sun Cluster 3.2 comes with a new, full-featured command line that actually makes sense (this is the third time Sun has changed the Sun Cluster command set, which has annoyed more than one administrator. I think they finally got it right, so hopefully it won’t change again!).
There is a thorough set of documents and manual pages that describe the cluster agent API, and how to use it to easily (and I do mean easily) create agents for applications that don’t have a bundled agent (one easy route, the generic data service, is sketched below).
Resource and resource group dependencies can be created, and affinities can be used to control where in the cluster a resource group is brought online (example below).
Sun Cluster Manager (the web-based administrative portal for SC 3.2) has a really nice layout, and allows you to view and manage pretty much every facet of the cluster over an HTTPS connection.
The Sun cluster oasis is run by the folks who developed the code that went into Sun Cluster 3.2. Not only have the developers and architects posted numerous useful examples, they answer comments left by cluster administrators.
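To make a few of these features more concrete, here is a rough sketch of what a scalable Apache service looks like with the bundled agent and the new command line. All of the resource, resource group and hostname values (sa-rg, web-vip, web-rg, apache-rs, and so on) are made up, and properties like Bin_dir will vary from site to site, so treat this as an outline rather than a recipe:

    # Register the Apache agent and create a failover group for the shared address
    clresourcetype register SUNW.apache
    clresourcegroup create sa-rg
    clressharedaddress create -g sa-rg -h web-vip web-vip-rs

    # Create a scalable resource group that can run on up to four nodes at once
    clresourcegroup create -p Maximum_primaries=4 -p Desired_primaries=4 \
        -p RG_dependencies=sa-rg web-rg

    # Create the Apache resource and tie it to the shared address
    clresource create -g web-rg -t SUNW.apache -p Scalable=true \
        -p Resource_dependencies=web-vip-rs -p Port_list=80/tcp \
        -p Bin_dir=/usr/apache2/bin apache-rs

    # Bring everything online and under cluster management
    clresourcegroup online -M sa-rg web-rg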
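The zpool failover mentioned above is handled by the HAStoragePlus resource type, which imports and exports a pool as its resource group moves around the cluster (highly available zones work along similar lines through the bundled Solaris Container data service, which I won’t try to reproduce here). The group, resource and pool names below (data-rg, hasp-rs, datapool) are made up:

    # Create a failover resource group with a ZFS pool managed by HAStoragePlus
    clresourcetype register SUNW.HAStoragePlus
    clresourcegroup create data-rg
    clresource create -g data-rg -t SUNW.HAStoragePlus -p Zpools=datapool hasp-rs
    clresourcegroup online -M data-rg

    # Later on, move the pool (and anything that depends on it) to another node
    clresourcegroup switch -n node2 data-rg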
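The global device namespace and global file systems are easy to poke at from any node. The device path and mount point in the second command are made up, and a global mount would normally live in /etc/vfstab rather than be typed by hand:

    # List the global (DID) devices and the local c#t#d# paths they map to on each node
    cldevice list -v

    # Mount a UFS file system globally so every node in the cluster can get to it
    mount -o global,logging /dev/global/dsk/d10s0 /global/shared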
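For applications that don’t ship with an agent, the agent API and Agent Builder are covered in the developer documentation, but one of the quicker routes is the bundled generic data service (SUNW.gds), which wraps an application with start, stop and (optionally) probe commands. The application paths and names below are placeholders:

    # Wrap an application that doesn't have a bundled agent with the generic data service
    clresourcetype register SUNW.gds
    clresourcegroup create myapp-rg
    clresource create -g myapp-rg -t SUNW.gds \
        -p Start_command="/opt/myapp/bin/myapp-start" \
        -p Stop_command="/opt/myapp/bin/myapp-stop" \
        -p Probe_command="/opt/myapp/bin/myapp-probe" \
        -p Network_aware=false myapp-rs
    clresourcegroup online -M myapp-rg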
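Dependencies and affinities are just properties on the resources and resource groups. The examples below reuse the made-up names from the earlier sketches, plus a hypothetical Oracle resource and group:

    # Make the Oracle resource wait for its storage resource to come online first
    clresource set -p Resource_dependencies=hasp-rs oracle-rs

    # Keep the Oracle group on the same node as its storage group...
    clresourcegroup set -p RG_affinities=++data-rg oracle-rg

    # ...or keep two groups apart with a strong negative affinity
    clresourcegroup set -p RG_affinities=--web-rg batch-rg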
While Sun Cluster 3.2 has some cool features, there are still a few downsides (at least in my opinion):
To take a node out of cluster mode, you have to reboot the server with the “-x” boot option (shown below). This is a royal pain for machines that take a looooong time to boot, and it leaves your cluster at risk for longer periods of time (there really needs to be a cluster -n NODENAME stop-cluster option added to stop the cluster framework on one or more nodes without a reboot).
Log files are distributed throughout the file system, and are not centrally located. If you need to look at cluster framework messages, you need to look through /var/adm/messages. If on the other hand you want to monitor the Apache or Oracle data service logs, you need to wander to another location. If you need to review the commandlog, there is yet another place to check (a few of these locations are listed below). Maybe Sun could investigate using a single location for log files (like VCS does).
The Sun Cluster 3.2 documentation set is riddled with typographical errors, contains a number of examples that don’t match what is being described, has documents that seem to contradict each other, and repeats information in dozens and dozens of places. There is also the issue of docs.sun.com taking days to load documents (why can’t Sun build a scalable site for documentation?).
While the global device namespace is nifty, it seems silly that you have to allocate a 512MB+ slice (or slices, if you need to mirror the slice to make it highly available) on each node to contain the file system used for global devices (i.e., /globaldevices).
Sun Cluster only supports Solaris, which makes it hard for shops to standardize on a single clustering package.
Since Sun Cluster 3.2 is still relatively new, there isn’t a whole lot of data out there to gauge how reliable and stable it is. VCS is a great clustering framework, and if SC 3.2 is as stable as the folks on the cluster oasis claim it is, I think it will definitely give VCS a run for its money on Solaris hosts (hopefully the Sun folks will investigate porting Sun Cluster to Linux).
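For reference, here is what taking a single node out of cluster mode looks like today, next to the command that shuts down the whole cluster cleanly:

    # Reboot this node out of the cluster (boot -x from the OBP also works on SPARC)
    reboot -- -x

    # Shutting down the entire cluster, by contrast, has a proper command
    cluster shutdown -y -g0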
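And for anyone who goes hunting for log files, here are a couple of the locations mentioned above (the individual data service agents add their own directories on top of these):

    # Cluster framework messages end up in syslog
    tail /var/adm/messages

    # The record of cluster commands that have been run lives in the commandlog
    more /var/cluster/logs/commandlog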