While setting up monit to monitor several services I support, I decided to look for an in-depth HTTP monitoring solution to complement the monitoring capabilities provided by monit. To be more exact, I wanted to find a monitoring solution that would validate the authenticity of the content returned by a web server. Several monitoring solutions (including monit) will issue a GET request to a web server and check that the server replied with a 200 OK status code. This works for most situations, but it doesn’t detect content deployment snafus or server misconfigurations (the ones that don’t generate 500 status codes). I couldn’t find an open source software package that provided this level of in-depth monitoring, so I decided to write content-check.
Content-check is written in Bourne shell, and provides in-depth HTTP monitoring by comparing a saved SHA1 hash with a SHA1 hash generated from the content returned by a web server. If the two hashes don’t match, content-check will generate a syslog entry (which can be picked up by monit) with the logger utility, and E-mail the website administrator to let them know that the content did not hash to a known value.
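The core of this technique is simple enough to sketch in a few lines of Bourne shell. The sketch below is not the actual content-check source — the function name is hypothetical, and it assumes curl and sha1sum are available (the real script may fetch and hash content with different tools):

```shell
#!/bin/sh
# Sketch of the content-check idea: fetch a page, hash it, and compare
# the result against a known-good SHA1 hash.
check_content() {
    url=$1
    known=$2

    # Hash the content the web server actually returns
    current=$(curl -s "$url" | sha1sum | awk '{print $1}')

    if [ "$current" != "$known" ]; then
        # Generate a syslog entry that monit can pick up ...
        logger -p daemon.notice "Content from $url did not hash to $known"
        # ... and E-mail the website administrator
        echo "Content from $url did not hash to $known" |
            mail -s "content-check failure" root
    fi
}
```

If the hashes match, nothing happens; a quiet run is a healthy site.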
To configure content-check, you first need to generate a hash for the webpage you want to monitor. This can be accomplished by passing an absolute URL to content-check’s “-g” (generate hash) option:
$ content-check -g http://prefetch.net/index.htm
da39a3ee5e6b4b0d3255bfef95601890afd80709
After you generate the hash, you will need to place the hash and the absolute URL to monitor in a text file. This file can contain multiple site / hash pairs, but only one pair is allowed per line. Once the file is populated with one or more sites to monitor, content-check can be invoked with the “-f” option and the file that contains the list of sites to monitor:
$ cat sites
http://prefetch.net/articles/yum.html da39a3ee5e6b4b0d3255bfef95601890afd80709
http://prefetch.net/index.html da39a3ee5e6b4b0d3255bfef95601890afd80709
$ content-check -f sites
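Since each line of the file is a whitespace-separated URL / hash pair, a loop along these lines is all it takes to walk it (a sketch, not the actual content-check source; the sample file mirrors the format shown above):

```shell
#!/bin/sh
# Sketch: walk a sites file containing one "URL SHA1" pair per line.
cat > sites <<'EOF'
http://prefetch.net/articles/yum.html da39a3ee5e6b4b0d3255bfef95601890afd80709
http://prefetch.net/index.html da39a3ee5e6b4b0d3255bfef95601890afd80709
EOF

while read -r url known; do
    # Each pair would be handed to the hash-and-compare logic here
    echo "checking $url against $known"
done < sites
```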
If one of the sites listed in the file doesn’t hash to the value stored in the file, an E-mail is sent to the address passed to the “-e” option (or root), and a syslog message similar to the following is generated:
Jul 21 16:27:01 neutron matty: [ID 702911 daemon.notice] Content from
http://prefetch.net/index.html did not hash to
da39a3ee5e6b4b0d3255bfef95601890afd80709
Since it is possible for web servers to break in ways that allow them to keep serving content, validating the content they return is the only way to know for sure that your site is serving what you expect.
After several frustrating days of debugging smpatch, I finally found a way to get it working on my X64 Solaris 10 server. I am not 100% certain why this fixed my problem, and since I don’t have access to the smpatch source code I will probably never know. To recap the problem, I was able to register my server with the Solaris sconadm utility, but was receiving the following error each time I ran smpatch to check for new updates:
$ smpatch analyze
Failure: Cannot connect to retrieve Database/current.zip: This system is currently unregistered and is unable to retrieve patches from the Sun Update Connection. Please register your system using the Update Manager.
To get smpatch to run successfully, I first had to remove my server’s assetid from the patch repository. This was accomplished with the ccr utility’s “-r” option:
$ /usr/lib/cc-ccr/bin/ccr -r cns.assetid
Once the assetid was removed, I ran the sconadm utility to register my server:
$ sconadm register -a -r RegistrationProfile
sconadm is running
Authenticating user …
finish registration!
After these two tasks completed, smpatch ran successfully and reported a number of patches that needed to be applied. This experience really taught me how much I like yum, and I have since decided to replace smpatch with the pca (patch check advanced) utility after reading Chris’s blog and Frank’s comments. Pca is an AWESOME piece of software, and I wish I had found it sooner. More to come on pca in future posts.
If you are interested in using yum to manage packages on Linux servers, you might be interested in my article managing packages with yum. If you have comments or feedback, let me know.
For several years I turned to a CIDR reference or bc to calculate network and broadcast addresses when variable length subnetting was in use. While poking around the /bin directory on one of my CentOS 4.0 servers, I came across the ipcalc utility. This nifty utility allows you to feed it an IP address and prefix, and it will spit out the network and broadcast address associated with that IP address:
$ ipcalc -nb 192.168.1.45/28
BROADCAST=192.168.1.47
NETWORK=192.168.1.32
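Under the hood this is just bit arithmetic, which is easy to reproduce with shell arithmetic expansion (a sketch of the math, not how ipcalc itself is implemented): the network address is the IP ANDed with the netmask derived from the prefix, and the broadcast address sets the remaining host bits:

```shell
#!/bin/sh
ip=192.168.1.45
prefix=28

# Convert the dotted quad to a 32-bit integer
IFS=. read -r a b c d <<EOF
$ip
EOF
addr=$(( (a << 24) | (b << 16) | (c << 8) | d ))

# /28 -> netmask 0xfffffff0 (255.255.255.240)
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))

net=$(( addr & mask ))               # clear the host bits
bcast=$(( net | (~mask & 0xffffffff) ))  # set the host bits

# Convert a 32-bit integer back to a dotted quad
to_quad() {
    echo "$(( $1 >> 24 & 255 )).$(( $1 >> 16 & 255 )).$(( $1 >> 8 & 255 )).$(( $1 & 255 ))"
}

echo "BROADCAST=$(to_quad $bcast)"
echo "NETWORK=$(to_quad $net)"
```

Running this prints the same BROADCAST and NETWORK values ipcalc reported above.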
I love finding buried treasure. :)
If you are curious what the various UltraDMA modes mean, and the speeds they operate at, you might be interested in the bugclub UltraDMA tutorial.