Capping a Solaris process's memory

Solaris 10 introduced numerous capabilities, including the ability to use memory caps to limit the amount of memory available to a project. Memory caps are configured through the project(4) facility, and use the rcap.max-rss resource control to limit the amount of memory that a project can consume. Memory caps are enforced by the rcapd daemon, a userland process that periodically checks process memory usage and takes action when a process has exceeded its allotted amount of memory. To use memory caps on a server or inside a zone, the rcapadm utility needs to be run with the “-E” (enable memory caps) option:

$ rcapadm -E

In addition to starting rcapd, the rcapadm utility will enable the SMF service that starts rcapd when the system boots. To see if memory caps are enabled on a system, the rcapadm utility can be run without any arguments:

$ rcapadm

                                 state: enabled
      memory cap enforcement threshold: 0%
               process scan rate (sec): 15
            reconfiguration rate (sec): 60
                     report rate (sec): 5
               RSS sampling rate (sec): 5
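
The enforcement threshold and the various intervals shown above can also be tuned with rcapadm. As a rough example (the values below are illustrative; see rcapadm(1M) for the full list of tunables), the following would tell rcapd to only enforce caps once physical memory utilization crosses 90%, and to scan processes every 10 seconds:

$ rcapadm -c 90 -i scan=10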

After memory capping is enabled, the projmod utility can be used to configure memory caps. To configure a 512MB memory cap for all processes that run as the user apache, the projmod utility can be run with the “-K” option, and the rcap.max-rss resource control set to the amount of memory you would like to assign to the project:

$ projmod -s -K rcap.max-rss=512MB user.apache

This will add a new entry similar to the following to the project database, which is stored in the file /etc/project:

$ grep user.apache /etc/project
user.apache:100:Apache:apache::rcap.max-rss=536870912

Once a project is configured, you can enforce a memory cap in two ways (there may be more, but these are the two methods I have come across while reading the RM documentation). The first method uses the newtask utility to start a process in a project that has been configured with memory caps. The following example shows how to start the apache web server in the user.apache project, which was configured above:

$ /usr/bin/newtask -p user.apache /home/apps/apache/httpd/bin/httpd -k start
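
If the process you want to cap is already running, my understanding is that newtask can also re-bind an existing process to the project with the “-c” option (the PID below is hypothetical):

$ newtask -v -p user.apache -c 1234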

The second way to enforce a memory cap is to force a user to establish a new login session. If the user has been added to the project database, they will inherit the resource controls that are associated with their user id in /etc/project. To view the project a user is assigned to, the id command can be run with the “-p” option:

$ su - apache
Sun Microsystems Inc. SunOS 5.10 Generic January 2005

$ id -p
uid=103(apache) gid=1(other) projid=100(user.apache)
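
To double check that the resource controls in /etc/project will be picked up, the projects utility can also be used to display the project definition. The output below is what I would expect to see based on the entry created above (see projects(1) for the exact format):

$ projects -l user.apache
user.apache
        projid : 100
        comment: "Apache"
        users  : apache
        groups : (none)
        attribs: rcap.max-rss=536870912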

Once a process is started and associated with a project that has memory caps configured, you can use the rcapstat utility to monitor memory usage, as well as the paging activity that occurs when processes in the project utilize more memory than has been allotted to them:

$ rcapstat 10

    id project         nproc    vm   rss   cap    at avgat    pg avgpg
   100 user.apache        15  266M  164M  512M    0K    0K    0K    0K
   101 user.mysql          1   59M   11M  256M    0K    0K    0K    0K
    id project         nproc    vm   rss   cap    at avgat    pg avgpg
   100 user.apache        15  266M  164M  512M    0K    0K    0K    0K
   101 user.mysql          1   59M   11M  256M    0K    0K    0K    0K

Memory caps are super useful, but they do have a few issues. The biggest issue is that shared memory is not accounted for properly, so processes that use shared memory can consume more memory than the amount configured in the memory cap. The second issue is that you can’t use memory caps in the global zone to limit how much memory is used in a non-global zone. Both of these issues are being worked on by Sun, and hopefully a fix will be out in the coming months.

SMART utilities for your favorite operating system

While perusing the web a few weeks back, I came across SMARTReporter. SMARTReporter is a wicked cool software package that can be used to monitor hard drive SMART data under OS X, and it is 100% free (you should probably send a small donation to the author if you decide to use it). Now that I have SMARTReporter in my software arsenal, I have a tool to monitor SMART data on each of the operating systems I support:

- For Solaris, OpenBSD, FreeBSD and Linux, I use Smartmontools

- For OS X, I use SMARTReporter

- For Windows, I use Active SMART

All three packages rock, and they have saved my bacon on more than one occasion!
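
For the smartmontools case, checking a drive is as simple as the following (the device path below is just an example, and will vary by operating system and controller type); “-H” prints the overall health assessment, and “-a” dumps all of the SMART data:

$ smartctl -H /dev/sda
$ smartctl -a /dev/sda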

Viewing utilization per file descriptor on Solaris 10 hosts

While load-testing a MySQL back-end last weekend, I wanted to be able to monitor read and write utilization per file descriptor. The DTraceToolkit comes with a nifty script named pfilestat that does just that:

$ pfilestat 841

    STATE   FDNUM      Time Filename                                          
      read      63        0% /tmp/#sql_349_0.MYI     
     write      64        0% /tmp/#sql_349_0.MYD     
      read      64        0% /tmp/#sql_349_0.MYD     
     write      63        0% /tmp/#sql_349_0.MYI     
      read      18        0%                                       
     write      18        0%                                       
      read      60        0% /opt/mysql/data/db/one
   running       0        0%                                             
   waitcpu       0        9%                                             
     sleep       0       89%                                             

     STATE   FDNUM      KB/s Filename                                    
      read      63         0 /tmp/#sql_349_0.MYI     
     write      63         0 /tmp/#sql_349_0.MYI     
      read      18         0                                       
     write      64         0 /tmp/#sql_349_0.MYD     
      read      64         0 /tmp/#sql_349_0.MYD     
     write      18         7                                       
      read      60       181 /opt/mysql/data/db/one

Total event time (ms): 4263   Total Mbytes/sec: 0

In addition to displaying the amount of data that is read from or written to each file descriptor, pfilestat also provides information on how much time is spent sleeping and waiting for I/O. This is yet another reason why the DTraceToolkit is da shiznit!

Tracing vxassist activity

While creating a few Veritas volumes last week, I wanted to see the commands that vxassist was executing under the covers. This was easily accomplished by adding the “-v” (trace commands executed by vxassist) option to the vxassist command line:

$ vxassist -b -v make datavol02 1g layout=mirror mirror=3
/usr/sbin/vxvol -g datadg -o bg -o plexfork=128 -- start datavol02

VxVM is an awesome volume manager, and there are all kinds of cool things buried in the manual pages!

New version of ssl-cert-check

I received a nifty patch from Ken Gallo that allows ssl-cert-check to report when certificates stored in a PKCS#12 database will expire. This is super useful, especially if you are managing iPlanet/SunONE/Netscape products. If you haven’t used ssl-cert-check before, it’s a Bourne shell script that can be used to alert you prior to a certificate expiring. The script is available on prefetch.net, and is documented in the article “proactively handling SSL certificate expiration.” Thanks Ken for the awesome patch!
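
If you haven’t seen ssl-cert-check in action, checking a remote web server looks something like this (the host and port below are just examples; the script’s usage output documents the full option list, including the new PKCS#12 support):

$ ssl-cert-check -s prefetch.net -p 443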

Securely backing up a WordPress configuration to a remote location

I have been using WordPress as my blogging engine for quite some time. To ensure that I can recover my blog in the event of a disaster (a good example would be a server catching on fire), I take weekly backups of the MySQL database that stores my posts and the WordPress configuration. Since the WordPress backups are relatively small, I typically use mysqldump to extract the data from the MySQL database, and openssl to encrypt the data. This allows me to email my backup to a remote location, and ensures that prying eyes cannot view any data that might be sensitive. To accomplish this, I use the following shell script:

#!/bin/bash

export PATH=/usr/bin:/usr/sfw/bin

DBNAME="dbname"
DBPASS="password"
DBUSER="dbuser"
EMAIL="admin@something.com"
SYMMETRICKEY="SOMESECUREWPASSWORD"

mysqldump --opt -u ${DBUSER} -p${DBPASS} ${DBNAME} wp_categories \
                  wp_comments wp_linkcategories wp_links wp_options \
                  wp_post2cat wp_postmeta wp_posts wp_usermeta wp_users \
                  | /home/apps/bin/openssl bf -e -a -k ${SYMMETRICKEY} \
                  | mailx -vv -s "Wordpress backup (`/bin/date`)" ${EMAIL}

This solution has worked well for me for the past two years, and I have never had a problem running openssl with the “-d” (decrypt data) option to decrypt the data that openssl’s “-e” (encrypt data) option produces. I reckon I should probably add “START PAYLOAD” and “END PAYLOAD” strings to the output to ensure that the data made it to the destination in one piece.
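
For reference, here is a minimal sketch of the matching restore step, assuming the encrypted payload from the email has been saved to a file named wp-backup.enc (a hypothetical name) and the same symmetric key and database credentials from the script above are used:

$ openssl bf -d -a -k "SOMESECUREWPASSWORD" -in wp-backup.enc | \
      mysql -u dbuser -ppassword dbname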