Blog O' Matty


Renaming a ZFS pool

This article was posted by Matty on 2006-11-15 16:40:00 -0400

While messing around with ZFS last weekend, I noticed that I made a typo when I created one of my pools. Instead of naming a pool “apps,” I accidentally named it “app”:

$ zpool status -v

  pool: app
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        app         ONLINE       0     0     0
          c0d1      ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

To fix this annoyance, I first exported the pool:

$ zpool export app

And then imported it with the correct name:

$ zpool import app apps

After the import completed, my pool contained the name I had originally intended to give it:

$ zpool status -v

  pool: apps
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        apps        ONLINE       0     0     0
          c0d1      ONLINE       0     0     0
          c1d0      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors

Niiiiiiiiiiiiiiiiice!

Validating SMF manifests with xmllint

This article was posted by Matty on 2006-11-15 15:59:00 -0400

I recently created SMF manifests for a few services I support. When I ran svccfg to import one of the manifests, it spat out the following error, indicating that it couldn’t parse the document:

$ svccfg import wls92.xml
svccfg: couldn’t parse document

Since the svccfg error message didn’t provide the line number that was causing the problem, I decided to run xmllint to see where the problem was:

$ xmllint wls92.xml

wls92.xml:13: parser error : Opening and ending tag mismatch: ervice_bundle line 3 and service_bundle
</service_bundle>
                 ^

It turns out that when I cut and pasted text into the new manifest, I left out a left angle bracket and the letter s. It’s all about getting your lint on!
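If you want to see this failure mode without touching a real manifest, the sketch below fabricates a tiny XML file with the same class of typo (the file name and tag contents are made up for illustration) and runs xmllint against it:

```shell
# Build a throwaway XML file whose opening tag lost its leading "s",
# so it no longer matches the closing </service_bundle> tag.
cat > /tmp/broken.xml <<'EOF'
<?xml version="1.0"?>
<ervice_bundle type="manifest" name="demo">
</service_bundle>
EOF

# xmllint points at the exact line and the mismatched tag pair;
# that is the detail svccfg's "couldn't parse document" leaves out.
xmllint --noout /tmp/broken.xml
```

Running xmllint before svccfg import makes a handy pre-flight check for any hand-edited manifest.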

Capping a Solaris process's memory

This article was posted by Matty on 2006-11-14 19:27:00 -0400

Solaris 10 introduced numerous capabilities, including the ability to use memory caps to limit the amount of memory available to a project. Memory caps are configured through the project(4) facility, and use the rcap.max-rss resource control to limit the amount of memory that a project can consume. Memory caps are enforced by the rcapd daemon, a userland process that periodically checks process memory usage and takes action when a process has exceeded its allotted amount of memory. To use memory caps on a server or inside a zone, the rcapadm utility needs to be run with the “-E” (enable memory caps) option:

$ rcapadm -E

In addition to starting rcapd, the rcapadm utility will enable the SMF service that starts rcapd when the system boots. To see if memory caps are enabled on a system, the rcapadm utility can be run without any arguments:

$ rcapadm

state: enabled
memory cap enforcement threshold: 0%
process scan rate (sec): 15
reconfiguration rate (sec): 60
report rate (sec): 5
RSS sampling rate (sec): 5

After memory capping is enabled, the projmod utility can be used to configure memory caps. To configure a 512MB memory cap for all processes that run as the user apache, run projmod with the “-K” option, setting the rcap.max-rss resource control to the amount of memory you would like to assign to the project:

$ projmod -s -K rcap.max-rss=512MB user.apache

This will add a new entry similar to the following to the project database, which is stored in the file /etc/project:

$ grep user.apache /etc/project
user.apache:100:Apache:apache::rcap.max-rss=536870912
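Note that projmod stores the cap in /etc/project in bytes, which is why the 512MB argument shows up as 536870912. A quick shell sanity check (plain arithmetic, nothing Solaris-specific) confirms the conversion:

```shell
# 512MB expressed in bytes, matching the rcap.max-rss value that
# projmod writes to /etc/project: 512 * 1024 * 1024.
echo $((512 * 1024 * 1024))   # prints 536870912
```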

Once a project is configured, you can enforce a memory cap in two ways (there may be more, but these are the two methods I have come across while reading the RM documentation). The first method uses the newtask utility to start a process in a project that has been configured with memory caps. The following example shows how to start the apache web server in the user.apache project, which was configured above:

$ /usr/bin/newtask -p user.apache /home/apps/apache/httpd/bin/httpd -k start
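To double-check that a process launched through newtask actually landed in the capped project, Solaris ps can print a project column (the pid below is a made-up placeholder for the httpd process started above):

```shell
# On Solaris, "project" is a valid ps output format keyword;
# 12345 is a placeholder pid for the httpd process started with newtask.
ps -o pid,project,args -p 12345
```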

The second way to enforce a memory cap is to force a user to establish a new login session. If the user has been added to the project database, they will inherit the resource controls that are associated with their user id in /etc/project. To view the project a user is assigned to, the id command can be run with the “-p” option:

$ su - apache
Sun Microsystems Inc. SunOS 5.10 Generic January 2005

$ id -p
uid=103(apache) gid=1(other) projid=100(user.apache)

Once a process is started and associated with a project that has memory caps configured, you can use the rcapstat utility to monitor memory usage, as well as the paging activity that occurs when the processes in the project utilize more memory than has been allotted to them:

$ rcapstat 10

    id project         nproc    vm   rss   cap    at avgat    pg avgpg
   100 user.apache        15  266M  164M  512M    0K    0K    0K    0K
   101 user.mysql          1   59M   11M  256M    0K    0K    0K    0K
    id project         nproc    vm   rss   cap    at avgat    pg avgpg
   100 user.apache        15  266M  164M  512M    0K    0K    0K    0K
   101 user.mysql          1   59M   11M  256M    0K    0K    0K    0K

Memory caps are super useful, but they do have a few issues. The biggest issue is that shared memory is not accounted for properly, so processes that use shared memory can suck up more memory than the amount configured in the memory cap. The second issue is that you can’t use memory caps in the global zone to limit how much memory is used in a local zone. Both of these issues are being worked on by Sun, and hopefully a fix will be out in the coming months.

SMART utilities for your favorite operating system

This article was posted by Matty on 2006-11-10 19:20:00 -0400

While perusing the web a few weeks back, I came across SMARTReporter. SMARTReporter is a wicked cool software package that can be used to monitor hard drive SMART data under OS X, and it is 100% free (you should probably send a small donation to the author if you decide to use it). Now that I have SMARTReporter in my software arsenal, I have a tool to monitor SMART data on each of the operating systems I support:

All three packages rock, and they have saved my bacon on more than one occasion!

Viewing utilization per file descriptor on Solaris 10 hosts

This article was posted by Matty on 2006-11-10 19:08:00 -0400

While load-testing a MySQL back-end last weekend, I wanted to be able to monitor read and write utilization per file descriptor. The DTraceToolkit comes with a nifty script named pfilestat that does just that:

$ pfilestat 841

  STATE   FDNUM      Time Filename
   read      63        0% /tmp/#sql_349_0.MYI
  write      64        0% /tmp/#sql_349_0.MYD
   read      64        0% /tmp/#sql_349_0.MYD
  write      63        0% /tmp/#sql_349_0.MYI
   read      18        0%
  write      18        0%
   read      60        0% /opt/mysql/data/db/one
running       0        0%
waitcpu       0        9%
  sleep       0       89%

  STATE   FDNUM      KB/s Filename
   read      63         0 /tmp/#sql_349_0.MYI
  write      63         0 /tmp/#sql_349_0.MYI
   read      18         0
  write      64         0 /tmp/#sql_349_0.MYD
   read      64         0 /tmp/#sql_349_0.MYD
  write      18         7
   read      60       181 /opt/mysql/data/db/one

Total event time (ms): 4263   Total Mbytes/sec: 0

In addition to displaying the amount of data that is read from or written to each file descriptor, pfilestat also provides information on how much time is spent sleeping and waiting for I/O. This is yet another reason why the DTraceToolkit is da shiznit!