I have been spending some of my spare time reading Learning Python and experimenting with various Python modules. One of the cool things about working with Python modules is easy_install (part of setuptools), which allows you to install modules from local files or from a remote location. To install a module from a local directory, you can pass the name of the module to easy_install:
$ easy_install simplejson-2.0.9
Processing simplejson-2.0.9
Running setup.py -q bdist_egg --dist-dir /Users/matty/simplejson-2.0.9/egg-dist-tmp-vNeFam
unable to execute gcc: No such file or directory
***************************************************************************
WARNING: The C extension could not be compiled, speedups are not enabled.
Failure information, if any, is above.
I’m retrying the build without the C extension now.
***************************************************************************
***************************************************************************
WARNING: The C extension could not be compiled, speedups are not enabled.
Plain-Python installation succeeded.
***************************************************************************
Adding simplejson 2.0.9 to easy-install.pth file
Installed /Library/Python/2.5/site-packages/simplejson-2.0.9-py2.5.egg
Processing dependencies for simplejson==2.0.9
Finished processing dependencies for simplejson==2.0.9
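Once the install finishes, a quick way to confirm the module is actually importable is to exercise it from the command line (just a minimal sanity check; any call into the module would do):

$ python -c 'import simplejson; print simplejson.dumps({"hello": "world"})'
{"hello": "world"}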
To install from a network location, you can pass a URL with the location of the module to easy_install:
$ easy_install http://pyyaml.org/download/pyyaml/PyYAML-3.08.tar.gz
Downloading http://pyyaml.org/download/pyyaml/PyYAML-3.08.tar.gz
Processing PyYAML-3.08.tar.gz
Running PyYAML-3.08/setup.py -q bdist_egg --dist-dir /var/folders/-a/-aYP7CaPEk00kLo-WDtJ9U+++TI/-Tmp-/easy_install-3IzAQN/PyYAML-3.08/egg-dist-tmp-5VIWBq
unable to execute gcc: No such file or directory
libyaml is not found or a compiler error: forcing --without-libyaml
(if libyaml is installed correctly, you may need to
specify the option --include-dirs or uncomment and
modify the parameter include_dirs in setup.cfg)
zip_safe flag not set; analyzing archive contents...
Adding PyYAML 3.08 to easy-install.pth file
Installed /Library/Python/2.5/site-packages/PyYAML-3.08-py2.5-macosx-10.5-i386.egg
Processing dependencies for PyYAML==3.08
Finished processing dependencies for PyYAML==3.08
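A similar spot check works for PyYAML (note that the module is imported as yaml, not PyYAML):

$ python -c 'import yaml; print yaml.load("greeting: hello")'
{'greeting': 'hello'}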
You can also pass the name of a module to easy_install:
$ easy_install pexpect
Searching for pexpect
Reading http://pypi.python.org/simple/pexpect/
Reading http://pexpect.sourceforge.net/
Reading http://sourceforge.net/project/showfiles.php?group_id=59762
Best match: pexpect 2.4
Downloading http://pypi.python.org/packages/source/p/pexpect/pexpect-2.4.tar.gz#md5=fe82d69be19ec96d3a6650af947d5665
Processing pexpect-2.4.tar.gz
Running pexpect-2.4/setup.py -q bdist_egg --dist-dir /tmp/easy_install-3b5oOV/pexpect-2.4/egg-dist-tmp-aYlhyn
zip_safe flag not set; analyzing archive contents...
Adding pexpect 2.4 to easy-install.pth file
Installed /Library/Python/2.5/site-packages/pexpect-2.4-py2.5.egg
Processing dependencies for pexpect
Finished processing dependencies for pexpect
Passing just the name causes easy_install to search the Python Package Index (PyPI) and install the module based on the information it finds there. I loves me some Python!
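easy_install also has a -U (upgrade) option that makes it consult the package index again, which is handy when a newer release of a previously installed module shows up; for example:

$ easy_install -U pexpect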
I was playing around with ZFS root a week or two back, and wanted to be able to create the ZFS root pool and the associated datasets (dump device, swap, /var) through Jumpstart. To install to a ZFS root pool, you can add the "pool" directive to your client profile:
pool rpool auto 4g 4g rootdisk.s0
The entry above breaks down as follows:
pool <root pool name> <pool size> <swap size> <dump device size> <device list>
The device list can contain a single device for non-mirrored configurations, or multiple devices for mirrored configurations. If you specify a mirrored configuration, you will need to include the “mirror” keyword in your profile:
pool rpool auto 4g 4g mirror c0t0d0s0 c0t1d0s0
If you are using live upgrade, you can also name the boot environment with the “bootenv” keyword. This is pretty cool stuff, and it’s nice having the various ZFS features (checksums, snapshots, compression, etc.) available in the root pool!
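Putting the pieces together, a minimal profile might look something like the following (the boot environment name zfsBE is just an example, and the other keywords would be whatever your installs normally use):

install_type initial_install
cluster SUNWCreq
pool rpool auto 4g 4g mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename zfsBE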
I came across Neelakanth Nadgir’s blog while doing some research, and his performance analysis tools (cmdtruss and inniostat) are pretty sweet. If you are looking to learn more about MySQL performance, you should take a look at High Performance MySQL and the Sun engineering blogs. There is some awesome stuff out there!
One of the nice features of ZFS is the ability to take file system snapshots, which you can then use to recover previously deleted data. In recent OpenSolaris and Nevada builds, there are several auto-snapshot services that can be used to schedule hourly, daily, weekly and monthly snapshots:
$ svcs -a | grep auto-snapshot
disabled 9:28:28 svc:/system/filesystem/zfs/auto-snapshot:frequent
online 9:28:53 svc:/system/filesystem/zfssnap-roleadd:default
online 12:55:54 svc:/system/filesystem/zfs/auto-snapshot:daily
online 12:56:02 svc:/system/filesystem/zfs/auto-snapshot:weekly
online 12:56:11 svc:/system/filesystem/zfs/auto-snapshot:monthly
online 12:58:37 svc:/system/filesystem/zfs/auto-snapshot:hourly
To enable scheduled snapshots (these services are disabled by default), you can enable one or more of these services with svcadm. Once enabled, these services will create a cron entry in the zfssnap user's crontab:
$ cat /var/spool/cron/crontabs/zfssnap
0 0 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 * * /lib/svc/method/zfs-auto-snapshot svc:/system/filesystem/zfs/auto-snapshot:daily
0 0 1,8,15,22,29 * * /lib/svc/method/zfs-auto-snapshot svc:/system/filesystem/zfs/auto-snapshot:weekly
0 0 1 1,2,3,4,5,6,7,8,9,10,11,12 * /lib/svc/method/zfs-auto-snapshot svc:/system/filesystem/zfs/auto-snapshot:monthly
0 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 * * * /lib/svc/method/zfs-auto-snapshot svc:/system/filesystem/zfs/auto-snapshot:hourly
The cron jobs are used to schedule the automated snapshots, and are added and removed when one of the services is enabled or disabled. I'm not entirely clear why the auto-snapshot author didn't use "*" in the daily and hourly entries, but hopefully there is a good reason.
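To turn a given schedule on or off yourself, you just enable or disable the corresponding service instance. For example, to start taking the "frequent" snapshots that are shown as disabled above:

$ svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent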
To view the list of snapshots on a host, you can use the zfs utility:
$ zfs list -r -t snapshot | head -5
NAME USED AVAIL REFER MOUNTPOINT
bits@zfs-auto-snap:daily-2009-06-18-12:55 0 - 34.4K -
bits@zfs-auto-snap:weekly-2009-06-18-12:56 0 - 34.4K -
bits@zfs-auto-snap:monthly-2009-06-18-12:56 0 - 34.4K -
bits@zfs-auto-snap:daily-2009-06-19-00:00 0 - 34.4K -
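If you only care about a single file system, you can also pass the dataset name to zfs list to limit the output (bits/home is the dataset used in the recovery example below):

$ zfs list -r -t snapshot bits/home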
If you need to recover a file that was previously deleted, you can cd into the “.zfs/snapshot” directory in the file system that contained the deleted data:
$ zfs list bits/home
NAME USED AVAIL REFER MOUNTPOINT
bits/home 181K 1.17T 149K /home
$ cd /home/.zfs/snapshot
Locate the correct snapshot to recover the file from with ls:
$ ls -l | tail -5
drwxr-xr-x 3 root root 3 May 19 22:43 zfs-auto-snap:hourly-2009-06-20-09:00
drwxr-xr-x 3 root root 3 May 19 22:43 zfs-auto-snap:hourly-2009-06-20-10:00
drwxr-xr-x 3 root root 3 May 19 22:43 zfs-auto-snap:hourly-2009-06-20-11:00
drwxr-xr-x 3 root root 3 May 19 22:43 zfs-auto-snap:monthly-2009-06-18-12:56
drwxr-xr-x 3 root root 3 May 19 22:43 zfs-auto-snap:weekly-2009-06-18-12:56
Then change into the appropriate snapshot directory and recover the file with cp (or a restore program):
$ find . -name importantfile
./matty/importantfile
$ cp matty/importantfile /tmp
$ ls -la /tmp/importantfile
-rw-r--r-- 1 root root 101186 Jun 20 11:30 /tmp/importantfile
This is pretty sweet, and being able to enable automated snapshots with a couple of svcadm invocations is super convenient!
I gave a talk last night on the Linux Kernel Virtual Machine (KVM) at the local Linux users group. The talk gives some background on KVM, and shows how to get KVM working on a server that supports processor virtualization extensions. The slides are available on my website, and I will try to get them linked in the presentations section of the ALE website. Thanks to everyone who came out! I had a blast!