I support a number of NFS clients, and periodically need to see which types of NFS operations are being performed. The nfsstat utility works pretty well for this, but sometimes I want to get a broader view of what is going on. When these situations arise, I like to fire up the nfswatch utility which displays network traffic along with a listing of NFS operations:
$ nfswatch
monty Tue May 12 20:47:21 2009 Elapsed time: 00:01:29
Interval packets: 34 (network) 34 (to host) 0 (dropped)
Total packets: 4482 (network) 4482 (to host) 0 (dropped)
Monitoring packets from interface bge0
int pct total int pct total
NFS3 Read 22 65% 3459 TCP Packets 33 97% 4292
NFS3 Write 5 15% 422 UDP Packets 0 0% 21
NFS Read 0 0% 0 ICMP Packets 1 3% 62
NFS Write 0 0% 0 Routing Control 0 0% 0
NFS Mount 0 0% 0 Addr Resolution 0 0% 107
Port Mapper 0 0% 1 Rev Addr Resol 0 0% 0
RPC Authorization 0 0% 0
Other RPC Packets 0 0% 0 Other Packets 0 0% 0
22 NFS Procedures
Procedure int pct total completed avg(msec) std dev max resp
ACCESS 4 15% 284
CREATE 0 0% 0
GETATTR 18 67% 2991
LINK 0 0% 0
LOOKUP 0 0% 149
MKDIR 0 0% 0
MKNOD 0 0% 0
NULLPROC 0 0% 0
READ 0 0% 21
READDIR 0 0% 12
READDIRPLUS 0 0% 2
READLINK 0 0% 0
REMOVE 0 0% 0
RENAME 0 0% 0
RMDIR 0 0% 0
SETATTR 5 19% 422
SYMLINK 0 0% 0
nfswatch>
This is an awesome tool, and it runs well on both Linux and Solaris hosts.
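nfswatch gives you the live view above; when I just want a one-shot summary, the same percentage math in its procedure pane is easy to reproduce by post-processing counter output. Here is a rough sketch — the counters below are fabricated sample data standing in for real `nfsstat -s` output, whose exact layout varies by platform, so the parsing would need adjusting for real use:

```shell
# Summarize per-operation NFS counters as percentages, similar to
# nfswatch's procedure pane.  The counters here are made-up sample
# data; a real script would parse actual nfsstat output instead.
cat <<'EOF' > /tmp/nfsops.$$
getattr 2991
access 284
setattr 422
lookup 149
read 21
readdir 12
EOF

# Sum the counters, then print each op's share of the total,
# sorted with the busiest operation first.
summary=$(awk '{ count[$1] = $2; total += $2 }
  END {
    for (op in count)
      printf "%-12s %6d %5.1f%%\n", op, count[op], 100 * count[op] / total
  }' /tmp/nfsops.$$ | sort -k2 -rn)

echo "$summary"
rm -f /tmp/nfsops.$$
```

With the sample counters above, getattr dominates at roughly 77% of all operations, which matches the kind of getattr-heavy profile you typically see on metadata-intensive NFS workloads.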
This putback from Casper should make all Solaris admins happy:
Repository: /hg/onnv/onnv-gate
Latest revision: 9bff8d14ecc3a8f5d9fd0ee4a5c8a372dd688cd3
Total changesets: 1
Log message:
PSARC 2009/173 Fasttrack for turbo-charging SVr4 packaging
6820054 Turbocharged SVr4 package commands [PSARC 2009/173]
I played around with KVM live migration over the weekend, and it worked flawlessly. When I was first getting the environment (i.e., /etc/libvirt/libvirtd.conf, /etc/sysconfig/libvirt, SASL credentials, etc.) set up to allow migrations to work, I was greeted with the following error:
virsh # migrate --live kvmnode1 qemu+tcp://thecrue/system
error: invalid argument in only tcp URIs are supported for KVM migrations
This message wasn’t the most descriptive error I’ve ever read, so I turned to the libvirt source to see what was going on. I saw the following comment:
/* Prepare is the first step, and it runs on the destination host. */
And immediately fired up virsh to double check the migrate syntax I was using:
$ virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # help migrate
.....
migrate [--live] <domain> <desturi> [<migrateuri>] [<dname>]
I had accidentally run the migrate command from the destination host instead of the source host. Once I knew what was going on, I fired off the migrate from the source host and everything worked as expected:
virsh # migrate --live kvmnode1 qemu+tcp://thecrue/system
virsh #
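For reference, the destination URI breaks down into a driver, an optional transport, and the remote host. Here is a throwaway sketch that pulls a URI like the one above apart with shell parameter expansion — `parse_uri` is my own illustrative helper, not part of virsh, which does its own parsing:

```shell
# Split a libvirt connection URI such as qemu+tcp://thecrue/system
# into driver, transport, and host.  Illustrative only -- this is
# just to show the URI structure, not how libvirt parses it.
parse_uri() {
  uri=$1
  scheme=${uri%%://*}            # e.g. qemu+tcp
  rest=${uri#*://}               # e.g. thecrue/system
  driver=${scheme%%+*}           # e.g. qemu
  case $scheme in
    *+*) transport=${scheme#*+} ;;   # e.g. tcp
    *)   transport=local ;;          # no "+transport" means local
  esac
  host=${rest%%/*}               # e.g. thecrue
  echo "driver=$driver transport=$transport host=$host"
}

parse_uri qemu+tcp://thecrue/system
```

Running this against `qemu+tcp://thecrue/system` reports the qemu driver, the tcp transport, and thecrue as the target host — the three pieces libvirtd on the destination needs to accept the incoming migration.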
I really dig KVM, and have recently become somewhat smitten with OpenVZ. Both technologies rock, and I hope to write about my experiences with each sometime in the future.
In my previous post, I mentioned how the mcelog utility can be used to detect hardware problems. Mcelog relies on the /dev/mcelog device being present, which requires the kernel to be built with the following options:
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
To enable these, you can select the following options once you run ‘make menuconfig’:
[*] Machine Check Exception
      [*] Intel MCE features
      [*] AMD MCE features
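If you aren't sure whether your running kernel was built with these options, you can grep its config file. Here is a small sketch — to keep it self-contained it checks a fabricated sample config written to /tmp, but in practice you would point it at /boot/config-$(uname -r) or the output of `zcat /proc/config.gz`:

```shell
# Verify that the MCE-related options mcelog needs are enabled in a
# kernel config.  The sample config below is fabricated so the check
# is self-contained; swap in your real kernel config file.
config=/tmp/sample-config.$$
cat <<'EOF' > "$config"
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
# CONFIG_X86_MCE_THRESHOLD is not set
EOF

missing=0
for opt in CONFIG_X86_MCE CONFIG_X86_MCE_INTEL \
           CONFIG_X86_MCE_AMD CONFIG_X86_MCE_THRESHOLD; do
  if grep -q "^${opt}=y" "$config"; then
    echo "$opt: enabled"
  else
    echo "$opt: MISSING"
    missing=$((missing + 1))
  fi
done
rm -f "$config"
echo "$missing option(s) missing"
```

In the sample config above, CONFIG_X86_MCE_THRESHOLD is deliberately left unset, so the script flags one missing option; against a properly configured kernel all four checks should report enabled.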
One feature I really liked in Solaris 10 was SMF. It uses service manifests to describe the services on a system and automatically respawns them should they die off. It handles dependencies and restarts, and provides a single unified command set (svcs, svccfg, and svcadm) for managing services.
It looks like Linux has started to integrate some of these features through replacement init daemons that not only restart defined services, but also improve boot time. I'm going to check out Initng and will post some further findings.