Locating WWPNs on Linux servers

I do a lot of storage-related work, and often need to grab WWPNs to zone hosts and mask storage. To gather the WWPNs I would often use the following script on my RHEL and CentOS servers:

#!/bin/sh

# Print the adapter model and WWPN for each Fibre Channel host adapter
FC_PATH="/sys/class/fc_host"

for fc_adapter in "${FC_PATH}"/*
do
    echo "${fc_adapter}:"
    NAME=$(awk '{print $1}' "${fc_adapter}/symbolic_name")
    echo "  ${NAME} $(cat "${fc_adapter}/port_name")"
done

This produced consolidated output similar to:

$ print_wwpns

/sys/class/fc_host/host5:
  QLE2562 0x21000024ff45afc2
/sys/class/fc_host/host6:
  QLE2562 0x21000024ff45afc3

While doing some research I came across sysfsutils, which contains the incredibly useful systool utility. This nifty little tool allows you to query sysfs values, and can be used to display all of the sysfs attributes for your FC adapters:

$ systool -c fc_host -v

Class = "fc_host"

  Class Device = "host5"
  Class Device path = "/sys/class/fc_host/host5"
    fabric_name         = "0x200a547fee9e5e01"
    issue_lip           = 
    node_name           = "0x20000024ff45afc2"
    port_id             = "0x0a00c0"
    port_name           = "0x21000024ff45afc2"
    port_state          = "Online"
    port_type           = "NPort (fabric via point-to-point)"
    speed               = "8 Gbit"
    supported_classes   = "Class 3"
    supported_speeds    = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
    symbolic_name       = "QLE2562 FW:v5.06.03 DVR:v8.03.07.09.05.08-k"
    system_hostname     = ""
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              = 

  Class Device = "host6"
  Class Device path = "/sys/class/fc_host/host6"
    fabric_name         = "0x2014547feeba9381"
    issue_lip           = 
    node_name           = "0x20000024ff45afc3"
    port_id             = "0x9700a0"
    port_name           = "0x21000024ff45afc3"
    port_state          = "Online"
    port_type           = "NPort (fabric via point-to-point)"
    speed               = "8 Gbit"
    supported_classes   = "Class 3"
    supported_speeds    = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
    symbolic_name       = "QLE2562 FW:v5.06.03 DVR:v8.03.07.09.05.08-k"
    system_hostname     = ""
    tgtid_bind_type     = "wwpn (World Wide Port Name)"
    uevent              = 

This output is extremely useful for storage administrators, and provides everything you need in a nice consolidated form. +1 for systool!
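If you only need the WWPNs themselves, systool can also narrow its output to a single attribute. On the sysfsutils builds I have used, the "-A" flag does this, e.g.:

$ systool -c fc_host -A port_name

This limits the output to the port_name value for each fc_host device, which is handy when you are pasting WWPNs into zoning or masking tools.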

Configuring yum to keep more than three kernels

When you run ‘yum update’ on your Fedora system, the default yum configuration keeps the last three kernels. This allows you to fall back to a previous working kernel if you encounter an error or a bug. The number of kernels to keep is controlled by the installonly_limit option, which is thoroughly described in the yum.conf(8) manual page:

installonly_limit Number of packages listed in installonlypkgs to keep
                  installed at the same time. Setting to 0 disables this
                  feature. Default is '0'. Note that this functionality
                  used to be in the "installonlyn" plugin, where this option
                  was altered via tokeep. Note that as of version 3.2.24,
                  yum will now look in the yumdb for a installonly attribute
                  on installed packages. If that attribute is "keep", then
                  they will never be removed.

If you need to keep more than 3 kernels, you can increase the value of installonly_limit in /etc/yum.conf.
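For example, to keep five kernels instead of three, the [main] section of /etc/yum.conf would contain something along these lines (the value 5 is just an illustration; use whatever suits your environment):

# /etc/yum.conf (excerpt)
[main]
installonly_limit=5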

Purging the yum header and package cache

Most of the Linux distributions that use the yum package manager cache headers and packages by default. These files are stored in the directory identified by the cachedir option, which defaults to /var/cache/yum on all of the hosts I checked. On my Fedora 16 desktop this directory had grown to 167MB:

$ du -sh /var/cache/yum
167M /var/cache/yum

You can clean out the cache directory with the yum "clean" option:

$ yum clean all

If disk space is an issue on your systems, you can also set the "keepcache" option to 0. This will remove cached files after they are installed, as noted in the yum.conf(8) manual page:

keepcache Either `1' or `0'. Determines whether or not yum keeps
          the cache of headers and packages after successful installation.
          Default is '1' (keep files)
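
In /etc/yum.conf this looks something like the following minimal excerpt (the rest of the [main] section stays as-is):

# /etc/yum.conf (excerpt)
[main]
keepcache=0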

This is a useful option for hosts that have limited disk space. Nice!

Sudo insults — what a fun feature!

I think humor plays a big role in life, especially the life of a SysAdmin. This weekend I was cleaning up some sudoers files and came across a reference to the "insults" option in the documentation. Here is what the manual says:

insults If set, sudo will insult users when they enter an incorrect password. This flag is off by default.

This of course piqued my curiosity, and the description in the online documentation got me wondering what kind of insults sudo would spit out. To test this feature out I compiled sudo with the complete set of insults:

$ ./configure --prefix=/usr/local --with-insults --with-all-insults

$ gmake

$ gmake install

To enable insults I added “Defaults insults” to my sudoers file. This resulted in me laughing myself silly:

$ sudo /bin/false

Password: 
Take a stress pill and think things over.
Password: 
This mission is too important for me to allow you to jeopardize it.
Password: 
I feel much better now.
sudo: 3 incorrect password attempts

$ sudo /bin/false

Password: 
Have you considered trying to match wits with a rutabaga?
Password: 
You speak an infinite deal of nothing
Password: 
You speak an infinite deal of nothing
sudo: 3 incorrect password attempts
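
One note if you try this at home: sudoers Defaults entries can also be scoped, so you do not have to taunt every user on the box. Something along these lines (the wheel group is just an example) enables insults only for members of that group:

# /etc/sudoers (excerpt)
Defaults:%wheel insults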

Life without laughter is pretty much a useless life. You can quote me on that one! ;)

Summarizing system call activity on Solaris hosts

I previously described how to use strace to summarize system call activity on Linux hosts. Solaris provides similar data with the truss "-c" option:

$ truss -c -p 26396
syscall               seconds   calls  errors
read                     .000       3
write                   7.671   25038
time                     .000      21
stat                     .000      15
lseek                    .460   24944
getpid                   .000      15
kill                     .162    7664
sigaction                .004     237
writev                   .000       3
lwp_sigmask              .897   49887
pollsys                  .476    7667
                     --------  ------   ----
sys totals:             9.674  115494      0
usr time:               2.250
elapsed:              180.940

The output contains the total elapsed time, a breakdown of user and system time, the number of errors that occurred, the number of times each system call was invoked, and the total accrued time for each system call. This has numerous uses, and allows you to easily see how a process is interacting with the kernel. Sweet!
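
The same summary is available for commands you launch, not just processes you attach to, which is handy for quick one-off tests. For example (redirecting the command's own output so only the truss summary lands on the terminal):

$ truss -c ls /etc > /dev/null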

Summarizing system call activity on Linux systems

Linux has a gazillion debugging utilities available. One of my favorite tools for debugging problems is strace, which allows you to observe the system calls a process is making in real time. Strace also has a "-c" option to summarize system call activity:

$ strace -c -p 28009

Process 28009 attached
Process 28009 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.17    0.001278           2       643           select
 14.80    0.000242           1       322           read
  7.03    0.000115           0       322           write
------ ----------- ----------- --------- --------- ----------------
100.00    0.001635                  1287           total

The output contains the percentage of time spent in each system call, the total time in seconds, the microseconds per call, the total number of calls, a count of the number of errors and the type of system call that was made. This output has numerous uses, and is a great way to observe how a process is interacting with the kernel. Good stuff!
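
If the process you are watching forks children, the summary for the parent alone can be misleading. Adding strace's "-f" option follows child processes, and "-o" sends the report to a file for later review; using the PID from the example above:

$ strace -f -c -o /tmp/strace.summary -p 28009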
