Built-in memory testing in the Linux 2.6.26 kernel

I have been using memtest86 and a custom-built hardware testing image based on OpenSolaris, FMA and Sun VTS for quite some time, and have had fantastic success with them. I just learned that the Linux kernel developers added built-in memory testing support to the Linux 2.6.26 kernel:

“Memtest is a commonly used tool for checking your memory. In 2.6.26 Linux is including its own in-kernel memory tester. The goal is not to replace memtest; in fact, this tester is much simpler and less capable than memtest, but it’s handy to have a built-in memory tester on every kernel. It’s enabled easily with the “memtest” boot parameter.”

This is super handy, especially when you don’t have memtest86 and company readily available. Nice!
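For reference, here is a minimal sketch of how the parameter might be passed on a GRUB menu.lst kernel line; the title, kernel image path and root device are placeholders for your system:

```shell
# Hypothetical GRUB menu.lst entry enabling the in-kernel memory tester.
# "memtest=4" asks the kernel to run up to four test patterns at boot;
# adjust the kernel path and root device for your machine.
title CentOS (2.6.26)
root (hd0,0)
kernel /boot/vmlinuz-2.6.26 root=/dev/sda1 ro memtest=4
```

The results show up in the kernel log, and regions that fail are reserved so the kernel won't hand them out to anything else.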

Understanding memory usage on Linux hosts just got easier

While looking at the features that were added to the Linux 2.6.2X kernels, I came across this gem on the kernelnewbies website:

“Measuring how much memory processes are using is more difficult than it looks, especially when processes are sharing the memory used. Features like /proc/$PID/smaps (added in 2.6.14) help, but they have not been enough. 2.6.25 adds new statistics to make this task easier. A new /proc/$PID/pagemap file is added for each process. In this file the kernel exports (in binary format) the physical location of each page used by the process. Comparing this file with the files of other processes shows which pages they are sharing. Another file, /proc/kpagemaps, exposes another kind of statistics about the pages of the system. The author of the patch, Matt Mackall, proposes two new statistical metrics: “proportional set size” (PSS) – divide each shared page by the number of processes sharing it – and “unique set size” (USS), a count of the pages that are not shared. The first statistic, PSS, has also been added to each file in /proc/$PID/smaps. In this HG repository you can find some sample command line and graphical tools that exploit all of these statistics.”

This is awesome, and I think having USS and PSS statistics will greatly help admins understand how memory is being used on their systems. If you want to read more about this, check out the following LWN article.
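The PSS numbers in /proc/$PID/smaps are easy to play with from the command line. Here is a quick sketch (assuming a 2.6.25 or later kernel) that totals a process's PSS; `$$` is just the current shell's PID, so substitute any PID you care about:

```shell
# Sum the "Pss:" lines from a process's smaps file to get its total
# proportional set size in kilobytes (assumes a 2.6.25+ kernel).
awk '/^Pss:/ { kb += $2 } END { print kb " kB total PSS" }' /proc/$$/smaps
```

Because each shared page is divided by the number of processes sharing it, summing PSS across every process on the box approximates total memory in use, which is exactly what makes it so handy.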

dd-wrt is awesome!

My good friend and fellow blogging partner Mike Svoboda told me about dd-wrt a few weeks ago. Once Mike showed me what dd-wrt was capable of, I knew I needed to deploy it somewhere. I didn’t have a spare access point or router to test dd-wrt, so I decided to pick up a WRT54GL on Newegg (I got mine for $25 off list price). As a long-time OpenBSD and Soekris fan, I was very skeptical that dd-wrt would be able to stack up to what I currently had running (Soekris net4501+OpenBSD+PF) at home.

To get dd-wrt working, I used the installation guide provided on the dd-wrt wiki. Once the standard image was flashed and operational, I logged into the dd-wrt web interface and configured the router to meet my needs. Not only was I amazed with the breadth of features that are available in dd-wrt, but I was totally amazed at the monitoring capabilities that are available out of the box. Here is a screenshot from the bandwidth monitoring tab:


dd-wrt performance graph

dd-wrt has everything I need and more, and I have now completely replaced my OpenBSD router. Here are my favorite dd-wrt features, sorted into a top ten list:

1. Super stable (it runs a Linux kernel, which is an added bonus)!

2. Lots of performance graphs and statistics.

3. Support for wireless A/B/G/N + wired Ethernet.

4. Incredible support through the forums (I have yet to use this, but the replies I’ve seen are killer).

5. Built-in support for various wireless security protocols (WPA, WPA2, 802.1X, etc.).

6. Functional DHCP server.

7. NAT and QOS support.

8. Built-in DNS caching.

9. Bridging and VLAN support.

10. ipkg, which allows you to add numerous 3rd party packages (samba, upnp servers, etc.) to the router.

This is only a subset of what dd-wrt can do, and I can’t speak highly enough of the product! Mike rocks for recommending this, and I am stoked that I have such a reliable device acting as my access point / Internet router (my previous APs were rather flaky, so hopefully you can see why I am so excited about having a stable device routing packets from my wireless and wired hosts to the Internet).

Fixing Solaris Cluster device ID (DID) mismatches

I had to replace a disk in one of my cluster nodes, and was greeted with the following message once the disk was swapped and I checked the devices for consistency:

$ cldevice check

cldevice:  (C894318) Device ID "snode2:/dev/rdsk/c1t0d0" does not match physical device ID for "d5".
Warning: Device "snode2:/dev/rdsk/c1t0d0" might have been replaced.



To fix this issue, I used the cldevice utility’s repair option:

$ cldevice repair
Updating shared devices on node 1
Updating shared devices on node 2


Once the repair operation updated the devids, cldevice ran cleanly:

$ cldevice check

Niiiiiiiiice!

Building clusters with shared disk using VMware Server 2.0

I have a couple of lab machines that are running VMware Server 2.X under 64-bit CentOS 5.2. VMware Server has a cool feature where you can create “clusters in a box.” The cluster in a box feature allows you to share a virtual disk between more than one virtual machine, and since it supports SCSI persistent reservations, you can truly simulate a real cluster. I have used this to deploy Oracle RAC, Solaris Cluster and Red Hat Cluster Server in my lab environment.

Based on the cluster in a box documentation on the VMware website, sharing a disk between multiple nodes can be achieved by creating a virtual disk and then importing that disk into each virtual machine. Once the disk is imported, you can add (or modify, if they exist) the “scsiX.sharedBus = virtual” (X should match the controller that is used to host the shared disk) and “disk.locking = false” directives to the shared disk stanzas in each virtual machine’s vmx file. Here is a sample from a node I set up this weekend:

scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/data/vmware/shared/solariscluster/Disk1.vmdk"
disk.locking = "false"

This has served me well in my lab environment for quite some time, though I provide ZERO guarantees that it will work in yours (and there is always the possibility that sharing disk between multiple nodes will corrupt data). I am documenting this here as a reference for myself, though feel free to use this information at your own risk.
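If you need to create the shared disk from scratch, the vmware-vdiskmanager utility that ships with VMware Server can do it. The path and size below are just examples; as I understand it, a shared disk should be preallocated rather than growable, which is what the "-t 2" disk type selects:

```shell
# Create a 1 GB preallocated single-file virtual disk for the lsilogic
# adapter. The size and path are placeholders; adjust for your setup.
vmware-vdiskmanager -c -s 1GB -a lsilogic -t 2 \
    /data/vmware/shared/solariscluster/Disk1.vmdk
```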

The end of the Solaris Cluster /globaldevices file system

I have been working with [Sun|Solaris] for as long as I can recall. One thing that always annoyed me was the need to have a 512MB file system devoted to /globaldevices. With Sun Cluster 3.2 update 2, this is no longer the case:

  >>> Global Devices File System <<<

    Each node in the cluster must have a local file system mounted on 
    /global/.devices/node@ before it can successfully participate 
    as a cluster member. Since the "nodeID" is not assigned until 
    scinstall is run, scinstall will set this up for you.

    You must supply the name of either an already-mounted file system or a
    raw disk partition which scinstall can use to create the global 
    devices file system. This file system or partition should be at least 
    512 MB in size.

    Alternatively, you can use a loopback file (lofi), with a new file 
    system, and mount it on /global/.devices/node@.

    If an already-mounted file system is used, the file system must be 
    empty. If a raw disk partition is used, a new file system will be 
    created for you.

    If the lofi method is used, scinstall creates a new 100 MB file system
    from a lofi device by using the file /.globaldevices. The lofi method 
    is typically preferred, since it does not require the allocation of a 
    dedicated disk slice.

    The default is to use /globaldevices.

    Is it okay to use this default (yes/no) [yes]? yes

    Testing for "/globaldevices" on "snode1" ... failed

/globaldevices is not a directory or file system mount point.
Cannot use "/globaldevices" on "snode1".

    Is it okay to use the lofi method (yes/no) [yes]?  yes



This is pretty sweet, and I am SOOOOO glad that /globaldevices as a separate file system is no more! Thanks go out to the Solaris Cluster team for making this wish list item a reality! :)