Testing out live CD distributions with KVM

I read over the latest KVM putback log last night and saw that KVM now supports booting from ISO images that are accessible via HTTP (it uses libcurl under the covers). This is pretty fricking cool: it allows you to boot in recovery mode without requiring local media or PXE, and it provides a super easy way to play around with live CD distributions. To use this feature, you will first need to install KVM build 87 from source. Once that is complete, you can add the URL to your qemu-kvm command line:

$ qemu-kvm -cdrom http://disarm.prefetch.net/isos/fedoralivecd.dvd …

You can also fire up virsh and “edit” the URL into the XML data. KVM is pretty rad, and it’s amazing how many cool features the KVM developers keep kicking out!
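For the virsh route, the CD-ROM becomes a network-backed disk in the domain XML. Here is a hedged sketch of what that element looks like; the element names are from libvirt's network-disk schema, and the hostname and ISO path are placeholders, so double-check against the libvirt version you are running:

```
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source protocol='http' name='/isos/livecd.iso'>
    <host name='example.com' port='80'/>
  </source>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
```

Running “virsh edit” on the domain and dropping in a stanza like this should get you the same HTTP-backed boot device as the command line option.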

Accessing the Python help facility from the Python shell

Python has a ton of useful modules, and the built-in help facility is extremely useful for gaining quick access to a description of methods in a given module. Once a module is imported with import:

>>> import pexpect

You can run dir(MODULE_NAME) to view the list of methods in the module:

>>> dir(pexpect)
['EOF', 'ExceptionPexpect', 'TIMEOUT', '__all__', '__builtins__', '__doc__', '__file__', '__loader__', '__name__', '__revision__', '__version__', 'errno', 'fcntl', 'os', 'pty', 're', 'resource', 'run', 'searcher_re', 'searcher_string', 'select', 'signal', 'spawn', 'split_command_line', 'string', 'struct', 'sys', 'termios', 'time', 'traceback', 'tty', 'types', 'which']

To get help on a specific method, you can pass the module and method name to the built-in help function:

>>> help(pexpect.spawn)
Help on class spawn in module pexpect:

class spawn(__builtin__.object)
 |  This is the main class interface for Pexpect. Use this class to start
 |  and control child applications.
 |  
 |  Methods defined here:

   .....

This is pretty sweet, and makes coding in Python super easy.
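The same help text is also available without starting an interactive interpreter, via the pydoc module. A quick sketch (I'm using python3 here; on older systems the interpreter may just be called python, and pexpect needs to be installed for the keyword search to find it):

```shell
# pydoc renders the same text help() shows in the interpreter,
# so you can grep or page through it from the shell.
python3 -m pydoc os.path | head -5

# -k searches the one-line summaries of installed modules by keyword.
python3 -m pydoc -k pexpect
```

This is handy when you just want to check a function signature without firing up a Python shell.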

How does crossbow deal with MAC address conflicts?

When I gave my presentation on Solaris network virtualization a few months back, one of the folks in the audience asked me how Crossbow deals with duplicate MAC address detection. I didn’t have a solid answer for the gentleman who asked, but thanks to Nicolas Droux from Solaris kernel engineering, now I do. Here is what Nicolas had to say about this topic on the network-discuss mailing list:

“When a VNIC is created we have checks in place to ensure that the address doesn’t conflict with another MAC address defined on top of the same underlying NIC. When the MAC address is generated randomly, and the generated MAC address conflicts with another VNIC, we currently fail the whole operation. We should try another MAC address in that case, transparently to the user; I filed CR 6853771 to track this.

To reduce the risk of MAC address conflicts with physical NICs on other hosts on the network, we use by default an OUI with the local bit set for random MAC addresses, and we let the administrator use a different OUI or prefix if desired. We currently don’t have a mechanism in place to perform automatic MAC address duplication detection between multiple hosts.”

I was under the impression that Crossbow used the ARP cache and DAD code to verify that a MAC address wasn’t in use on the network, but that doesn’t appear to be the case. Given this new information, I will need to modify my tools to assign a static MAC that is based off of the address assigned to the virtual NIC. Thanks, Nicolas, for the awesome reply to the list!
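For anyone scripting around the same issue, generating an address in the locally administered range is the same trick Crossbow uses for its auto-generated VNICs. A minimal sketch (the dladm usage in the note below is from memory, so verify the options against your release):

```shell
# Generate a random MAC address in the locally administered range.
# A first octet of 02 has the local bit (bit 1) set and the multicast
# bit clear, so it can never collide with a vendor-assigned OUI.
random_mac() {
    printf '02:%02x:%02x:%02x:%02x:%02x\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
        $((RANDOM % 256)) $((RANDOM % 256))
}

random_mac
```

A fixed address generated this way can then be handed to dladm when the VNIC is created, along the lines of “dladm create-vnic -l e1000g0 -m <mac> vnic1”.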

Implementing locks in shell scripts

I have been working on a shell script that manages lxc-containers, and came across a use case where it is possible for two yum processes to interfere with each other. To ensure that only one yum process runs at a time, I implemented file-based locks using flock(1). Flock makes this super easy, since it has a “-x” option to create an exclusive lock (this is the default), and a “-n” option which causes flock to exit with a return code of 1 if it can’t obtain the lock. This allows code similar to the following to be used to protect sensitive areas:

 
(
    flock -n -x 200

    if [ $? -ne 0 ]; then
        echo "ERROR: Unable to acquire the yum lock. Is another yum running?"
        exit 1
    fi

    # Do yum stuff here

) 200> "${TMP_DIR}/yum.lck"

In my case this tiny bit of code ensures that only one yum process is able to run, which helps keep my package database in a sane state.
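As an aside, flock(1) can also wrap a single command directly, which avoids the subshell-and-file-descriptor dance when only one command needs protecting. A small sketch (the lock file path and echoed command are just placeholders):

```shell
# flock opens the lock file itself, runs the -c command while holding
# an exclusive lock, and -n makes a second caller fail fast instead
# of queueing behind the current lock holder.
flock -n -x /tmp/yum.lck -c "echo yum would run here" \
    || echo "ERROR: another yum appears to be running"
```

Which form to use is mostly taste; the subshell form is nicer when several commands need to run under the same lock.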

Downloading Fedora and CentOS source code with yumdownloader

One of the things I really like about Linux is the availability of source code for the kernel and userland applications. If I encounter an issue where a program is misbehaving for some reason, I can grab a source RPM from a network repository and start poking around to see what is going on. Fedora and CentOS provide the yumdownloader utility to download binary RPMs, as well as source RPMs when the “--source” option is specified:

$ yumdownloader --source kernel

Loaded plugins: refresh-packagekit
Enabling updates-source repository
Enabling fedora-source repository
kernel-2.6.31-0.17.rc0.git15.fc12.src.rpm                                    |  67 MB     05:28     

Once a source RPM is downloaded, you can install it and begin looking through the source. Sweet!

Installing lxc-containers on Fedora hosts

During the Q&A period of my KVM presentation the other night, the world famous Mike Warfield tipped me off to the lxc-container project. Lxc-containers are a lightweight virtualization solution similar to Solaris zones and OpenVZ, and they allow you to create one or more virtual execution environments on a Linux server. It appears that all of the kernel plumbing needed to support lxc-containers has been integrated into the latest Linux kernels (check out the various namespace articles on LWN for details), and distribution support for lxc-containers is starting to trickle in.

To install lxc-containers from the source RPM on SourceForge, you will first need to make sure you have the required dependencies. On my 64-bit Fedora 11 host, I needed to install the 32-bit libcap packages to get around a “please install libcap-devel” configure error:

$ yum install libcap-devel.i586

Once the dependencies are met, you can install the lxc source RPM (which contains the source code for the lxc management tools) from SourceForge:

$ rpm -ivh lxc-0.6.2-1.src.rpm

warning: user dlezcano does not exist - using root
warning: group dlezcano does not exist - using root
   1:lxc                    ########################################### [100%]
warning: user dlezcano does not exist - using root
warning: group dlezcano does not exist - using root

To build RPMs from the lxc spec file, you can run rpmbuild with the build binary (“-bb”) option:

$ rpmbuild -vv -bb ~/rpmbuild/SPECS/lxc.spec

If this completes successfully, you can install the RPMs it created:

$ cd /root/rpmbuild/RPMS/x86_64

$ rpm -ivh lxc*.rpm

Preparing...                ########################################### [100%]
   1:lxc-devel              ########################################### [ 33%]
   2:lxc                    ########################################### [ 67%]
   3:lxc-debuginfo          ########################################### [100%]

Now that everything is installed, you can use the lxc commands to create new containers. LXC containers look pretty sweet, and I will write up a separate post later this week that describes how to manage lxc-containers on Fedora hosts. A huge shout out to Mike for tipping me off to this!
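In the meantime, to give a flavor of what container creation involves, here is a minimal sketch of a container configuration file. The option names come from the lxc.conf documentation of the 0.6.x era and the hostname and bridge name are placeholders, so check everything against the version you installed:

```
# test01.conf -- minimal container definition (names are examples)
lxc.utsname = test01          # hostname inside the container
lxc.network.type = veth       # virtual ethernet pair to the host
lxc.network.link = br0        # host bridge to attach to (placeholder)
lxc.network.flags = up        # bring the interface up at start
```

The file is then registered with something along the lines of “lxc-create -n test01 -f test01.conf”, and “lxc-info -n test01” reports the container state.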