Scalable storage for QEMU/KVM

While reading up on various scalable file systems I came across the sheepdog project. For those new to sheepdog, their website describes it as:

“Sheepdog is a distributed storage system for QEMU/KVM. It provides highly available block level storage volumes that can be attached to QEMU/KVM virtual machines. Sheepdog scales to several hundreds nodes, and supports advanced volume management features such as snapshot, cloning, and thin provisioning.”

This looks really cool, and I’m hoping to play around with it this weekend. I’m curious what experiences my readers have had with it.

Creating KVM guests with virt-install and qemu-kvm

In my KVM presentation, I discussed how to create KVM guests using the virt-install utility. To create a KVM guest, you can run virt-install with one or more options that control where the guest will be installed, how to install it, and how to structure the guest hardware profile. Here is one such example:

$ virt-install --connect qemu:///system \
   --name kvmnode1 \
   --ram 512 \
   --file /nfs/vms/kvmnode1.disk1 \
   --file /nfs/vms/kvmnode1.disk2 \
   --file-size 18 \
   --network=bridge:br0 \
   --accelerate \
   --pxe \
   --debug \
   --noautoconsole \
   --mac=54:52:00:53:20:15 \
   --nographics
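
Assuming the command above completes successfully, you can verify that the new guest exists by asking virsh to list the domains on the host (the output below is illustrative):

$ virsh --connect qemu:///system list --all
 Id Name                 State
----------------------------------
  1 kvmnode1             running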

Under the covers virt-install executes qemu-kvm (at least on RHEL-derived distributions), which is the process that is responsible for encapsulating the KVM guest in userspace. To start a guest using qemu-kvm directly, you can execute something similar to the following:

$ /usr/bin/qemu-kvm -M pc \
   -m 1024 \
   -smp 1 \
   -name kvmnode1 \
   -monitor stdio \
   -boot n \
   -drive file=/nfs/vms/kvmnode1,if=ide,index=0 \
   -net nic,macaddr=54:52:00:53:20:00,vlan=0 \
   -net tap,script=no,vlan=0,ifname=tap0 \
   -serial pty \
   -nographic \
   -incoming tcp:0:4444

While virt-install is definitely easier to use, there are times when you may need to start a guest manually using qemu-kvm (certain options aren’t available through virt-install, so understanding how qemu-kvm works is key!). Viva la KVM!!!

Testing out live CD distributions with KVM

I read over the latest KVM putback log last night, and saw that KVM now supports booting from ISO images that are accessible via HTTP (it uses libcurl under the covers). This is pretty fricking cool: it allows you to boot in recovery mode without requiring local media or PXE, and it provides a super easy way to play around with live CD distributions. To use this feature, you will first need to install KVM build 87 from source. Once that is complete, you can add the URL to your qemu-kvm command line:

$ qemu-kvm -cdrom …

You can also fire up virsh and “edit” the URL into the XML data. KVM is pretty rad, and it’s amazing how many cool features the KVM developers keep kicking out!
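
To make that command line concrete, here is a hedged sketch of what the invocation might look like (the ISO URL and memory size are made up for illustration):

$ qemu-kvm -m 512 -boot d \
   -cdrom http://example.com/isos/livecd.iso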

Getting To Know the Linux Kernel Virtualization Machine (KVM)

I gave a talk last night on the Linux Kernel Virtual Machine (KVM) at the local Linux users group. The talk gives some background on KVM, and shows how to get KVM working on a server that supports processor virtualization extensions. The slides are available on my website, and I will try to get them linked to the presentation section of the ALE website. Thanks to everyone who came out! I had a blast!

Redirecting the CentOS and Fedora Linux console to a serial port (virsh console edition)

During my KVM testing, I wanted to be able to use the virsh “console” command to access the console of my guests. This would make remote management a whole lot easier, since the default remote management interface (VNC) was a bit of overkill. To get virsh to allow me to console in, the first thing I did was update menu.lst to get grub to write to the serial port:

serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1
terminal --timeout=10 serial

In addition to the two items above, you also need to make sure you disable the splash screen (on CentOS and Fedora this means removing the rhgb and quiet arguments from the kernel line). Next up, I had to adjust the kernel entries in menu.lst to write to the serial port (ttyS0). Here is a sample entry that does just this:

title CentOS (2.6.18-128.1.10.el5)
	root (hd0,0)
	kernel /boot/vmlinuz-2.6.18-128.1.10.el5 ro root=LABEL=/ console=ttyS0
	initrd /boot/initrd-2.6.18-128.1.10.el5.img

The items listed above will configure grub and the kernel to write to the serial port, but you will not be able to login since a getty process isn’t configured to monitor the serial port. To fix this, you can add the serial device name to /etc/inittab (the entry below assumes you want to use agetty instead of one of the other getty implementations):

S0:12345:respawn:/sbin/agetty ttyS0 115200
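
After adding the entry, you can have init re-read /etc/inittab without rebooting, and then verify that an agetty process is watching the serial port:

$ telinit q
$ ps -ef | grep agetty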

After I configured init to spawn a getty process, I had to tell init that it was ok for root to login through ttyS0. Privileged logins are managed through /etc/securetty, which contains a list of devices that root is approved to log in on. To allow root logins over ttyS0, I appended it to the file:

echo "ttyS0" >> /etc/securetty

Once the items listed above were in place, I was able to fire up virsh and access the console through SSH:

$ virsh

virsh # console kvmnode1
Connected to domain kvmnode1
Escape character is ^]

CentOS release 5.3 (Final)
Kernel 2.6.18-128.1.10.el5 on an x86_64

kvmnode1 login: 

This is amazingly cool, and makes remote management super easy.

Create sasl accounts for libvirt

I have been playing around with libvirt, which is a virtualization toolkit that sits on top of the native virtualization technologies in various operating systems. Libvirt provides built-in support for managing remote nodes, which is useful when you need to enable one or more virtualization properties, or when you need to perform some type of administrative action (e.g., migrate a guest to another machine). To allow secure access, libvirt supports transport layer security (TLS) as well as authentication. TLS is typically used to secure the network transport, and SASL is used to provide authentication.
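
As a quick example of the remote support, virsh can be pointed at a remote node with a connection URI (the hostname below is made up):

$ virsh -c qemu+ssh://root@kvmhost1/system list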

Libvirt’s SASL support currently covers the DIGEST-MD5 and GSSAPI (Kerberos) mechanisms. To configure libvirt to use the DIGEST-MD5 mechanism, you will need to add a user to the SASL database. This can be accomplished with the saslpasswd2 command:

$ saslpasswd2 -a libvirt virt
Password:
Again (for verification):

The “-a” option specifies the application name to tie the user to, and the name passed to the command is the account to create. To view the list of users in the SASL database, you can pass the name of the SASL password database (a Berkeley DB database) to the sasldblistusers2 command:

$ sasldblistusers2 -f /etc/libvirt/passwd.db
virt@thecrue: userPassword

After the accounts are created, you will need to make sure the digest-md5 mechanism is enabled in the /etc/sasl2/libvirt.conf configuration file:

# Default to a simple username+password mechanism
mech_list: digest-md5

Next you will need to edit /etc/libvirt/libvirtd.conf to enforce SASL authentication. If you are using a TCP socket to connect to your hosts (not recommended), you can update the auth_tcp directive:

# Change the authentication scheme for TCP sockets.
# If you don't enable SASL, then all TCP traffic is cleartext.
# Don't do this outside of a dev/test scenario. For real world
# use, always enable SASL and use the GSSAPI or DIGEST-MD5
# mechanism in /etc/sasl2/libvirt.conf
auth_tcp = "sasl"

If you are using TLS over TCP to connect to your hosts (this is recommended, since the user credentials will be encrypted and not passed to the remote nodes as plain text), you can update the auth_tls directive:

# Change the authentication scheme for TLS sockets.
# TLS sockets already have encryption provided by the TLS
# layer, and limited authentication is done by certificates
# It is possible to make use of any SASL authentication
# mechanism as well, by using 'sasl' for this option
auth_tls = "sasl"

After SASL is active, libvirt will prompt you for a user account and password each time a new connection is established:

$ virsh

virsh # migrate --live kvmnode2 qemu+tls://disarm/system
Please enter your authentication name:virt
Please enter your password:

Using TLS requires a bit more work to get operational, so I will leave that for a separate post. Libvirt is pretty sweet, and when KVM is fully integrated life will be grand!!