In my KVM presentation, I discussed how to create KVM guests using the virt-install utility. To create a KVM guest, you run virt-install with one or more options that control where the guest will be installed, how to install it, and how to structure the guest's hardware profile. Here is one such example:
$ virt-install --connect qemu:///system \
--name kvmnode1 \
--ram 512 \
--file /nfs/vms/kvmnode1.disk1 \
--file /nfs/vms/kvmnode1.disk2 \
--network=bridge:br0 \
--accelerate \
-s 18 \
--pxe \
-d \
--noautoconsole \
--mac=54:52:00:53:20:15 \
--nographics \
--nonsparse
Under the covers, virt-install executes qemu-kvm (at least on RHEL-derived distributions), which is the process responsible for encapsulating the KVM guest in userspace. To start a guest with qemu-kvm directly, you can execute something similar to the following:
$ /usr/bin/qemu-kvm -M pc \
-m 1024 \
-smp 1 \
-name kvmnode1 \
-monitor stdio \
-boot n \
-drive file=/nfs/vms/kvmnode1,if=ide,index=0 \
-net nic,macaddr=54:52:00:53:20:00,vlan=0 \
-net tap,script=no,vlan=0,ifname=tap0 \
-serial stdio \
-nographic \
-incoming tcp:0:4444
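Since the example above passes -monitor stdio, the QEMU monitor is available on the controlling terminal once qemu-kvm is running. Here is a quick sketch of a few standard monitor commands that come in handy for poking at a guest (output omitted):
(qemu) info status
(qemu) info network
(qemu) system_powerdown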
While virt-install is definitely easier to use, there are times when you may need to start a guest manually using qemu-kvm (certain options aren’t available through virt-install, so understanding how qemu-kvm works is key!). Viva la KVM!!!
I support a couple of yum repositories, and use the yum repository build instructions documented in my previous post to create my repositories. When I tried to apply the latest CentOS 5.3 updates to one of my servers last week, I noticed that I was getting a number of “Error performing checksum” errors:
$ yum repolist
Loaded plugins: fastestmirror
Determining fastest mirrors
Updates | 1.2 kB 00:00
primary.xml.gz | 376 kB 00:00
http://updates/repo/centos/5.3/updates/repodata/primary.xml.gz: [Errno -3] Error performing checksum
Trying other mirror.
primary.xml.gz | 376 kB 00:00
http://updates/repo/centos/5.3/updates/repodata/primary.xml.gz: [Errno -3] Error performing checksum
Trying other mirror.
Error: failure: repodata/primary.xml.gz from Updates: [Errno 256] No more mirrors to try.
After reading through the code in yumRepo.py, I noticed that the error listed above is usually generated when the checksum algorithm specified in the repomd.xml file isn’t supported. The createrepo utility uses the sha256 algorithm by default in Fedora 11 (I created my repositories on a Fedora 11 host), so I decided to create my repository using the sha1 algorithm instead:
$ createrepo -v -s sha1 /var/www/html/repo/centos/5.3/updates
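For reference, you can check which checksum type a repository was built with by grepping the checksum entries out of repomd.xml (the path below assumes the repository above):
$ grep 'checksum type=' /var/www/html/repo/centos/5.3/updates/repodata/repomd.xml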
Once I created the repository metadata using the sha1 algorithm, everything worked as expected:
$ yum clean all
Loaded plugins: fastestmirror
Cleaning up Everything
Cleaning up list of fastest mirrors
$ yum repolist
Loaded plugins: fastestmirror
Determining fastest mirrors
Updates | 1.0 kB 00:00
primary.xml.gz | 367 kB 00:00
Updates 634/634
repo id repo name status
Updates Updates enabled : 634
repolist: 634
This debugging experience made me realize two things: the default checksum algorithm that createrepo uses can differ between distribution releases (and older yum clients may not support it), and I had a h00t debugging this issue! I'm glad everything is working correctly now. Nice!
I have written about the yum package manager in the past, and it's one of the main reasons I use CentOS and Fedora Linux. Various third-party yum repositories are also available, giving you access to numerous packages that aren't in the stock distributions. This is great, but sometimes you want to create your own packages and distribute them to your clients. This is a piece of cake with yum, since you can create your own yum repositories.
To create your own yum repository, you will first need to install the yum-utils and createrepo packages:
$ yum install yum-utils createrepo
The createrepo package contains the createrepo utility, which is used to create repository metadata from a set of RPMs. The yum-utils package contains a couple of useful tools for managing repositories, including the verifytree and reposync commands. Once you have the tools installed, you will need to create a directory to store your RPMs and metadata:
$ mkdir -p /var/www/html/repo/centos/5.3/updates
After the directory is created, you can use your favorite tool to copy the RPMs to this directory:
$ rsync -a rsync://mirrors.usc.edu/centos/5.3/updates/x86_64/RPMS/ /var/www/html/repo/centos/5.3/updates/RPMS
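If you would rather pull packages from a repository that is already defined in yum (instead of syncing from an rsync mirror), the reposync utility from yum-utils can do something similar. This is just a sketch; the repo id below is hypothetical and would need to match a repository configured on the host:
$ reposync --repoid=my-updates -p /var/www/html/repo/centos/5.3/updates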
Next up, we need to create the repository metadata to go along with the RPMs. The createrepo utility was written to do just this, and takes the path to the base RPM directory as an argument:
$ createrepo -v -s md5 /var/www/html/repo/centos/5.3/updates
This will take several minutes to run, and will populate a number of files in the repodata directory:
$ cd /var/www/html/repo/centos/5.3/updates/repodata
$ ls -l
total 3208
-rw-r--r--. 1 root root 2509241 2009-11-24 11:43 filelists.xml.gz
-rw-r--r--. 1 root root 381198 2009-11-24 11:43 other.xml.gz
-rw-r--r--. 1 root root 371826 2009-11-24 11:43 primary.xml.gz
-rw-r--r--. 1 root root 984 2009-11-24 11:43 repomd.xml
The repomd.xml file acts as the index for the repository, and contains pointers to (and checksums of) one or more compressed .xml.gz metadata files. The .gz files contain the list of packages that are available, along with metadata describing each package. If the repository was created successfully, you can verify the contents with the verifytree utility:
$ verifytree /var/www/html/repo/centos/5.3/updates
Loaded plugins: refresh-packagekit
Checking repodata:
verifying repomd.xml with yum
verifying filelists checksum
verifying other checksum
verifying primary checksum
Checking groups (comps.xml):
verifying comps.xml with yum
comps file missing or unparseable
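If you're curious what the metadata actually contains, you can view repomd.xml directly and use zcat to inspect the compressed XML files (the paths assume the repodata directory listed above):
$ less repomd.xml
$ zcat primary.xml.gz | less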
Assuming all went as planned, you should now have a usable repository! To tell your clients to use it, you can create a new repository file in /etc/yum.repos.d. Here is an example:
[mattys-updates]
name=Mattys Update Server
baseurl=http://master/repo/centos/5.3/updates
enabled=1
gpgcheck=1
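To spot-check that a client can actually see the new repository, something like the following should work (the repo id matches the example above, and assumes the .repo file has been put in place):
$ yum --disablerepo='*' --enablerepo=mattys-updates list available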
Now the next time you run ‘yum search’ on your clients, the packages in the newly created repository should be available. I really dig yum, and love the fact that it makes the entire package management process relatively simple and straightforward!
I have been a long-time Netflix user, and love the fact that I can rent movies through the mail and return them on my own schedule. Netflix now allows you to stream a number of movies to your desktop, though the streaming service requires that you are able to run Microsoft Silverlight. This is pretty awesome, though I wanted a way to watch movies on my high-definition TV without having to involve a PC. After a bit of searching, I came across the Roku digital video player. These nifty little devices provide native integration with Netflix, and come with a remote control that lets you find, start, and pause movies, all from the comfort of your favorite chair.
Now, I was extremely skeptical about these devices at first, but the glowing reviews on Amazon set me at ease and I decided to pick one up. Here are the other reasons I purchased mine:
This review wouldn’t be complete without listing the downsides, of which I have found only one. I have spent WAY TOO MANY HOURS watching documentaries and educational videos, which has precluded me from blogging about technology. The break has actually been kinda nice, and has allowed me to learn about all kinds of stuff (the history of American presidents, Rome, the Dark Ages, various painters, finance-related topics, etc.). If you don’t already have a device with native Netflix integration, then the Roku may be for you!
I periodically use the stock Solaris FTP server on some of my servers, especially when I need to move tons of data around. Enabling the ftp service in Solaris is a snap:
$ svcadm enable network/ftp
The default ftp configuration leaves a lot to be desired, especially when you consider that nothing is logged. To configure the FTP daemon to log logins, transferred files and the commands sent to the server, you can enter the svccfg shell and add some additional options to the in.ftpd command line:
$ svccfg
svc:> select ftp
svc:/network/ftp> setprop inetd_start/exec = "/usr/sbin/in.ftpd -a -l -L -X -w"
svc:/network/ftp> listprop
The “-a” option will enable use of the ftpaccess file, the “-l” option will log each FTP session, the “-L” option will log all commands sent to the server, the “-X” option will cause all file accesses to be logged to syslog, and the “-w” option will record logins to the wtmpx file. Since most of this information is logged with the daemon facility at the info log level, you will need to add a daemon.info entry to /etc/syslog.conf if you want the data logged to a file (or forwarded to a remote log server); an example entry is shown after the restart commands below. To force the changes listed above to take effect, you will need to restart the inetd, system-log and ftp services:
$ svcadm restart inetd
$ svcadm restart network/ftp
$ svcadm restart system-log
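For reference, the daemon.info entry mentioned above might look something like the following in /etc/syslog.conf (the log file name is just an example, and the whitespace between the selector and the file needs to be tabs):
daemon.info			/var/adm/ftplog
Since syslogd won’t create log files that don’t already exist, you may also need to touch /var/adm/ftplog before restarting the system-log service.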
Now each time an FTP transfer occurs, you will get entries similar to the following in the system log:
Nov 24 17:46:32 prefetch01 ftpd[9304]: [ID 716067 daemon.info] AUTH GSSAPI
Nov 24 17:46:32 prefetch01 ftpd[9304]: [ID 716067 daemon.info] AUTH KERBEROS_V4
Nov 24 17:46:32 prefetch01 ftpd[9304]: [ID 165209 daemon.info] USER prefetch
Nov 24 17:46:32 prefetch01 ftpd[9304]: [ID 125383 daemon.info] PASS password
Nov 24 17:46:32 prefetch01 ftpd[9304]: [ID 124999 daemon.info] FTP LOGIN FROM 1.2.3.4 [1.2.3.4], backup
Nov 24 17:46:32 prefetch01 ftpd[9304]: [ID 470890 daemon.info] SYST
Nov 24 17:48:42 prefetch01 ftpd[9304]: [ID 225560 daemon.info] QUIT
Nov 24 17:48:42 prefetch01 ftpd[9304]: [ID 528697 daemon.info] FTP session closed
While FTP isn’t something we rely on for 99% of what we do, it definitely still has its place.