Blog O' Matty


Global search feature added to prefetch.net

This article was posted by Matty on 2009-07-30 20:06:00 -0400

I have posted a lot of material to my website over the past few years, and needed a way to easily locate content. To make finding material super easy, I recently added a global search page. I’m not sure if folks will find this useful, but I thought I would throw it out there in case it helps anyone.

Measuring TCP and UDP throughput between Linux and Solaris hosts

This article was posted by Matty on 2009-07-30 19:48:00 -0400

I have been assisting a friend with tuning his Netbackup installation. While debugging the source of his issues, I noticed that several jobs were reporting low throughput numbers. In each case the client was backing up a number of large files, which should have been streamed at gigabit Ethernet speeds. To see how much bandwidth was available between the client and server, I installed the iperf utility to test TCP and UDP network throughput.

To begin using iPerf, you will need to download and install it. If you are using CentOS, RHEL or Fedora Linux, you can install it from their respective network repositories:

$ yum install iperf

iPerf works by running a server process on one node, and a client process on a second node. The client connects to the server using a port specified on the command line, and will stream data for 10 seconds by default (you can override this with the “-t” option). To configure the server, you need to run iperf with the “-s” (run as a server process) and “-p” (port to listen on) options, and one or more optional arguments:

$ iperf -f M -p 8000 -s -m -w 8M

To configure a client to connect to the server, you will need to run iperf with the “-c” (host to connect to) and “-p” (port to connect to) options, and one or more optional arguments:

$ iperf -c 192.168.1.6 -p 8000 -t 60 -w 8M
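Since the commands above only exercise TCP, it is worth noting how a UDP run differs: iperf needs the "-u" flag on both ends, and the client should be given a target bandwidth with "-b" (iperf 2.x defaults UDP streams to 1 Mbit/sec otherwise). The host, port, and 900 Mbit/sec target below are illustrative assumptions, not values from the original test:

```shell
# UDP variant of the throughput test (iperf 2.x flags); the command
# strings are assembled in variables so they are easy to adapt.
server_cmd='iperf -s -u -p 8000'
client_cmd='iperf -c 192.168.1.6 -u -p 8000 -b 900M -t 60'
echo "on the server: $server_cmd"
echo "on the client: $client_cmd"
```

The server side of a UDP run also prints jitter and packet loss figures, which TCP tests do not provide.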

When the client finishes its throughput test, a report similar to the
following will be displayed:

------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 8000
TCP window size: 8.0 MByte
------------------------------------------------------------
[ 3] local 192.168.1.7 port 44880 connected with 192.168.1.6 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 6.58 GBytes 942 Mbits/sec
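As a quick sanity check, the bandwidth line can be reproduced from the transfer size. Note the mixed units: iperf counts GBytes in powers of two (2^30 bytes) but Mbits in powers of ten (10^6 bits):

```shell
# Recompute the report's bandwidth: 6.58 GBytes moved in 60 seconds
mbits=$(awk 'BEGIN { printf "%d", 6.58 * 2^30 * 8 / 60 / 10^6 }')
echo "$mbits Mbits/sec"   # matches the 942 Mbits/sec in the report
```

942 Mbits/sec is about as close to line rate as gigabit Ethernet gets once framing overhead is accounted for.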

The output is extremely handy, and is useful for measuring the impact of larger TCP / UDP buffers, jumbo frames, and how multiple network links affect client and server communications. In my friend’s case it turned out to be a NetBackup bug, which was easy enough to locate once we knew the server and network were performing as expected. Viva la iperf!

Compiling a custom kernel on Fedora and CentOS Linux hosts

This article was posted by Matty on 2009-07-29 11:10:00 -0400

I have been experimenting with lxc-containers, which use a number of features in the latest 2.6 kernels (specifically, namespaces). To ensure that I have the latest bug fixes and performance enhancements, I have been rolling my own kernels. This has been remarkably easy, since the Makefile that ships with the kernel has an option to build RPM packages. To build a kernel and create an RPM, you will first need to download and extract the kernel source code:

$ cd /var/tmp

$ wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.30.tar.bz2

$ cd /usr/src

$ tar jxvf /var/tmp/linux-2.6.30.tar.bz2

$ cd linux-2.6.30

If you have built a kernel in this tree previously, you should first run ‘make mrproper’ to clean up old object and configuration files (note that this removes any existing .config, so it needs to happen before configuration):

$ make mrproper

Once the tree is clean, you can create a kernel configuration file with ‘make menuconfig’:

$ make menuconfig

If all goes well, you should now have a clean set of kernel source and a kernel configuration file. To create an RPM that contains the kernel and map files, you can type ‘make rpm’:

$ make rpm

After the various object files are compiled and linked, you will see a
message similar to the following:

Processing files: kernel-2.6.30-1.x86_64
Provides: kernel-2.6.30
Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1 rpmlib(PayloadFilesHavePrefix) <=
4.0-1
Checking for unpackaged file(s): /usr/lib/rpm/check-files
/root/rpmbuild/BUILDROOT/kernel-2.6.30-1.x86_64
Wrote: /root/rpmbuild/SRPMS/kernel-2.6.30-1.src.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/kernel-2.6.30-1.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.5PPYZh
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd kernel-2.6.30
+ rm -rf /root/rpmbuild/BUILDROOT/kernel-2.6.30-1.x86_64
+ exit 0
rm ../kernel-2.6.30.tar.gz

To install the kernel, you can run rpm with the “-i” option against the binary package that was written to the RPMS directory:

$ rpm -ivh /root/rpmbuild/RPMS/x86_64/kernel-2.6.30-1.x86_64.rpm

This will install a kernel into /boot, but won’t generate the initrd image that is required to boot the machine. To create the image, you can run mkinitrd with the location to write the initrd image to, and the kernel version to generate the image for:

$ mkinitrd /boot/initrd-2.6.30.img 2.6.30

Once these files are in place, you can use the grubby tool to install a grub menu entry:

$ grubby --add-kernel=/boot/vmlinuz-2.6.30 \
    --title="Linux Kernel 2.6.30" \
    --initrd=/boot/initrd-2.6.30.img \
    --args="ro root=UUID=48bfe6a8-b9d3-4c98-a288-365501aa9ff0"

I tend to stay away from creating custom kernels, but occasionally they are needed to work around bugs and performance regressions. The example above showed how to build and install a 2.6.30 kernel, but this procedure will work with any kernel version assuming you adjust the versions in the commands listed above.
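For reference, the whole procedure condenses into a short script. The version string is the only thing that should need changing; the rpmbuild paths are the defaults from the walkthrough above, and the build and install steps are left commented out since they require the kernel source tree, root privileges, and a lengthy compile:

```shell
#!/bin/sh
# Sketch of the kernel build procedure above; adjust KVER as needed.
set -e
KVER=2.6.30
SRC=/usr/src/linux-$KVER
RPM=/root/rpmbuild/RPMS/x86_64/kernel-$KVER-1.x86_64.rpm
echo "building in $SRC, installing $RPM"
# cd "$SRC"
# make mrproper                           # clean stale objects and configs
# make menuconfig                         # generate a fresh .config
# make rpm                                # build source and binary RPMs
# rpm -ivh "$RPM"                         # install the kernel and map files
# mkinitrd /boot/initrd-$KVER.img $KVER   # build the matching initrd
```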

Continuing failed FTP and HTTP transfers with wget

This article was posted by Matty on 2009-07-28 10:43:00 -0400

As you can probably tell from my blog, I am constantly learning about new technology products. When I decide that I want to play with a new operating system release, or test out a new piece of software, I will typically retrieve the latest stable version of the software. When operating system ISO images are involved, this typically requires me to download several gigabytes of data prior to beginning my testing.

Periodically transfers will fail, leaving me with a chunk of the original file. To ensure that I’m not wasting time and bandwidth retrieving data that has already been downloaded, I have come to rely on the wget “--continue” option. When this option is present, wget will check to see if the downloaded file exists, and will grab the remainder of the file using the existing file’s size as the offset to resume from:

$ ls -la Fedora-11-x86_64-DVD.iso

-rw-r--r-- 1 matty matty 543656488 2009-07-28 10:34 Fedora-11-x86_64-DVD.iso

$ wget --continue http://www.gtlib.gatech.edu/pub/fedora.redhat/linux/releases/11/Fedora/x86_64/iso/Fedora-11-x86_64-DVD.iso
--2009-07-28 10:41:33--
http://www.gtlib.gatech.edu/pub/fedora.redhat/linux/releases/11/Fedora/x86_64/iso/Fedora-11-x86_64-DVD.iso
Resolving www.gtlib.gatech.edu... 128.61.111.9, 128.61.111.10,
128.61.111.11, ...
Connecting to www.gtlib.gatech.edu|128.61.111.9|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location:
ftp://ftp.gtlib.gatech.edu/pub/fedora.redhat/linux/releases/11/Fedora/x86_64/iso/Fedora-11-x86_64-DVD.iso
[following]
--2009-07-28 10:41:34--
ftp://ftp.gtlib.gatech.edu/pub/fedora.redhat/linux/releases/11/Fedora/x86_64/iso/Fedora-11-x86_64-DVD.iso
=> `Fedora-11-x86_64-DVD.iso'
Resolving ftp.gtlib.gatech.edu... 128.61.111.10, 128.61.111.11,
128.61.111.9, ...
Connecting to ftp.gtlib.gatech.edu|128.61.111.10|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD
/pub/fedora.redhat/linux/releases/11/Fedora/x86_64/iso ... done.
==> SIZE Fedora-11-x86_64-DVD.iso ... 4268124160
==> PASV ... done. ==> REST 543656488 ... done.
==> RETR Fedora-11-x86_64-DVD.iso ... done.
Length: 4268124160 (4.0G), 3724467672 (3.5G) remaining

12% [+++++++ ] 549,004,000 1.82M/s

In the example above we can see that we have retrieved approximately 540MB of the Fedora 11 image, and wget uses this to begin the transfer approximately 540MB into the file. Wget and curl are truly awesome!
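The mechanics are easy to demonstrate locally: wget simply takes the partial file’s size and asks the server to start from that byte (a REST command over FTP, as seen in the transcript above, or a Range header over HTTP). A toy simulation with made-up file names:

```shell
# Simulate an interrupted download and compute the resume offset
head -c 1000 /dev/zero > full.img      # stand-in for the complete ISO
head -c 400  /dev/zero > partial.img   # stand-in for the failed transfer
total=$(wc -c < full.img)
offset=$(wc -c < partial.img)
echo "resume at byte $offset, $((total - offset)) bytes remaining"
```

This also explains why the resumed transfer above reports 3724467672 bytes remaining: 4268124160 (the SIZE reply) minus 543656488 (the partial file on disk).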

Wide open remote root exploit on dd-wrt

This article was posted by Matty on 2009-07-25 08:08:00 -0400

As reported on Slashdot, there is a wide open exploit on dd-wrt due to how the httpd server handles and parses incoming requests without requiring authentication. The HTTP GET request used to trigger the exploit has been posted on milw0rm. If you haven’t already, you should either update your dd-wrt installation to build 11533 (most router firmwares have already been updated to this latest build in dd-wrt’s router database) or insert the following firewall rules:

The instructions below were taken directly from dd-wrt’s site.

The exploit can also be stopped using a firewall rule. Go to your router’s admin interface > Administration > Commands and enter the following text:

insmod ipt_webstr
ln -s /dev/null /tmp/exec.tmp
iptables -D INPUT -p tcp -m tcp -m webstr --url cgi-bin -j REJECT --reject-with tcp-reset
iptables -I INPUT -p tcp -m tcp -m webstr --url cgi-bin -j REJECT --reject-with tcp-reset

Then press “Save Firewall” and reboot your router. This rule blocks any attempt to access a URL that contains “cgi-bin”. You can verify that the rule is working by entering such a URL for your router in your browser, which should produce a “Connection was reset” error (in Firefox).
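The same check can be done from a shell. The router address below is an illustrative assumption (a common dd-wrt default), so substitute your own:

```shell
# If the webstr rule is active, any cgi-bin request should be reset
url="http://192.168.1.1/cgi-bin/"
echo "curl -sS $url   # expect: 'Connection reset by peer'"
```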