Managing loop devices on CentOS and Fedora Linux hosts

Linux loop devices provide a way to mount ordinary files as block devices. This capability allows you to easily access an ISO image that resides on disk, mount and unmount encrypted devices (dm-crypt or a FUSE-based encryption layer may be a better solution for this), or test out new file systems using plain old files.
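
For example, mount can set up a loop device for you when you pass the “-o loop” option, which makes accessing an ISO image a one-liner (the image path below is just a placeholder):

$ mount -o loop -t iso9660 /path/to/image.iso /mnt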

Linux loop devices are managed through the losetup utility, which has options to add, list, and remove loop devices, as well as locate unused ones. To associate a loop device with a file, you will first need to locate an unused loop device in /dev. This can be accomplished by running losetup with the “-f” (find an unused loop device) option:

$ losetup -f
/dev/loop0

Once you identify an available loop device, you can associate it with a file by running losetup with the name of the loop device and the file to associate with it:

$ losetup /dev/loop0 /root/foo

To verify the device is attached, you can run losetup with the “-a” (show all loop devices) or “-j” (show the loop devices associated with a given file) option:

$ losetup -a
/dev/loop0: [0801]:12779871 (/root/foo)

$ losetup -j /root/foo
/dev/loop0: [0801]:12779871 (/root/foo)

To access the contents of a loop device, you can use the mount utility to mount the loop device to a directory that resides in an existing file system:

$ mount /dev/loop0 /mnt

This of course assumes that the underlying file contains a valid file system (you can use your favorite mkfs variation to create one, and fdisk or parted if you also want a partition table; a short sketch is included at the end of this section). Once you finish using a loop device, you can unmount it and detach the loop device by running losetup with the “-d” (detach a loop device) option:

$ umount /mnt

$ losetup -d /dev/loop0

$ losetup -a
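
If you don’t already have a suitable file to experiment with, here is a minimal sketch for creating one from scratch (the path, size, and choice of ext4 are just examples):

$ dd if=/dev/zero of=/root/foo bs=1M count=100
$ losetup /dev/loop0 /root/foo
$ mkfs.ext4 /dev/loop0
$ mount /dev/loop0 /mnt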

I have come to rely on loop devices for all sorts of purposes, and their simplicity makes them so useful!

Increasing the number of available file descriptors on CentOS and Fedora Linux servers

While debugging an application a few weeks back, I noticed the following error in the application log:

Cannot open file : Too many open files

The application runs as an unprivileged user, and upon closer inspection I noticed that the maximum number of file descriptors available to the process was 1024:

$ ulimit -n
1024
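
ulimit reports the limits for your current shell; to see the limits that apply to a process that is already running, you can check its limits file in /proc on newer kernels (replace <pid> with the process ID you are interested in; the grep pattern is just a convenience):

$ grep "Max open files" /proc/<pid>/limits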

Increasing the maximum number of file descriptors is a two-part process. First, you need to make sure the kernel’s file-max tunable is set sufficiently high. This value controls the maximum number of files that can be open system-wide, and is exposed through the file-max proc setting:

$ cat /proc/sys/fs/file-max
366207

$ echo 512000 > /proc/sys/fs/file-max

$ cat /proc/sys/fs/file-max
512000
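
Values written to /proc this way do not survive a reboot, so if the new limit should be permanent, you can also add it to /etc/sysctl.conf (more on this file below):

fs.file-max = 512000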

Once you know you have enough file descriptors available, you will need to add an entry for the user (or use * if the setting should apply to all users) to the security limits file (/etc/security/limits.conf). This file is used to control resource limits for users and groups, and is processed when a user login session is initialized. To increase the number of file descriptors for all users to 8k, the following entry can be appended to the file:

*                -       nofile          8192

If you would prefer to set the value for a specific user, you can do so by replacing the asterisk with the username:

user1            -       nofile          8192
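
The dash in the second column sets the soft and hard limits to the same value; if you want to let the user raise their own limit with ulimit up to a larger hard limit, the two can be set separately:

user1            soft    nofile          8192
user1            hard    nofile          16384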

Once the file-max proc setting and security limits file are updated, and a new login session has been started, ulimit should report the increase:

$ ulimit -n
8192

If you want to test to see how many file descriptors are available to a given user, you can run the openfds program I wrote:

$ ./openfds
Error: Too many open files
Last file descriptor successfully opened: 8189

Openfds will print the string representation of the errno value that resulted from open() returning -1, as well as the last file descriptor that was successfully opened. If you are curious why 8189 is reported rather than 8192, it’s because STDIN, STDOUT and STDERR take up three entries in the file descriptor table for the process.
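
If you don’t have a test program handy, you can also get a rough count of the descriptors a process currently has open by listing its fd directory in /proc (again, replace <pid> with the process ID you care about):

$ ls /proc/<pid>/fd | wc -l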

Managing /etc/sysctl.conf with the sysctl utility

The Linux kernel provides the sysctl interface to modify values that reside under the /proc/sys directory. Sysctl values are typically stored in /etc/sysctl.conf, and are applied using the sysctl utility. To set a sysctl variable to a specific value, you can run sysctl with the “-w” (change a specific sysctl variable) option:

$ sysctl -w net.ipv4.ip_forward=1
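
You can read a value back by passing just the variable name to sysctl, or dump every available variable with the “-a” option and grep for the ones you are interested in:

$ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

$ sysctl -a | grep ip_forward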

To apply all of the settings in /etc/sysctl.conf to a system, you can run sysctl with the “-p” (apply the sysctl values in /etc/sysctl.conf to a running server) option:

$ sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
net.core.rmem_max = 16842752
net.core.wmem_max = 16842752
net.ipv4.tcp_rmem = 4096 65535 16842752
net.ipv4.tcp_wmem = 4096 65535 16842752

The sysctl interface is pretty powerful, and you can learn more about the individual sysctl variables by perusing the Documentation/sysctl/ directory that ships with the Linux kernel source code.

Enabling IPv4 forwarding on CentOS and Fedora Linux servers

When I was playing around with KeepAlived, I managed to create a few HA scenarios that mirrored actual production uses. One scenario was creating a highly available router, which would forward IPv4 traffic between interfaces. To configure a CentOS or Fedora Linux host to forward IPv4 traffic, you can set the “net.ipv4.ip_forward” sysctl to 1 in /etc/sysctl.conf:

net.ipv4.ip_forward = 1

Once the sysctl is added to /etc/sysctl.conf, you can enable it by running sysctl with the “-w” (change a specific sysctl value) option:

$ /sbin/sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
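
You can verify that forwarding is actually enabled by reading the value back out of /proc:

$ cat /proc/sys/net/ipv4/ip_forward
1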

If routing is configured correctly on the router, packets should start flowing between the interfaces on the server. Nice!

Measuring TCP and UDP throughput between Linux and Solaris hosts

I have been assisting a friend with tuning his NetBackup installation. While debugging the source of his issues, I noticed that several jobs were reporting low throughput numbers. In each case the client was backing up a number of large files, which should have streamed at gigabit Ethernet speeds. To see how much bandwidth was available between the client and server, I installed the iperf utility to test TCP and UDP network throughput.

To begin using iperf, you will need to download and install it. If you are using CentOS, RHEL or Fedora Linux, you can install it from your distribution’s network repositories:

$ yum install iperf

iperf works by running a server process on one node, and a client process on a second node. The client connects to the server using a port specified on the command line, and will stream data for 10 seconds by default (you can override this with the “-t” option). To configure the server, you need to run iperf with the “-s” (run as a server process) and “-p” (port to listen on) options, and one or more optional arguments. In the example below, “-f M” reports results in MBytes, “-m” prints the TCP maximum segment size, and “-w 8M” requests an 8 MByte socket buffer:

$ iperf -f M -p 8000 -s -m -w 8M

To configure a client to connect to the server, you will need to run iperf with the “-c” (host to connect to) and “-p” (port to connect to) options, and one or more optional arguments:

$ iperf -c 192.168.1.6 -p 8000 -t 60 -w 8M

When the client finishes its throughput test, a report similar to the following will be displayed:

------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 8000
TCP window size: 8.0 MByte 
------------------------------------------------------------
[  3] local 192.168.1.7 port 44880 connected with 192.168.1.6 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.58 GBytes    942 Mbits/sec
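
The run above measures TCP throughput; to test UDP instead, add the “-u” option on both the server and the client, and use “-b” on the client to specify a target bandwidth (the address, port and rate below are just examples):

$ iperf -s -u -p 8000

$ iperf -c 192.168.1.6 -u -p 8000 -b 900M -t 60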

The output is extremely handy, and is useful for measuring the impact of larger TCP / UDP buffers, jumbo frames, and multiple network links on client and server communications. In my friend’s case it turned out to be a NetBackup bug, which was easy enough to locate once we knew the server and network were performing as expected. Viva la iperf!