Creating Linux bridging / tap devices with tunctl and openvpn

The more I play around with KVM virtualization, the more I realize just how useful Linux bridging is. In the Linux bridging world, a bridge device simulates a multiport Ethernet switch. To connect to the switch, you create a tap device that simulates a port on that switch. Once you have bridging configured on your host, there are two prevalent ways to create taps. The first method is through the openvpn program:

$ openvpn --mktun --dev tap0

Fri Apr 24 15:14:26 2009 TUN/TAP device tap0 opened
Fri Apr 24 15:14:26 2009 Persist state set to: ON

This will create a tap device named tap0, which you can configure locally or assign to a virtual machine running on the host. The second way to create a tap is through tunctl:

$ tunctl -u root

Set 'tap0' persistent and owned by uid 0

This will also create a tap device named tap0, and will set the owner of the interface to root. Once a tap device is created, you can configure it just like any other Ethernet interface. Nice!
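As a rough sketch, if you already have a bridge device on the host (I'm using br0 as a placeholder name here), you can attach the new tap to it with brctl and then bring the interface up:

$ brctl addif br0 tap0
$ ip link set tap0 up

At that point the tap can be handed off to a KVM guest, or given an address directly if you want to use it from the host.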

Viewing the status of NetworkManager managed links

As I mentioned in a previous post, I spent some time trying to get the NetworkManager to respect my custom DNS settings. When I was looking into this issue, I learned about the nm-tool utility. This nifty tool will print the status of each NetworkManager managed interface, as well as the connection state:

$ nm-tool

NetworkManager Tool

State: connected

- Device: eth0  [System eth0] --------------------------------------------------
  Type:              Wired
  Driver:            tg3
  State:             connected
  Default:           yes
  HW Address:        00:19:B9:3A:26:BC

  Capabilities:
    Carrier Detect:  yes
    Speed:           100 Mb/s

  Wired Properties
    Carrier:         on

  IPv4 Settings:
    Address:         192.168.1.91
    Prefix:          24 (255.255.255.0)
    Gateway:         192.168.1.254

    DNS:             192.168.1.1
    DNS:             192.168.1.2

I found the IPv4 settings section to be rather useful while I was debugging a network connectivity problem (nm-tool and ethtool make it SUPER easy to debug link problems), and will definitely be using this tool in the future!
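Since I mentioned ethtool, here is the kind of check I mean (eth0 is just an example interface); the Speed, Duplex and Link detected fields are usually the first things worth looking at when a link misbehaves:

$ ethtool eth0 | egrep '(Speed|Duplex|Link detected)'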

Getting the Linux NetworkManager process to respect custom DNS server settings

I recently switched my work desktop from Ubuntu to Fedora 11, and noticed that there are some new configuration options now that network interfaces are managed by the NetworkManager process. Two useful options are the ability to specify the DNS servers and search domains in the network-scripts files, and have those applied when a DHCP lease is acquired (this lets you override the values provided by your DHCP server). To override the DNS servers and search domains, you can set the DNS1, DNS2 and DOMAIN variables in your favorite ifcfg-eth[0-9]+ script:

$ egrep '(DNS1|DNS2|DOMAIN)' /etc/sysconfig/network-scripts/ifcfg-eth0
DNS1=192.168.1.1
DNS2=192.168.1.2
DOMAIN="prefetch.net ops.prefetch.net"
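For NetworkManager to notice changes to the ifcfg file, the connection needs to be reactivated. Restarting the service (or bouncing the connection from the applet) should do the trick, and the new servers should then show up in /etc/resolv.conf:

$ /sbin/service NetworkManager restart
$ cat /etc/resolv.conf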

Hopefully the NetworkManager is all it’s cracked up to be. Only time will tell of course. :)

Enabling IPv4 forwarding on CentOS and Fedora Linux servers

When I was playing around with keepalived, I managed to create a few HA scenarios that mirrored actual production uses. One scenario was creating a highly available router, which would forward IPv4 traffic between interfaces. To configure a CentOS or Fedora Linux host to forward IPv4 traffic, you can set the “net.ipv4.ip_forward” sysctl to 1 in /etc/sysctl.conf:

net.ipv4.ip_forward = 1

Once the sysctl is added to /etc/sysctl.conf, you can enable it by running sysctl with the “-w” (change a specific sysctl value) option:

$ /sbin/sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
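To double check that forwarding is actually on, you can read the value back out of /proc (a 1 means forwarding is enabled). If you added the entry to /etc/sysctl.conf and want everything in that file applied in one shot, sysctl’s “-p” option will do it:

$ cat /proc/sys/net/ipv4/ip_forward
1

$ /sbin/sysctl -p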

If routing is configured correctly on the router, packets should start flowing between the interfaces on the server. Nice!

Measuring TCP and UDP throughput between Linux and Solaris hosts

I have been assisting a friend with tuning his NetBackup installation. While debugging the source of his issues, I noticed that several jobs were reporting low throughput numbers. In each case the client was backing up a number of large files, which should have been streamed at gigabit Ethernet speeds. To see how much bandwidth was available between the client and server, I installed the iperf utility to test TCP and UDP network throughput.

To begin using iPerf, you will need to download and install it. If you are using CentOS, RHEL or Fedora Linux, you can install it from their respective network repositories:

$ yum install iperf

iPerf works by running a server process on one node, and a client process on a second node. The client connects to the server using a port specified on the command line, and will stream data for 10 seconds by default (you can override this with the “-t” option). To configure the server, you need to run iperf with the “-s” (run as a server process) and “-p” (port to listen on) options, and one or more optional arguments:

$ iperf -f M -p 8000 -s -m -w 8M

To configure a client to connect to the server, you will need to run iperf with the “-c” (host to connect to) and “-p” (port to connect to) options, and one or more optional arguments:

$ iperf -c 192.168.1.6 -p 8000 -t 60 -w 8M

When the client finishes its throughput test, a report similar to the following will be displayed:

------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 8000
TCP window size: 8.0 MByte 
------------------------------------------------------------
[  3] local 192.168.1.7 port 44880 connected with 192.168.1.6 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.58 GBytes    942 Mbits/sec
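Everything above used TCP, which is the iperf default. To measure UDP throughput instead, you can add the “-u” option on both ends and cap the send rate with “-b” on the client; the port and target rate below are just examples:

$ iperf -s -u -p 8000

$ iperf -c 192.168.1.6 -u -p 8000 -b 900M -t 60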

The output is extremely handy, and is useful for measuring the impact of larger TCP / UDP buffers and jumbo frames, as well as how multiple network links affect client and server communications. In my friend’s case it turned out to be a NetBackup bug, which was easy enough to locate once we knew the server and network were performing as expected. Viva la iperf!

Deploying Highly Available Virtual Interfaces With Keepalived

I recently played around with keepalived, and documented my experiences in an article titled Deploying Highly Available Virtual Interfaces With Keepalived. If you are interested in deploying highly available Linux routers, or just looking to fail over IP addresses between servers, you may find the article useful.
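The article goes through the details, but to give a feel for what keepalived configuration looks like, a minimal vrrp_instance block similar to the following (the interface, router id, priority and address are all placeholder values) is enough to float a virtual IP address between two servers:

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100
    }
}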