I have been doing quite a bit of experimentation with the Linux network stack over the past few weeks. One thing I have always liked about Linux networking is the bonding implementation, which allows you to aggregate two or more interfaces into a single logical interface for high availability purposes. To create a bonded interface on a CentOS Linux host, you will first need to locate two or more NICs to use. Once you have picked out the NICs, you will need to create an ifcfg-eth[0-9] interface file similar to the following for each one:
$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
$ cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
The difference between these files and a typical interface definition file is the removal of the network information and the addition of the MASTER and SLAVE directives. The MASTER directive indicates which bond the interface will be enslaved to (via ifenslave), and the SLAVE directive marks the interface as a slave. To configure the actual bond, you will need to create a file named ifcfg-bond[0-9] that contains something similar to the following:
$ cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.60
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
BONDING_OPTS="mode=active-backup fail_over_mac=1 arp_interval=500 arp_ip_target=192.168.1.1 num_grat_arp=3"
The bond interface definition file contains the name to assign to the bonded interface as well as the network configuration. You can optionally set one or more bonding options with the BONDING_OPTS directive. Now you may be asking yourself, what bonding options are available? You can view the full list on kernel.org, or peruse the bonding.txt file (Documentation/networking/bonding.txt) that ships with the kernel source.
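If you don't have the kernel documentation handy, you can also dump the supported module parameters straight from the driver on the host (a quick sanity check, assuming the bonding module is installed):

$ modinfo -p bonding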
In the example above, I configured the mode to be active-backup (you can also adjust the mode to enable 802.3ad or a more sophisticated bonding mode to better distribute traffic over the available interfaces). I also set fail_over_mac to instruct the bond to take on the MAC address of the currently active slave (this is required if you are testing with VMware Server, and is not required if you are using physical NICs), arp_interval to control how often ARP requests are sent to test link availability, arp_ip_target to control where those ARP requests are sent, and num_grat_arp to control the number of gratuitous ARPs that are issued when a network failover occurs. I have a follow-up post that describes how to configure the Linux bonding implementation to work with 802.3ad, so stay tuned.
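Once the interface files are in place, you can restart networking and verify that the bond assembled correctly by reading the bonding driver's status file in /proc. Here is a minimal sketch, assuming the stock CentOS initscripts (older releases may also need an “alias bond0 bonding” entry in /etc/modprobe.conf so the module loads when the interface is configured):

$ service network restart
$ cat /proc/net/bonding/bond0

The status file reports the bonding mode, the currently active slave, and the link status of each slave interface, which makes it easy to confirm that failover is set up the way you expect.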
I have come to rely on the CentOS and Fedora network initialization scripts to add default routes, but a situation came up recently where I needed to add one manually. The route command syntax differs between Solaris, Linux, and OpenBSD, so I thought I would jot down how I manually added a default route to my Linux host for future reference:
$ ip route add 0.0.0.0/0 via 192.168.1.1 dev bond0
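To make the route persist across reboots, the CentOS and Fedora initscripts will also read a route-&lt;interface&gt; file when the interface is brought up. Here is a minimal sketch for the bond0 interface above (for reference, the legacy net-tools equivalent of the ip command is “route add default gw 192.168.1.1”):

$ cat /etc/sysconfig/network-scripts/route-bond0
default via 192.168.1.1 dev bond0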
I kickstarted a Fedora Core 10 host last week, and decided to add a second NIC to the host to do some network bonding testing. Once I added the NIC and rebooted the host, I noticed that one of the interfaces had the “_rename” suffix:
eth0_rename Link encap:Ethernet  HWaddr 00:0C:29:B7:1B:7B
          inet addr:192.168.1.148  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feb7:1b7b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:593 errors:0 dropped:0 overruns:0 frame:0
          TX packets:363 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:52869 (51.6 KiB)  TX bytes:67112 (65.5 KiB)
          Interrupt:18 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:B7:1B:85
          inet addr:192.168.1.128  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feb7:1b85/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4738 (4.6 KiB)  TX bytes:1236 (1.2 KiB)
          Interrupt:16 Base address:0x2080
Network interface names are assigned by udev at boot time, and after a bit of poking around in /etc/udev I came across the following rule file (this lives at /etc/udev/rules.d/70-persistent-net.rules on Fedora):
# This file was automatically generated by the /lib/udev/write_net_rules
# program run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single line.
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rule written by anaconda)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:a0:ad:c7", ATTR{type}=="1",
KERNEL=="eth*", NAME="eth0"
# PCI device 0x1022:0x2000 (pcnet32)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:b7:1b:85", ATTR{type}=="1",
KERNEL=="eth*", NAME="eth0"
# PCI device 0x1022:0x2000 (pcnet32)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:b7:1b:85", ATTR{type}=="1",
KERNEL=="eth*", NAME="eth1"
It appears that the same MAC address got assigned to both eth0 and eth1, which was causing one of the devices to get renamed during system initialization. To fix this issue, I updated the MAC address associated with the eth0 rule, which allowed everything to come up cleanly. Udev is pretty sweet, and I will have to add it to my list of things to blog about in the future.
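For reference, here is roughly what the corrected eth0 rule looked like, using the MAC address that was reported for eth0_rename in the ifconfig output above:

# PCI device 0x1022:0x2000 (pcnet32)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:b7:1b:7b", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"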
While catching up on e-mail this morning, I noticed that the OpenSolaris community is planning to integrate the Berkeley Packet Filter (BPF) into OpenSolaris:
“This case seeks to build on the Crossbow (PSARC/2006/357[7]) infrastructure and provide a new (to OpenSolaris) mechanism for capturing packets: the use of the Berkeley Packet Filter (BPF). The goal of this project is to provide a method to capture packets that has higher performance than what we have to offer today on Solaris (DLPI based schemes.) It also has the added benefit of increasing our compatibility with other software that has been built to use BPF.”
This is awesome news, and each month it seems there are fewer and fewer packages I have to bolt on to my OpenSolaris installations. Nice!
One of my friends pinged me last week and asked me how I would go about locating all hosts on a layer-2 network. Typically I would use fping with the “-g” option, but he wanted to find all hosts, including ones running host-based firewalls. For this specific case, I would use the Linux arping utility. This nifty utility locates hosts using ARP requests and responses, which even hosts behind host-based firewalls such as iptables will answer, since ARP is handled below the IP layer that those firewalls filter:
$ arping 192.168.1.1
ARPING 192.168.1.1 from 192.168.1.10 eth0
Unicast reply from 192.168.1.1 [00:23:69:25:A2:4E] 0.942ms
Unicast reply from 192.168.1.1 [00:23:69:25:A2:4E] 0.725ms
Unicast reply from 192.168.1.1 [00:23:69:25:A2:4E] 0.727ms
Unicast reply from 192.168.1.1 [00:23:69:25:A2:4E] 0.722ms
Unicast reply from 192.168.1.1 [00:23:69:25:A2:4E] 0.739ms
In the sample session above, I was able to locate a host that was running iptables with a drop-all-incoming-traffic policy. To find every host on a segment instead of probing one address at a time, you can wrap arping in a quick shell loop, as shown in the sketch below. There are a ton of super useful networking utilities in the Debian package repository, and I will have to document some of the lesser-known ones in future posts.
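Here is a minimal sweep sketch, assuming the iputils arping and a /24 hanging off eth0 (adjust the interface and address range for your network):

$ for i in $(seq 1 254); do arping -q -c 1 -w 1 -I eth0 192.168.1.$i && echo "192.168.1.$i is alive"; done

The -q flag suppresses the per-packet output, -c 1 sends a single request, -w 1 caps the wait at one second, and arping's exit status indicates whether a reply was received.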