I have been assisting a friend with tuning his NetBackup installation. While debugging the source of his issues, I noticed that several jobs were reporting low throughput numbers. In each case the client was backing up a number of large files, which should have been streamed at gigabit Ethernet speeds. To see how much bandwidth was actually available between the client and server, I installed the iperf utility, which measures TCP and UDP network throughput.
To begin using iperf, you will need to download and install it. If you are using CentOS, RHEL, or Fedora Linux, you can install it from their respective network repositories:
$ yum install iperf
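If you are running a Debian-based distribution such as Ubuntu, the package can usually be installed with apt-get instead:

$ apt-get install iperf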
iperf works by running a server process on one node and a client process on a second node. The client connects to the server on a port specified on the command line, and streams data for 10 seconds by default (you can override this with the “-t” option). To configure the server, run iperf with the “-s” (run as a server process) and “-p” (port to listen on) options, and one or more optional arguments. In the example below, “-f M” reports results in MBytes, “-m” prints the TCP maximum segment size, and “-w 8M” requests an 8 MByte socket buffer:
$ iperf -f M -p 8000 -s -m -w 8M
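If you would rather not leave a terminal tied up on the server, iperf can also detach into the background with the “-D” (run as a daemon) option:

$ iperf -f M -p 8000 -s -m -w 8M -D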
To configure a client to connect to the server, run iperf with the “-c” (host to connect to) and “-p” (port to connect to) options, and one or more optional arguments. Here “-t 60” extends the test to 60 seconds, and “-w 8M” matches the server’s 8 MByte window:
$ iperf -c 192.168.1.6 -p 8000 -t 60 -w 8M
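If a single stream cannot fill the pipe, the “-P” option tells the client to open multiple parallel connections; for example, four concurrent streams:

$ iperf -c 192.168.1.6 -p 8000 -t 60 -w 8M -P 4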
When the client finishes its throughput test, a report similar to the following will be displayed:
------------------------------------------------------------
Client connecting to 192.168.1.6, TCP port 8000
TCP window size: 8.0 MByte
------------------------------------------------------------
[  3] local 192.168.1.7 port 44880 connected with 192.168.1.6 port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.58 GBytes   942 Mbits/sec
This output is extremely handy, and is useful for measuring the impact of larger TCP and UDP buffers, jumbo frames, and multiple network links on client and server communications. In my friend’s case the slowdown turned out to be a NetBackup bug, which was easy enough to locate once we knew the server and network were performing as expected. Viva la iperf!
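Since iperf can also generate UDP traffic, it is useful for spotting packet loss and jitter that TCP would otherwise mask. A minimal sketch, assuming the same hosts and port as above (the “-b 900M” target rate is just an illustrative value): start the server with “-u”, then point the client at it with “-u” and a desired bandwidth:

$ iperf -s -u -p 8000

$ iperf -c 192.168.1.6 -u -p 8000 -b 900M -t 60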