Configuring a caching only DNS server on Solaris hosts

While investigating a performance issue a few weeks back, I noticed that a couple of our Solaris hosts were sending tens of thousands of DNS requests to our authoritative DNS servers. Since the application was broken and unable to cache DNS records, I decided to configure a local caching only DNS server to reduce the load on our DNS servers.

Creating a caching only name server on a Solaris host is a piece of cake. To begin, you will need to create a directory to store the bind zone files:

$ mkdir -p /var/named/conf

After this directory is created, you will need to place the localhost and other zone files, along with a root.hints file, in the conf directory. You can grab the zone files from my site, and the root.hints file can be generated with the dig utility:

$ dig . ns > /var/named/conf/root.hints
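With the hints file in place, a minimal caching only configuration for the server itself looks something like this (a sketch only; the directory and file names are assumptions based on the layout created above, and the sample configuration file from my site is the better starting point):

```
options {
        directory "/var/named/conf";
        // Only answer recursive queries from this machine
        listen-on { 127.0.0.1; };
        allow-query { localhost; };
        recursion yes;
};

// Root hints generated with dig above
zone "." {
        type hint;
        file "root.hints";
};
```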

Next you will need to create a BIND configuration file (a sample bind configuration file is also available on my site). The BIND packages that ship with Solaris check for this file in /etc/named.conf by default, so it’s easiest to create it there (you can also hack the SMF start script, but that can get overwritten in the future and wipe out your changes). To start the caching only DNS server, you can enable the dns/server SMF service:

$ svcadm enable dns/server

If things started up properly, you should see log entries similar to the following in /var/adm/messages:

Jun 18 10:26:57 server named[7819]: [ID 873579 daemon.notice] starting BIND 9.6.1-P3
Jun 18 10:26:57 server named[7819]: [ID 873579 daemon.notice] built with --prefix=/usr --with-libtool --bindir=/usr/sbin --sbindir=/usr/sbin --libdir=/usr/lib/dns --sysconfdir=/etc --localstatedir=/var --with-openssl=/usr/sfw --enable-threads=yes --enable-devpoll=yes --enable-fixed-rrset --disable-openssl-version-check -DNS_RUN_PID_DIR=0

To test the caching only DNS server, you can use our trusty friend dig:

$ dig @localhost <hostname> a

If that returns the correct A record, it’s a safe bet that the caching only name server is doing its job! To configure the server to query the local DNS server, you will need to replace the nameserver entries in /etc/resolv.conf with the following:
nameserver 127.0.0.1


This will force resolution to the DNS server bound to localhost, and allow the local machine to cache query responses. DNS caching is good stuff, and setting this up on a Solaris machine is a piece of cake!
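To confirm the cache is actually doing its job, you can run the same query twice and compare the query times dig reports; the second lookup should come back from the local cache almost instantly (the hostname below is just an example):

```
$ dig @localhost www.example.com a | grep "Query time"
$ dig @localhost www.example.com a | grep "Query time"
```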

Configuring an OpenSolaris host to use a static IP address

I installed OpenSolaris 2009.06 yesterday, and noticed that the installer doesn’t give you the option to configure a static IP address. Network addresses are retrieved via DHCP, which isn’t an option for this host. To configure the host to use a static IP address, I changed the /etc/nwam/llp file. Here is the file before:

$ cat /etc/nwam/llp
bge0 dhcp

And here is the file after:

$ cat /etc/nwam/llp
bge0 static

Now my host can take advantage of NWAM, and use the static IP I allocated for it!
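One note on where the address itself comes from: as far as I can tell, when the llp entry is set to static, NWAM reads the address from /etc/hostname.<interface>, so make sure that file holds the IP you allocated (the address below is just an example):

```
$ cat /etc/hostname.bge0
192.168.1.10/24
```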

Monitoring traffic across a Solaris 802.3ad link aggregation

I manage a number of Solaris hosts that push a fair amount of data each day. These servers utilize Solaris 802.3ad link aggregations, which contain anywhere from 4 to 8 physical NICs. Monitoring the bandwidth across the links in an aggregation is a snap with Solaris, since most of the dladm subcommands support the “-s” (show statistics) option:

$ dladm show-aggr -s -i 2 1

key:1		ipackets   rbytes       opackets   obytes       %ipkts  %opkts
 	Total	355021     531533375    60288      4944021     
	nxge0	166090     249992028    0          0              46.8     0.0  
	nxge1	120638     179830318    0          0              34.0     0.0  
	nxge4	16         1172         25728      2109696         0.0    42.7  
	nxge5	68277      101709857    34560      2834325        19.2    57.3  

key:1		ipackets   rbytes       opackets   obytes       %ipkts  %opkts
 	Total	344131     513180425    47543      3900596     
	nxge0	167398     250160702    12         1672           48.6     0.0  
	nxge1	95286      142041090    8          1330           27.7     0.0  
	nxge4	17         1320         21601      1771571         0.0    45.4  
	nxge5	81430      120977313    25922      2126023        23.7    54.5  

In the example above, dladm printed the number of packets and bytes received and transmitted by each link in the aggregation that was created with key number 1. While not quite as awesome as nicstat, the statistics option is handy for getting a quick overview of the number of packets and bytes traversing each link.
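Since the counters above are per interval, it only takes a little awk to turn them into throughput numbers. Below is a quick sketch that converts the Total line from the first sample into Mbit/s; the field positions and the 2 second interval are taken from the dladm output above:

```shell
# The Total line from the first "dladm show-aggr -s -i 2 1" sample above;
# in practice you would pipe the live dladm output in instead.
dladm_sample='Total 355021 531533375 60288 4944021'

# Fields: 1=Total, 2=ipackets, 3=rbytes, 4=opackets, 5=obytes.
# interval matches the "-i 2" (two second) sampling used above.
out=$(echo "$dladm_sample" | awk -v interval=2 '/Total/ {
    printf "rx: %.1f Mbit/s  tx: %.1f Mbit/s",
        $3 * 8 / 1000000 / interval,
        $5 * 8 / 1000000 / interval
}')
echo "$out"   # rx: 2126.1 Mbit/s  tx: 19.8 Mbit/s
```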

Measuring TCP and UDP throughput between Linux and Solaris hosts

I have been assisting a friend with tuning his Netbackup installation. While debugging the source of his issues, I noticed that several jobs were reporting low throughput numbers. In each case the client was backing up a number of large files, which should have been streamed at gigabit Ethernet speeds. To see how much bandwidth was available between the client and server, I installed the iperf utility to test TCP and UDP network throughput.

To begin using iperf, you will need to download and install it. If you are using CentOS, RHEL or Fedora Linux, you can install it from their respective network repositories:

$ yum install iperf

iperf works by running a server process on one node, and a client process on a second node. The client connects to the server using a port specified on the command line, and will stream data for 10 seconds by default (you can override this with the “-t” option). To configure the server, you need to run iperf with the “-s” (run as a server process) and “-p” (port to listen on) options, and one or more optional arguments:

$ iperf -f M -p 8000 -s -m -w 8M

To configure a client to connect to the server, you will need to run iperf with the “-c” (host to connect to) and “-p” (port to connect to) options, and one or more optional arguments:

$ iperf -c -p 8000 -t 60 -w 8M

When the client finishes its throughput test, a report similar to the following will be displayed:

Client connecting to, TCP port 8000
TCP window size: 8.0 MByte 
[  3] local port 44880 connected with port 8000
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  6.58 GBytes    942 Mbits/sec

The output is extremely handy, and is useful for measuring the impact of larger TCP / UDP buffers, jumbo frames, and how multiple network links affect client and server communications. In my friend’s case it turned out to be a NetBackup bug, which was easy enough to locate once we knew the server and network were performing as expected. Viva la iperf!
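Since the title promises UDP as well: for UDP tests both sides need the “-u” option, and the client needs a target bandwidth via “-b”, since UDP streams at a fixed rate instead of ramping up (the server name and the 900M rate below are placeholders):

```
$ iperf -s -u -p 8000
$ iperf -c <server> -u -p 8000 -b 900M -t 60
```

The server side of a UDP run also reports jitter and packet loss, which is handy for spotting drops that TCP retransmissions would otherwise hide.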

How does crossbow deal with MAC address conflicts?

When I gave my presentation on Solaris network virtualization a few months back, one of the folks in the audience asked me how Crossbow deals with duplicate MAC detection. I didn’t have a solid answer for the gentleman that asked, but thanks to Nicolas Droux from Solaris kernel engineering, now I do. Here is what Nicolas had to say about this topic on the network-discuss mailing list:

“When a VNIC is created we have checks in place to ensure that the address doesn’t conflict with another MAC address defined on top of the same underlying NIC. When the MAC address is generated randomly, and the generated MAC address conflicts with another VNIC, we currently fail the whole operation. We should try another MAC address in that case, transparently to the user; I filed CR 6853771 to track this.

To reduce the risk of MAC address conflicts with physical NICs on other hosts on the network, we use by default an OUI with the local bit set for random MAC addresses, and we let the administrator use a different OUI or prefix if desired. We currently don’t have a mechanism in place to perform automatic MAC address duplication detection between multiple hosts.”

I was under the impression that Crossbow used the ARP cache and DAD code to verify that a MAC address wasn’t in use on the network, but that doesn’t appear to be the case. Given this new information, I will need to modify my tools to assign a static MAC that is based on the address assigned to the virtual NIC. Thanks Nicolas for the awesome reply to the list!
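For reference, handing a VNIC a static MAC at creation time is a one-liner with the “-m” option of dladm (the link name and address below are made up; note the 2 in the first octet, which marks the address as locally administered):

```
$ dladm create-vnic -l bge0 -m 2:8:20:aa:bb:cc vnic1
```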

Updating inetd service manifests on Solaris hosts

I configured a jumpstart server this weekend in my home lab, and needed to enable tftp to allow clients to fetch pxegrub and the Solaris kernel. To enable the tftp service, I uncommented the tftp service in /etc/inetd.conf and ran inetconv. It later dawned on me that I needed to update the directory tftp serves files from, so I adjusted the tftp entry and re-ran inetconv. This resulted in the following notice:

$ grep tftp /etc/inetd.conf

# TFTPD - tftp server (primarily used for booting)
tftp    dgram   udp6    wait    root    /usr/sbin/in.tftpd      in.tftpd -s /bits/provisioning/boot

$ inetconv
inetconv: Notice: Service manifest for tftp already generated as /var/svc/manifest/network/tftp-udp6.xml, skipped

The fix for this was simple: running inetconv with the “-f” (force update) option forces it to regenerate the manifest:

$ inetconv -f

tftp -> /var/svc/manifest/network/tftp-udp6.xml
Importing tftp-udp6.xml ...Done
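To double check that the regenerated manifest picked up the new exec string, you can dump the service properties with inetadm (the FMRI below matches the manifest name above):

```
$ inetadm -l svc:/network/tftp/udp6:default | grep exec
```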

That did the trick, and my lab machines are now building via the network.