I received a comment from a reader regarding the trafshow utility. Trafshow is definitely a cool piece of software, and I use it on some of my Linux hosts. On my OpenBSD systems, I have been using the pktstat utility, which provides connection statistics for all traffic on the system:
$ pktstat
interface: sis0 total: 13.8Mb (1m34s)
cur: 147.0k (72%) [115.5k 39.4k 14.5k] min: 94.4k max: 202.3k avg: 145.6k bps
bps % b desc
134.4k 66% 12.9M tcp 192.168.1.10:8010 <-> 192.168.1.100:58720
105.6 0% 528.0 tcp 192.168.1.10:www <-> 192.168.1.100:54947
12.5k 6% 62.5k tcp 192.168.1.10:www <-> 192.168.1.100:64475
- 200 GET /index.html
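If you only care about a single interface, pktstat can also be pointed at one directly. Going from memory of the manual page (so the flags below are my recollection, not gospel), something like the following should watch sis0 without resolving hostnames:
$ pktstat -i sis0 -n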
Since trafshow contains numerous features that aren’t present in pktstat, I reckon I should upgrade my OpenBSD image to use trafshow. Thanks for the comment!
While performing some testing a few weeks back, I needed to enable jumbo frames on one of our Fujitsu 250s. This was accomplished with the following three steps:
1. Add the following line to /etc/system:
$ echo "set fjgi:fjgi_jumbo=1" >> /etc/systemd
2. If the default 9000-byte MTU isn't ideal, add the preferred MTU to the /etc/fjmtu.fjgiX file:
$ echo "8192" >> /etc/fjmtu.fjgi0
3. Reboot the system.
If everything works correctly, ifconfig will report the larger MTU.
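A quick way to verify is to run ifconfig against the interface and eyeball the mtu field. The output below is roughly what I'd expect to see after setting an 8192-byte MTU (the flags and index values will obviously vary from host to host):
$ ifconfig fjgi0 | grep mtu
fjgi0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 8192 index 2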
While debugging a DNS problem a few weeks back, I needed a way to measure the time it took a name server to respond to a DNS request. After poking around the OpenBSD ports collection, I came across the nsping utility. Nsping queries a DNS server passed on the command line, and reports the time it took the server to resolve a name. The following example shows how to use nsping to measure the time it takes to resolve the name prefetch.net on the name server ns2.dreamhost.com:
$ nsping -t 5 -h prefetch.net ns2.dreamhost.com
NSPING ns2.dreamhost.com (66.201.54.66): Hostname = "prefetch.net", Type = "IN A"
+ [ 0 ] 46 bytes from 66.201.54.66: 76.224 ms [ 0.000 san-avg ]
+ [ 1 ] 46 bytes from 66.201.54.66: 79.862 ms [ 78.043 san-avg ]
+ [ 3 ] 46 bytes from 66.201.54.66: 79.902 ms [ 78.663 san-avg ]
+ [ 4 ] 46 bytes from 66.201.54.66: 79.912 ms [ 78.975 san-avg ]
+ [ 6 ] 46 bytes from 66.201.54.66: 79.920 ms [ 79.164 san-avg ]
^C
Total Sent: [ 7 ] Total Received: [ 5 ] Missed: [ 2 ] Lagged [ 0 ]
Ave/Max/Min: 79.164 / 79.920 / 76.224
Each line contains the size of the response, the time it took to complete the request, and a sequence number. The summary line contains the number of requests that were sent to the server, the number that went unanswered, and the average, maximum, and minimum response times. If you want to use a resource record type other than “A” (the default), you can invoke nsping with the “-T” option and the resource record type to use:
$ nsping -t 5 -h prefetch.net -T mx ns1.dreamhost.com
NSPING ns1.dreamhost.com (66.33.206.206): Hostname = "prefetch.net", Type = "IN MX"
+ [ 0 ] 136 bytes from 66.33.206.206: 73.875 ms [ 0.000 san-avg ]
+ [ 1 ] 136 bytes from 66.33.206.206: 79.905 ms [ 76.890 san-avg ]
+ [ 2 ] 136 bytes from 66.33.206.206: 80.476 ms [ 78.085 san-avg ]
+ [ 3 ] 136 bytes from 66.33.206.206: 80.030 ms [ 78.572 san-avg ]
+ [ 6 ] 136 bytes from 66.33.206.206: 80.004 ms [ 78.858 san-avg ]
^C
Total Sent: [ 7 ] Total Received: [ 5 ] Missed: [ 2 ] Lagged [ 0 ]
Ave/Max/Min: 78.858 / 80.476 / 73.875
Now to figure out why DNS responses are missing!
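As a sanity check, I like to compare nsping's numbers against the query time dig prints in its statistics section. Something along these lines (the timing shown is illustrative, not a real measurement):
$ dig @ns2.dreamhost.com prefetch.net | grep "Query time"
;; Query time: 78 msec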
One nifty feature that recently made its appearance in Solaris 10 is device-in-use checking. This feature is implemented by the libdiskmgt.so.1 shared library, and allows utilities to see if a device is in use, and what it is being used for. This is really neat, and I love the fact that format now prints what each partition on an active device is being used for:
$ format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0
/pci@0,0/pci-ide@11,1/ide@0/cmdk@0,0
1. c1d0
/pci@0,0/pci-ide@11,1/ide@1/cmdk@0,0
Specify disk (enter its number): 0
selecting c0d0
Controller working list found
[disk formatted, defect list found]
/dev/dsk/c0d0s0 is part of SVM volume stripe:d10. Please see metaclear(1M).
/dev/dsk/c0d0s1 is part of SVM volume stripe:d30. Please see metaclear(1M).
/dev/dsk/c0d0s3 is part of SVM volume stripe:d20. Please see metaclear(1M).
/dev/dsk/c0d0s4 is part of active ZFS pool home. Please see zpool(1M).
/dev/dsk/c0d0s7 contains an SVM mdb. Please see metadb(1M).
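The same in-use checks appear to be wired into other utilities that write to disk devices. If I point newfs at the slice backing my ZFS pool, it refuses with a similar message (this example is typed from memory, so the exact wording may differ):
$ newfs /dev/rdsk/c0d0s4
/dev/rdsk/c0d0s4 is part of active ZFS pool home. Please see zpool(1M).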
I digs me some Solaris!
While perusing the OpenBSD ports collection a few weeks ago, I came across the ifstat utility. This nifty utility allows you to view bandwidth totals for each interface in a server at a specified interval. Here is a sample run showing the bandwidth in and out of the sis0 and sis1 Ethernet interfaces, and the total bandwidth in and out of the system:
$ ifstat -TAb 5
sis0 sis1 Total
Kbps in Kbps out Kbps in Kbps out Kbps in Kbps out
129.96 4.37 3.91 131.71 133.87 136.09
130.48 5.43 4.77 131.98 135.25 137.41
132.21 4.24 3.60 133.71 135.81 137.95
This is a nifty utility, and one I plan to add to my stock OpenBSD builds!
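If you just want to watch one interface for a fixed window, ifstat also accepts an interface list, a delay, and a sample count. Assuming I'm reading the manual page correctly, the following should print ten one-second samples for sis0:
$ ifstat -i sis0 1 10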