Safari/Firefox “Site not found” errors

Numerous people have posted to the Apple discussion boards about “site not found” errors and web browsers requiring two attempts to load a page.

This was REALLY annoying me, so I started digging to see what was wrong. When I broke out tcpdump, I noticed that OS X was sending AAAA record requests (the IPv6 equivalent of IPv4 A records) to my DNS server:

$ sudo tcpdump -i en1 -vvvv -n -e port 53

[ … ]

08:51:49.710240 00:0d:93:83:1d:73 > 00:03:ba:05:9d:9f, ethertype IPv4 (0x0800), length 73: IP (tos 0x0, ttl 64, id 29629, offset 0, flags [none], length: 59) > [udp sum ok] 20324+ AAAA? (31)

08:51:49.712412 00:03:ba:05:9d:9f > 00:0d:93:83:1d:73, ethertype IPv4 (0x0800), length 171: IP (tos 0x0, ttl 64, id 20532, offset 0, flags [none], length: 157) > 20324 q: AAAA? 1/1/0 CNAME[|domain]

Now, why Safari causes the name resolution libraries to query “” when I visit is beyond me (I will have to do some more digging). Since I am on a pure IPv4 network, I tried disabling IPv6 in the network preferences tab to see if that would stop the AAAA record requests. It did not, and I still had trouble loading pages. While reviewing the latest errata on the OpenBSD errata page, I came across the following:

“BIND contains a bug which results in BIND trying to contact nameservers via IPv6, even in cases where IPv6 connectivity is non-existent. This results in unnecessary timeouts and thus slow DNS queries.”

Well hot dog, this aligned with what I was seeing! I applied the patch to my OpenBSD name server, restarted named, and the problem seems to be fixed. Several of the folks on the discussion board also mentioned hard-coding DNS servers, which may or may not fix the issue (if this is a BIND-specific issue, then your ISP will need to patch their servers). Once I get some additional time, I will check whether this is BIND- or OpenBSD-specific. Stay tuned!

Checking for OpenLDAP unindexed searches

I was checking my OpenLDAP logfiles today, and noticed that the “cn” attribute wasn’t indexed. I found this by searching for the “index_param failed” string:

$ grep "index_param failed" /var/log/openldap

Dec 25 13:37:19 winnie slapd[730]: [ID 635189 local4.debug] <= bdb_substring_candidates: (cn) index_param failed (18)

To fix this problem, I added an “index” statement to my slapd.conf:

index cn,mail,sn eq,pres,sub

Once the index was added, I rebuilt the indexes with the “slapindex” utility:

$ slapindex -f /usr/local/openldap-common/etc/slapd.conf -b "dc=synackfin,dc=com"

The OpenLDAP documentation has more info in case you’re interested in learning more.
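If you want to see every attribute that is generating these errors (not just “cn”), the attribute name can be pulled out of each log entry with sed. This is a sketch that uses a sample log line; in practice you would feed it your real logfile:

```shell
# Sample "index_param failed" entry from the OpenLDAP log; in practice you
# would pipe in the real entries, e.g.:
#   grep "index_param failed" /var/log/openldap
log_line='Dec 25 13:37:19 winnie slapd[730]: [ID 635189 local4.debug] <= bdb_substring_candidates: (cn) index_param failed (18)'

# Extract the attribute name from the parentheses before "index_param failed".
echo "$log_line" | sed -n 's/.*(\([a-zA-Z0-9]*\)) index_param failed.*/\1/p'
```

Piping the whole logfile through "sort | uniq -c" after the sed gives a count per attribute, which makes it easy to build the index statement in one pass.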

Bash arrays

I have been trying to get a better grasp of some advanced bash concepts, and have been reading through the bash reference manual.

I am pretty familiar with C and Perl arrays, but have never had a need to use arrays in a bash script. The syntax for a bash array is almost identical to Perl’s:

array[1]=12
echo ${array[1]}

This assigns the value 12 to slot one of the array (bash array indexes start at zero, just like Perl’s). Since bash variables are untyped, we can assign a string to the same array:

array[2]="my string"
echo ${array[2]}

This assigns the string “my string” to slot two in the array. Useful stuff!
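A few more array operations come in handy once you go beyond single elements. This is a quick sketch (the names are made up; the syntax should work in any reasonably modern bash):

```shell
#!/bin/bash
# Assign several elements at once; bash array indexes start at zero.
fruits=(apple banana cherry)

fruits[3]="kiwi"                 # add a fourth element by index

echo "${fruits[0]}"              # first element: apple
echo "${#fruits[@]}"             # number of elements: 4
echo "${fruits[@]}"              # all elements on one line

# Iterate over every element.
for f in "${fruits[@]}"; do
    echo "Fruit: $f"
done
```

The quoting around "${fruits[@]}" matters: it expands each element as its own word even if an element (like "my string" above) contains whitespace.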

Solaris Entropy statistics

I exchanged an email or two with Andy Tucker regarding Solaris 9 entropy pools, and found out that entropy statistics are available through the mdb (modular debugger) “rnd_stats” dcmd:

$ uname -a
SunOS winnie 5.9 Generic_117171-14 sun4u sparc SUNW,Ultra-5_10

$ mdb -k

Loading modules: [ unix krtld genunix ip lofs nfs random ptm ]

> ::rnd_stats
Random number generator statistics:
    8192 bits of entropy estimate
       0 bytes generated for /dev/random
 5998456 bytes generated for /dev/urandom
 2277764 bits of entropy added to the pool
   94006 bits of entropy extracted from the pool
 4849216 bytes added to the random pool
     240 bytes extracted from the random pool

With Solaris 10, you can use the “swrand_stats” and “rnd_stats” dcmds to get entropy statistics:

$ uname -a
SunOS sparky 5.10 s10_69 i86pc i386 i86pc

$ mdb -k

Loading modules: [ unix krtld genunix specfs dtrace ufs ip sctp uhci usba nca random lofs sppp nfs crypto ptm ]

> ::swrand_stats                      
Software-based Random number generator statistics:
    8192 bits of entropy estimate
  861095 bits of entropy added to the pool
    8480 bits of entropy extracted from the pool
 2318888 bytes added to the random pool
    1060 bytes extracted from the random pool

> ::rnd_stats
Random number device statistics:
       0 bytes generated for /dev/random
       0 bytes read from /dev/random cache
      36 bytes generated for /dev/urandom

I wish there were a way to tell whether an application blocked because of a depleted pool in Solaris 9 (DTrace may solve this problem in Solaris 10).

Solaris’s maxcontig setting

After reading through some UFS tuning documentation, I started playing with the UFS “maxcontig” tunable. This value controls the number of file system blocks that are read or written in a single I/O operation. Each UFS file system contains a maxcontig value, which can be printed with the Solaris “fstyp” command:

$ fstyp -v /dev/md/dsk/d0 |more

magic   11954   format  dynamic time    Fri Jan 14 09:47:19 2005
sblkno  16      cblkno  24      iblkno  32      dblkno  832
sbsize  2048    cgsize  8192    cgoffset 128    cgmask  0xfffffff0
ncg     2191    size    116165760       blocks  114377853
bsize   8192    shift   13      mask    0xffffe000
fsize   1024    shift   10      mask    0xfffffc00
frag    8       shift   3       fsbtodb 1
minfree 1%      maxbpg  2048    optim   time
maxcontig 16    rotdelay 0ms    rps     90
csaddr  832     cssize  35840   shift   9       mask    0xfffffe00
ntrak   16      nsect   255     spc     4080    ncyl    56944
cpg     26      bpg     6630    fpg     53040   ipg     6400
nindir  2048    inopb   64      nspf    2
nbfree  7624085 ndir    12806   nifree  13904878        nffree  90216
cgrotor 1454    fmod    0       ronly   0       logbno  1824
version 0

To see if maxcontig needs to be increased, you can run “iostat” and watch the transfer sizes:

$ iostat -zxn 5

                 extended device statistics              
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0   63.2    0.0 7782.9 52.3  2.0  827.9   31.4  97  99 c0t0d0
    0.2   63.2    1.6 7782.9 52.9  2.0  834.1   31.4  98 100 c0t2d0
    0.2   62.4    1.6 7782.5  0.2 54.7    3.6  873.1  23 100 d0
    0.0   62.4    0.0 7782.5  0.0 54.1    0.0  866.7   0  99 d1
    0.2   62.4    1.6 7782.5  0.0 54.7    0.0  873.0   0 100 d2

If we divide the kilobytes written per second (kw/s) by the number of writes
per second (w/s), we can derive the average size of each physical write:

$ bc
7782.9 / 63.2
123

Give or take a few kilobytes, we are pushing maxcontig’s worth of data (16 blocks x 8 KB = 128 KB) during each write operation. If you have sequential workloads, increasing the value of maxcontig may allow your Solaris box to read or write more data at once, reducing the total number of I/O operations. You can adjust the size of maxcontig with the “tunefs” utility:
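The same arithmetic can be scripted. Here is a small awk filter that computes the average write size from one line of the iostat output above (the field positions assume the exact “iostat -zxn” column layout shown):

```shell
# One device line from "iostat -zxn" (columns: r/s w/s kr/s kw/s ... device).
iostat_line='    0.0   63.2    0.0 7782.9 52.3  2.0  827.9   31.4  97  99 c0t0d0'

# Average physical write size in KB = kw/s (field 4) / w/s (field 2).
echo "$iostat_line" | awk '$2 > 0 { printf "%s: %.1f KB per write\n", $11, $4 / $2 }'
# -> c0t0d0: 123.1 KB per write
```

The $2 > 0 guard skips devices that did no writes during the interval, avoiding a divide by zero.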

$ tunefs -a 128 /dev/md/dsk/d0
maximum contiguous block count changes from 16 to 128

This will cause 128 file system blocks (1 MB) to be read and written with each I/O operation. In order for this value to be effective, you need to increase the maximum size of a SCSI/SVM I/O operation. This is done by adding the following tunables to /etc/system:

set maxphys=1048576
set md:md_maxphys=1048576
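As a quick sanity check, the new maxcontig value multiplied by the 8 KB UFS block size should match the maxphys value above:

```shell
# 128 file system blocks * 8192 bytes per block = 1 MB per I/O operation,
# which matches the maxphys/md_maxphys values set in /etc/system.
maxcontig=128
bsize=8192
echo $(( maxcontig * bsize ))    # -> 1048576
```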

ALL tunables should be tested on a development/QE box before they are implemented on important systems. I tried bumping maxcontig to 128 on my Ultra5, and immediately saw corruption on several metadevices. After some digging, I learned that maxcontig can only be set to “16” on IDE devices, and “128” on SCSI devices.

Luckily the Ultra5 was a test system, so recovering was relatively straightforward. Test all tunables before you deploy them :)

OpenBSD PF: Filtering traffic by Operating System

I was reading through the PF manual, and came across a section on filtering traffic with “Passive Operating System Fingerprinting.”

PF contains dozens of Operating System fingerprints. The full list of fingerprints can be printed with the pfctl utility:

$ pfctl -s osfp | tail -5
Windows XP RFC1323
Windows XP SP1
Windows XP SP3
Zaurus 3.10

or by viewing the fingerprint file, /etc/pf.os, directly:

$ tail -5 /etc/pf.os
*:128:1:48:M536,N,N,S: @Windows:98::Windows 98
*:128:1:48:M*,N,N,S: @Windows:XP::Windows XP/2000
*:128:1:48:M*,N,N,S: @Windows:2000::Windows XP/2000

Using the fingerprints listed here, we can filter inbound connections by IP address, TCP/UDP ports, and Operating System:

pass in quick on $ext proto tcp from to any port 22 os OpenBSD keep state

This example will allow OpenBSD systems with an IP address in the network to ssh to any machine on our network. This has some interesting uses.
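To make this concrete, here is a sketch of a small ruleset combining OS fingerprinting with other criteria. The $ext_if macro, the 192.168.1.0/24 network, and the rule choices are hypothetical examples, not part of the original configuration:

```
ext_if = "fxp0"

# Allow SSH from OpenBSD hosts on a (hypothetical) internal network.
pass in quick on $ext_if proto tcp from 192.168.1.0/24 to any port 22 \
    os OpenBSD keep state

# Drop SMTP connections whose initial SYN matches any Windows fingerprint.
block in quick on $ext_if proto tcp from any to any port 25 os "Windows"
```

One caveat: passive fingerprinting only examines the initial TCP SYN, so the “os” keyword cannot be used in UDP or ICMP rules.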