Blog O' Matty


LDAP client deficiencies

This article was posted by Matty on 2007-05-21 18:45:00 -0400

I have been spending a bit of time lately configuring Solaris and Linux hosts to authenticate against LDAP. Authentication works well on the surface, but the actual client implementations are somewhat lacking. Take the Linux pam_ldap module, for instance. To authenticate a single session, the pam_ldap module performs thirty-three operations, including seven TCP connections and a number of redundant searches. Here is what I see in my directory server logfile for each login session that is authenticated through pam_ldap:

1. New connection
conn=801
LDAP connection from 10.10.20.3 to 10.10.20.2

2. Anonymous BIND
conn=801
BIND dn="" method=128 version=3

3. Search
conn=801
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uid=test))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

4. New connection
conn=802
LDAP connection from 10.10.20.3 to 10.10.20.2

5. Anonymous BIND
conn=802
BIND dn="" method=128 version=3

6. Search
conn=802
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(objectClass=posixAccount)(uid=test))"
ATTRIBUTES=ALL
nentries => 1

7. BIND as user
conn=802
BIND dn="uid=test,ou=People,dc=prefetch,dc=net" method=128 version=3

8. Anonymous BIND
conn=802
BIND dn="" method=128 version=3

9. Close connection 801

10. New connection
conn=803
LDAP connection from 10.10.20.3 to 10.10.20.2

11. Anonymous BIND
conn=803
BIND dn="" method=128 version=3

12. Search
conn=803
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uid=test))"
ATTRIBUTES=ALL
nentries => 1

13. Search
conn=803
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixGroup)(|(memberUid=test)
(uniqueMember=uid=test,ou=People,dc=prefetch,dc=net)))"
attrs="gidNumber"
nentries => 0

14. Close connection 802

15. New connection
conn=804
LDAP connection from 10.10.20.3 to 10.10.20.2

16. Anonymous BIND
conn=804
BIND dn="" method=128 version=3

17. Search
conn=804
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uidNumber=100))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

18. Search
conn=804
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uid=test))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

19. New connection
conn=805
LDAP connection from 10.10.20.3 to 10.10.20.2

20. Anonymous BIND
conn=805
BIND dn="" method=128 version=3

21. Search
conn=805
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uidNumber=100))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

22. New connection
conn=806
LDAP connection from 10.10.20.3 to 10.10.20.2

23. Anonymous BIND
conn=806
BIND dn="" method=128 version=3

24. Search
conn=806
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uidNumber=100))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

25. Close connection 806

26. New connection
conn=807
LDAP connection from 10.10.20.3 to 10.10.20.2

27. Anonymous BIND
conn=807
BIND dn="" method=128 version=3

28. Search
conn=807
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uidNumber=100))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

29. Close connection 807

30. Close connection 805

31. Search
conn=804
SRCH base="ou=people,dc=prefetch,dc=net"
FILTER="(&(objectClass=posixAccount)(uid=test))"
ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory
loginShell gecos description objectClass"
nentries => 1

32. Close connection 803

33. Close connection 804
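
To put the trace in more familiar terms, the search in operation 3 above is roughly equivalent to the following ldapsearch invocation (a hand-built approximation using the OpenLDAP command line tools, with the host, base DN, filter, and attribute list pulled from the trace):

$ ldapsearch -x -H ldap://10.10.20.2 -b "ou=people,dc=prefetch,dc=net" \
      "(&(objectClass=posixAccount)(uid=test))" \
      uid userPassword uidNumber gidNumber cn homeDirectory \
      loginShell gecos description objectClass

pam_ldap and nss_ldap issue variations of this same query several times over the course of a single login.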

The Sun native LDAP client is not much better, and I am somewhat curious why the PAM modules couldn’t have been written to use persistent connections and to cache data between PAM phases. If you are using either of these implementations to authenticate users, I sure hope you’re running the name service caching daemon (nscd). :)
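
For what it’s worth, nscd takes a lot of the sting out of this. Below is a minimal sketch of the relevant /etc/nscd.conf entries; the time-to-live values are purely illustrative, so tune them for your environment:

enable-cache            passwd          yes
positive-time-to-live   passwd          600
negative-time-to-live   passwd          20
enable-cache            group           yes
positive-time-to-live   group           3600
negative-time-to-live   group           60

Caching won’t eliminate the authentication BIND itself, but it does keep the nss_ldap passwd and group lookups from generating a fresh batch of connections and searches every time a user logs in.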

An MD device that went too far

This article was posted by Matty on 2007-05-16 19:46:00 -0400

Recently I was approached to help debug a problem with a Linux MD device that wouldn’t start. When I ran raidstart to start the device, it spit out a number of errors on the console, and messages similar to the following were written to the system log:

ide: failed opcode was: unknown
hda: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
hda: read_intr: error=0x10 { SectorIdNotFound }, LBAsect=120101895, sector=120101895

After pondering the error for a bit, it dawned on me that the partition table might be fubar. My theory proved correct: /dev/hda7 (the partition associated with the MD device that wouldn’t start) had an ending cylinder (7476) that was greater than the number of physical cylinders (7294) on the drive:

$ fdisk -l /dev/hda

Disk /dev/hda: 60.0 GB, 60000000000 bytes
255 heads, 63 sectors/track, 7294 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/hda1 1 32 257008+ fd Linux raid autodetect
/dev/hda2 33 65 265072+ fd Linux raid autodetect
/dev/hda3 66 327 2104515 fd Linux raid autodetect
/dev/hda4 328 7476 57424342+ 5 Extended
/dev/hda5 328 458 1052226 fd Linux raid autodetect
/dev/hda6 459 720 2104483+ fd Linux raid autodetect
/dev/hda7 721 7476 54267538+ fd Linux raid autodetect
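
The geometry in that output also lines up with the read error in the syslog. With 255 heads and 63 sectors per track, each cylinder holds 16065 sectors, so a quick bit of shell arithmetic (using the numbers from the fdisk output above) shows where the physical disk ends and where the bogus partition claims to end:

$ echo $((7294 * 16065))   # sectors in the drive's 7294 physical cylinders
117178110
$ echo $((7476 * 16065))   # sectors implied by the bogus ending cylinder of 7476
120101940

The failing sector reported by the kernel (120101895) falls between those two numbers, which is exactly what you would expect when a partition extends past the end of the drive.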

Once I corrected the ending cylinder and recreated the file system, everything worked as expected. Now I have no idea how the system got into this state (I didn’t build the system), since the installers I tested display errors when you specify an ending cylinder that is larger than the maximum number of cylinders available.

Adding status support to the Solaris dd utility

This article was posted by Matty on 2007-05-14 18:23:00 -0400

I really dig Solaris, but it is missing a few basic features I have come to rely on. One of those features is the ability to send the dd utility a signal to get the status of a copy operation. Since Solaris is now open source (or mostly open source), I thought I would hack some code together to implement this feature. After a bit of coding (and testing), I requested a sponsor to putback my change into OpenSolaris. One was assigned, and I sent my changes to him to review. I haven’t heard back from him in the past month or two, so I reckon he’s too busy to help me putback my changes. In case others are interested in this feature, I placed a diff with my changes on my website.
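
For reference, this is the behavior I was after: GNU dd on Linux prints transfer statistics when it receives a SIGUSR1, without interrupting the copy. A quick illustration (the output below is representative, not an actual capture):

$ dd if=/dev/zero of=/tmp/testfile bs=1024k count=4096 &
$ kill -USR1 $(pgrep -x dd)
1809+0 records in
1809+0 records out
1896873984 bytes (1.9 GB) copied, 4.6 seconds, 412 MB/s

The diff I put together adds an equivalent capability to the Solaris dd, so long-running copies can be checked on without killing them.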

Solaris fibre channel management

This article was posted by Matty on 2007-05-09 05:10:00 -0400

With the introduction of Solaris 10, storage management has changed considerably. The storage foundation kit is now integrated into the base OS, the Leadville driver stack has been expanded to support HBAs from Emulex, JNI, and QLogic, and the fcinfo utility as well as several mdb dcmds were added to view fibre channel connectivity information. fcinfo is especially useful, since it lets you view HBA and connectivity information with nothing more than the tools in the base operating system.

The fcinfo utility has two main options. The first option, “hba-port,” will display HBA port speeds, WWPNs, the HBA manufacturer, and the state of the HBA:

$ fcinfo hba-port

HBA Port WWN: 10000000c9327592
        OS Device Name: /dev/cfg/c2
        Manufacturer: Emulex
        Model: LP9002L
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 20000000c9327592

HBA Port WWN: 10000000c9327593
        OS Device Name: /dev/cfg/c3
        Manufacturer: Emulex
        Model: LP9002L
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 20000000c9327593

If the “-l” option is specified, fcinfo will also print link statistics (e.g., link failure, signal loss, etc.):

$ fcinfo hba-port -l

HBA Port WWN: 10000000c9327592
        OS Device Name: /dev/cfg/c2
        Manufacturer: Emulex
        Model: LP9002L
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 20000000c9327592
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 5
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0

HBA Port WWN: 10000000c9327593
        OS Device Name: /dev/cfg/c3
        Manufacturer: Emulex
        Model: LP9002L
        Type: N-port
        State: online
        Supported Speeds: 1Gb 2Gb
        Current Speed: 2Gb
        Node WWN: 20000000c9327593
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 5
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 0
                Invalid CRC Count: 0

The second fcinfo option is “remote-port,” which will display information on remote targets. This information includes the storage manufacturer, the storage product type, WWPNs, and all of the SCSI targets that have been presented to the host:

$ fcinfo remote-port -slp 10000000c9327592

Remote Port WWN: 50060160082006e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 50060160882006e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 255
                Invalid CRC Count: 0
        LUN: 0
          Vendor: DGC
          Product: RAID 5
          OS Device Name: /dev/rdsk/c4t6006016061B71000AD0810C9979CD911d0s2
        LUN: 1
          Vendor: DGC
          Product: RAID 5
          OS Device Name: /dev/rdsk/c4t6006016061B7100055B12704989CD911d0s2

Remote Port WWN: 50060168082006e2
        Active FC4 Types: SCSI
        SCSI Target: yes
        Node WWN: 50060160882006e2
        Link Error Statistics:
                Link Failure Count: 0
                Loss of Sync Count: 0
                Loss of Signal Count: 0
                Primitive Seq Protocol Error Count: 0
                Invalid Tx Word Count: 255
                Invalid CRC Count: 0
        LUN: 0
          Vendor: DGC
          Product: RAID 5
          OS Device Name: /dev/rdsk/c4t6006016061B71000AD0810C9979CD911d0s2
        LUN: 1
          Vendor: DGC
          Product: RAID 5
          OS Device Name: /dev/rdsk/c4t6006016061B7100055B12704989CD911d0s2

These changes should make managing storage on Solaris a bit easier.

Concert review: Buckcherry and Saliva

This article was posted by Matty on 2007-05-09 00:32:00 -0400

This past week I continued my quest to see every band live, and ventured out to see Saliva and Buckcherry at a relatively small venue. The music started around 9pm, when the Saliva lead singer came to the stage with one of the coolest mohawks I have seen in quite some time (I think it was colored bright pink). The band wasted no time getting the crowd energized, and cranked out one hit after another. This included their mega hit “Ladies And Gentlemen,” as well as “Click Click Boom,” “Always,” “Your Disease,” “Survival Of The Sickest,” “Broken,” and “Raise Up.” Each song was filled with passion and a lot of aggression, and you could tell the crowd was loving the music!!

After a short intermission, Buckcherry took the stage. Prior to attending the concert, I think I had heard two or three of the band’s songs. I knew next to nothing about them, but being an open-minded person, I gave them the chance to make me a Buckcherry fan. Well, the band was absolutely incredible, and the lead singer Josh Todd has an amazing stage presence (even though he didn’t talk all that much). The band performed for about an hour and a half, and played their classic hit “Crazy Bitch,” as well as “Everything” and “Lit Up.” I especially enjoyed “Lit Up,” since the band played it with an amazing level of intensity, which got the crowd into the song (I don’t have the best vocals, but I was singing along with everyone around me).

Now I don’t give many shows the Matty two thumbs of approval, but I can safely say this show gets it. The music was hard, both bands played with the utmost intensity, and most importantly, the crowd was into each and every song. If either of these bands wanders back down to the South to perform again, I will be sitting front and center watching them crank out some more awesome music! Niiiiiiiiiiiiiiiiiiiiiiiiice!