Encrypting data with the FUSE encryption module

I use a laptop to conduct most of my system administration duties, and I periodically need to store sensitive information in my home directory on that laptop. To ensure that this information can’t be used for malicious purposes, I use the FUSE encryption module to encrypt anything I think is sensitive (I probably go overboard when it comes to encrypting data, but you can never be too safe with your (or your company’s) data!).

Now you may be asking yourself: why not GnuPG? Well, the FUSE encryption module allows transparent access to files and directories, so you don’t have to manually key in a symmetric key each time you need to access a file (you only need to type in the key when you mount the encrypted folder). This provides a fair amount of flexibility, and helps ensure that I won’t accidentally forget to re-encrypt a file when I am done using it.

Configuring the FUSE encryption module is a simple process. To create a new source directory (the directory where the encrypted data is stored) and mount it on a destination directory (the place where you read and write data), run the encfs utility with the full path to the source directory and the full path to the destination directory:

$ encfs /home/matty/source /home/matty/encrypt/

The first time you run the encfs utility, it will ask you to pick a setup mode and a symmetric key that will be used to encrypt data added to the destination directory. Here are the screens I was presented with when I ran the encfs command line listed above:

Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?> p

Paranoia configuration selected.

Configuration finished.  The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 2:1:1
Filename encoding: "nameio/block", version 3:0:1
Key Size: 256 bits
Block Size: 512 bytes, including 8 byte MAC header
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File data IV is chained to filename IV.

-------------------------- WARNING --------------------------
The external initialization-vector chaining option has been
enabled.  This option disables the use of hard links on the
filesystem. Without hard links, some programs may not work.
The programs 'mutt' and 'procmail' are known to fail.  For
more information, please see the encfs mailing list.
If you would like to choose another configuration setting,
please press CTRL-C now to abort and start over.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism.  However, the password can be changed
later using encfsctl.

New Encfs Password: 
Verify Encfs Password: 

Once this operation completes, the source directory will be initialized, and mounted on the destination directory:

$ df -h

Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1              19G  2.6G   16G  15% /
tmpfs                 314M     0  314M   0% /dev/shm
/dev/hda2              19G  177M   18G   1% /home
encfs                  19G  177M   18G   1% /home/matty/encrypt

The destination directory can be accessed just like any other directory on the server, but its contents will only be viewable while the directory is mounted. To unmount the destination directory so its contents can no longer be viewed, run the fusermount utility with the “-u” (unmount) option and the directory to unmount:

$ fusermount -u /home/matty/encrypt

I really dig FUSE, and am hopeful that the Solaris port will be completed in the near future. :)

Generating statistics from the Sun directory server access logs

I have been managing Sun’s directory server for close to four years, and it is one of the few products I use that has all the bells and whistles out of the box. One extremely useful feature is the ability to generate statistics from the server access logs. This capability is built into the logconv.pl Perl script, which is part of the Sun directory server resource kit (as a side note, logconv.pl is what motivated me to write ldap-stats.pl to analyze OpenLDAP log files).

To use the logconv.pl statistics script to analyze your server access logs, you will first need to grab the resource kit from the Sun download page. Once you download and extract the resource kit, you can cd into the $RESOURCE_KIT_HOME/perl directory and run logconv.pl with the optional “-V” (verbose) option and the names of the log(s) to analyze:

$ logconv.pl -V access

This will produce a report similar to the following:

SunOne Access Log Analyzer 4.71

Initializing Variables...
Processing 1 Access Log(s)...

  /home/matty/access (Total Lines: 135987) 
       1000 Lines Processed
       2000 Lines Processed
          < ..... >
     134000 Lines Processed
     135000 Lines Processed
*    135987 Lines Processed                     Total Lines Processed:       135987

* Total Lines Analyzed:  135987

----------- Access Log Output ------------

Start of Log:  15/May/2007:16:13:30
End of Log:    16/May/2007:16:13:55

Restarts:                     0

Opened Connections:           16967
Closed Connections:           0
Total Operations:             34183
Total Results:                34183
Overall Performance:          100.0%
Most Pending Operations:      3

Searches:                     16552
Modifications:                13
Adds:                         4
Deletes:                      1
Mod RDNs:                     0
Compares:                     0

5.x Stats 
Persistent Searches:          0
Internal Operations:          0
Entry Operations:             0
Extended Operations:          658
Abandoned Requests:           0
Smart Referrals Received:     0

VLV Operations:               0
VLV Unindexed Searches:       0
SORT Operations:              0
SSL Connections:              0

Entire Search Base Queries:   1
Unindexed Searches:           0

FDs Taken:                    16967
FDs Returned:                 0
Highest FD Taken:             472

Broken Pipes:                 0
Connections Reset By Peer:    0
Resource Unavailable:         0

Binds:                        16955
Unbinds:                      16738

 LDAP v2 Binds:               2
 LDAP v3 Binds:               16953
 Expired Password Logins:     0
 SSL Client Binds:            0
 Failed SSL Client Binds:     0
 SASL Binds:                  0

 Directory Manager Binds:     0
 Anonymous Binds:             2
 Other Binds:                 16953

----- Errors -----

err=0                 33985    Successful Operations   
err=49                  148    Invalid Credentials (Bad Password)
err=32                   50    No Such Object          

----- Top 20 Failed Logins ------

9           uid=foo,ou=people,dc=prefetch,dc=net
9           uid=bar,ou=people,dc=prefetch,dc=net

 < ..... >

----- Total Connection Codes -----

U1                    16738    Cleanly Closed Connections              
B1                      220    Bad Ber Tag Encountered                 

 < ..... >

----- Top 20 Clients -----

Number of Clients:  9

                   7124 -  U1   Cleanly Closed Connections
                     25 -  B1   Bad Ber Tag Encountered

                   5322 -  U1   Cleanly Closed Connections
                     18 -  B1   Bad Ber Tag Encountered

 < ..... >

----- Top 20 Bind DN's -----

Number of Unique Bind DN's: 398

3857            uid=foo,ou=people,dc=prefetch,dc=net
2761            uid=bar,ou=people,dc=prefetch,dc=net

  < ..... >

----- Top 20 Search Bases -----

Number of Unique Search Bases: 25

7716            ou=people,dc=prefetch,dc=net
5302            ou=groups,dc=prefetch,dc=net

 < ..... >

----- Top 20 Search Filters -----

Number of Unique Search Filters: 619

5324            (&(objectclass=organizationalperson)(uid=foo))
2761            (&(objectclass=organizationalperson)(entrydn=uid=bar,ou=people,dc=prefetch,dc=net))

 < ..... >

----- Top 20 Most Frequent etimes -----

34049           etime=0     
134             etime=1     

----- Top 20 Longest etimes -----

etime=1         134       
etime=0         34049     

----- Top 20 Largest nentries -----

nentries=5                      1
nentries=1                  16537
nentries=0                     14

----- Top 20 Most returned nentries -----

16537           nentries=1    
14              nentries=0    
1               nentries=5    

----- 5.x Extended Operations -----

302     Other
302     Other
 18     Other
 18     Other
 18     Other

----- Top 20 Most Requested Attributes -----

8127        nsRoleDN           
8123        displayName        

 < ..... >

----- Recommendations -----


As you can see from the output above, there are numerous useful statistics, including the number of unindexed searches, a list of errors, operations by client, etc. These statistics are invaluable for proactively finding problems in your infrastructure.
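If you want a rough feel for what logconv.pl is doing under the hood, the error summary can be approximated with a quick awk pass over the err= fields. The sample log lines below are illustrative (the exact access log format is an assumption), not output from a real server:

```shell
# Create a small illustrative sample of directory server RESULT lines.
# The log format here is an assumption based on typical access logs.
cat > /tmp/access.sample <<'EOF'
[15/May/2007:16:13:30] conn=1 op=2 RESULT err=0 tag=101 nentries=1 etime=0
[15/May/2007:16:13:31] conn=2 op=1 RESULT err=49 tag=97 nentries=0 etime=0
[15/May/2007:16:13:32] conn=3 op=2 RESULT err=0 tag=101 nentries=1 etime=1
EOF

# Tally the err= codes, similar in spirit to logconv.pl's error summary.
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^err=/) count[$i]++ }
     END { for (c in count) print count[c], c }' /tmp/access.sample | sort -rn
```

This is obviously no substitute for logconv.pl, which also maps each error code to a human-readable description, but it works in a pinch on hosts without the resource kit installed.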

Debugging apache article

If you subscribe to SysAdmin magazine, you might be interested in my article Debugging Apache Web Server Problems, which is available in the July issue. Here is the list of topics I covered in the article:

– Using single process mode to isolate problems.

– Using httpd flags to debug configuration errors.

– Debugging CGI script execution problems with ScriptLog.

– Using mod_backtrace to determine why a process crashed.

– Dumping HTTP requests and responses with mod_dumpio.

– Using ptools and gdb to see why an httpd process hung.

– Enabling maintainer mode to simplify debugging.

– Using the source code to locate problems.

I have been a long-time subscriber to SysAdmin, and it’s one of the few magazines I read each month from cover to cover. If you happen to read my article, I would love to hear your thoughts (good or bad).

Catching SIGSEGVs as they happen

Periodically situations arise where an application writes to memory that isn’t mapped into its address space. On UNIX systems, this results in a SIGSEGV signal being sent to the offending process. If for some reason you can’t get a core file, you can run the application under the control of the catchsegv utility. The following example shows the output that is displayed when a SIGSEGV signal is received while the program is running under the catchsegv script:

$ catchsegv ./coreme 10000000

Calling malloc() to allocate 10000000 bytes of heap space
*** Segmentation fault
Register dump:

 EAX: fffffffc   EBX: bff12634   ECX: bff12634   EDX: 00bd3ff4
 ESI: bff12634   EDI: 00000000   EBP: bff12648   ESP: bff12488

 EIP: 0034a402   EFLAGS: 00200246

 CS: 0073   DS: 007b   ES: 007b   FS: 0000   GS: 0033   SS: 007b

 Trap: 00000000   Error: 00000000   OldMask: 00000000
 ESP/signal: bff12488   CR2: 00000000


Memory map:

0034a000-0034b000 r-xp 0034a000 00:00 0 [vdso]
003d3000-003d6000 r-xp 00000000 08:01 1426324 /lib/libSegFault.so
003d6000-003d7000 r-xp 00002000 08:01 1426324 /lib/libSegFault.so
003d7000-003d8000 rwxp 00003000 08:01 1426324 /lib/libSegFault.so
00a47000-00a52000 r-xp 00000000 08:01 1427470 /lib/libgcc_s-4.1.1-20070105.so.1
00a52000-00a53000 rwxp 0000a000 08:01 1427470 /lib/libgcc_s-4.1.1-20070105.so.1
00a7e000-00a97000 r-xp 00000000 08:01 1427446 /lib/ld-2.5.so
00a97000-00a98000 r-xp 00018000 08:01 1427446 /lib/ld-2.5.so
00a98000-00a99000 rwxp 00019000 08:01 1427446 /lib/ld-2.5.so
00a9b000-00bd2000 r-xp 00000000 08:01 1427447 /lib/libc-2.5.so
00bd2000-00bd4000 r-xp 00137000 08:01 1427447 /lib/libc-2.5.so
00bd4000-00bd5000 rwxp 00139000 08:01 1427447 /lib/libc-2.5.so
00bd5000-00bd8000 rwxp 00bd5000 00:00 0
08048000-08049000 r-xp 00000000 08:03 1442166 /home/matty/coreme
08049000-0804a000 rw-p 00000000 08:03 1442166 /home/matty/coreme
095bd000-095e2000 rw-p 095bd000 00:00 0
b75f4000-b7f7f000 rw-p b75f4000 00:00 0
b7f8c000-b7f8e000 rw-p b7f8c000 00:00 0
bfefe000-bff13000 rw-p bfefe000 00:00 0 [stack]

This is a nifty utility, and can be useful for viewing the environment of a process at the time the segmentation violation occurred.

Implementing shared memory resource controls on Solaris hosts

With the availability of the Solaris 10 operating system, the way IPC facilities (e.g., shared memory, message queues, etc.) are managed has changed. In previous releases of Solaris, editing /etc/system was the recommended way to increase the value of a given IPC tunable. With the release of Solaris 10, IPC tunables are managed through the Solaris resource manager. The resource manager makes each tunable available through one or more resource controls, which provide an upper bound on the size of a given resource.

Merging the management of the IPC facilities into the resource manager has numerous benefits. The biggest benefit is the ability to increase and decrease the size of a resource control on the fly (prior to Solaris 10, you had to reboot the system if you changed the IPC tunables in /etc/system). The second major benefit is that the default values were increased to sane levels, so you no longer need to fiddle with the number of message queues and semaphores when you configure a new Oracle database. That said, there are times when the defaults need to be increased. If you’re running Oracle, this is especially true, since the default size of the shared memory resource control (project.max-shm-memory) is a bit low (the default value is 254M).

There are two ways to increase the default value of a resource control. If you want to increase the default for the entire system, you can add a resource control with the desired value to the system project (projects are used to group resource controls). If you want to increase the value of a resource control for a specific user or group, you can create a new project and then assign one or more resource controls to that project.

So let’s say that you just installed Oracle, and got the lovely “out of memory” error when attempting to create a new database instance. To fix this issue, you need to increase the amount of shared memory that is available to the oracle user. Since you don’t necessarily want all users to be able to allocate gobs of shared memory, you can create a new project, assign the oracle user to that project, and then add the desired amount of shared memory to the shared memory resource control in that project.

To create a new project, the projadd utility can be executed with the name of the project to create:

$ projadd user.oracle

Once a project is created, the projmod utility can be used to add a resource control to the project (you can also edit the projects file directly if you want). The following example shows how to add a shared memory resource control with an upper bounds of 1GB to the user.oracle project we created above:

$ projmod -sK "project.max-shm-memory=(privileged,1073741824,deny)" user.oracle
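The 1073741824 passed to projmod is simply 1GB expressed in bytes (2^30):

```shell
# 1GB in bytes, the value used in the projmod command above
echo $((1024 * 1024 * 1024))
```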

To verify that a resource control was added to the system, the grep utility can be run with the project name and the name of the system project file (project information is stored in /etc/project):

$ grep user.oracle /etc/project
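If the projmod command succeeded, grep should turn up an entry similar to the following (the project ID shown here is illustrative; projadd assigns one automatically):

```
user.oracle:100::::project.max-shm-memory=(privileged,1073741824,deny)
```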

You can also check the value of a resource control with the prctl utility. prctl takes a process ID as an argument and, optionally, the name of the resource control to print (by default, prctl prints the value of all resource controls):

$ prctl -n project.max-shm-memory $$

process: 585: -sh
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
        privileged      1.00GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

The resource control facility is amazingly cool, and the folks who manage the Princeton Solaris support repository did a great job documenting it.

LDAP client deficiencies

I have been spending a bit of time lately configuring Solaris and Linux hosts to authenticate against LDAP. Authentication works well on the surface, but the actual client implementations are somewhat lacking. Take the Linux pam_ldap module, for instance. To authenticate a single session, the pam_ldap module performs thirty-three operations, including seven TCP connections and a number of redundant searches. Here is what I see in my logfile for each login session that is authenticated through pam_ldap:

1. New connection
  LDAP connection from to

2. Anonymous BIND
  BIND dn="" method=128 version=3

3. Search
   SRCH base="ou=people,dc=prefetch,dc=net" 
   ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
              loginShell gecos description objectClass"
   nentries => 1

4. New connection
    LDAP connection from to

5. Anonymous BIND
   BIND dn="" method=128 version=3

6. Search
   SRCH base="ou=people,dc=prefetch,dc=net" 
   nentries => 1

7. BIND as user
   BIND dn="uid=test,ou=People,dc=prefetch,dc=net" method=128 version=3

8. Anonymous BIND
   BIND dn="" method=128 version=3

9. Close connection 801

10. New connection
    LDAP connection from to

11. Anonymous BIND
    BIND dn="" method=128 version=3

12. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    nentries => 1

13. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    nentries => 0

14. Close connection 802

15. New connection
    LDAP connection from to

16. Anonymous BIND
    BIND dn="" method=128 version=3

17. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
               loginShell gecos description objectClass"
    nentries => 1

18. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
               loginShell gecos description objectClass"
    nentries => 1

19. New connection
    LDAP connection from to

20. Anonymous BIND
    BIND dn="" method=128 version=3

21. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
               loginShell gecos description objectClass"
    nentries => 1

22. New connection
    LDAP connection from to

23. Anonymous BIND
    BIND dn="" method=128 version=3

24. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
               loginShell gecos description objectClass"
    nentries => 1

25. Close connection 806

26. New connection
    LDAP connection from to

27. Anonymous BIND
    BIND dn="" method=128 version=3

28. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
               loginShell gecos description objectClass"
    nentries => 1

29. Close connection 807

30. Close connection 805

31. Search
    SRCH base="ou=people,dc=prefetch,dc=net" 
    ATTRIBUTES="uid userPassword uidNumber gidNumber cn homeDirectory 
               loginShell gecos description objectClass"
    nentries => 1

32. Close connection 803

33. Close connection 804

The Sun native LDAP client is not much better, and I am somewhat curious why the PAM modules couldn’t be written to use persistent connections and to cache data between PAM phases. If you are using either of these implementations to authenticate users, I sure hope you’re using the name service caching daemon (nscd). :)
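The redundancy described above is easy to quantify with a quick awk pass over the access log. The sample lines below are illustrative, and the BIND/SRCH keywords are an assumption about your server's log format:

```shell
# Illustrative access log excerpt (format assumed).
cat > /tmp/pam_ldap.sample <<'EOF'
conn=801 op=0 BIND dn="" method=128 version=3
conn=801 op=1 SRCH base="ou=people,dc=prefetch,dc=net" scope=2
conn=802 op=0 BIND dn="" method=128 version=3
conn=802 op=1 SRCH base="ou=people,dc=prefetch,dc=net" scope=2
conn=802 op=2 SRCH base="ou=people,dc=prefetch,dc=net" scope=2
conn=803 op=0 BIND dn="uid=test,ou=People,dc=prefetch,dc=net" method=128 version=3
EOF

# Count BIND and SRCH operations, and the number of distinct connections
# (the first field is the conn= identifier).
awk '/ BIND / { binds++ }
     / SRCH / { searches++ }
     { conns[$1]++ }
     END { n = 0; for (c in conns) n++
           printf "binds=%d searches=%d connections=%d\n", binds, searches, n
         }' /tmp/pam_ldap.sample
```

Run against a real access log covering a single pam_ldap login, a summary like this makes the seven-connection, multi-search pattern painfully obvious.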