Using exec-shield to protect your Linux servers from stack, heap and integer overflows

I’ve been a long-time follower of the OpenBSD project and their amazing work on detecting and protecting the kernel and applications from stack and heap overflows. Several of the concepts developed by the OpenBSD team made their way into Linux via the exec-shield project. Of the many useful security features that are part of exec-shield, the two that a SysAdmin can control directly are address space randomization and the exec-shield operating mode.

Address space randomization is controlled through the kernel.randomize_va_space sysctl tunable, which defaults to 1 on my CentOS systems:

$ sysctl kernel.randomize_va_space
kernel.randomize_va_space = 1

The exec-shield operating mode is controlled through the kernel.exec-shield sysctl value, and can be set to one of the following four modes (the descriptions below came from Steve Grubb’s excellent post on exec-shield operating modes):

– A value of 0 completely disables ExecShield and Address Space Layout Randomization
– A value of 1 enables them ONLY if the application bits for these protections are set to “enable”
– A value of 2 enables them by default, except if the application bits are set to “disable”
– A value of 3 enables them always, whatever the application bits

The default exec-shield value on my CentOS servers is 1, which enables exec-shield for applications that have been compiled to support it:

$ sysctl kernel.exec-shield
kernel.exec-shield = 1
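Changes made with sysctl on the command line don't survive a reboot. To persist both tunables, you can add them to /etc/sysctl.conf. Here is a sketch assuming a CentOS/RHEL-style system; the values shown are illustrative, so pick the mode that matches your environment:

```shell
# /etc/sysctl.conf -- persist the exec-shield settings across reboots
# (a value of 2 enables the protections unless an application opts out)
kernel.exec-shield = 2
kernel.randomize_va_space = 1

# Apply the settings without rebooting:
#   sysctl -p
```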

To view the list of running processes that have exec-shield enabled, you can run Ingo Molnar and Ulrich Drepper’s lsexec utility:

$ lsexec --all | more

init, PID      1, UID root: no PIE, no RELRO, execshield enabled
httpd, PID  11689, UID apache: DSO, no RELRO, execshield enabled
httpd, PID  11691, UID apache: DSO, no RELRO, execshield enabled
httpd, PID  11692, UID apache: DSO, no RELRO, execshield enabled
httpd, PID  11693, UID apache: DSO, no RELRO, execshield enabled
httpd, PID  12224, UID apache: DSO, no RELRO, execshield enabled
httpd, PID  12236, UID apache: DSO, no RELRO, execshield enabled
pickup, PID  16181, UID postfix: DSO, partial RELRO, execshield enabled
appLoader, PID   2347, UID root: no PIE, no RELRO, execshield enabled
auditd, PID   2606, UID root: DSO, partial RELRO, execshield enabled
audispd, PID   2608, UID root: DSO, partial RELRO, execshield enabled
restorecond, PID   2629, UID root: DSO, partial RELRO, execshield enabled

In this day and age of continuous security threats there is little to no reason you shouldn’t be using these amazing technologies. When you combine exec-shield, SELinux, proper patching and security best practices, you can really limit the attack vectors that can be used to break into your systems.

Securing your Linux vsftp installations by locking down your server and chroot()’ing users

As much as we all hate FTP and the insecurities of the protocol, I’ve given up hope that it will be retired anytime soon. A lot of old legacy systems (mainframes, AS400s, etc.) don’t support SSH, but they do support the infamous FTP protocol. These two factors force a lot of companies to continue to use it, so we need to take every measure we can to protect the FTP servers that receive files from these systems.

I’ve been using vsftpd for quite some time, and it has one of the best security track records of the various FTP server implementations. When I’m forced to use FTP, I always install vsftp and perform a number of actions to lock down my FTP server installation. Here is a short list:

– Enable SELinux
– Change the default vsftp banner (“ftpd_banner” controls the string displayed)
– Limit connections to known IP addresses (tcp_wrappers and iptables can help with this)
– Disable anonymous logins (“anonymous_enable” controls this behavior)
– Tighten up the umask to disable writeable files (“local_umask” controls the default umask to use)
– Increase logging and use centralized log servers (“xferlog_enable” and syslog-ng can help with this)
– Validate all identities in /etc/passwd and remove unneeded system accounts
– Disallow ALL system accounts from logging in
– Chroot all users to their home directory

The last item is especially important, since you don’t want users wandering around your file systems looking for files and directories that *could* be exploited through a software bug or misconfiguration. Chroot support is built into vsftpd, which is now the default FTP daemon in Red Hat and CentOS Linux. Enabling chroot support is super easy, since you only need to uncomment the following line in /etc/vsftpd/vsftpd.conf:

chroot_local_user=YES

Once enabled, users will only be able to see the files and directories in their home directory.

$ ftp ftp.prefetch.net
Connected to localhost (127.0.0.1).
220 Welcome to Matty’s FTP server. Unauthorized access prohibited!
ftp> user bingo
331 Please specify the password.
Password:
230 Login successful.
ftp> pwd
257 “/”

With “chroot_local_user” enabled, all users will be chroot’ed to their home directories, which may not be ideal in every situation. If you need to exempt a few users from the chroot, you can additionally set “chroot_list_enable” to YES; the users listed in /etc/vsftpd/chroot_list will then be allowed to “browse” outside their home directories. If on the other hand you want to allow most users to roam the server and only chroot a few, leave “chroot_local_user” at NO and set “chroot_list_enable” to YES; the list then names the users who *will* be chrooted. The location of the list file (/etc/vsftpd/chroot_list in the examples above) is controlled by the “chroot_list_file” variable, which can be set to the absolute path of a file containing one username per line.

While FTP sucks, it’s going to be with us for some time to come. If we have to support it, we might as well do all we can to secure it!
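To summarize the combinations, here is a sketch of the relevant vsftpd.conf directives (the paths shown are the CentOS defaults):

```shell
# /etc/vsftpd/vsftpd.conf -- chroot-related directives
# Chroot every local user to their home directory:
chroot_local_user=YES

# With chroot_local_user=YES, the users listed in chroot_list_file
# are EXEMPT from the chroot; with chroot_local_user=NO, only the
# listed users are chrooted:
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list
```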

Forcing your Linux users to use strong passwords

All SysAdmins know the importance of using strong passwords. These are the life blood of our systems, since a weak password will allow an adversary to enter our systems with a minimal amount of work. There are dozens of tools that can generate strong passwords, as well as a number of tools that can be used to force users to select strong passwords when they change their passwords.

The most common way to enforce strong passwords is through the pam_cracklib.so PAM plug-in. This useful module checks the input password against a series of rules. The rules cover a wide variety of criteria, including:

1. Is the password a palindrome?

2. Is the only difference between the new and old password a change of case?

3. Is the new password similar to the old password?

4. Is the new password too short?

5. Is the new password a rotated version of the old password?

6. Does the new password contain the user’s name?

The pam_cracklib.so shared library contains a number of options to control the size and strength of the password, as well as the number of times the user can retry changing their password after a failure. These options are passed to the pam_cracklib.so plug-in via one or more arguments specified in the PAM configuration file for each service you need to enforce strong passwords on. Here is one example:

$ cd /etc/pam.d && grep pam_cracklib.so password-auth
password requisite pam_cracklib.so try_first_pass retry=3 type=
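To give a feel for the options, here is a hypothetical line that requires a longer password drawn from several character classes (the specific values are illustrative, not a recommendation):

```shell
# /etc/pam.d/password-auth -- example pam_cracklib configuration
# minlen=12   require at least 12 characters (credit-adjusted)
# dcredit=-1  require at least one digit
# ucredit=-1  require at least one uppercase letter
# ocredit=-1  require at least one non-alphanumeric character
password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 dcredit=-1 ucredit=-1 ocredit=-1
```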

All of the options are documented in the pam_cracklib(8) manual page, so I won’t go into any additional detail on them. While I was reading about this module I found out that the libcrack.so library is the heart and soul of password complexity checking, and there is a good amount of documentation that describes how to integrate it with your software. It’s also neat to see installers taking advantage of this. I recently entered a weak password during a Fedora installation, and to my amazement Fedora immediately printed a warning telling me that I was using a weak password. We all know we need to use strong passwords, and pam_cracklib.so can ensure that you and your users are actively doing so!

Firewalling a Linux NFS server with iptables

When it comes to firewalling services, NFS has to be one of the most complex to get operational. By default the various NFS services (lockd, statd, mountd, etc.) will request random port assignments from the portmapper (portmap), which means that most administrators need to open up a range of ports in their firewall rule base to get NFS working. On Linux hosts there is a simple way to firewall NFS services, and I thought I would walk through how I got iptables and my NFS server to work together.

Getting NFS working with iptables is a three step process:

1. Hard strap the ports the NFS daemons use in /etc/sysconfig/nfs.

2. Add the ports from step 1 to your iptables chains.

3. Restart the portmap, nfs and iptables services to pick up the changes.

To hard strap the ports that the various NFS services will use, you can assign your preferred ports to the MOUNTD_PORT, STATD_PORT, LOCKD_TCPPORT, LOCKD_UDPPORT, RQUOTAD_PORT and STATD_OUTGOING_PORT variables in /etc/sysconfig/nfs. Here are the settings I am using on my server:

MOUNTD_PORT="10050"
STATD_PORT="10051"
LOCKD_TCPPORT="10052"
LOCKD_UDPPORT="10052"
RQUOTAD_PORT="10053"
STATD_OUTGOING_PORT="10054"

Once ports have been assigned, you will need to restart the portmap and nfs services to pick up the changes:

$ service portmap restart

Stopping portmap:                                          [  OK  ]
Starting portmap:                                          [  OK  ]

$ service nfslock restart

Stopping NFS locking:                                      [  OK  ]
Stopping NFS statd:                                        [  OK  ]
Starting NFS statd:                                        [  OK  ]

$ service nfs restart

Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]

If you query the portmap daemon with rpcinfo, you will see that the various services are now registered on the ports that were assigned in /etc/sysconfig/nfs:

$ rpcinfo -p

   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  10051  status
    100024    1   tcp  10051  status
    100011    1   udp  10053  rquotad
    100011    2   udp  10053  rquotad
    100011    1   tcp  10053  rquotad
    100011    2   tcp  10053  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  10052  nlockmgr
    100021    3   udp  10052  nlockmgr
    100021    4   udp  10052  nlockmgr
    100021    1   tcp  10052  nlockmgr
    100021    3   tcp  10052  nlockmgr
    100021    4   tcp  10052  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  10050  mountd
    100005    1   tcp  10050  mountd
    100005    2   udp  10050  mountd
    100005    2   tcp  10050  mountd
    100005    3   udp  10050  mountd
    100005    3   tcp  10050  mountd

Next up, we need to adjust the appropriate iptables chains to allow inbound connections to the NFS service ports. Here are the entries I added to /etc/sysconfig/iptables to allow NFS to work with iptables:

# Portmap ports
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
# NFS daemon ports
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 2049 -j ACCEPT
# NFS mountd ports
-A INPUT -m state --state NEW -p udp --dport 10050 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10050 -j ACCEPT
# NFS status ports
-A INPUT -m state --state NEW -p udp --dport 10051 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10051 -j ACCEPT
# NFS lock manager ports
-A INPUT -m state --state NEW -p udp --dport 10052 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10052 -j ACCEPT
# NFS rquotad ports
-A INPUT -m state --state NEW -p udp --dport 10053 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10053 -j ACCEPT
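If your NFS clients all live on a known network, you can tighten these rules further with a source match. A sketch, using a hypothetical 192.168.1.0/24 client network (substitute your own addressing):

```shell
# Only accept NFS traffic from the client subnet; repeat the pattern
# for the mountd, statd, lockd and rquotad ports as needed
-A INPUT -m state --state NEW -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT
```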

Then I restarted iptables:

$ service iptables restart

Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]
Applying iptables firewall rules:                          [  OK  ]

In addition to the rules listed above, I have entries to track state (using the conntrack module) and allow established connections. If everything went as expected, you should be able to mount your file systems without issue. If not, the following steps can help you debug:

1. Add a LOG statement to your iptables INPUT chain to log dropped packets.

2. Run tcpdump -i <interface> host X.X.X.X (where X.X.X.X is the IP of the client that is trying to mount or access your exported file system) and check whether connections are making it to the NFS server.

3. Run rpcinfo -p to see if the correct ports were assigned.
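For the first debugging step, a logging rule similar to the following (placed just before the final DROP or REJECT in your INPUT chain) will record anything that falls through; the prefix string and rate limit are illustrative:

```shell
# Log packets that no earlier rule accepted; check /var/log/messages
# for lines tagged with the prefix below
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables-dropped: "
```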

With just a few steps, you can get NFS working with iptables. If you have any suggestions or comments, feel free to leave me a comment! I’d love to hear folks’ thoughts on this.

Using TCP Wrappers to protect Linux and Solaris services

I have been using TCP wrappers for years, and it’s a very simple way to allow and deny network access to applications. TCP wrapper functionality is provided by the libwrap.so shared library, which various applications are linked against. To see if a given application supports TCP wrappers, you can use the ldd utility:

$ ldd `which sshd` | grep wrap
libwrap.so.0 => /lib64/libwrap.so.0 (0x00002ac16fe0f000)

TCP wrappers is configured through the /etc/hosts.allow and /etc/hosts.deny files. The hosts.allow file allows you to control which connections will be accepted, and the hosts.deny file allows you to control which connections will be denied. Both files use the following format:

DAEMON_LIST : CLIENT_LIST [ : SHELL_COMMAND ]

The DAEMON_LIST contains the name of the executable you are protecting, which could be sshd, sendmail or any other daemon that you are trying to protect. The CLIENT_LIST contains the hosts or domain names you wish to allow or deny access to, and they can take various forms:

ALL — matches everything
.prefetch.net — matches everything in the prefetch.net domain
192.168.0.0/255.255.0.0 — matches everything in the 192.168 /16 IP address space
192.168.1.1 — matches a single IP address

SHELL_COMMAND allows you to run a command when the rule matches. This could be used to run a notification script, block an IP with iptables, or provide more extensive logging. To put this into action, we can set up our hosts.allow and hosts.deny files to limit access to our SSH daemon. The following configuration will allow connections from the IP 192.168.1.100, and deny access from everyone else:

$ cat /etc/hosts.allow
sshd : 192.168.1.100

$ cat /etc/hosts.deny
ALL : ALL
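The optional SHELL_COMMAND field mentioned above can add logging to the deny rule. Here is a sketch; the logger invocation is illustrative, but the %d and %a expansions are documented in hosts_access(5):

```shell
# /etc/hosts.deny -- log each refused connection via syslog
# %d expands to the daemon name and %a to the client address
ALL : ALL : spawn /usr/bin/logger -p authpriv.info "tcpwrappers denied %d from %a"
```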

When libwrap processes these files, it will first look for matches in /etc/hosts.allow by sequentially evaluating the rules. If a match isn’t found, it will then consult the hosts.deny file. If a connection is denied, you should see a message similar to the following in the messages file:

Apr 16 13:16:18 localhost sshd[3628]: refused connect from ::ffff:192.168.1.8 (::ffff:192.168.1.8)

TCP wrappers is an invaluable tool, and provides a simple and intuitive way to secure your services. It’s no substitute for a properly functioning host firewall, but an additional tool that can be used to protect your critical services.

A couple useful tidbits about the Linux /dev/random and /dev/urandom devices

Linux contains two devices that provide a source of random data for the system. The first device is /dev/random, and the second is /dev/urandom. /dev/random is a character special device that provides random data until the system-wide entropy pool is exhausted, at which time it will block until additional entropy is available. /dev/urandom is a character device that never blocks; it draws from the same entropy pool, but falls back to a cryptographically strong pseudo-random number generator when the pool’s estimated entropy is depleted.

To gain access to the system-wide entropy pool, you can use the openssl utility’s “rand” subcommand:

$ openssl rand -base64 16
4T+aLG9TA5hGoa7pPhWhJQ==

Or you can read from the /dev/random and /dev/urandom devices directly with dd, head and company.
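For example, the following pulls a few bytes straight from /dev/urandom and prints them as hex:

```shell
# Read 16 bytes from the kernel's PRNG and print them as hex bytes
head -c 16 /dev/urandom | od -An -tx1
```

The output will of course differ on every run, since the data is random.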