While debugging an application a few weeks back, I noticed the following error in the application log:
Cannot open file : Too many open files
The application runs as an unprivileged user, and upon closer inspection I noticed that the maximum number of file descriptors available to the process was 1024:
$ ulimit -n
1024
Increasing the maximum number of file descriptors is a two-part process. First, you need to make sure the kernel’s file-max tunable is set sufficiently high. This value caps the number of file handles that can be allocated system-wide, and it is exposed through the file-max proc setting:
$ cat /proc/sys/fs/file-max
366207
$ echo 512000 > /proc/sys/fs/file-max
$ cat /proc/sys/fs/file-max
512000
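Note that writing to file-max requires root privileges, and a value set this way is lost on reboot. To make the change persistent, the usual approach is to set fs.file-max in /etc/sysctl.conf (or a file under /etc/sysctl.d) and reload the settings with sysctl -p; the 512000 below simply carries over the value from the example above:
$ echo "fs.file-max = 512000" >> /etc/sysctl.conf
$ sysctl -p
fs.file-max = 512000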
Once you know you have enough file descriptors available, you will need to add an entry for the user to the security limits file (/etc/security/limits.conf). This file is used to control resource limits for users and groups, and is processed when a user login session is initialized. To increase the number of file descriptors for all users to 8k, the following entry can be appended to the file:
* - nofile 8192
If you would prefer to set the value for a specific user, you can do so by replacing the asterisk with the username:
user1 - nofile 8192
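The dash in the second column sets the soft and hard limits to the same value. If you would rather give users a lower default that they can raise themselves up to a hard ceiling, the two limits can be listed on separate lines; the 4096/8192 split here is just an illustration:
user1 soft nofile 4096
user1 hard nofile 8192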
Once the file-max proc setting and security limits file are updated, ulimit should report the increase in new login sessions:
$ ulimit -n
8192
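ulimit reports the soft limit by default; if you want to check the soft and hard limits individually, the -S and -H flags can be combined with -n:
$ ulimit -Sn
8192
$ ulimit -Hn
8192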
If you want to test how many file descriptors are available to a given user, you can run the openfds program I wrote:
$ ./openfds
Error: Too many open files
Last file descriptor successfully opened: 8189
Openfds will print the string representation of the errno value that resulted from open returning -1, as well as the last file descriptor that was successfully opened. If you are curious why 8189 is reported, it’s because STDIN, STDOUT and STDERR take up 3 entries in the file descriptor table for the process.
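If you don’t have openfds handy, a program like it only takes a few lines of C: open a file in a loop until open(2) returns -1, then print strerror(errno) along with the last descriptor that succeeded. The sketch below is a minimal approximation of that idea rather than the exact openfds source:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd, last = -1;

    /* Keep opening /dev/null until the process runs out of descriptors. */
    for (;;) {
        fd = open("/dev/null", O_RDONLY);
        if (fd == -1) {
            /* open(2) failed; errno says why (EMFILE once the
               per-process file descriptor limit is hit). */
            printf("Error: %s\n", strerror(errno));
            printf("Last file descriptor successfully opened: %d\n", last);
            return 0;
        }
        last = fd;
    }
}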