Securing your Linux vsftpd installations by locking down your server and chroot()’ing users

As much as we all hate FTP and the insecurities of the protocol, I’ve given up hoping that it will be retired anytime soon. A lot of old legacy systems (mainframes, AS400s, etc.) don’t support SSH, but they do support the infamous FTP protocol. These two factors force a lot of companies to continue to use it, so we need to take every measure we can to protect the FTP servers that receive files from these systems.

I’ve been using vsftpd for quite some time, and it has one of the best security track records of the various FTP server implementations. When I’m forced to use FTP, I always install vsftpd and perform a number of actions to lock down my FTP server installation. Here is a short list:

– Enable SELinux
– Change the default vsftpd banner (“ftpd_banner” controls the string displayed)
– Limit connections to known IP addresses (tcp_wrappers and iptables can help with this)
– Disable anonymous logins (“anonymous_enable” controls this behavior)
– Tighten up the umask to prevent group- and world-writable files (“local_umask” controls the default umask to use)
– Increase logging and use centralized log servers (“xferlog_enable” and syslog-ng can help with this)
– Validate all identities in /etc/passwd and remove unneeded system accounts
– Disallow ALL system accounts from logging in
– Chroot all users to their home directory
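Most of the items above map directly to settings in vsftpd.conf. Here is a sketch of what the relevant portion of /etc/vsftpd/vsftpd.conf might look like (the banner string and umask value are examples, not requirements):

```
# Replace the default banner so the server doesn't advertise its version
ftpd_banner=Authorized access only. All activity is logged.

# No anonymous logins; local users only
anonymous_enable=NO
local_enable=YES

# Tighten the default umask so uploaded files aren't group/world writable
local_umask=022

# Log all transfers, and honor /etc/hosts.allow and /etc/hosts.deny
xferlog_enable=YES
tcp_wrappers=YES

# Chroot all local users to their home directory
chroot_local_user=YES
```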

The last item is especially important, since you don’t want users wandering around your file systems looking for files and directories that *could* be exploited through a software bug or misconfiguration. Chroot support is built into vsftpd, which is now the default FTP daemon in Red Hat Enterprise Linux and CentOS. Enabling chroot support is super easy, since you only need to uncomment the following line:

chroot_local_user=YES

Once enabled, users will only be able to see the files and directories in their home directory.

$ ftp ftp.prefetch.net
Connected to localhost (127.0.0.1).
220 Welcome to Matty’s FTP server. Unauthorized access prohibited!
ftp> user bingo
331 Please specify the password.
Password:
230 Login successful.
ftp> pwd
257 "/"

By default all users will be chroot’ed to their home directories, which may not be ideal in every situation. If you want to chroot most users but exempt a few, you can set both “chroot_local_user” and “chroot_list_enable” to YES, and then add the usernames that should be allowed to browse outside their home directories to /etc/vsftpd/chroot_list. If on the other hand you want to leave most users unrestricted and only chroot a few, you can leave “chroot_local_user” at NO, set “chroot_list_enable” to YES, and list the users you want to chroot in /etc/vsftpd/chroot_list. The location of the file that lists these users (/etc/vsftpd/chroot_list in the examples above) is controlled by the “chroot_list_file” variable, which can be set to the absolute path of a file that contains a list of users.

While FTP sucks, it’s going to be with us for some time to come. If we have to support it, we might as well do all we can to secure it!
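To make the two selective-chroot configurations concrete, here is a sketch of the relevant vsftpd.conf snippets (the file locations are the defaults mentioned above):

```
# Mode 1: chroot everyone EXCEPT the users listed in chroot_list
chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd/chroot_list

# Mode 2: chroot no one EXCEPT the users listed in chroot_list
#chroot_local_user=NO
#chroot_list_enable=YES
#chroot_list_file=/etc/vsftpd/chroot_list
```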

Printing the current sector size of a device in Linux

In the past year, a number of disk drives have started shipping with 4K sector sizes. To see if your disk drive is using 512-byte or 4K sectors, you can use the blktool utility to print the sector size of a device:

$ blktool /dev/sda sector-sz
512

You can also look at the hw_sector_size value for a given device in /sys, but who wants to do that when a sweet little utility like blktool exists? There are various other ways to do this, and your comments and suggestions are welcome. :)
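For the curious, here is a sketch of the sysfs approach (device names will vary; hw_sector_size is the legacy attribute name, and newer kernels also expose logical_block_size and physical_block_size in the same directory):

```shell
#!/bin/sh
# Print the sector size sysfs reports for each block device
for f in /sys/block/*/queue/hw_sector_size; do
    [ -r "$f" ] || continue
    dev=${f#/sys/block/}                # strip the leading path...
    dev=${dev%/queue/hw_sector_size}    # ...and the trailing attribute name
    printf '%s: %s bytes\n' "$dev" "$(cat "$f")"
done
```

The `blockdev --getss /dev/sda` command from util-linux is yet another way to get the same information.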

Undeleting a file that resides on a Linux EXT3 or EXT4 file system

I have on a couple of occasions been asked to restore files that were deleted. I’ve been fortunate up to this point in that I have always been able to get the deleted file back, either through file system manipulation, dd’ing the device, or by restoring the file from a previous backup. One thing I’ve never quite understood is why UNIX and Linux operating systems don’t ship with an undelete utility for each file system type. Assuming you don’t zero out the metadata and the data blocks haven’t been re-used, restoring a file is pretty straightforward.

My luck almost came to an end the other day when I accidentally deleted a Nagios configuration file on my desktop. This was only a test installation, so I didn’t take the time to back everything up to a remote destination. I talked about one way to undelete a file on an EXT file system in the past, and was curious if any additional tools had been written to recover files. I found the amazingly cool extundelete utility, and after several tests it appears to be true to its name!

Extundelete requires an unmounted file system to operate on, so you will need to unmount the file system that contains the deleted files before attempting recovery. To use the tool, you pass extundelete one or more options along with the block device associated with the file system. The options tell extundelete which files to undelete, and allow you to recover a single file, a directory and its contents, or ALL of the files that have been deleted. Be careful with the last one. ;)

So say we have a file system mounted on /mnt, and we accidentally delete a file named services (which lives at /services relative to the file system root):

$ cd /mnt && rm services

If we use the "--restore-file" option, we can restore the file named /services:

$ umount /mnt

$ extundelete --restore-file '/services' /dev/sdc1
WARNING: Extended attributes are not restored.
Loading filesystem metadata ... 13 groups loaded.
Loading journal descriptors ... 22 descriptors loaded.
Writing output to directory RECOVERED_FILES/
Restored inode 12 to file RECOVERED_FILES/services

$ ls -la RECOVERED_FILES/

total 20
drwxr-xr-x.  2 root root          4096 Apr 15 14:49 .
dr-xr-x---. 16 root root          4096 Apr 15 14:49 ..
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:49 services

$ more RECOVERED_FILES/services
# /etc/services:
# $Id: services,v 1.51 2010/11/12 12:45:32 ovasik Exp $
#
# Network services, Internet style
# IANA services version: last updated 2010-11-09

……..

Now let’s say we have three critical files in /mnt named 1, 2 and 3 that are blown away:

$ cd /mnt && ls -la
total 1943
drwxr-xr-x. 3 root root 1024 Apr 15 14:50 .
dr-xr-xr-x. 28 root root 4096 Apr 15 14:48 ..
-rw-r--r--. 1 root root 651949 Apr 15 14:50 1
-rw-r--r--. 1 root root 651949 Apr 15 14:50 2
-rw-r--r--. 1 root root 651949 Apr 15 14:50 3
drwx——. 2 root root 12288 Apr 15 14:44 lost+found

$ cd /mnt && rm 1 2 3

Yikes! No need to fear though, we can run extundelete with the "--restore-all" option to restore all of the files deleted in the file system:

$ umount /mnt

$ extundelete --restore-all /dev/sdc1
WARNING: Extended attributes are not restored.
Loading filesystem metadata ... 13 groups loaded.
Loading journal descriptors ... 34 descriptors loaded.
Searching for recoverable inodes in directory / ...
3 recoverable inodes found.
Looking through the directory structure for deleted files ...
Restored inode 12 to file RECOVERED_FILES/1
Restored inode 13 to file RECOVERED_FILES/2
Restored inode 14 to file RECOVERED_FILES/3
0 recoverable inodes still lost.

$ ls -la RECOVERED_FILES/

total 44
drwxr-xr-x.  2 root root          4096 Apr 15 14:51 .
dr-xr-xr-x. 28 root root          4096 Apr 15 14:48 ..
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:51 1
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:51 2
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:51 3

You may have noticed that the recovered files are showing up as 2.1TB in size; for reasons I’m not yet clear on, the files are restored as HUGE sparse files (I’ll post an update once I figure this out). They aren’t actually 2.1TB:

$ cd RECOVERED_FILES && du -sh *

12K	1
12K	2
12K	3
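The mismatch between the size ls reports and the space du reports is the classic signature of a sparse file, and you can reproduce the effect with a synthetic file (the 2T size below is just an example, not related to the recovered data):

```shell
#!/bin/sh
# Create a file with a 2TB apparent size but no allocated data blocks
truncate -s 2T /tmp/sparse_demo

# Apparent size (what ls -l shows) vs. blocks actually allocated on disk
stat -c 'apparent: %s bytes, allocated: %b blocks' /tmp/sparse_demo

rm /tmp/sparse_demo
```

GNU stat’s %s format prints the apparent size while %b prints the number of blocks actually allocated, which is why du (which counts blocks) and ls (which reports the apparent size) disagree so wildly for these files.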

As with any file system undelete utility, it can’t work if the file system is active and re-using metadata and data blocks. This utility was able to restore all of the files I removed, but then again I didn’t test it on a system that is performing 100s or 1000s of file operations per minute. This is in no way a replacement for a quality backup system, but a tool you bring in once all other measures fail. I would like to thank Nic for the awesome work he did on this utility. I haven’t studied the internals of the program yet, so use at your own risk (I haven’t seen any adverse effects from it, but that’s not to say there aren’t any!).

My quest for the perfect cup of coffee

Like most techies, I love to enjoy a couple cups of coffee each day. I’m not an espresso person like my blogging partner Mike, but am a simple drip guy. For the past 3 – 4 years I’ve been using a Cuisinart 12-cup drip coffee maker with natural unbleached filters. I THOUGHT this would provide a good cup of joe, but oh how wrong I was.

A few weeks back I got to talking about coffee with a good friend of mine. He gave me the ins and outs of preserving coffee, his thoughts on roasting the perfect bean, and then he schooled me on the best ways to craft a cup of joe. He also recommended replacing my drip coffee maker with a french press, which would bring in more of the natural flavors of the coffee. I had never really studied or read about this stuff before, so hearing how to store my beans and how beans decay over time was rather interesting.

With this wealth of new information in hand, I ventured off to see what kind of French presses Amazon had. After reading tons of reviews and seeing the Bodum Chambord Coffee Press come up on several sites, I decided to bite the bullet and buy one. I got the 32-ounce coffee press, and after reading the directions I decided to grind up some beans and give my new coffee maker a whirl.

I decided to start small, and put in 2 scoops of coffee along with 8 ounces of hot water. The coffee aroma was really nice, and after four minutes of brewing time I poured my first cup of joe. When I took my first sip I almost gagged, since I had ingested a mouth full of coffee grounds. Gak! I figured I did something wrong, and after a little googling I found out that you need to use COARSE ground coffee instead of fine ground coffee. Duh! My electric grinder didn’t have a coarse setting (I could fudge it by grinding less, but the end product was not ideal), so I decided to order a hand grinder that I could set to coarse.

After once again reading tons of reviews I decided to order a Kyocera Ceramic Coffee Grinder. I decided on a hand grinder since I could take it camping, or use it while the power was out. It landed on my door step the other day, and after opening up the box I was very impressed with how durable it was. Still itching to see what my buddy meant by a “killer cup of joe”, I coarsely ground two scoops of coffee and then started the brewing process a second time. When I poured out my cup of coffee this time, there were no grounds in it and it smelled like something a high end coffee barista would make.

So what do I think about the taste difference? There is absolutely no comparison. I can actually taste the coffee flavor now, and when a coffee says bold it is indeed bold. I’m still trying to figure out which beans and regions I prefer, but my initial foray into the french press coffee making business has been an almost complete success! I say almost since I wound up with a mouth full of grounds the first time through, and then learned that Whole Foods isn’t joking with their happy morning buzz beans! Those had me wired and attentive for hours straight. ;)

So if you are a drip guy or gal, you may want to look into using a french press. If you are already brewing delicious coffee using a press or some other brewing process, please leave me a comment and let me know which beans and brewing process you are using. I’m planning to experiment a lot over the next few months, and your input would be greatly appreciated! Now to save up for a 3-cup Bodum Chambord Coffee Press for work. That will definitely help me stay attentive and alert after lunch, and experiment with more flavors of coffee and tea. Happy brewing!