With the help of various contributors, I’ve integrated some new features and a number of bug fixes into ssl-cert-check over the past couple of months. If you aren’t familiar with this tool, it’s a bash script that notifies you before your SSL certificates expire. You can read more about the script by surfing over to the ssl-cert-check documentation page.
While preparing for my RHCE exam, I wanted to install all of the system-config-* GUIs to see what functionality they provided. I used the yum groupinstall option to install the GNOME desktop:
$ yum groupinstall 'GNOME Desktop Environment'
and then proceeded to add my preferred desktop environment to /etc/sysconfig/desktop:
$ cat /etc/sysconfig/desktop
DISPLAYMANAGER="GNOME"
Once these items were installed, I ran ‘init 5’ and was greeted with the following message in /var/log/messages:
init: Id "x" respawning too fast: disabled for 5 minutes
After reading through various logs and scripts, I noticed that the gdm display manager wasn’t installed. I thought groupinstalling the GNOME desktop would force a display manager to be installed, but alas that isn’t the case. To get everything working I fired up yum and installed gdm:
$ yum install gdm
Everything worked as expected once gdm was installed, and I could fire up the GUIs without issue.
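The root cause is easy to check for before switching runlevels: the init scripts source /etc/sysconfig/desktop and then try to exec the matching display manager binary. Here is a small self-contained sketch of that logic (the GNOME/KDE binary paths reflect RHEL 5-era layouts, and a temp copy of the config keeps the example standalone):

```shell
#!/bin/sh
# Sketch: map the DISPLAYMANAGER setting to the display manager binary
# that prefdm would exec. A temp copy of the config file keeps this
# example self-contained.
cfg=$(mktemp)
echo 'DISPLAYMANAGER="GNOME"' > "$cfg"
. "$cfg"
case "$DISPLAYMANAGER" in
    GNOME) dm=/usr/sbin/gdm ;;
    KDE)   dm=/usr/bin/kdm ;;
    *)     dm=/usr/bin/xdm ;;
esac
echo "display manager binary: $dm"
rm -f "$cfg"
```

If the binary the case statement resolves to isn’t installed, init will keep respawning X and eventually give up, which is exactly the symptom above.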
I took the RHCE exam this past week, and was fortunate to pass both the RHCT and RHCE sections with a score of 100%. While I can’t discuss what was on the test, I figured I would share the process I used to prepare for it. When I decided to take the exam, I picked up a copy of the Red Hat Certified Engineer Linux Study Guide and read it cover to cover. Michael Jang did a great job with the book, and it shed some light on things I never use (e.g., Linux printing).
Once I finished reading Michael’s book, I printed the RHCE objectives. For each objective I did the following:
I used two virtual machines to study with, one acting as a server and the other a client. The items listed above took quite a bit of time to master, but I can definitely say I’m a better admin because of it. I learned a bunch of new things about RHEL/CentOS, and can definitely troubleshoot things faster now. Best of luck if you decide to take the exam!
When it comes to firewalling services, NFS has to be one of the most complex to get operational. By default the various NFS services (lockd, statd, mountd, etc.) will request random port assignments from the portmapper (portmap), which means that most administrators need to open up a range of ports in their firewall rule base to get NFS working. On Linux hosts there is a simple way to firewall NFS services, and I thought I would walk through how I got iptables and my NFS server to work together.
Getting NFS working with iptables is a three step process:
To hard strap the ports that the various NFS services will use, you can assign your preferred ports to the MOUNTD_PORT, STATD_PORT, LOCKD_TCPPORT, LOCKD_UDPPORT, RQUOTAD_PORT and STATD_OUTGOING_PORT variables in /etc/sysconfig/nfs. Here are the settings I am using on my server:
MOUNTD_PORT="10050"
STATD_PORT="10051"
LOCKD_TCPPORT="10052"
LOCKD_UDPPORT="10052"
RQUOTAD_PORT="10053"
STATD_OUTGOING_PORT="10054"
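Since /etc/sysconfig/nfs is just a shell fragment, the init scripts pick these values up by sourcing the file. You can sanity-check your assignments the same way; a self-contained sketch that sources a temp copy of the config:

```shell
#!/bin/sh
# Sketch: /etc/sysconfig/nfs is a shell fragment sourced by the init
# scripts. A temp copy keeps this example standalone.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
MOUNTD_PORT="10050"
STATD_PORT="10051"
LOCKD_TCPPORT="10052"
LOCKD_UDPPORT="10052"
EOF
. "$cfg"
echo "mountd will bind to port $MOUNTD_PORT"
echo "statd will bind to port $STATD_PORT"
rm -f "$cfg"
```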
Once ports have been assigned, you will need to restart the portmap and nfs services to pick up the changes:
$ service portmap restart
Stopping portmap: [ OK ]
Starting portmap: [ OK ]
$ service nfslock restart
Stopping NFS locking: [ OK ]
Stopping NFS statd: [ OK ]
Starting NFS statd: [ OK ]
$ service nfs restart
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
If you query the portmap daemon with rpcinfo, you will see that the various services are now registered on the ports that were assigned in /etc/sysconfig/nfs:
$ rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  10051  status
    100024    1   tcp  10051  status
    100011    1   udp  10053  rquotad
    100011    2   udp  10053  rquotad
    100011    1   tcp  10053  rquotad
    100011    2   tcp  10053  rquotad
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100021    1   udp  10052  nlockmgr
    100021    3   udp  10052  nlockmgr
    100021    4   udp  10052  nlockmgr
    100021    1   tcp  10052  nlockmgr
    100021    3   tcp  10052  nlockmgr
    100021    4   tcp  10052  nlockmgr
    100003    2   tcp   2049  nfs
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100005    1   udp  10050  mountd
    100005    1   tcp  10050  mountd
    100005    2   udp  10050  mountd
    100005    2   tcp  10050  mountd
    100005    3   udp  10050  mountd
    100005    3   tcp  10050  mountd
Next up, we need to adjust the appropriate iptables chains to allow inbound connections to the NFS service ports. Here are the entries I added to /etc/sysconfig/iptables to allow NFS to work with iptables:
# Portmap ports
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
# NFS daemon ports
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 2049 -j ACCEPT
# NFS mountd ports
-A INPUT -m state --state NEW -p udp --dport 10050 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10050 -j ACCEPT
# NFS status ports
-A INPUT -m state --state NEW -p udp --dport 10051 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10051 -j ACCEPT
# NFS lock manager ports
-A INPUT -m state --state NEW -p udp --dport 10052 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10052 -j ACCEPT
# NFS rquotad ports
-A INPUT -m state --state NEW -p udp --dport 10053 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 10053 -j ACCEPT
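Since every service gets the same pair of tcp/udp ACCEPT rules, the rule fragment above can also be generated from the port list, which keeps the firewall in sync with /etc/sysconfig/nfs if you ever renumber. A sketch (the output file path is just an example):

```shell
#!/bin/sh
# Sketch: generate one tcp and one udp ACCEPT rule for each pinned NFS
# port. The ports match the assignments used in this article; the output
# path is arbitrary.
for port in 111 2049 10050 10051 10052 10053; do
    for proto in tcp udp; do
        echo "-A INPUT -m state --state NEW -p $proto --dport $port -j ACCEPT"
    done
done > /tmp/nfs-rules

# Six ports x two protocols = twelve rules.
wc -l < /tmp/nfs-rules
```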
Then I restarted iptables:
$ service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
In addition to the rules listed above, I have entries to track state (using the conntrack module) and allow established connections. If everything went as expected, you should be able to mount your file systems without issue. To debug issues, you can use the following steps:
With just a few steps, you can get NFS working with iptables. If you have any suggestions, feel free to leave a comment! I’d love to hear folks’ thoughts on this.
The Linux kernel has supported NFS for as long as I can remember. All of the major distributions (Red Hat, CentOS, Fedora, SUSE, Ubuntu) ship with NFS client and server support and have all of the user land daemons and tools needed to configure and debug NFS. I spent some time this past weekend bringing up a new NFS server in a SELinux-managed environment, and thought I would share my experience with my readers.
Setting up a Linux NFS server with SELinux can be done in just a few simple steps:
SELinux does not allow remote content to be accessed by default. This can easily be fixed by enabling one of the three SELinux booleans listed below:
nfs_export_all_ro -- allows file systems to be exported read-only
nfs_export_all_rw -- allows file systems to be exported read-write
use_nfs_home_dirs -- allows home directories to be exported over NFS
To set a boolean you can use the setsebool utility:
$ setsebool -P nfs_export_all_rw 1
Once SELinux has been instructed to allow NFS exports, you can add the file systems you want to export to /etc/exports. This file has the following format:
FILE_SYSTEM_TO_EXPORT CLIENT_LIST(EXPORT_OPTIONS)
To export the file system /export/nfs to clients on the 192.168.1.0/24 network or in the prefetch.net domain, we can add an entry similar to the following to /etc/exports:
/export/nfs 192.168.1.0/255.255.255.0(rw,sync) *.prefetch.net(rw,sync)
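Each entry is just the file system followed by one space-separated `CLIENT(OPTIONS)` spec per client. A small sketch that builds the entry above from a client list (the file system and client specs are the examples from this article):

```shell
#!/bin/sh
# Sketch: build an /etc/exports entry from a list of client specs.
# set -f disables pathname expansion so *.prefetch.net stays literal.
set -f
fs=/export/nfs
clients='192.168.1.0/255.255.255.0 *.prefetch.net'
entry="$fs"
for c in $clients; do
    entry="$entry $c(rw,sync)"
done
echo "$entry"
```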
To start the NFS services on your server, you will need to enable the portmap and nfs services. This can be accomplished with chkconfig and service:
$ chkconfig portmap on
$ chkconfig nfs on
$ chkconfig nfslock on
$ service portmap start
Starting portmap: [ OK ]
$ service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
$ service nfslock start
Starting NFS statd: [ OK ]
The file systems listed in /etc/exports should now be exported, and you can verify this with the exportfs utility:
$ exportfs
/export/nfs 192.168.1.0/255.255.255.0
/export/nfs *.prefetch.net
To verify the export is functioning, you can try mounting it from a client that falls inside the ACL:
$ mount server:/export/nfs /export/nfs
If this fails or you receive a permission denied error, you can check the following things:
If everything is working as expected, you should pat yourself on the back for a job well done!