Normalizing date strings with the Python dateutil module

I recently attended a Python workshop with Jeff Cohen and he answered a ton of random questions from students. One student mentioned that he was overwhelmed with the number of Python modules, and Jeff told us that he has evolved his Python skills by learning at least one new module each week. I’ve started doing this as well and it’s been a HUGE help. Each Sunday I find a module I’m not familiar with on PyPI or in the standard library and read through its documentation. Then I do a number of coding exercises to see how the module works. Once I’m comfortable using it I try to read through the source code to see how it’s implemented under the covers. The last part is time consuming, but it’s a great way to really understand how the module works.

While perusing the date modules last weekend I came across dateutil. This handy little module provides a built-in parser to take arbitrary dates and normalize them into a datetime object. If you are dealing with different data sources without a common set of formatting standards you will love this little guy! To see how this works say you have two dates and need to get the number of days between them. The following snippet does this.

>>> import dateutil.parser
>>> date1 = "2020-06-23T16:56:05Z"
>>> date2 = "June 22 2018 09:23:45"
>>> d1 = dateutil.parser.parse(date1, ignoretz=True)
>>> d2 = dateutil.parser.parse(date2, ignoretz=True)
>>> print((d1 - d2).days)
732

If you need higher resolution you can use the resulting timedelta’s days, seconds and microseconds attributes, along with its total_seconds() method, to drill down further. This useful module made normalizing my dates a breeze!
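To illustrate, here is how the timedelta from the two dates above breaks down (a quick sketch using the same dateutil parser):

```python
import dateutil.parser

d1 = dateutil.parser.parse("2020-06-23T16:56:05Z", ignoretz=True)
d2 = dateutil.parser.parse("June 22 2018 09:23:45")
delta = d1 - d2

# timedelta stores the interval as days + seconds + microseconds
print(delta.days)             # whole days between the two dates
print(delta.seconds)          # leftover seconds past the last whole day
print(delta.total_seconds())  # the entire interval expressed in seconds
```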

How to expand your technical book library on the cheap

I recently got the opportunity to start supporting a number of AIX systems, and being an AIX newbie the first thing I did was ask myself how can I learn everything there is to know about AIX (more to come on this topic tomorrow)? Being a readaholic, I decided to wander over to Amazon to see which AIX books were available.

After 10 minutes of searching and reading reviews, I ended up snagging a copy of AIX 5L Administration for $5. That $5 also included shipping! This led me to start looking at the “used” link associated with various other books I really wanted, and the used prices were typically at least 50% off the price Amazon lists for a new book (the book I got was unused, which is new to me!). This may be old news to everyone else, but I like the fact that I can now expand my technical library on the cheap! Thought I would pass this on to my fellow geeks.

Remounting Linux read-only file systems read-write

I’ve been paged countless times to investigate systems that have gone off the air for one reason or another. Typically when I look into these issues the server has crashed unexpectedly (bug, hardware, etc.) and rebooted in an attempt to get a fresh start on life. On occasion the box won’t boot due to a misconfiguration or inconsistent file system, and the server will be left with a read-only root file system and no network connectivity.

Fixing this is pretty easy. If possible I will add a 1 to the end of the boot line to boot the kernel into single user mode. If that works I can fix the issue and then allow the box to boot up normally. If the root file system is mounted read-only, I will use the mount command’s remount option to mount it read-write:

$ mount -o remount,rw /

Once the file system is writable, you should be able to write out the changes needed to correct the issue that prevented the server from booting. Viva la remount!
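To confirm the remount took effect, you can check the mount options recorded in /proc/mounts (a quick sketch, assuming a Linux system with /proc mounted):

```shell
# Print the device and mount options for the root file system; after a
# successful remount the options field should begin with "rw" not "ro"
awk '$2 == "/" {print $1, $4}' /proc/mounts
```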

Update on my quest to find the perfect NAS device for home use

A few months back I started looking into NAS solutions that would be ideal for home use. I jotted down my initial research in the post building your own nas and the post making sense of the various nas solutions. My original intent was to purchase an HP microserver from Amazon and test out all of the freely available NAS solutions. Due to some time constraints I cancelled my microserver order and re-used a server I had at home. I’m still planning to order a microserver at some point, since they are killer boxes for building home NAS devices.

I first loaded up openfiler and went to town creating devices and mapping them out to hosts on my network. If you haven’t used openfiler before, it’s a Linux-based NAS distribution that provides a graphical interface on top of a RHEL derived distribution. The fact that it runs Linux gave it a few cool points, but I was less than impressed with the graphical interface. Tasks such as creating logical volumes and shares weren’t as intuitive as I would have thought, and IMHO the GUI didn’t really provide all that much value over running the LVM utilities from the command line. It did manage all of the iSCSI, NFS and Samba work behind the scenes, which is kinda nice if you don’t want to dig into these services and see how they work.

Craving a bit more out of my NAS device, I wiped the server that I installed openfiler on and installed FreeNAS. For those who haven’t used FreeNAS, it is a FreeBSD-based NAS distribution that makes heavy use of ZFS. The FreeNAS installation process is about as painless as can be, and after a 10 minute install I was able to fire up the web interface and start poking around. I was impressed with the initial graphical interface, and after a few minutes of clicking around I had a working ZFS pool and a number of NFS and iSCSI targets provisioned. Everything seemed quite intuitive with the FreeNAS interface, and all of the options were in a place you would normally think to look.

I’m still using FreeNAS, though it doesn’t offer all of the items I would like in a true NAS appliance. Here are the items that would make FreeNAS a slam dunk for me:

– System and network service performance counters and graphs. These may be coming in 8.1.
– Built-in DLNA/UPnP support for streaming videos. This may be coming in 8.1.
– Better hardware monitoring and reporting.
– Energy conservation when the NAS device isn’t in use.
– Ability to act as a print server.

These things are relatively minor, and I suspect they will come in due time. Being a huge fan of ZFS I was stoked to see that this was part of FreeNAS. I’m curious to see how well the file system is supported now that Oracle has cut off the ZFS source code (I suspect it will thrive since the FreeBSD team has some crazy smart chaps working on it). As things stand now FreeNAS works, and it doesn’t cost you a penny to try out. I would still love to buy a Synology DiskStation to test out, but it’s kinda hard justifying one when what you have already works. It’s also nice to know that I can gain access to the FreeNAS source code anytime I want.

If anyone has read Gary Sims’ book Learning FreeNAS: Configure and manage a network attached storage solution, please let me know what you thought of it in the comments section. I’m thinking about ordering a copy for a not so technically savvy friend, since his YouTube FreeNAS videos were done so well.

Would you be interested in RFC cliff notes?

I’ve been toying with the idea of reading one RFC a week and developing cliff notes so I can remember everything I read down the road. While I can always recall WHERE I read something, sometimes I need to go out and read the section in the RFC a second time to verify the details. All of the major protocols have a slew of RFCs associated with them, and I’m thinking about starting with DNS, moving on to SMTP, HTTP and NFS. While I’ve used solutions that heavily rely on these protocols, I’ve never read the RFCs from beginning to end. I’ve read large sections here and there to understand how an aspect of the protocol works, but never the entire thing. If you are interested in a set of cliff notes please add your comments. It takes a lot of time to write up stuff on the blog, so I don’t want to waste my time if there isn’t sufficient demand. :)

Undeleting a file that resides on a Linux EXT3 or EXT4 file system

I have on a couple of occasions been asked to restore files that were deleted. I’ve been fortunate up to this point that I have always been able to get the deleted file back, either through file system manipulation, dd’ing of a device or by restoring the file from a previous back up. One thing I’ve never quite understood is why UNIX and Linux operating systems don’t ship with an undelete utility for each file system type. Assuming you don’t zero out the metadata and the data blocks haven’t been re-used, restoring a file is pretty straightforward.

My luck almost came to an end the other day when I accidentally deleted a Nagios configuration file on my desktop. This was only a test installation, so I didn’t take the time to back everything up to a remote destination. I talked about one way to undelete a file on an EXT file system in the past, and was curious if any additional tools had been written to recover files. I found the amazingly cool extundelete utility, and after several tests it appears to be true to its name!

Extundelete requires an unmounted file system to operate on, so you will need to unmount the file system that contains the deleted files prior to recovering these files. To use the tool, you will need to pass extundelete one or more options and the block device associated with the filesystem. The options will tell extundelete which files to undelete, and will allow you to recover a single file, a directory and its contents, or ALL of the files that have been deleted. Be careful with the last one. ;)

So say we have a file system mounted at /mnt, and we accidentally delete a file named services:

$ cd /mnt && rm services

If we use the "--restore-file" option, we can restore the file named /services:

$ umount /mnt

$ extundelete --restore-file '/services' /dev/sdc1
WARNING: Extended attributes are not restored.
Loading filesystem metadata … 13 groups loaded.
Loading journal descriptors … 22 descriptors loaded.
Writing output to directory RECOVERED_FILES/
Restored inode 12 to file RECOVERED_FILES/services


$ ls -la RECOVERED_FILES/
total 20
drwxr-xr-x.  2 root root          4096 Apr 15 14:49 .
dr-xr-x---. 16 root root          4096 Apr 15 14:49 ..
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:49 services

$ more RECOVERED_FILES/services
# /etc/services:
# $Id: services,v 1.51 2010/11/12 12:45:32 ovasik Exp $
# Network services, Internet style
# IANA services version: last updated 2010-11-09


Now let’s say we have three critical files in /mnt named 1, 2 and 3 that are blown away:

$ cd /mnt && ls -la
total 1943
drwxr-xr-x. 3 root root 1024 Apr 15 14:50 .
dr-xr-xr-x. 28 root root 4096 Apr 15 14:48 ..
-rw-r--r--.  1 root root 651949 Apr 15 14:50 1
-rw-r--r--.  1 root root 651949 Apr 15 14:50 2
-rw-r--r--.  1 root root 651949 Apr 15 14:50 3
drwx------.  2 root root  12288 Apr 15 14:44 lost+found

$ cd /mnt && rm 1 2 3

Yikes! No need to fear though, we can run extundelete with the "--restore-all" option to restore all of the files deleted in the file system:

$ umount /mnt

$ extundelete --restore-all /dev/sdc1
WARNING: Extended attributes are not restored.
Loading filesystem metadata … 13 groups loaded.
Loading journal descriptors … 34 descriptors loaded.
Searching for recoverable inodes in directory / …
3 recoverable inodes found.
Looking through the directory structure for deleted files …
Restored inode 12 to file RECOVERED_FILES/1
Restored inode 13 to file RECOVERED_FILES/2
Restored inode 14 to file RECOVERED_FILES/3
0 recoverable inodes still lost.


$ ls -la RECOVERED_FILES/
total 44
drwxr-xr-x.  2 root root          4096 Apr 15 14:51 .
dr-xr-xr-x. 28 root root          4096 Apr 15 14:48 ..
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:51 1
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:51 2
-rw-r--r--.  1 root root 2241973580461 Apr 15 14:51 3

You may have noticed that the recovered files are showing up as 2.1TB in size. For reasons I am not yet clear on, the files are restored as HUGE sparse files (I’ll post an update once I figure this out). They aren’t actually 2.1TB:

$ cd RECOVERED_FILES && du -sh *

12K	1
12K	2
12K	3
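
You can see the sparseness directly by comparing each file’s apparent size with the blocks actually allocated on disk. Here is a minimal sketch using GNU stat (the truncate call just fabricates a sparse file so the snippet stands on its own; for the real thing you would point stat at RECOVERED_FILES/* instead):

```shell
# Fabricate a 1GB sparse file to stand in for a recovered file
truncate -s 1G sparse-demo

# %s is the apparent size in bytes; %b is the number of 512-byte blocks
# actually allocated. A sparse file shows a huge size but few blocks.
stat --format '%n: size=%s blocks=%b' sparse-demo

du -h --apparent-size sparse-demo   # reports the huge apparent size
du -h sparse-demo                   # reports the tiny allocated size
```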

As with any file system undelete utility, it can’t work if the file system is active and re-using metadata and data blocks. This utility was able to restore all of the files I removed, but then again I didn’t test it on a system that is performing 100s or 1000s of file operations per minute. This is in no way a replacement for a quality backup system, but rather a tool you bring in once all other measures fail. I would like to thank Nic for the awesome work he did on this utility. I haven’t studied the internals of the program yet, so use it at your own risk (I haven’t seen any adverse effects from it, but that’s not to say there aren’t any!).