One of my goals for 2018 is to take a part-time full stack development course. I want to hit the ground running on day one, so I recently signed up for a JavaScript and web development course on Udemy. As part of the JavaScript course the instructor provides a number of labs and illustrates each concept with code snippets. When I was learning Python several years ago I found it extremely useful to fire up a Python shell to test functions and new modules. The node command shell provides a similar experience and can be installed on a Fedora-derived distribution with dnf:
$ sudo dnf install nodejs
Once nodejs is installed you can run node to access the interactive command shell:
$ node
> var a = 1;
> var b = 2;
> a + b;
3
This works much like the Python shell and makes it a snap to test new code.
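If you just need to evaluate a quick expression, node can also run one-liners without entering the interactive shell; -p prints the result of an expression and -e runs a script string:

```shell
# Print the result of an expression and exit
node -p '1 + 2'
# prints 3

# Run a short script string; anything passed to console.log goes to stdout
node -e 'console.log([1, 2, 3].map(function (n) { return n * 2; }))'
```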
Over the past few months I’ve become super interested in the container security movement. SELinux and AppArmor are incredible LSMs for implementing mandatory access control policies, and seccomp (SECure COMPuting with filters) can be layered on top of MAC to further limit which system calls a process can issue. The combination of MAC policies and system call filtering has some amazing potential for admins who want to minimize the attack surface on their Linux servers.
Seccomp policies are enforced in the Linux kernel and are added by way of the prctl(2) and seccomp(2) system calls. If you have a recent kernel you can check your kernel config file to see if seccomp is enabled:
$ grep SECCOMP /boot/config-$(uname -r)
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_SECCOMP=y
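If the config file for your running kernel isn’t available, the Seccomp field in /proc/&lt;PID&gt;/status is another quick indicator that the kernel was built with seccomp support:

```shell
# A kernel built with CONFIG_SECCOMP exposes a Seccomp field for every
# process; the value reported is the seccomp mode the process is running in
grep Seccomp /proc/self/status
```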
To use this feature you first need a list of the system calls a process issues, which you can generate with strace or perf. I wrote a simple Bourne shell script named create-seccomp-profiles to assist with this process. The script is an adaptation of Brendan Gregg’s syscount script and uses perf to record the system calls a process issues. To generate a list of system calls for a specific process you can run the script with the “-l” (list system calls) and “-c” (command to interrogate) options:
$ create-seccomp-profiles -l -c nginx
System calls captured by perf:
socket socketpair bind listen connect
sendto setsockopt sendmsg recvmsg io_setup
eventfd2 epoll_create epoll_ctl epoll_pwait statfs
dup2 poll getdents ioctl fcntl
mkdir unlink newstat newfstat lseek
read pread64 pwrite64 access open
openat close mprotect brk munmap
set_robust_list futex setitimer setgroups setgid
setuid getpid geteuid newuname prlimit64
prctl sysinfo rt_sigprocmask rt_sigaction rt_sigsuspend
exit_group wait4 set_tid_address mmap arch_prctl
To create an initial profile (which will most likely need to be tweaked) to add to a systemd unit file you can run create-seccomp-profiles with the “-s” (create systemd output) option:
$ create-seccomp-profiles -s -c nginx
SystemCallFilter=socket socketpair bind listen connect setsockopt sendmsg recvmsg io_setup eventfd2 epoll_create epoll_ctl epoll_pwait statfs dup2 getdents ioctl fcntl mkdir unlink newstat newfstat lseek read pread64 pwrite64 access open openat close mprotect brk munmap set_robust_list futex setitimer setgroups setgid setuid getpid geteuid newuname prlimit64 prctl sysinfo rt_sigprocmask rt_sigaction rt_sigsuspend exit_group wait4 set_tid_address mmap arch_prctl
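If you captured the system calls with another tool, turning a newline-separated list into a SystemCallFilter entry is a short pipeline. A sketch (the /tmp/syscalls.txt file and its contents are made up for illustration):

```shell
# Build a sample list of captured system calls, one per line, with a duplicate
printf 'read\nwrite\nopenat\nread\n' > /tmp/syscalls.txt

# De-duplicate the list and join it into a single space-separated entry
printf 'SystemCallFilter=%s\n' "$(sort -u /tmp/syscalls.txt | tr '\n' ' ' | sed 's/ *$//')"
# → SystemCallFilter=openat read write
```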
To ensure that you get all of the possible system calls you need to simulate load that matches what you would see on a live server. This will ensure that dynamically loaded modules (ones loaded via dlopen() for example) will load and run. To apply the seccomp policy to a systemd service you can use the systemctl edit option:
$ systemctl edit nginx
[Service]
SystemCallFilter=socket socketpair bind listen connect setsockopt sendmsg recvmsg io_setup eventfd2 epoll_create epoll_ctl epoll_pwait statfs dup2 getdents ioctl fcntl mkdir unlink newstat newfstat lseek read pread64 pwrite64 access open openat close mprotect brk munmap set_robust_list futex setitimer setgroups setgid setuid getpid geteuid newuname prlimit64 prctl sysinfo rt_sigprocmask rt_sigaction rt_sigsuspend exit_group wait4 set_tid_address mmap arch_prctl
The edit option will create an override.conf in /etc/systemd/system/<SERVICE_NAME>.service.d. To reload the service with the new seccomp profile you can use the systemctl daemon-reload and restart options:
$ systemctl daemon-reload
$ systemctl restart nginx
If everything went as planned your service (nginx in this example) should start up and run as usual. If you happened to miss a system call the service will enter the failed state and a message will be written to the journal. This entry can be viewed with journalctl:
$ journalctl -n 20 -l
Nov 26 16:34:14 localhost.localdomain systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 26 16:34:14 localhost.localdomain audit[3861]: SECCOMP auid=4294967295 uid=0 gid=0 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 pid=3861 comm="rm" exe="/usr/bin/rm" sig=31 arch=c000003e syscall=5 compat=0 ip=0x7f39ead623c2 code=0x0
Nov 26 16:34:14 localhost.localdomain audit[3861]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 pid=3861 comm="rm" exe="/usr/bin/rm" sig=31 res=1
Nov 26 16:34:14 localhost.localdomain systemd[1]: Started Process Core Dump (PID 3862/UID 0).
Nov 26 16:34:14 localhost.localdomain audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-coredump@1-3862-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 26 16:34:14 localhost.localdomain audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=nginx comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 26 16:34:14 localhost.localdomain systemd[1]: nginx.service: Control process exited, code=killed status=31
Nov 26 16:34:14 localhost.localdomain systemd[1]: Failed to start The nginx HTTP and reverse proxy server.
Nov 26 16:34:14 localhost.localdomain systemd[1]: nginx.service: Unit entered failed state.
Nov 26 16:34:14 localhost.localdomain systemd[1]: nginx.service: Failed with result 'signal'.
Nov 26 16:34:14 localhost.localdomain systemd-coredump[3863]: Process 3861 (rm) of user 0 dumped core.
Stack trace of thread 3861:
#0 0x00007f39ead623c2 __GI___fxstat (ld-linux-x86-64.so.2)
#1 0x00007f39ead56ecb _dl_sysdep_read_whole_file (ld-linux-x86-64.so.2)
#2 0x00007f39ead5dcd8 _dl_load_cache_lookup (ld-linux-x86-64.so.2)
#3 0x00007f39ead4e5d2 _dl_map_object (ld-linux-x86-64.so.2)
#4 0x00007f39ead536b2 openaux (ld-linux-x86-64.so.2)
#5 0x00007f39ead6137b _dl_catch_error (ld-linux-x86-64.so.2)
#6 0x00007f39ead539fc _dl_map_object_deps (ld-linux-x86-64.so.2)
#7 0x00007f39ead48ce7 dl_main (ld-linux-x86-64.so.2)
#8 0x00007f39ead603d1 _dl_sysdep_start (ld-linux-x86-64.so.2)
#9 0x00007f39ead46f68 _dl_start (ld-linux-x86-64.so.2)
#10 0x00007f39ead45ed8 _start (ld-linux-x86-64.so.2)
Note the SECCOMP entry in the output above. It contains the failure details as well as a syscall field indicating which system call wasn’t allowed. To map the numeric system call identifier to a name you can run ausyscall:
$ ausyscall 5
fstat
Once you know the system call that caused the service to fail you can append it to the SystemCallFilter entry in the systemd override file and restart the service using the steps above. If the process starts up successfully you can check its status proc entry to verify that seccomp is active:
$ grep -i seccomp /proc/34256/status
Seccomp: 2
This entry has one of three values: 0 (seccomp disabled), 1 (strict mode, which only permits read, write, _exit, and sigreturn), or 2 (filter mode, where a BPF policy controls which system calls are allowed).
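The numeric mode can be pulled out with awk and translated into a human-readable label; a minimal sketch that inspects the current shell:

```shell
# Read the Seccomp mode for the current shell and describe it
mode=$(awk '/^Seccomp:/ {print $2}' /proc/$$/status)
case "${mode}" in
    0) echo "seccomp disabled" ;;
    1) echo "strict mode" ;;
    2) echo "filter mode (BPF policy active)" ;;
esac
```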
The create-seccomp-profiles script is very much a work in progress, and I have a few items to tackle over the coming weeks.
This was a good first start and I learned a ton about seccomp while researching this exciting topic.
This past weekend while working on create-seccomp-profiles I needed a way to generate JSON output from arbitrary text. After a little googling I came across the incredibly useful jo utility. Where jq is invaluable for pulling arbitrary data out of a JSON structure, jo is amazing at creating one. In its simplest form jo can take key/value pairs and produce pretty-printed JSON output:
$ jo -p name=shibby array=$(jo -a foo=1 bar=2)
{
"name": "shibby",
"array": [
"foo=1",
"bar=2"
]
}
Jo also has a number of other capabilities to create arrays and complex object hierarchies:
$ jo -p defaultAction=SCMP_ACT_ERRNO architectures=$(jo -a SCMP_ARCH_X86_64 SCMP_ARCH_X86 SCMP_ARCH_X32) syscalls=$(jo -a $(jo name=read action=accept) $(jo name=write action=accept args=[]))
{
"defaultAction": "SCMP_ACT_ERRNO",
"architectures": [
"SCMP_ARCH_X86_64",
"SCMP_ARCH_X86",
"SCMP_ARCH_X32"
],
"syscalls": [
{
"name": "read",
"action": "accept"
},
{
"name": "write",
"action": "accept",
"args": []
}
]
}
In the example above I created a simple Docker seccomp profile. This is a super useful utility and I hope it gets rolled out to all of the major distributions. It’s in Ubuntu 17.10 so life is grand. :)
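If jo isn’t packaged for your distribution yet, a heredoc gets you a similar (if far less flexible) result for simple static structures:

```shell
# Emit a minimal seccomp profile skeleton without jo; fine for fixed
# content, but jo handles quoting and escaping for you
cat << 'EOF'
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"]
}
EOF
```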
As part of the refactoring work I did on prefetch.net I created a visual markdown tutorial to assist with editing content. I truly enjoy working with markdown, and the fact that I can write a post from any device and publish it via a git hook is a truly powerful thing. This should also prove useful as I update my README.md files on GitHub.
I’ve been running my technology blog on top of WordPress for the past 12 years. It was a great choice when I started, but the core product has morphed into more than I need. When you combine that with a constant stream of security vulnerabilities, I decided last month it was time to move to a static website generation tool. Like any new venture, I sat down one Saturday morning and jotted down the requirements for my new website generator.
I experimented with Jekyll, Pelican and Hugo, and after several weeks of testing I fell in love with Hugo. Not only was it super easy to install (it’s a single binary written in Go), but I had the bulk of my website converted after watching the Hugo video series from Giraffe Academy.
The biggest challenge I faced was getting all of my old posts (1200+) out of my existing Wordpress installation. Pelican comes with the pelican-import utility which can take a Wordpress XML export file and convert each post to markdown. Even though I decided to use Hugo to create my content I figured I would use the best tool for the job to perform the conversion:
$ pelican-import -m markdown --wpfile -o posts blogomatty.xml
In the example above I’m passing a file that I exported through the Wordpress UI and generating one markdown file in the posts directory for each blog post. The output files had the following structure:
Title: Real world uses for OpenSSL
Date: 2005-02-13 23:42
Author: admin
Category: Articles, Presentations and Certifications
Slug: real-world-uses-for-openssl
Status: published
If you are interested in learning more about all the cool things you can
do with OpenSSL, you might be interested in my article [Real world uses
for OpenSSL](/articles/realworldssl.html). The article covers
encryption, decryption, digital signatures, and provides an overview of
[ssl-site-check](/code) and [ssl-cert-check](/code).
These files didn’t work correctly out of the gate since Hugo requires you to encapsulate the front matter (the metadata describing the post) with “---” for YAML or “+++” for TOML formatting. To add the necessary delimiters I threw together a bit of shell:
#!/bin/sh
# Wrap the Pelican-generated front matter in the "---" delimiters Hugo expects.
for post in `ls posts_to_process`; do
    echo "Processing post ${post}"
    echo "---" > "posts_processed/${post}.md"
    header=0
    cat "posts_to_process/${post}" | while IFS= read -r line; do
        if echo "$line" | egrep -i "^Status:" > /dev/null; then
            # Status: is the last front matter line; write it and close the block
            echo "$line" >> "posts_processed/${post}.md"
            echo "---" >> "posts_processed/${post}.md"
            header=1
        elif [ ${header} -eq 1 ]; then
            # Past the front matter; copy the post body through unchanged
            echo "$line" >> "posts_processed/${post}.md"
        elif echo "$line" | egrep -i "^Title:" > /dev/null; then
            # Lowercase the key, escape embedded quotes, and handle titles
            # that contain a ":" of their own
            echo "$line" | awk -F':' '{print $2$3}' | sed 's/^ *//g' | sed 's/"/\\"/g' | \
                awk '{ print "title:", "\""$0"\"" }' >> "posts_processed/${post}.md"
        else
            echo "$line" >> "posts_processed/${post}.md"
        fi
    done
done
This takes the existing post and adds a “---” before and after the front matter. It also escapes quotes and addresses titles that have a single “:” in them. My posts still had issues with the date format, and the author wasn’t consistent. To clean up the date I used my good buddy sed:
$ sed -i 's/Date: \(.*\) \(.*\)/Date: \1T\2:00-04:00/g' posts_processed/*.md
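Running a sample line through the expression shows the transformation (GNU sed’s -i is used above; the substitution itself is portable):

```shell
# The first capture group grabs the date and the second the time; a "T"
# separator, seconds, and a UTC offset are spliced in around them
echo 'Date: 2005-02-13 23:42' | sed 's/Date: \(.*\) \(.*\)/Date: \1T\2:00-04:00/'
# → Date: 2005-02-13T23:42:00-04:00
```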
To fix the issue with the author I once again turned to sed:
$ sed -i 's/^[Aa]uthor.*/author: matty/' posts_processed/*.md
I had to create a bunch of additional hacks to work around some content consistency issues (NB: content consistency is my biggest take away from this project) but the end product is a blog that runs from statically generated content. In a future post I will dive into Hugo and the gotchas I encountered while converting my site. It was a painful process but luckily the worst is behind me. Now I just need to finish automating a couple manual processes and blogging will be fun again.