One of the best docker resources on the interwebs

For the past two years I’ve scoured the official docker documentation whenever I needed to learn something. The documentation is really good, but there are areas that lack examples and a deep explanation of why something is the way it is. One of my goals for this year is to read one technical book / RFC a month, so I decided to start the year off with James Turnbull’s The Docker Book. James starts with the basics and then builds on them with a thorough treatment of images, testing with docker and orchestration. This is by far the best $10 I’ve spent on a book, and I’m hoping to read his new Terraform book once I finish reading through my DNS RFC. Awesome job on the book James!

Making sense of docker storage drivers

Docker has a pluggable storage architecture which currently contains 6 drivers.

AUFS - Original docker storage driver.
OverlayFS - Driver built on top of overlayfs.
Btrfs - Driver built on top of btrfs.
Device Mapper - Driver built on top of the device mapper.
ZFS - Driver built on top of the ZFS file system.
VFS - A VFS-layer driver that isn't considered suitable for production.

If you have docker installed you can run ‘docker info’ to see which driver you are using:

$ docker info | grep "Storage Driver:"
Storage Driver: devicemapper

Picking the right driver isn’t straightforward due to how fast docker and the storage drivers are evolving. The docker documentation has some excellent suggestions, and you can’t go wrong using the most widely used drivers. I have hit a couple of bugs with the overlayfs driver, and after reading Jason’s post I have never bothered with the devicemapper driver backed by loopback files (vs. the device mapper driver w/ direct LVM).
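
If you would rather pin a specific driver than rely on the distribution default, recent docker releases will read it from /etc/docker/daemon.json (older releases take a --storage-driver flag on the daemon). Here is a minimal sketch using the devicemapper driver from the docker info output above:

$ cat /etc/docker/daemon.json
{
  "storage-driver": "devicemapper"
}

$ sudo systemctl restart docker

$ docker info | grep "Storage Driver:"
Storage Driver: devicemapper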

My biggest storage lesson learned (i.e., I do this because I hit bugs) from the past year is to give docker a chunk of dedicated storage. This space can reside in your root volume group, a dedicated volume group or in a partition. To use a dedicated volume group you can add “VG=VOLUME_GROUP” to /etc/sysconfig/docker-storage-setup:

$ cat /etc/sysconfig/docker-storage-setup
VG="docker"

To use a dedicated disk you can add “DEVS=BLOCK_DEVICE” to /etc/sysconfig/docker-storage-setup:

$ cat /etc/sysconfig/docker-storage-setup
DEVS="/dev/sdb"

If either of these variables is set, docker-storage-setup will create an LVM thin pool which docker will use to layer images. This layering is the foundation that docker containers are built on top of.

If you change VG or DEVS and docker is operational you will need to back up your images, clean up /var/lib/docker and then run docker-storage-setup to apply the changes. The following shows what happens if docker-storage-setup is run w/o any options set:

$ docker-storage-setup
  Rounding up size to full physical extent 412.00 MiB
  Logical volume "docker-poolmeta" created.
  Logical volume "docker-pool" created.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted docker/docker-pool to thin pool.
  Logical volume docker/docker-pool changed.

This creates the data and metadata volumes in the root volume group and updates the docker configuration.
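
To see what the run above left behind, lvs will show the new thin pool and /etc/sysconfig/docker-storage will contain the options handed to the docker daemon. The values below are only illustrative; the pool name, sizes and exact options depend on your volume group layout and docker-storage-setup version:

$ lvs docker
  LV          VG     Attr       LSize  Pool Origin Data%  Meta%
  docker-pool docker twi-a-tz-- 19.95g              0.00   0.10

$ cat /etc/sysconfig/docker-storage
DOCKER_STORAGE_OPTIONS=--storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker-docker--pool

If anyone is using the btrfs or zfs storage drivers shoot me a note to let me know what your experience has been.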

Using docker to build software testing environments

I’ve been on the docker train for quite some time. While the benefits of running production workloads in containers is well known, I find docker just as valuable for evaluating and testing new software on my laptop. I’ll use this blog post to walk through how I build transient test environments for software evaluation.

Docker is based around images (Fedora, CentOS, Ubuntu, etc.), and these images can be created and customized through the use of a Dockerfile. The Dockerfile contains statements to control the OS that is used, the software that is installed and post configuration. Here is a Dockerfile I like to use for building test environments:

$ cat Dockerfile

FROM centos:7
MAINTAINER Matty

RUN yum -y update
RUN yum -y install openssh-server openldap-servers openldap-clients openldap
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN echo 'root:XXXXXXXX' | chpasswd

RUN /usr/bin/ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
RUN /usr/bin/ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

To create an image from this Dockerfile you can use docker build:

$ docker build -t centos:7 .

The “-t” option assigns a tag to the image which can be referenced when a new container is instantiated. To view the new image you can run docker images:

$ docker images centos
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              7                   4f798f95cfe1        8 minutes ago       414.8 MB
docker.io/centos    6                   f07f6ca555a5        3 weeks ago         194.6 MB
docker.io/centos    7                   980e0e4c79ec        3 weeks ago         196.7 MB
docker.io/centos    latest              980e0e4c79ec        3 weeks ago         196.7 MB

Now to have some fun! To create a new container we can use docker run:

$ docker run -d -P -h foo --name foo --publish 2222:22 centos:7
f84477722896b2701506ee65a3f5a909199675a9cd591f3591e906a8795eba5c

This instantiates a new CentOS container with the name (--name) foo, the hostname (-h) foo and uses the centos:7 image I created earlier. It also maps (--publish) port 22 in the container to port 2222 on my local PC. To access the container you can fire up SSH and connect to port 2222 as root (this is a test container so /dev/null the hate mail):

$ ssh root@localhost -p 2222
root@localhost's password: 
[root@foo ~]# 
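
If you forget which host port the container's SSH daemon was published on, docker port will print the current mappings (the mapping below simply reflects the --publish option used above):

$ docker port foo
22/tcp -> 0.0.0.0:2222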

Now I can install software, configure it, break it and debug issues all in an isolated environment. Once I’m satisfied with my testing I can stop the container and delete it:

$ docker stop foo
foo

$ docker rm foo

I find that running an SSH daemon in my test containers is super valuable. For production I would take Jérôme’s advice and look into other methods for getting into your containers.
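
One of those methods is docker exec, which gives you a shell inside a running container (like the foo container above) without an SSH daemon in the mix:

$ docker exec -it foo /bin/bash
[root@foo /]#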