Blog O' Matty


Controlling the inventory order when running an Ansible playbook

This article was posted on 2020-07-29

This week I was updating some Ansible application and OS update playbooks. By default, when you run ansible-playbook it will apply your desired configuration to hosts in the order they are listed in the inventory file (or in the order they are returned by a dynamic inventory script). But what if you want to process hosts in a random order? Or by their sorted or reverse sorted names? I recently came across the order option, and was surprised I hadn’t noticed it before.

The order option allows you to control the order in which Ansible operates on the hosts in your inventory. It currently supports five values:

inventory          The default. The order is ‘as provided’ by the inventory
reverse_inventory  As the name implies, this reverses the order ‘as provided’ by the inventory
sorted             Hosts are alphabetically sorted by name
reverse_sorted     Hosts are sorted by name in reverse alphabetical order
shuffle            Hosts are randomly ordered each run

So if you want to process your hosts in a random order, you can pass “shuffle” to the order option:

---
- hosts: "{{ server_list }}"
  become: true
  serial: 1
  order: shuffle
  tasks:
    - name: Upgrade Operating System packages
      yum:
        name: '*'
        state: latest
      register: yum_updates_applied
...
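
If you want to kick this playbook off against a specific group, the group name can be passed in as an extra var (the playbook file name and group name below are made up for illustration):

$ ansible-playbook -i inventory os-upgrade.yml -e "server_list=webservers"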

This is super useful for large clusters, especially those that have hosts grouped by functional purpose. Nifty option!

How the docker pull command works under the covers (with HTTP headers to illustrate the process)

This article was posted on 2020-07-22

I talked previously about needing to decode docker HTTP headers to debug a registry issue. That debugging session was super fun, but I had a few questions about how that interaction actually works. So I started to decode all of the HTTP requests and responses from a $(docker pull), which truly helped me solidify how the docker daemon (dockerd) talks to a container registry. I figured I would share my notes here so I (as well as anyone else on the ‘net) can reference them in the future.

Here are the commands I ran prior to reviewing the client / server interactions:

$ docker login harbor

$ docker pull harbor/nginx/ingress:v1.0.0

There are a couple of interesting bits in these commands. First, the docker CLI utility doesn’t actually retrieve a container image. That job is delegated to the docker server daemon (dockerd). Second, when you type docker login, it will authenticate to the registry and cache your credentials in $HOME/.docker/config.json by default. Those are then used in future requests to the container registry.
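
If you’re curious what those cached credentials look like, config.json holds a base64-encoded username:password for each registry you’ve logged in to. A rough sketch (the registry name and auth value below are placeholders; the auth value is just base64 of username:password):

$ cat $HOME/.docker/config.json

{
  "auths": {
    "harbor": {
      "auth": "dXNlcm5hbWU6cGFzc3dvcmQ="
    }
  }
}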

Now on to the HTTP requests and responses. The first GET issued by dockerd is to the /v2/ registry API endpoint:

GET /v2/ HTTP/1.1
Host: harbor
User-Agent: docker/19.03.12
Accept-Encoding: gzip
Connection: close

The Harbor registry responds with a 401 unauthorized when we try to retrieve the URI /v2/. It also adds a Www-Authenticate: header with the path to the registry’s token server:

HTTP/1.1 401 Unauthorized
Server: nginx
Date: Thu, 16 Jul 2020 18:59:45 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 87
Connection: close
Docker-Distribution-Api-Version: registry/2.0
Set-Cookie: beegosessionID=XYZ; Path=/; HttpOnly
Www-Authenticate: Bearer realm="https://harbor/service/token",service="harbor-registry"

{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}

Next, we try to retrieve an access token (a JWT in this case) from the token server:

GET /service/token?scope=repository%3Anginx%2Fingress%3Apull&service=harbor-registry HTTP/1.1
Host: harbor
User-Agent: docker/19.03.12
Accept-Encoding: gzip
Connection: close

The server responds with a 200 and an entity body (not included below) containing the access token (a JWT):

HTTP/1.1 200 OK
Server: nginx
Date: Thu, 16 Jul 2020 18:59:45 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 959
Connection: close
Content-Encoding: gzip
Set-Cookie: beegosessionID=XYZ; Path=/; HttpOnly
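
To fetch a token outside of dockerd, you can hit the token server with the service and scope values advertised in the Www-Authenticate header. A minimal sketch with curl and jq (the credentials are placeholders, and the token field name comes from the registry token specification):

$ curl -s -u 'username:password' \
    "https://harbor/service/token?service=harbor-registry&scope=repository:nginx/ingress:pull" | jq -r '.token'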

The scope in the JWT controls what you can do, and to which repositories. So now that we have an access token, we can retrieve the manifest for the container image:

GET /v2/nginx/ingress/manifests/v1.0.0 HTTP/1.1
Host: harbor
User-Agent: docker/19.03.12
Accept: application/vnd.docker.distribution.manifest.v1+prettyjws
Accept: application/json
Accept: application/vnd.docker.distribution.manifest.v2+json
Accept: application/vnd.docker.distribution.manifest.list.v2+json
Accept: application/vnd.oci.image.index.v1+json
Accept: application/vnd.oci.image.manifest.v1+json
Authorization: Bearer JWT.JWT.JWT
Accept-Encoding: gzip
Connection: close
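
You can make the same manifest request with curl once you have a token (${TOKEN} below is assumed to hold the bearer token retrieved in the previous step; the Accept header tells the registry which manifest schema the client understands):

$ curl -s \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    https://harbor/v2/nginx/ingress/manifests/v1.0.0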

If you aren’t familiar with manifests, they are JSON files that describe the container and the layers that make up the image. There is a schema which defines the manifest, and the following response shows an actual manifest sent back from the container registry:

HTTP/1.1 200 OK
Server: nginx
Date: Thu, 16 Jul 2020 18:59:45 GMT
Content-Type: application/vnd.docker.distribution.manifest.v2+json
Content-Length: 1154
Connection: close
Docker-Content-Digest: sha256:a7425073232ed3fb26b45ec6b26482e53984692ce6265b64f85c6c68b72c3cc5
Docker-Distribution-Api-Version: registry/2.0
Etag: "sha256:a7425073232ed3fb26b45ec6b26482e53984692ce6265b64f85c6c68b72c3cc5"
Set-Cookie: beegosessionID=XYZ; Path=/; HttpOnly

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
   "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 2556,
      "digest": "sha256:53a19cd1924db72bd427b3792cf8ee5be6f969caa33c7a32ed104a1561f37bb2"
   },
   "layers": [
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 701123,
         "digest": "sha256:2ea20e1f93179438e0f481d2f291580b0fd6808ce2c716e5f9fc961b2b038e4e"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 32,
         "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 525239,
         "digest": "sha256:5c59e002a478e367ed6aa3c1d0b22b98abbb8091378ef4c273dbadb368b735b1"
      },
      {
         "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
         "size": 2087201,
         "digest": "sha256:d1d2b2330ff21c60d278fb02908491f59868b942c6433e8699e405764bca5645"
      }
   ]
}

In the manifest above, you can see the manifest version, the media type, and the image layers that make up the container image. This is all described in the official documentation. Now that dockerd knows the container image layout, it will retrieve one or more image layers in parallel (how many layers to retrieve in parallel is controlled with the dockerd “--max-concurrent-downloads” option):

GET /v2/nginx/ingress/blobs/sha256:2ea20e1f93179438e0f481d2f291580b0fd6808ce2c716e5f9fc961b2b038e4e HTTP/1.1
Host: harbor
User-Agent: docker/19.03.12
Accept-Encoding: identity
Authorization: Bearer JWT.JWT.JWT
Connection: close
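
Layer blobs can be fetched the same way if you want to grab one by hand. A rough sketch, reusing the ${TOKEN} from earlier (the -L is there because some registries redirect blob downloads to a backing object store):

$ curl -s -L \
    -H "Authorization: Bearer ${TOKEN}" \
    -o layer.tar.gz \
    https://harbor/v2/nginx/ingress/blobs/sha256:2ea20e1f93179438e0f481d2f291580b0fd6808ce2c716e5f9fc961b2b038e4e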

And that’s it. After all of the image layers are pulled, the next step would typically be to start a docker container. Had a blast looking into this!

Using the sslsplit MITM proxy to capture Docker registry communications

This article was posted on 2020-07-16

This past weekend I got to debug a super fun issue! One of my Kubernetes clusters was seeing a slew of ErrImagePull errors. When I logged into one of the Kubernetes workers, the dockerd debug logs showed it had an issue pulling an image, but they didn’t say WHY it couldn’t be pulled. Fortunately I use a private container registry, so I figured I could print the registry communications with ssldump. When I ran ssldump with the registry’s private key, I saw numerous application_data lines, but no data.

It turns out newer versions of TLS use forward secrecy. When forward secrecy is in use, the server’s private key is only used to authenticate the handshake; the session keys that protect the rest of the TLS session are negotiated with an ephemeral key exchange, so they can’t be derived from the private key after the fact. Ssldump doesn’t currently support decrypting communications with the session key, so I needed to come up with plan B to get the HTTP headers. It turns out the sslsplit utility is an ideal solution for this! If you haven’t used it, SSLsplit is a MITM proxy which can be configured to decrypt TLS communications from one or more clients.

SSLsplit is easy to use, but needs a few things in place before it can start decoding TLS record layer messages. First, we need to generate a RootCA certificate and the associated private key. Mkcert makes this super easy:

$ mkcert -install

The new RootCA is used to mint the certificate that sslsplit will present to the client (dockerd in this case). Next, we need to tell the OS (CentOS 7 in this case) to trust the new CA certificate:

$ cd /etc/pki/ca-trust/source/anchors

$ cp /home/vagrant/.local/share/mkcert/rootCA.pem .

$ update-ca-trust

If you skip this step, docker will complain about an unknown certificate authority:

$ docker pull nginx

Error response from daemon: Get https://harbor/v2/: x509: certificate signed by unknown authority

Next we need to restart dockerd to pick up the new RootCA certificate:

$ systemctl restart docker

Now that the foundation is built, we can fire up sslsplit:

$ sudo /usr/local/bin/sslsplit -k /home/vagrant/.local/share/mkcert/rootCA-key.pem -c /home/vagrant/.local/share/mkcert/rootCA.pem -P ssl 0.0.0.0 443 10.10.10.250 443 -D -X /home/vagrant/ssl.pkt -L /home/vagrant/ssl.log

In the example above, I started sslsplit with the “-k” (CA certificate private key), “-c” (CA certificate), “-D” (don’t detach from the terminal and turn on debugging), “-X” (log packets to a file), “-L” (log content to a text file), and “-P” (pass through connections that can’t be split) options. The trailing arguments make up the proxy specification, which takes the following form:

PROTOCOL LISTEN_ADDRESS LISTEN_PORT HOST_TO_FORWARD_TO PORT_TO_FORWARD_TO

And that’s it! Now if you add an entry to /etc/hosts with your container registry name and the local IP, all communications will flow through SSLsplit (a rough sketch of this is shown below). Once you gather the information you need, you can remove the hosts file entry and stop sslsplit. Then you can review the log or feed the packet capture to your favorite decoder to see what’s going on. My issue turned out to be a Harbor configuration issue, which was easily fixed once I reviewed the HTTP requests and responses.
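
Here is roughly what that redirect looks like (this assumes sslsplit is running on the same host as dockerd; otherwise point the hosts entry at the machine running sslsplit):

$ echo "127.0.0.1 harbor" | sudo tee -a /etc/hosts

$ docker pull harbor/nginx/ingress:v1.0.0

$ less /home/vagrant/ssl.log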

Using dockle to check docker containers for known issues

This article was posted on 2020-07-15

As an SRE, I’m always on the lookout for tooling that can help me do my job better. The Kubernetes ecosystem is filled with amazing tools, especially ones that can validate that your clusters and container images are configured in a reliable and secure fashion. One such tool is dockle. If you haven’t heard of it, dockle is a container image scanning tool that can be used to verify that your containers adhere to best practices.

To get started with dockle, you can pass the name of a repository and an optional tag as an argument:

$ dockle kindest/node

WARN  - CIS-DI-0001: Create a user for the container
  * Last user should not be root
WARN  - DKL-DI-0006: Avoid latest tag
  * Avoid 'latest' tag
INFO  - CIS-DI-0005: Enable Content trust for Docker
  * export DOCKER_CONTENT_TRUST=1 before docker pull/build
INFO  - CIS-DI-0006: Add HEALTHCHECK instruction to the container image
  * not found HEALTHCHECK statement
INFO  - CIS-DI-0008: Confirm safety of setuid/setgid files
  * setgid file: usr/bin/expiry grwxr-xr-x
  * setuid file: usr/bin/su urwxr-xr-x
  * setuid file: usr/bin/newgrp urwxr-xr-x
  * setuid file: usr/bin/chfn urwxr-xr-x
  * setuid file: usr/bin/passwd urwxr-xr-x
  * setuid file: usr/bin/chsh urwxr-xr-x
  * setuid file: usr/bin/mount urwxr-xr-x
  * setuid file: usr/bin/umount urwxr-xr-x
  * setuid file: usr/bin/gpasswd urwxr-xr-x
  * setgid file: usr/bin/chage grwxr-xr-x
  * setgid file: usr/sbin/pam_extrausers_chkpwd grwxr-xr-x
  * setgid file: usr/sbin/unix_chkpwd grwxr-xr-x
  * setgid file: usr/bin/wall grwxr-xr-x

Dockle will then inspect the container image and provide feedback on STDOUT. The output contains the checkpoint that triggered the finding, as well as a description of what it found. I really dig the concise output, as well as the ability to ignore warnings and control the exit codes that are produced. It’s easy to add this to your CI/CD pipeline (a rough sketch of that is below), and it’s a nice complement to container scanning tools such as Clair and Trivy. Super cool project!
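
If you want to wire dockle into a pipeline, something along these lines should work (the flag names are taken from the dockle documentation, so double check them against your version; CIS-DI-0001 is the root user checkpoint from the output above):

$ dockle --exit-code 1 --exit-level warn --ignore CIS-DI-0001 kindest/node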

Decoding JSON Web Tokens (JWTs) from the Linux command line

This article was posted on 2020-07-14

Over the past few months I’ve been spending some of my spare time trying to understand OAuth2 and OIDC. At the core of OAuth2 is the concept of a bearer token. The most common form of bearer token is the JWT (JSON Web Token), which is a string with three base64url-encoded components separated by periods (e.g., XXXXXX.YYYYYYYY.ZZZZZZZZ).

There are plenty of online tools available to decode JWTs, but being a command line warrior I wanted something I could use from a bash prompt. While looking into command line JWT decoders, I came across the following gist describing how to do this with jq. After a couple of slight modifications I was super stoked with the following jq incantation (huge thanks Lukas!):

$ jq -R 'split(".") | .[0],.[1] | @base64d | fromjson' <<< $(cat "${JWT}")

{
  "typ": "JWT",
  "alg": "RS256",
  "kid": "XYZ"
}
{
  "iss": "prefetch-token-issuer",
  "sub": "",
  "aud": "prefetch-registry",
  "exp,  "XYZ"
  "nbf": XYZ,
  "iat": XYZ,
  "jti": "XYZ",
  "access": [
    {
      "type": "repository",
      "name": "foo/container",
      "actions": [
        "pull"
      ]
    }
  ]
}

To make this readily available, I created a bash function which passes argument 1 (the JWT) to jq:

jwtd() {
    if [[ -x $(command -v jq) ]]; then
         jq -R 'split(".") | .[0],.[1] | @base64d | fromjson' <<< "${1}"
         echo "Signature: $(echo "${1}" | awk -F'.' '{print $3}')"
    fi
}

Once active, you can decode JWTs from the Linux command line with relative ease:

$ jwtd XXXXXX.YYYYYYY.ZZZZZZZ

Huge thanks to Lukas Lihotzki for the AMAZING Gist comment. Incredible work!