Blog O' Matty

Generating Kubernetes pod CIDR networks with kubectl and jq

This article was posted by Matty on 2018-02-20 15:58:11 -0500 EST

I’ve been looking into solutions to advertise pod CIDR blocks to devices outside of my Kubernetes cluster network. The Calico and kube-router projects both have working solutions to solve this problem so I’m evaluating both products. While watching the kube-router BGP demo the instructor used kubectl and jq to display the CIDR blocks assigned to each Kubernetes worker:

$ kubectl get nodes -o json | jq '.items[] | .spec'

  "externalID": "",
  "podCIDR": ""
  "externalID": "",
  "podCIDR": ""
  "externalID": "",
  "podCIDR": ""
  "externalID": "",
  "podCIDR": ""
  "externalID": "",
  "podCIDR": ""

This is a cool use of the kubectl JSON output option and jq. Stashing this away here for safe keeping. :)

Getting the Flannel host-gw working with Kubernetes

This article was posted by Matty on 2018-02-20 09:10:06 -0500 EST

When I first started learning how the Kubernetes networking model works I wanted to configure everything manually to see how the pieces fit together. This was a great learning experience and was easy to automate with ansible. This solution has a couple of downsides. If a machine is rebooted it loses the PodCIDR routes since they aren’t persisted to disk. It also doesn’t add or remove routes for hosts as they are added and removed from the cluster. I wanted a more permanent and dynamic solution so I started looking at the flannel host-gw and vxlan backends.

The flannel host-gw option was the first solution I evaluated. This backend takes the PodCIDR addresses assigned to all of the nodes and creates routing table entries so the workers can reach each other through the cluster IP range. In addition, flanneld will NAT the cluster IPs to the host IP if a pod needs to contact a host outside of the local broadcast domain. The flannel daemon (flanneld) runs as a DaemonSet so one pod (and one flanneld daemon) will be created on each worker. Setting up the flannel host-gw is ridiculously easy. To begin, you will need to download the deployment manifest from GitHub:

$ wget

Once you retrieve the manifest you will need to change the backend Type in the net-conf.json block (part of the kube-flannel ConfigMap) from vxlan to host-gw. We can use our good buddy sed to make the change:

$ sed -i 's/vxlan/host-gw/' kube-flannel.yml
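After the substitution, the net-conf.json section of the ConfigMap should look something like this (the 10.244.0.0/16 Network value is the default that ships with the manifest; adjust it to match your cluster's pod network):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }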

To apply the configuration to your cluster you can use the kubectl create command:

$ kubectl create -f kube-flannel.yml

This will create several Kubernetes objects, including the flannel ConfigMap and the flannel DaemonSet.
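To check that the DaemonSet itself was created you can ask kubectl for it directly (the kube-flannel-ds name below comes from the stock manifest, so it may differ if you edited yours):

$ kubectl get daemonset -n kube-system kube-flannel-ds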

To verify the pods were created and are currently running we can use the kubectl get command:

$ kubectl get pods -n kube-system -o wide

NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
kube-flannel-ds-42nwn                   1/1       Running   0          5m
kube-flannel-ds-49zvp                   1/1       Running   0          5m
kube-flannel-ds-t8g9f                   1/1       Running   0          5m
kube-flannel-ds-v6kdr                   1/1       Running   0          5m
kube-flannel-ds-xnlzc                   1/1       Running   0          5m

We can also use the kubectl logs command to review the flanneld logs that were produced when it was initialized:

$ kubectl logs -n kube-system kube-flannel-ds-t8g9f

I0220 14:31:23.347252       1 main.go:475] Determining IP address of default interface
I0220 14:31:23.347435       1 main.go:488] Using interface with name ens192 and address
I0220 14:31:23.347446       1 main.go:505] Defaulting external address to interface address (
I0220 14:31:23.357568       1 kube.go:131] Waiting 10m0s for node controller to sync
I0220 14:31:23.357622       1 kube.go:294] Starting kube subnet manager
I0220 14:31:24.357751       1 kube.go:138] Node controller sync successful
I0220 14:31:24.357771       1 main.go:235] Created subnet manager: Kubernetes Subnet Manager -
I0220 14:31:24.357774       1 main.go:238] Installing signal handlers
I0220 14:31:24.357869       1 main.go:353] Found network config - Backend type: host-gw
I0220 14:31:24.357984       1 main.go:300] Wrote subnet file to /run/flannel/subnet.env
I0220 14:31:24.357988       1 main.go:304] Running backend.
I0220 14:31:24.358007       1 main.go:322] Waiting for all goroutines to exit
I0220 14:31:24.358044       1 route_network.go:53] Watching for new subnet leases
I0220 14:31:24.443807       1 route_network.go:85] Subnet added: via
I0220 14:31:24.444040       1 route_network.go:85] Subnet added: via
I0220 14:31:24.444798       1 route_network.go:85] Subnet added: via
I0220 14:31:24.444883       1 route_network.go:85] Subnet added: via
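The subnet file mentioned in the log output above is a small environment file that flanneld writes out for other components to consume. On one of my workers it looks roughly like the following (the addresses here are made-up values shown only to illustrate the format):

$ cat /run/flannel/subnet.env

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true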

To verify the PodCIDR routes were created we can log into one of the workers and run ip route show:

$ ip route show

default via dev ens192 proto static metric 100
via dev ens192
via dev ens192
via dev ens192
via dev ens192
dev docker0 proto kernel scope link src linkdown
dev ens192 proto kernel scope link src metric 100

Sweet! The output above shows the routes this worker will take to reach the pod CIDRs assigned to the other workers. Now if you're like me, you may be wondering: how does flannel create routes on the host when it's running in a container? Here's where the power of DaemonSets shines. Inside the deployment manifest, flannel sets hostNetwork to true:

      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule

This allows the pod to access the host's network namespace. There are a couple of items you should be aware of. First, the flannel manifest I downloaded from GitHub pulls a pre-built flannel image, and I'm always nervous about using images I don't build from scratch (and validate with digital signatures) using automated build tools. Second, if we log into one of the flannel containers and run ps:

$ kubectl exec -i -t -n kube-system kube-flannel-ds-t8g9f ash

/ # ps auxwww
    1 root       0:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
 1236 root       0:00 ash
 1670 root       0:00 ash
 1679 root       0:00 ps auxwww

You will notice that flanneld is started with the --kube-subnet-mgr option. This option tells flanneld to contact the API server to retrieve the subnet assignments. It also causes flanneld to watch for network changes (host additions and removals) and adjust the host routes accordingly. In a follow-up post I'll dig into vxlan and some techniques I found useful for debugging node-to-node communications.

Copying files into and out of Kubernetes managed containers with kubectl cp

This article was posted by Matty on 2018-02-20 08:38:52 -0500 EST

Kubectl has some pretty amazing features built in. One useful option is the ability to copy files into and out of containers. This functionality is provided through the cp command. To copy the file app.log from the pod named myapp-4dpjr to the local directory you can use the following cp syntax:

$ kubectl cp myapp-4dpjr:/app/app.log .

To copy a file into a container you can reverse the cp arguments:

$ kubectl cp debugger myapp-4dpjr:/bin
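If the pod has more than one container, or lives in a namespace other than the current one, you can qualify the copy with the -c and -n flags (the namespace and container names below are just placeholders):

$ kubectl cp -n mynamespace -c app myapp-4dpjr:/app/app.log ./app.log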

In order to copy files into or out of a container, the tar executable needs to be installed inside the container, since kubectl cp uses tar to do the transfer. Cool stuff.

How the docker container creation process works (from docker run to runc)

This article was posted by Matty on 2018-02-19 09:42:34 -0500 EST

Over the past few months I’ve been investing a good bit of personal time studying how Linux containers work. Specifically, what does docker run actually do? In this post I’m going to walk through what I’ve observed and try to demystify how all the pieces fit together. To start our adventure I’m going to create an alpine container with docker run:

$ docker run -i -t --name alpine alpine ash

This container will be used in the output below. When the docker run command is invoked it parses the options passed on the command line and builds a JSON payload describing the container it wants the daemon to create. The payload is then sent to the docker daemon through the /var/run/docker.sock UNIX domain socket. We can use the strace utility to observe the API calls:

$ strace -s 8192 -e trace=read,write -f docker run -d alpine

[pid 13446] write(3, "GET /_ping HTTP/1.1\r\nHost: docker\r\nUser-Agent: Docker-Client/1.13.1 (linux)\r\n\r\n", 79) = 79
[pid 13442] read(3, "HTTP/1.1 200 OK\r\nApi-Version: 1.26\r\nDocker-Experimental: false\r\nServer: Docker/1.13.1 (linux)\r\nDate: Mon, 19 Feb 2018 16:12:32 GMT\r\nContent-Length: 2\r\nContent-Type: text/plain; charset=utf-8\r\n\r\nOK", 4096) = 196
[pid 13442] write(3, "POST /v1.26/containers/create HTTP/1.1\r\nHost: docker\r\nUser-Agent: Docker-Client/1.13.1 (linux)\r\nContent-Length: 1404\r\nContent-Type: application/json\r\n\r\n{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[],\"Cmd\":null,\"Image\":\"alpine\",\"Volumes\":{},\"WorkingDir\":\"\",\"Entrypoint\":null,\"OnBuild\":null,\"Labels\":{},\"HostConfig\":{\"Binds\":null,\"ContainerIDFile\":\"\",\"LogConfig\":{\"Type\":\"\",\"Config\":{}},\"NetworkMode\":\"default\",\"PortBindings\":{},\"RestartPolicy\":{\"Name\":\"no\",\"MaximumRetryCount\":0},\"AutoRemove\":false,\"VolumeDriver\":\"\",\"VolumesFrom\":null,\"CapAdd\":null,\"CapDrop\":null,\"Dns\":[],\"DnsOptions\":[],\"DnsSearch\":[],\"ExtraHosts\":null,\"GroupAdd\":null,\"IpcMode\":\"\",\"Cgroup\":\"\",\"Links\":null,\"OomScoreAdj\":0,\"PidMode\":\"\",\"Privileged\":false,\"PublishAllPorts\":false,\"ReadonlyRootfs\":false,\"SecurityOpt\":null,\"UTSMode\":\"\",\"UsernsMode\":\"\",\"ShmSize\":0,\"ConsoleSize\":[0,0],\"Isolation\":\"\",\"CpuShares\":0,\"Memory\":0,\"NanoCpus\":0,\"CgroupParent\":\"\",\"BlkioWeight\":0,\"BlkioWeightDevice\":null,\"BlkioDeviceReadBps\":null,\"BlkioDeviceWriteBps\":null,\"BlkioDeviceReadIOps\":null,\"BlkioDeviceWriteIOps\":null,\"CpuPeriod\":0,\"CpuQuota\":0,\"CpuRealtimePeriod\":0,\"CpuRealtimeRuntime\":0,\"CpusetCpus\":\"\",\"CpusetMems\":\"\",\"Devices\":[],\"DiskQuota\":0,\"KernelMemory\":0,\"MemoryReservation\":0,\"MemorySwap\":0,\"MemorySwappiness\":-1,\"OomKillDisable\":false,\"PidsLimit\":0,\"Ulimits\":null,\"CpuCount\":0,\"CpuPercent\":0,\"IOMaximumIOps\":0,\"IOMaximumBandwidth\":0},\"NetworkingConfig\":{\"EndpointsConfig\":{}}}\n", 1556) = 1556
[pid 13442] read(3, "HTTP/1.1 201 Created\r\nApi-Version: 1.26\r\nContent-Type: application/json\r\nDocker-Experimental: false\r\nServer: Docker/1.13.1 (linux)\r\nDate: Mon, 19 Feb 2018 16:12:32 GMT\r\nContent-Length: 90\r\n\r\n{\"Id\":\"b70b57c5ae3e25585edba898ac860e388582391907be4070f91eb49f4db5c433\",\"Warnings\":null}\n", 4096) = 281

Now here is where the real fun begins. Once the docker daemon receives the request it will parse the payload and contact containerd via the gRPC API to set up the container runtime using the options passed on the command line. We can use the ctr utility to observe this interaction:

$ ctr --address "unix:///run/containerd.sock" events

TIME                           TYPE                           ID                             PID                            STATUS
time="2018-02-19T12:10:07.658081859-05:00" level=debug msg="Calling POST /v1.26/containers/create" 
time="2018-02-19T12:10:07.676706130-05:00" level=debug msg="container mounted via layerStore: /var/lib/docker/overlay2/2beda8ac904f4a2531d72e1e3910babf145c6e68dfd02008c58786adb254f9dc/merged" 
time="2018-02-19T12:10:07.682430843-05:00" level=debug msg="Calling POST /v1.26/containers/d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f/attach?stderr=1&stdin=1&stdout=1&stream=1" 
time="2018-02-19T12:10:07.683638676-05:00" level=debug msg="Calling GET /v1.26/events?filters=%7B%22container%22%3A%7B%22d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f%22%3Atrue%7D%2C%22type%22%3A%7B%22container%22%3Atrue%7D%7D" 
time="2018-02-19T12:10:07.684447919-05:00" level=debug msg="Calling POST /v1.26/containers/d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f/start" 
time="2018-02-19T12:10:07.687230717-05:00" level=debug msg="container mounted via layerStore: /var/lib/docker/overlay2/2beda8ac904f4a2531d72e1e3910babf145c6e68dfd02008c58786adb254f9dc/merged" 
time="2018-02-19T12:10:07.885362059-05:00" level=debug msg="sandbox set key processing took 11.824662ms for container d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f" 
time="2018-02-19T12:10:07.927897701-05:00" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"start-container\", Id:\"d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f\", Status:0x0, Pid:\"\", Timestamp:(*timestamp.Timestamp)(0xc420bacdd0)}" 
2018-02-19T17:10:07.927795344Z start-container                d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f                                0
time="2018-02-19T12:10:07.930283397-05:00" level=debug msg="libcontainerd: event unhandled: type:\"start-container\" id:\"d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f\" timestamp:<seconds:1519060207 nanos:927795344 > " 
time="2018-02-19T12:10:07.930874606-05:00" level=debug msg="Calling POST /v1.26/containers/d1a6d87886e2d515bfff37d826eeb671502fa7c6f47e422ec3b3549ecacbc15f/resize?h=35&w=115" 

Setting up the container runtime is a pretty substantial undertaking. Namespaces need to be configured, the image needs to be mounted, security controls (AppArmor profiles, seccomp profiles, capabilities) need to be enabled, etc. You can get a pretty good idea of everything that is required to set up the runtime by reviewing the output of docker inspect containerid and the config.json runtime specification file (more on that in a moment).

Containerd doesn’t actually create the container runtime. It sets up the environment and then invokes containerd-shim to start the container runtime via the configured OCI runtime (controlled with the containerd --runtime option). For most modern systems the container runtime is based on runc. We can see this first hand with the pstree utility:

$ pstree -l -p -s -T

systemd,1 --switched-root --system --deserialize 24
  ├─docker-containe,19606 --listen unix:///run/containerd.sock --shim /usr/libexec/docker/docker-containerd-shim-current --start-timeout 2m --debug
  │   ├─docker-containe,19834 93a619715426f613646359863e77cc06fa85502273df931517ec3f4aaae50d5a /var/run/docker/libcontainerd/93a619715426f613646359863e77cc06fa85502273df931517ec3f4aaae50d5a /usr/libexec/docker/docker-runc-current

Since pstree truncates the process name we can verify the PIDs with ps:

$ ps auxwww | grep [1]9606

root     19606  0.0  0.2 685636 10632 ?        Ssl  13:01   0:00 /usr/libexec/docker/docker-containerd-current --listen unix:///run/containerd.sock --shim /usr/libexec/docker/docker-containerd-shim-current --start-timeout 2m --debug

$ ps auxwww | grep [1]9834

root     19834  0.0  0.0 527748  3020 ?        Sl   13:01   0:00 /usr/libexec/docker/docker-containerd-shim-current 93a619715426f613646359863e77cc06fa85502273df931517ec3f4aaae50d5a /var/run/docker/libcontainerd/93a619715426f613646359863e77cc06fa85502273df931517ec3f4aaae50d5a /usr/libexec/docker/docker-runc-current

When I first started researching the interaction between dockerd, containerd and the shim I wasn’t really sure what purpose the shim served. Luckily Google took me to a great write-up by Michael Crosby. The shim serves a couple of purposes:

  1. It allows you to run daemonless containers.
  2. STDIO and other FDs are kept open in the event that containerd and docker die.
  3. It reports the container's exit status to containerd.

The first and second bullet points are super important. These features allow the container to be decoupled from the docker daemon, so dockerd can be upgraded or restarted without impacting the running containers. Nifty! I mentioned that the shim is responsible for kicking off runc to actually run the container. Runc needs two things to do its job: a specification file and a path to a root file system image (the combination of the two is referred to as a bundle). To see how this works we can create a rootfs by exporting the alpine docker image:

$ mkdir -p alpine/rootfs

$ cd alpine

$ docker export d1a6d87886e2 | tar -C rootfs -xvf -

time="2018-02-19T12:54:13.082321231-05:00" level=debug msg="Calling GET /v1.26/containers/d1a6d87886e2/export" 

The export option takes a container ID, which you can find in the docker ps -a output. To generate a specification file you can use the runc spec command:

$ runc spec

This will create a specification file named config.json in your current directory. This file can be customized to suit your needs and requirements. Once you are happy with the file you can invoke runc from the directory containing config.json and rootfs, passing it a container name as its sole argument (the container configuration will be read from the config.json file in the current directory):

$ runc run rootfs

This simple example will spawn an alpine ash shell:

$ runc run rootfs

/ # cat /etc/os-release
NAME="Alpine Linux"
PRETTY_NAME="Alpine Linux v3.7"

Being able to create containers and play with the runc runtime specification is incredibly powerful. You can evaluate different AppArmor profiles, test out Linux capabilities, and play around with every facet of the container runtime environment without needing to install docker. I just barely scratched the surface here and would highly recommend reading through the runc and containerd documentation. Super cool stuff!

Exporting Docker Images For Analysis

This article was posted by Matty on 2018-02-17 10:55:17 -0500 EST

I was recently looking into a docker problem and wanted to see the actual contents of the image that was having issues. Typically you can grab the Dockerfile that was used to build the image and use docker build to recreate it. I wasn’t able to locate the Dockerfile for this image so I decided to extract the image to a local file system for analysis. This was super easy to do with the docker export command.

To illustrate how this is done, let's say you want to extract the contents of the Kubernetes heapster image to a directory named rootfs. If you run docker export with the container ID (you can get this from docker inspect or docker ps) and pipe the output to tar, the rootfs directory will be populated with the image contents once the command completes:

$ docker export $(docker ps -a | awk '/heapster/ && !/POD/ {print $1}') | tar -C rootfs -xvf -
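One small gotcha: tar -C won't create the target directory for you, so make sure rootfs exists before running the export:

$ mkdir -p rootfs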


This has also been super useful for studying how runc works under the covers.