Generating Kubernetes POD CIDR Routes With Ansible


Over the past two months I’ve been preparing to take the Kubernetes Administrator certification. As part of my prep work I’ve been reading a lot of code and breaking things in my HA cluster to see how they break and what is required to fix them. I’ve also been automating every part of the cluster build process to get more familiar with Ansible and the cluster bootstrap process.

Another area I’ve spent a tremendous amount of time on is Kubernetes networking. The Kubernetes network architecture was incredibly confusing when I first got into K8s, so a lot of my time has been spent studying how layer-3 routing and overlay networking work under the covers. In the world of Kubernetes every pod is assigned an IP address, and Kubernetes assumes pods are able to talk to other pods via these IPs. The Kubernetes cluster networking document describes the reasoning behind this:

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own IP address so you do not need to explicitly create links between pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.

There are several solutions available to help with the pod-to-pod network connectivity requirement. I’ve done a good deal of work with flannel and weave, and they both work remarkably well! I’ve also implemented a flat layer-3 network solution using host routes. Lorenzo Nicora provided a way to create and apply these routes with Ansible via the kubernetes-routing.yaml playbook. If you want to see the routes that will be generated you can run the kubectl get nodes command listed at the top of the playbook:

$ kubectl get nodes --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR} {"\n"}{end}'

192.168.2.44 10.1.4.0/24 
192.168.2.45 10.1.0.0/24 
192.168.2.46 10.1.2.0/24 
192.168.2.47 10.1.3.0/24 
192.168.2.48 10.1.1.0/24 

This command returns the list of nodes as a JSON object and iterates over the elements to get the InternalIP address and pod CIDR assigned to each worker. The kubernetes-routing playbook takes this concept and creates a number of tasks to extract this information, create routes, and apply them to the workers. When I was first experimenting with this playbook I bumped into the following error:
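To see what those route tasks boil down to, here is a minimal sketch (mine, not taken from the playbook) that turns node/pod-CIDR pairs like the output above into ip route commands. The sample pairs are inlined, and the commands are only echoed so the sketch is safe to run anywhere; on a real worker you would feed in the live kubectl output and skip the entry for the local node:

```shell
#!/bin/sh
# Sample (node InternalIP, pod CIDR) pairs; a real run would pipe in the
# output of the kubectl jsonpath command shown above.
printf '%s\n' \
  '192.168.2.44 10.1.4.0/24' \
  '192.168.2.45 10.1.0.0/24' |
while read -r node_ip pod_cidr; do
  # Echo rather than execute, so nothing touches the routing table.
  echo "ip route add ${pod_cidr} via ${node_ip}"
done
```

Each generated command installs a host route that sends traffic for a worker’s pod CIDR to that worker’s InternalIP, which is the essence of the flat layer-3 approach.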

TASK [kubernetes-workers : Get a list of IP addresses] ***********************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TemplateRuntimeError: no test named 'equalto'
fatal: [kubworker1.homefetch.net]: FAILED! => {"msg": "Unexpected failure during module execution.", "stdout": ""}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TemplateRuntimeError: no test named 'equalto'

The error came from the following task:

- name: Get a list of IP addresses
  set_fact:
    kubernetes_nodes_addresses: "{{ node_addresses_tmp.results|map(attribute='item')|selectattr('type','equalto','InternalIP')|map(attribute='address')|list }}"
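The filter chain keeps only the address entries whose type equals InternalIP, then pulls out the address field. A quick way to see the chain (and the equalto test) in isolation is to render it with the Jinja2 library directly; this is a small sketch of my own, assuming Jinja2 >= 2.8 is installed, with made-up address entries shaped like a node’s .status.addresses:

```python
from jinja2 import Environment

# Hypothetical address entries shaped like .status.addresses on a node.
addresses = [
    {"type": "InternalIP", "address": "192.168.2.44"},
    {"type": "Hostname", "address": "kubworker1.homefetch.net"},
]

# The same filter chain the task uses: select entries whose 'type'
# attribute passes the equalto test, then map out the 'address' field.
template = Environment().from_string(
    "{{ addresses | selectattr('type', 'equalto', 'InternalIP')"
    " | map(attribute='address') | list }}"
)
print(template.render(addresses=addresses))  # ['192.168.2.44']
```

On a Jinja2 release older than 2.8 the same render raises the TemplateRuntimeError shown above, since the equalto test doesn’t exist yet.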

After a bit of searching I came across a Jinja2 commit related to the equalto test. This feature was introduced in Jinja2 2.8, and unfortunately I had an older version installed (the documentation notes this requirement, so this was my own fault). After upgrading with pip:

$ pip install --upgrade Jinja2

Collecting Jinja2
  Downloading Jinja2-2.10-py2.py3-none-any.whl (126kB)
    100% |████████████████████████████████| 133kB 2.1MB/s
Collecting MarkupSafe>=0.23 (from Jinja2)
  Downloading MarkupSafe-1.0.tar.gz
Installing collected packages: MarkupSafe, Jinja2
  Found existing installation: MarkupSafe 0.11
    Uninstalling MarkupSafe-0.11:
      Successfully uninstalled MarkupSafe-0.11
  Running setup.py install for MarkupSafe ... done
  Found existing installation: Jinja2 2.7.2
    Uninstalling Jinja2-2.7.2:
      Successfully uninstalled Jinja2-2.7.2
Successfully installed Jinja2-2.10 MarkupSafe-1.0

The playbook ran without issue and my workers had routes! Lorenzo did a great job with this playbook, and I like his use of with_flattened (this was new to me) and map to generate the list of node addresses. While this solution isn’t suitable for production, it’s a great way to get an HA test cluster up and operational.
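For reference, with_flattened takes a list of lists and iterates over the flattened result, which is handy when each node returns its own list of addresses. A hypothetical task using it might look like this (my own illustration, not taken from Lorenzo’s playbook, with node_address_lists as an assumed variable):

```
# Hypothetical example: node_address_lists is assumed to hold one list of
# addresses per node; with_flattened loops over every element of every list.
- name: Print each node address
  debug:
    msg: "{{ item }}"
  with_flattened:
    - "{{ node_address_lists }}"
```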

This article was posted by Matty on 2018-01-20 09:40:26 -0500