Blog O' Matty


How I prepared for, and passed the Hashicorp Certified Terraform Associate certification

This article was posted on 2020-05-11 01:00:00 -0500

I recently passed the Hashicorp Certified Terraform Associate exam. I’ve been using Terraform in various capacities for several years, and was stoked when I found out Hashicorp opened this certification to the public. The best part of the certification: the test only costs $70! That is SUPER, SUPER reasonable for a certification exam! Most certification exams cost upwards of $200, so my hat’s off to Hashicorp for making this test so affordable.

To help folks prepare, I thought I would share what I did to get ready for it. While I can’t share the test questions, I can give you some things to think about prior to taking the certification. The first place to start is the official review guide. I read through every link on that page, and wrote a ton of HCL to play with features I hadn’t used before (things like expanding function arguments and strip markers). You will need a pretty thorough understanding of each topic listed there to pass the exam.
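
To give you a flavor of those lesser-used features, here is a small HCL sketch covering expanding function arguments and strip markers (the values are made up for illustration):

# Expanding function arguments: a trailing "..." spreads a list
# into individual function arguments.
locals {
  args   = ["10.0.0.0/16", 8, 0]
  subnet = cidrsubnet(local.args...) # same as cidrsubnet("10.0.0.0/16", 8, 0)

  # Strip markers: the "~" trims the whitespace and newlines that
  # template directives would otherwise leave behind.
  servers = <<-EOT
    %{~ for name in ["web1", "web2"] ~}
    server ${name}
    %{~ endfor ~}
  EOT
}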

The prep guide also has a large section on modules, so I spent some time reviewing how modules work: specifically, the parent-child relationship, and how inputs and outputs are defined. I also wrote a number of custom modules to ensure I knew this stuff backwards and forwards. This allowed me to commit everything module-related to memory, something no actual Terraform user does. When I create new modules in practice, I typically copy something I already have in version control and modify it to serve my needs. I also read through the entire Terraform AWS VPC module the night before the test. I didn’t learn a lot, but having the syntax fresh in my mind definitely helped during the test.
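
If modules are fuzzy for you, a minimal parent-child sketch (all names below are hypothetical) looks something like this:

# Child module (./modules/bucket/main.tf): inputs are declared with
# "variable" blocks, outputs with "output" blocks.
variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

output "bucket_arn" {
  value = aws_s3_bucket.this.arn
}

# Parent module: inputs are passed as arguments to the module block,
# and outputs are read with module.<NAME>.<OUTPUT>.
module "logs" {
  source      = "./modules/bucket"
  bucket_name = "example-log-bucket"
}

output "logs_bucket_arn" {
  value = module.logs.bucket_arn
}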

Taking a step back, it might be useful to describe the environment I used to prepare for the exam. During my preparation, I used three projects: one that utilized a free account in Terraform Cloud, one that used an S3 backend, and a third that used local state. This was the first time I had the opportunity to work with Terraform Cloud, and I was super impressed. I love what Hashicorp has done with workspaces, Sentinel policy as code, and cost forecasting, and how these come together to make collaboration super easy. The official study guide has several bullet points on this, so I would definitely get familiar working with all three state setups.
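
For reference, switching between these setups mostly comes down to the backend configuration. A sketch of the two remote options (the bucket, organization, and workspace names are placeholders):

# S3 backend (one project):
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}

# Terraform Cloud (a separate project; configured via the "remote" backend):
terraform {
  backend "remote" {
    organization = "example-org"

    workspaces {
      name = "example-workspace"
    }
  }
}

# Local state is the default when no backend block is present.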

The study guide also covers reading and writing Hashicorp Configuration Language (HCL). This is the foundation of Terraform, and you will need a thorough understanding of all of the items listed there. For the past several years, I’ve made extensive use of the Terraform console. This is a GREAT place to test interpolation syntax, experiment with functions, and review data sources. If I need to write HCL on a given day, the first thing I do is split my screen with tmux so I can have the console available. This is an area I didn’t spend a ton of time on, since I’ve used Terraform for so long. If you are new to Terraform, I would highly suggest spending a good bit of time here. It will pay off during the test, and will make it easier to utilize Terraform in a work environment.
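
If you’ve never used the console, a quick session looks something like this:

$ terraform console

> cidrsubnet("10.0.0.0/16", 8, 2)
"10.0.2.0/24"
> upper("terraform")
"TERRAFORM"
> length(["dev", "stage", "prod"])
3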

If you use Terraform frequently, you are probably familiar with the command line options: “init” to initialize a project, “fmt” to format code, “validate” to verify your HCL is structurally sound, “plan” to get an execution plan, “taint” and “untaint” to mark (and unmark) resources for recreation, “apply” to implement your changes, and “destroy” to remove everything in a workspace. To make sure I knew all of the options, I ran “terraform -h” and read the documentation for each command. I also played around with “import”, which is something I’ve never used in practice, but the study guide listed it, so I reviewed it just to be safe.
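
As a sketch, “import” takes a resource address and a provider-specific ID (the address and instance ID below are made up):

$ terraform import aws_instance.web i-0123456789abcdef0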

I don’t recall how much time I was given to take the test, but I finished everything in less than 30 minutes. The test was totally approachable, and I actually thought it was one of the easier tests I’ve taken. I’m not sure if this is because I’ve been using Terraform for so long, or because I’m comparing it to the Kubernetes Certified Associate exam, which I also recently passed. That test was intense. Not because the material was hard, but because you are asked to do a TON of stuff in a short period of time. If you have any questions on the Terraform exam, feel free to hit me up on Twitter. I can’t give you the questions, but I can help you prepare. I’m also willing to offer up some sample projects if that would help! Hopefully folks find this useful.

Validating Kubernetes manifests with kubeval

This article was posted on 2020-05-11 01:00:00 -0500

I recently got some spare time to clean up and enhance my Kubernetes CI/CD pipelines. I have long embraced the fail-fast approach to deployments, and have added test after test to make our deployments go off without a hitch. One tool that has helped with this is kubeval. This super useful tool can process one or more deployment manifests, and spit out an error if they aren’t properly structured. It is one of the tests I run for each commit that touches Kubernetes deployment files.

In its simplest form, kubeval can be passed a manifest to evaluate. Given the following broken manifest:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
        name: nginx
     resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always

Kubeval will spit out an error letting you know that the YAML is invalid:

$ kubeval nginx.yaml

ERR  - Failed to decode YAML from nginx.yaml: error converting YAML to JSON: yaml: line 12: mapping values are not allowed in this context

If you process the return code in your CI pipeline, you can exit immediately if a malformed manifest was checked into version control. YAML can be a pain to work with, so this gives me a bit more comfort that issues are caught quickly.
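
A minimal CI step might look something like this (the manifest path is a placeholder; kubeval exits non-zero when validation fails):

#!/bin/sh
# Validate every manifest, and fail the pipeline on the first error.
if ! kubeval manifests/*.yaml; then
    echo "Kubernetes manifest validation failed" >&2
    exit 1
fi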

Creating development and testing environments with Weave footloose

This article was posted on 2020-05-11 00:00:00 -0500

I’ve been a long-time user of Vagrant. It’s invaluable for spinning up local test and development environments. Recently, I learned about Weave’s footloose project, which provides a similar experience to Vagrant, but utilizes containers instead of virtual machines. To see if this project would improve my workflows, I spent some time with it last weekend. Getting started is super easy. First, you need to grab the footloose binary from the release page:

$ curl -L -o footloose https://github.com/weaveworks/footloose/releases/download/0.6.3/footloose-0.6.3-linux-x86_64 && chmod 755 footloose

To set up a new environment, you will first need to create a configuration file with the “config create” option:

$ footloose config create -n centos --replicas 3

This will create a file named footloose.yaml in your current working directory:

$ cat footloose.yaml

cluster:
  name: centos
  privateKey: cluster-key
machines:
- count: 3
  spec:
    backend: docker
    image: quay.io/footloose/centos7:0.6.3
    name: node%d
    portMappings:
    - containerPort: 22

The configuration file contains the name of the cluster, the number of containers to provision, the Docker image to use, etc. To create a new cluster, you can run footloose with the “create” option:

$ time footloose create -c footloose.yaml

INFO[0000] Docker Image: quay.io/footloose/centos7:0.6.3 present locally
INFO[0000] Creating machine: cluster-node0 ...          
INFO[0001] Creating machine: cluster-node1 ...          
INFO[0002] Creating machine: cluster-node2 ...          

real  0m3.811s
user  0m0.796s
sys 0m0.763s

The first time I ran this, I was blown away! Less than four seconds to provision three working containers that look and feel like VMs. Nice! To see your footloose containers, you can use the “show” command:

$ footloose show

NAME           HOSTNAME   PORTS           IP           IMAGE                             CMD          STATE     BACKEND
centos-node0   node0      0->{22 32780}   172.17.0.2   quay.io/footloose/centos7:0.6.3   /sbin/init   Running   docker
centos-node1   node1      0->{22 32781}   172.17.0.3   quay.io/footloose/centos7:0.6.3   /sbin/init   Running   docker
centos-node2   node2      0->{22 32782}   172.17.0.4   quay.io/footloose/centos7:0.6.3   /sbin/init   Running   docker

You can also run docker’s “ps” command to see the containers that were created:

$ docker ps -a | grep foot

30ae19361b43        quay.io/footloose/centos7:0.6.3   "/sbin/init"             6 seconds ago       Up 5 seconds        0.0.0.0:32782->22/tcp       centos-node2
b6249e884d71        quay.io/footloose/centos7:0.6.3   "/sbin/init"             7 seconds ago       Up 6 seconds        0.0.0.0:32781->22/tcp       centos-node1
da7580c41f30        quay.io/footloose/centos7:0.6.3   "/sbin/init"             9 seconds ago       Up 8 seconds        0.0.0.0:32780->22/tcp       centos-node0

Super cool! Now to take these for a drive. Footloose has an “ssh” option which can be used to interact with a footloose node:

$ footloose ssh -c footloose.yaml root@node0 uptime

 06:50:30 up 1 day, 19:09,  0 users,  load average: 0.65, 0.64, 0.63

You can also ssh in and interact with the container directly:

$ footloose ssh root@node0

$ ps auxww

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0  42956  3108 ?        Ss   07:38   0:00 /sbin/init
root        18  0.0  0.0  39084  2940 ?        Ss   07:38   0:00 /usr/lib/systemd/systemd-journald
root        46  0.0  0.0 112920  4312 ?        Ss   07:38   0:00 /usr/sbin/sshd -D
root        60  1.0  0.0 152740  5720 ?        Ss   07:43   0:00 sshd: root@pts/1
dbus        62  0.0  0.0  58108  2252 ?        Ss   07:43   0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root        63  0.3  0.0  26380  1660 ?        Ss   07:43   0:00 /usr/lib/systemd/systemd-logind
root        64  0.3  0.0  15260  1972 pts/1    Ss   07:43   0:00 -bash
root        77  0.0  0.0  55184  1868 pts/1    R+   07:43   0:00 ps auxww

$ yum -y install nginx

As part of my typical testing workflow, I use Ansible playbooks to customize my Vagrant boxes. This is also possible with footloose:

$ cat inventory

[all]
centos-node0 ansible_connection=docker
centos-node1 ansible_connection=docker
centos-node2 ansible_connection=docker

The inventory file specifies docker as the connection type, allowing ansible and ansible-playbook to run against the footloose containers:

$ ansible -i inventory -a uptime all

centos-node2 | CHANGED | rc=0 >>
 06:59:19 up 1 day, 19:18,  0 users,  load average: 2.46, 1.06, 0.76
centos-node1 | CHANGED | rc=0 >>
 06:59:19 up 1 day, 19:18,  0 users,  load average: 2.46, 1.06, 0.76
centos-node0 | CHANGED | rc=0 >>
 06:59:19 up 1 day, 19:18,  0 users,  load average: 2.46, 1.06, 0.76
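
Running a full playbook works the same way. For reference, a minimal haproxy.yml might look something like this (the contents below are a sketch, not my actual playbook):

---
- hosts: all
  tasks:
    - name: Install haproxy
      yum:
        name: haproxy
        state: present

    - name: Enable and start haproxy
      service:
        name: haproxy
        state: started
        enabled: yes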

$ ansible-playbook -i inventory haproxy.yml

Given the short boot times, and the fact that I can re-use my existing playbooks, I see myself making extensive use of footloose in the future!!!

Viewing Kubernetes RBAC permissions for users and groups

This article was posted on 2020-05-10 00:00:00 -0500

As a security-conscious Kubernetes operator, I take security seriously: when new services are rolled out, I do everything in my power to ensure roles and clusterroles have the minimum set of permissions they need. Creating permissions is easy to do with audit2rbac, but how do you view them once they are in place? I use the Kubectl Krew plug-in manager, which provides a way to easily install a number of useful plug-ins. One of these plug-ins, access-matrix, displays RBAC permissions in a human readable form. If you already have Krew set up, installing the plug-in is a one-liner:
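
$ kubectl krew install access-matrix

With the plug-in installed, you can view the permissions for a user, group, or service account: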

$ kubectl access-matrix --as system:serviceaccount:kube-system:kube-proxy

NAME                                                          LIST  CREATE  UPDATE  DELETE
apiservices.apiregistration.k8s.io                            ✖     ✖       ✖       ✖
bindings                                                            ✖               
certificatesigningrequests.certificates.k8s.io                ✖     ✖       ✖       ✖
clusterrolebindings.rbac.authorization.k8s.io                 ✖     ✖       ✖       ✖
clusterroles.rbac.authorization.k8s.io                        ✖     ✖       ✖       ✖
componentstatuses                                             ✖                     
configmaps                                                    ✖     ✖       ✖       ✖
controllerrevisions.apps                                      ✖     ✖       ✖       ✖
cronjobs.batch                                                ✖     ✖       ✖       ✖
csidrivers.storage.k8s.io                                     ✖     ✖       ✖       ✖
csinodes.storage.k8s.io                                       ✖     ✖       ✖       ✖
customresourcedefinitions.apiextensions.k8s.io                ✖     ✖       ✖       ✖
daemonsets.apps                                               ✖     ✖       ✖       ✖
deployments.apps                                              ✖     ✖       ✖       ✖
endpoints                                                     ✔     ✖       ✖       ✖
endpointslices.discovery.k8s.io                               ✔     ✖       ✖       ✖
events                                                        ✖     ✔       ✔       ✖
events.events.k8s.io                                          ✖     ✔       ✔       ✖
horizontalpodautoscalers.autoscaling                          ✖     ✖       ✖       ✖
ingressclasses.networking.k8s.io                              ✖     ✖       ✖       ✖
ingresses.extensions                                          ✖     ✖       ✖       ✖
ingresses.networking.k8s.io                                   ✖     ✖       ✖       ✖
jobs.batch                                                    ✖     ✖       ✖       ✖
leases.coordination.k8s.io                                    ✖     ✖       ✖       ✖
limitranges                                                   ✖     ✖       ✖       ✖
localsubjectaccessreviews.authorization.k8s.io                      ✖               
mutatingwebhookconfigurations.admissionregistration.k8s.io    ✖     ✖       ✖       ✖
namespaces                                                    ✖     ✖       ✖       ✖
networkpolicies.networking.k8s.io                             ✖     ✖       ✖       ✖
nodes                                                         ✔     ✖       ✖       ✖
persistentvolumeclaims                                        ✖     ✖       ✖       ✖
persistentvolumes                                             ✖     ✖       ✖       ✖
poddisruptionbudgets.policy                                   ✖     ✖       ✖       ✖
pods                                                          ✖     ✖       ✖       ✖
podsecuritypolicies.policy                                    ✖     ✖       ✖       ✖
podtemplates                                                  ✖     ✖       ✖       ✖
priorityclasses.scheduling.k8s.io                             ✖     ✖       ✖       ✖
replicasets.apps                                              ✖     ✖       ✖       ✖
replicationcontrollers                                        ✖     ✖       ✖       ✖
resourcequotas                                                ✖     ✖       ✖       ✖
rolebindings.rbac.authorization.k8s.io                        ✖     ✖       ✖       ✖
roles.rbac.authorization.k8s.io                               ✖     ✖       ✖       ✖
runtimeclasses.node.k8s.io                                    ✖     ✖       ✖       ✖
secrets                                                       ✖     ✖       ✖       ✖
selfsubjectaccessreviews.authorization.k8s.io                       ✔               
selfsubjectrulesreviews.authorization.k8s.io                        ✔               
serviceaccounts                                               ✖     ✖       ✖       ✖
services                                                      ✔     ✖       ✖       ✖
statefulsets.apps                                             ✖     ✖       ✖       ✖
storageclasses.storage.k8s.io                                 ✖     ✖       ✖       ✖
subjectaccessreviews.authorization.k8s.io                           ✖               
tokenreviews.authentication.k8s.io                                  ✖               
validatingwebhookconfigurations.admissionregistration.k8s.io  ✖     ✖       ✖       ✖
volumeattachments.storage.k8s.io                              ✖     ✖       ✖       ✖

The output shows the permissions allowed for each RBAC verb, and is formatted in an easily readable form. You can also use the “--verbs” option to cherry-pick the verbs you want to see:

$ kubectl access-matrix -n kube-system --verbs get,list,watch,update,patch,delete --as system:serviceaccount:kube-system:coredns

NAME                                            GET  LIST  WATCH  UPDATE  PATCH  DELETE
bindings                                                                         
configmaps                                      ✖    ✖     ✖      ✖       ✖      ✖
controllerrevisions.apps                        ✖    ✖     ✖      ✖       ✖      ✖
cronjobs.batch                                  ✖    ✖     ✖      ✖       ✖      ✖
daemonsets.apps                                 ✖    ✖     ✖      ✖       ✖      ✖
deployments.apps                                ✖    ✖     ✖      ✖       ✖      ✖
endpoints                                       ✖    ✔     ✔      ✖       ✖      ✖
endpointslices.discovery.k8s.io                 ✖    ✖     ✖      ✖       ✖      ✖
events                                          ✖    ✖     ✖      ✖       ✖      ✖
events.events.k8s.io                            ✖    ✖     ✖      ✖       ✖      ✖
horizontalpodautoscalers.autoscaling            ✖    ✖     ✖      ✖       ✖      ✖
ingresses.extensions                            ✖    ✖     ✖      ✖       ✖      ✖
ingresses.networking.k8s.io                     ✖    ✖     ✖      ✖       ✖      ✖
jobs.batch                                      ✖    ✖     ✖      ✖       ✖      ✖
leases.coordination.k8s.io                      ✖    ✖     ✖      ✖       ✖      ✖
limitranges                                     ✖    ✖     ✖      ✖       ✖      ✖
localsubjectaccessreviews.authorization.k8s.io                                   
networkpolicies.networking.k8s.io               ✖    ✖     ✖      ✖       ✖      ✖
persistentvolumeclaims                          ✖    ✖     ✖      ✖       ✖      ✖
poddisruptionbudgets.policy                     ✖    ✖     ✖      ✖       ✖      ✖
pods                                            ✖    ✔     ✔      ✖       ✖      ✖
podtemplates                                    ✖    ✖     ✖      ✖       ✖      ✖
replicasets.apps                                ✖    ✖     ✖      ✖       ✖      ✖
replicationcontrollers                          ✖    ✖     ✖      ✖       ✖      ✖
resourcequotas                                  ✖    ✖     ✖      ✖       ✖      ✖
rolebindings.rbac.authorization.k8s.io          ✖    ✖     ✖      ✖       ✖      ✖
roles.rbac.authorization.k8s.io                 ✖    ✖     ✖      ✖       ✖      ✖
secrets                                         ✖    ✖     ✖      ✖       ✖      ✖
serviceaccounts                                 ✖    ✖     ✖      ✖       ✖      ✖
services                                        ✖    ✔     ✔      ✖       ✖      ✖
statefulsets.apps                               ✖    ✖     ✖      ✖       ✖      ✖

If you want to further refine the output, you can add the “--as-group” option to view permissions for a user and group combination. A hypothetical invocation (the user and group names are made up) might look like this:
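
$ kubectl access-matrix --as jane --as-group developers --verbs get,list

Amazing tool, and definitely one to keep in your bat belt!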

Listing Linux SD device queue depth sizes

This article was posted on 2020-05-09 14:15:11 -0500

While investigating a disk performance issue this week, I needed to find the queue depth of a block device. There are several ways to do this, but I think the lsscsi “-l” option takes the cake:

$ lsscsi -l

[0:0:0:0]    disk    ATA      Samsung SSD 840  CB6Q  /dev/sda
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
[1:0:0:0]    disk    ATA      WDC WD20EZRZ-00Z 0A80  /dev/sdb
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
[2:0:0:0]    disk    ATA      WDC WD20EZRZ-00Z 0A80  /dev/sdc
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30
[5:0:0:0]    disk    ATA      WDC WD15EADS-00P 0A01  /dev/sdd
  state=running queue_depth=31 scsi_level=6 type=0 device_blocked=0 timeout=30

Simple, easy, and elegant. If lsscsi isn’t installed on a box, the same value is also exposed through sysfs (substitute your own device name):
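
$ cat /sys/block/sda/device/queue_depth

31

Noting both of these for future reference.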