Blog O' Matty


Using node local caching on your Kubernetes nodes to reduce CoreDNS traffic

This article was posted on 2020-05-15 01:00:00 -0500

Kubernetes 1.18 was recently released, and with it came a slew of super useful features! One feature that hit GA is node local caching (NodeLocal DNSCache), which allows each node in your cluster to cache DNS queries, reducing load on your primary in-cluster CoreDNS servers. Now that this feature is GA, I wanted to take it for a spin. If you’ve looked at the query logs on an active CoreDNS pod, or dealt with AWS DNS query limits, I’m sure you will appreciate the value this feature brings.

To get this set up, I first downloaded the node local DNS deployment manifest:

$ curl -L -o nodelocaldns.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

The manifest contains a service account, service, daemonset and config map. The container that is spun up on each node runs CoreDNS, but in caching mode. The caching feature is enabled with the following configuration block, which is part of the config map in the manifest:

cache {
        success  9984 30
        denial   9984 5
        prefetch 500  5
}

The cache block tells CoreDNS how many responses to cache (up to 9984 entries each for successful and negative lookups) and how long to keep them (a 30 second TTL for successful responses, 5 seconds for denials). You can also configure CoreDNS to prefetch frequently queried items prior to them expiring! Next, we need to replace three PILLAR variables in the manifest:

$ export localdns="169.254.20.10"

$ export domain="cluster.local"

$ export kubedns="10.96.0.10"
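If you aren’t sure which value to use for kubedns, the ClusterIP of the service that fronts your CoreDNS pods is what you want (the service is named kube-dns in my cluster; yours may differ):

$ kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'

10.96.0.10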

$ sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml > nodedns.yml

The localdns variable contains the IP address you want your caching CoreDNS instance to listen for queries on. The documentation uses a link-local address, but you can use anything you want, as long as it doesn’t overlap with existing IPs. Domain contains the Kubernetes domain you set “clusterDomain” to. And finally, kubedns is the service IP that sits in front of your primary CoreDNS pods. Once the manifest is applied:

$ kubectl apply -f nodedns.yml

You will see a new daemonset, and one caching DNS pod per host:

$ kubectl get ds -n kube-system node-local-dns

NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-local-dns   4         4         4       4            4           <none>          3h43m

$ kubectl get po -o wide -n kube-system -l k8s-app=node-local-dns

NAME                   READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
node-local-dns-24knq   1/1     Running   0          3h40m   172.18.0.4   test-worker          <none>           <none>
node-local-dns-fl2zf   1/1     Running   0          3h40m   172.18.0.3   test-worker2         <none>           <none>
node-local-dns-gvqrv   1/1     Running   0          3h40m   172.18.0.5   test-control-plane   <none>           <none>
node-local-dns-v9hlv   1/1     Running   0          3h40m   172.18.0.2   test-worker3         <none>           <none>

One thing I found interesting is how DNS queries get routed to the caching DNS pods. Given a pod with a ClusterFirst dnsPolicy, the nameserver value in its /etc/resolv.conf gets populated with the service IP that sits in front of your in-cluster CoreDNS pods:

$ kubectl get svc kube-dns -o wide -n kube-system

NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5d18h   k8s-app=kube-dns

$ kubectl exec -it nginx-f89759699-2c8zw -- cat /etc/resolv.conf

search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
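
If you want to confirm a pod is actually using the ClusterFirst policy, you can pull its dnsPolicy out with jsonpath:

$ kubectl get po nginx-f89759699-2c8zw -o jsonpath='{.spec.dnsPolicy}'

ClusterFirst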

But under the covers, node-local-dns installs iptables rules so DNS requests destined for your CoreDNS cluster service IP get handled by the cache listening on the localdns address. We can view the rules it added to the OUTPUT chain with the iptables command:

$ iptables -L OUTPUT

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  10.96.0.10           anywhere             udp spt:53
ACCEPT     tcp  --  10.96.0.10           anywhere             tcp spt:53
ACCEPT     udp  --  169.254.20.10        anywhere             udp spt:53
ACCEPT     tcp  --  169.254.20.10        anywhere             tcp spt:53
KUBE-SERVICES  all  --  anywhere         anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere         anywhere
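
Since the node-local-dns pods run with host networking, on my nodes you can also see the cache listening on both the link-local address and the kube-dns service IP directly on the host:

$ sudo ss -lntup | grep -E '169.254.20.10|10.96.0.10'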

Pretty neat! Now to test this out. If we exec into a pod:

$ kubectl exec -it nginx-f89759699-59ss2 -- sh

And query the local caching instance:

$ dig +short @169.254.20.10 prefetch.net

67.205.141.207

$ dig +short @169.254.20.10 prefetch.net

67.205.141.207

We get the same results. But if you check the logs, the first request hits the local caching server and is forwarded on to your primary CoreDNS service. When the second query comes in, the cached entry is returned to the requester, reducing load on your primary CoreDNS servers. And since your pods are configured to point at the upstream CoreDNS service IP, iptables ensures those queries hit the local DNS cache as well.
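
If you prefer metrics over logs, the caching instances also expose Prometheus metrics (port 9253 in the copy of the manifest I used; check the prometheus line in your config map). The cache counters are an easy way to confirm queries are being answered locally. From one of the nodes:

$ curl -s http://169.254.20.10:9253/metrics | grep coredns_cache_hits

Pretty sweet! And this all happens through the magic of CoreDNS, iptables, and some awesome developers! This feature rocks!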

TRIM support is enabled by default in Fedora 32

This article was posted on 2020-05-15 00:00:00 -0500

As a long time Fedora user, I like to keep up with the planning discussions that go into each release. These discussions are super useful for understanding what is coming to Red Hat Enterprise Linux and CentOS. One feature I’ve been keeping my eye on is the fstrim enabled by default change. Issuing TRIMs lets the operating system tell the underlying storage which blocks are no longer in use, which is especially important if you use thin provisioned storage devices. Well, the day has finally come. In Fedora 32, the fstrim timer is now enabled by default:

$ cat /etc/fedora-release

Fedora release 32 (Thirty Two)

$ systemctl status -l fstrim.timer

● fstrim.timer - Discard unused blocks once a week
     Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: enabled)
     Active: active (waiting) since Thu 2020-05-14 20:26:14 UTC; 44s ago
    Trigger: Mon 2020-05-18 00:00:00 UTC; 3 days left
   Triggers: ● fstrim.service
       Docs: man:fstrim

May 14 20:26:14 localhost.localdomain systemd[1]: Started Discard unused blocks once a week.

The timer is configured to kick off fstrim once a week, within a one hour accuracy window:

$ cat /usr/lib/systemd/system/fstrim.timer

[Unit]
Description=Discard unused blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
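
If you don’t want to wait for the weekly timer, you can trigger the same unit by hand and review what it did:

$ sudo systemctl start fstrim.service

$ journalctl -u fstrim.service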

This is super cool, and should make a ton of storage engineers super happy!

Disabling cron e-mail notifications globally on CentOS machines

This article was posted on 2020-05-14 01:00:00 -0500

As a long time CentOS user, I’ve always winced when I took ownership of new systems with cron e-mail notifications enabled, whether that was through setting MAILTO to a distribution list, or using the default of mailing the user who created the job. Having cron send mail has woken me up more than once when /var/spool/* filled up. Given the awesome search capabilities that come with solutions like Elasticsearch, Loki, and Graylog, I now turn cron mailing off on all of my systems. You can disable cron mail on CentOS by adding the “-s -m off” options to the CRONDARGS variable in /etc/sysconfig/crond:

$ cat /etc/sysconfig/crond

CRONDARGS=-s -m off
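
After updating the file, restart crond so the new arguments take effect, and verify they were picked up by looking at the running process:

$ sudo systemctl restart crond

$ ps -ef | grep [c]rond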

I currently have an awesome dashboard to show job status, and I get notified through actionable channels when critical jobs fail. While the vast majority of my scheduled activities use Kubernetes Jobs, I still have a few outliers that use crond. This change has drastically reduced the number of messages I receive through e-mail, and allowed me to get true alerting wrapped around mission critical stuff. Curious how many folks still use e-mail for notifications?

Managing multiple Kubernetes resources by label

This article was posted on 2020-05-14 00:00:00 -0500

Kubernetes labels are super useful. If you aren’t familiar with them, a label is a key/value pair assigned in the metadata section (either metadata.labels, or spec.template.metadata.labels) of a deployment manifest. The following example assigns three key/value labels to a deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
    group: dse
    env: staging

$ kubectl get deploy nginx -o json | jq '.metadata.labels'

{
  "app": "nginx",
  "env": "staging",
  "group": "dse"
}
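
You can also view labels without reaching for jq by passing the --show-labels flag:

$ kubectl get deploy nginx --show-labels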

Labels get super useful when you need to apply an action to multiple resources. Actions can include get:

$ kubectl get po -l app=nginx

NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-65ngc   1/1     Running   0          114s
nginx-f89759699-98qdd   1/1     Running   0          114s
nginx-f89759699-b7mbs   1/1     Running   0          114s
nginx-f89759699-g2xcq   1/1     Running   0          114s
nginx-f89759699-gbjb9   1/1     Running   0          114s
nginx-f89759699-j9cnx   1/1     Running   0          114s
nginx-f89759699-mpqjl   1/1     Running   0          114s
nginx-f89759699-r9csq   1/1     Running   0          114s
nginx-f89759699-t44fq   1/1     Running   0          114s
nginx-f89759699-vppwf   1/1     Running   0          114s

This will get all pods with the label “app=nginx”. You can also use this with actions like “delete”, which will trigger the deletion of any pod matching the label passed to the “-l” option:

$ kubectl delete po -l app=nginx

pod "nginx-f89759699-65ngc" deleted
pod "nginx-f89759699-98qdd" deleted
pod "nginx-f89759699-b7mbs" deleted
pod "nginx-f89759699-g2xcq" deleted
pod "nginx-f89759699-gbjb9" deleted
pod "nginx-f89759699-j9cnx" deleted
pod "nginx-f89759699-mpqjl" deleted
pod "nginx-f89759699-r9csq" deleted
pod "nginx-f89759699-t44fq" deleted
pod "nginx-f89759699-vppwf" deleted

This also works with node-level actions like “drain”, “cordon”, and “taint” (removing a taint is done with “kubectl taint” and a trailing “-”).
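
Label selectors can also be combined, and kubectl understands set-based expressions, which is handy when you need to slice resources across more than one dimension (the env values below are just examples):

$ kubectl get po -l app=nginx,env=staging

$ kubectl get po -l 'env in (staging, qa)'

If you are working with hundreds of nodes, or thousands of pods, this will save you a ton of time when debugging and managing your infrastructure!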

Finding Kubernetes issues with Popeye

This article was posted on 2020-05-13 00:00:00 -0500

Kubernetes is an incredible platform, but there are a lot of things that can go wrong. This is especially the case when you are new to K8s, and are overwhelmed with configuration options, deployment manifests, networking, and how containers work. Fortunately, Kubernetes has matured quickly, and there are tons of open source tools to troubleshoot and monitor your clusters. One of these tools, Popeye, is a must for any Kubernetes operator. Popeye will evaluate your clusters against best practices, and display warnings if it finds issues.

Getting going with Popeye is a breeze. If you have Krew installed, you can install the plug-in with the following command:

$ kubectl krew install popeye
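
If you don’t use Krew, Popeye is also available as a standalone binary from the project’s GitHub releases page, and it audits whatever your current kubeconfig context points at:

$ popeye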

To audit a cluster, you can pass the “popeye” option to kubectl:

$ kubectl popeye

This will produce a comprehensive report similar to the following:

 ___     ___ _____   _____                                                      K          .-'-.     
| _ \___| _ \ __\ \ / / __|                                                      8     __|      `\  
|  _/ _ \  _/ _| \ V /| _|                                                        s   `-,-`--._   `\
|_| \___/_| |___| |_| |___|                                                      []  .->'  a     `|-'
  Biffs`em and Buffs`em!                                                          `=/ (__/_       /  
                                                                                    \_,    `    _)  
                                                                                       `----;  |     


GENERAL [KIND-TEST]
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Connectivity...................................................................................✅
  · MetricServer...................................................................................💥


CLUSTERS (1 SCANNED)                                                         💥 0 😱 0 🔊 0 ✅ 1 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Version........................................................................................✅
    ✅ [POP-406] K8s version OK.


CLUSTERROLES (60 SCANNED)                                                   💥 0 😱 0 🔊 60 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · admin..........................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · cluster-admin..................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · edit...........................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · kindnet........................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · kubeadm:get-nodes..............................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · local-path-provisioner-role....................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:aggregate-to-admin......................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:aggregate-to-edit.......................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:aggregate-to-view.......................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:auth-delegator..........................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:basic-user..............................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:certificates.k8s.io:certificatesigningrequests:nodeclient...............................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:certificates.k8s.io:certificatesigningrequests:selfnodeclient...........................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:certificates.k8s.io:kube-apiserver-client-approver......................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:certificates.k8s.io:kube-apiserver-client-kubelet-approver..............................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:certificates.k8s.io:kubelet-serving-approver............................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:certificates.k8s.io:legacy-unknown-approver.............................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:attachdetach-controller......................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:certificate-controller.......................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:clusterrole-aggregation-controller...........................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:cronjob-controller...........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:daemon-set-controller........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:deployment-controller........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:disruption-controller........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:endpoint-controller..........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:endpointslice-controller.....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:expand-controller............................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:generic-garbage-collector....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:horizontal-pod-autoscaler....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:job-controller...............................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:namespace-controller.........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:node-controller..............................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:persistent-volume-binder.....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:pod-garbage-collector........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:pv-protection-controller.....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:pvc-protection-controller....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:replicaset-controller........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:replication-controller.......................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:resourcequota-controller.....................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:route-controller.............................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:service-account-controller...................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:service-controller...........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:statefulset-controller.......................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:controller:ttl-controller...............................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:coredns.................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:discovery...............................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:heapster................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:kube-aggregator.........................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:kube-controller-manager.................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:kube-dns................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:kube-scheduler..........................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:kubelet-api-admin.......................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:node....................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:node-bootstrapper.......................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:node-problem-detector...................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:node-proxier............................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:persistent-volume-provisioner...........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:public-info-viewer......................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:volume-scheduler........................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · view...........................................................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.


CLUSTERROLEBINDING                                                           💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


CONFIGMAP                                                                    💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


DAEMONSET                                                                    💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


DEPLOYMENT                                                                   💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


HORIZONTALPODAUTOSCALER                                                      💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


INGRESS                                                                      💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


NAMESPACES (1 SCANNED)                                                       💥 0 😱 0 🔊 0 ✅ 1 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · default........................................................................................✅


NETWORKPOLICY                                                                💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


PERSISTENTVOLUME                                                             💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


PERSISTENTVOLUMECLAIM                                                        💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


PODS (1 SCANNED)                                                               💥 1 😱 0 🔊 0 ✅ 0 0٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · default/centos.................................................................................💥
    🔊 [POP-206] No PodDisruptionBudget defined.
    😱 [POP-300] Using "default" ServiceAccount.
    😱 [POP-301] Connects to API Server? ServiceAccount token is mounted.
    😱 [POP-302] Pod could be running as root user. Check SecurityContext/image.
    🐳 centos
      💥 [POP-100] Untagged docker image in use.
      😱 [POP-106] No resources requests/limits defined.
      😱 [POP-102] No probes defined.
      😱 [POP-306] Container could be running as root user. Check SecurityContext/Image.


PODDISRUPTIONBUDGET                                                          💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


PODSECURITYPOLICY                                                            💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


REPLICASET                                                                   💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


ROLE                                                                         💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


ROLEBINDING                                                                  💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


SECRETS (1 SCANNED)                                                          💥 0 😱 0 🔊 0 ✅ 1 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · default/default-token-zvwkm....................................................................✅


SERVICES (1 SCANNED)                                                         💥 0 😱 0 🔊 0 ✅ 1 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · default/kubernetes.............................................................................✅


SERVICEACCOUNTS (1 SCANNED)                                                  💥 0 😱 0 🔊 0 ✅ 1 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · default/default................................................................................✅


STATEFULSET                                                                  💥 0 😱 0 🔊 0 ✅ 0 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Nothing to report.


SUMMARY
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
Your cluster score: 95 -- A
                                                                                o          .-'-.     
                                                                                 o     __| A    `\  
                                                                                  o   `-,-`--._   `\
                                                                                 []  .->'  a     `|-'
                                                                                  `=/ (__/_       /  
                                                                                    \_,    `    _)  
                                                                                       `----;  |     

The official documentation describes the report morphology, which breaks findings down into Ok, Info, Warn, and Error severities. Whenever I take ownership of an existing cluster, or help friends debug issues, I run Popeye and kubeaudit to help me understand where the cluster stands. Popeye also has a number of options to control the output that is produced:

-o, --out string  Specify the output type (standard, jurassic, yaml, json, html, junit, prometheus, score) (default "standard")
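
For example, to produce a JSON report that a CI job could archive or diff between runs (the file name here is just an example):

$ kubectl popeye -o json > popeye-report.json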

This makes it super easy to add Popeye to a deployment pipeline, security dashboard, or just about anything you can think of. While Popeye won’t produce a delectable burger for Wimpy, it will help you understand issues in your cluster!