Kubernetes server-side validation recently landed, and it’s a super useful feature. Prior to server-side validation, you could use the kubectl dry-run feature to validate your deployment manifests:
$ kubectl apply --dry-run -f nginx.yaml
pod/nginx created (dry run)
When this command runs, the validation occurs on the machine that hosts the kubectl binary. While useful, there are a few cases where your manifest would validate locally, but wouldn’t apply when you sent it to the API server. One example is if your kubectl binary was older than 1.16 and you tried to send a JSON payload with deprecated APIs to a 1.16+ API server. With the new server-side dry run feature, you can have the API server validate the manifest.
To use server-side validation, you can pass the string “server” to the “--dry-run” option:
$ kubectl apply --dry-run=server -f nginx.yaml
error: error validating "nginx.yaml": error validating data: ValidationError(Pod.spec.containers[0].resources): unknown field "requestss" in io.k8s.api.core.v1.ResourceRequirements; if you choose to ignore these errors, turn validation off with --validate=false
If the API server detects an issue, kubectl will note that in the output. Super useful feature, and definitely one you should add to your CI/CD pipeline if kubectl is your deployment tool of choice.
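For example, a CI job could validate an entire directory of manifests against the API server before anything gets deployed; a non-zero exit code fails the pipeline. Here’s a minimal sketch (the manifests/ directory layout is an assumption, and the CI runner needs credentials for a cluster to validate against):
$ kubectl apply --dry-run=server -R -f manifests/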
As a developer, operator, and architect, I am always evaluating technological solutions. A fair number of these solutions use TLS, which requires minting new certificates. I recently came across mkcert, which makes it SUPER easy to provision new certificates for development and testing. To get started with mkcert, you will need to run it with the “-install” option:
$ mkcert -install
Created a new local CA at "/home/vagrant/.local/share/mkcert" 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox and/or Chrome/Chromium trust store (requires browser restart)!
This will create a new CA certificate in $HOME/.local/share/mkcert, and update your trust stores so curl, Firefox, etc. won’t complain when they connect to a TLS endpoint that uses an mkcert-minted certificate. To actually create a certificate, you can run mkcert with the common name you want assigned to the certificate:
$ mkcert localhost
Using the local CA at "/home/vagrant/.local/share/mkcert" ✨
Created a new certificate valid for the following names 📜
- "localhost"
The certificate is at "./localhost.pem" and the key at "./localhost-key.pem" ✅
That’s it! You now have a RootCA, a private key, and an X.509 certificate to use for testing. It takes seconds to create them, and you can fire up your favorite service with the generated certs:
$ openssl s_server -cert localhost.pem -key localhost-key.pem -www -accept 8443 &
$ curl -D - https://localhost:8443
ACCEPT
HTTP/1.0 200 ok
...
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: XXXX
Session-ID-ctx: 01000000
Master-Key: XXXX
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1593446296
Timeout : 300 (sec)
Verify return code: 0 (ok)
...
The certificate and private key are created in your current working directory, and the RootCA certificate is placed in $HOME/.local/share/mkcert by default. All of these files are PEM encoded, so openssl and company can be used to print their contents. In addition, mkcert will populate the nssdb file in your home directory with the RootCA:
$ ls -la /home/vagrant/.pki/nssdb
total 32
drwxrw----. 2 vagrant vagrant 55 Jun 29 15:45 .
drwxrw----. 3 vagrant vagrant 19 Mar 31 18:20 ..
-rw-------. 1 vagrant vagrant 10240 Jun 29 15:45 cert9.db
-rw-------. 1 vagrant vagrant 13312 Jun 29 15:45 key4.db
-rw-------. 1 vagrant vagrant 436 Jun 29 15:45 pkcs11.txt
$ certutil -L -d sql:/home/vagrant/.pki/nssdb
Certificate Nickname                                              Trust Attributes
                                                                  SSL,S/MIME,JAR/XPI

mkcert development CA 196291963499902809203365320023044568657    C,,
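Since everything is PEM encoded, the openssl binary you already have installed can be used to inspect the generated certificate. For example, using the localhost.pem created above:
$ openssl x509 -in localhost.pem -noout -subject -issuer -dates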
While I still love OpenSSL and cfssl, this is my new go-to for quickly minting certificates for development and testing. Amazing stuff!
Hashicorp Vault has become one of my favorite technologies over the past year. Secrets management is a non-trivial undertaking, and I’m routinely blown away by how easy Vault makes it. One nifty thing I recently learned is that Vault has auto-completion, which you can enable with the “-autocomplete-install” option:
$ vault -autocomplete-install && source $HOME/.bashrc
Once enabled, you can type vault followed by a tab to see all of the available options:
$ vault <tab>
agent    delete    login        plugin    secrets    token
audit    kv        namespace    policy    server     unwrap
auth     lease     operator     print     ssh        write
debug    list      path-help    read      status
This also works for sub-commands, so typing vault audit followed by a tab will display the options that can be passed to the audit sub-command, as shown below. I’m a huge fan of auto-completion, and try to use it whenever I can to improve my efficiency.
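As a quick illustration, here is roughly what completing the audit sub-command looks like (the exact list will vary by Vault version):
$ vault audit <tab>
disable  enable  list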
As a long-time Kafka, Zookeeper, and Prometheus user, I have been utilizing the JMX exporter to gather operational metrics from my Zookeeper and Kafka clusters. Having to bolt on an additional component is never fun, so I was delighted to see that Zookeeper 3.6.0 added native Prometheus metrics support. Enabling it is as easy as adding the following lines to your zoo.cfg configuration file:
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7000
metricsProvider.exportJvmInfo=true
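Once Zookeeper is restarted with the settings above, you can verify the provider is working by hitting the HTTP metrics endpoint it exposes (a quick sanity check, assuming the port configured above and a Zookeeper server running on localhost):
$ curl -s http://localhost:7000/metrics | head -5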
The included metrics rock, and there are now dozens of additional USEFUL metrics you can add to your dashboards. Hopefully Kafka will take note and provide native Prometheus metrics in a future release.
Kubernetes 1.18 was recently released, and with it came a slew of super useful features! One feature that hit GA is node-local DNS caching. This allows each node in your cluster to cache DNS queries, reducing load on your primary in-cluster CoreDNS servers. Now that this feature is GA, I wanted to take it for a spin. If you’ve looked at the query logs on an active CoreDNS pod, or dealt with AWS DNS query limits, I’m sure you will appreciate the value this feature brings.
To get this set up, I first downloaded the node local DNS deployment manifest:
$ curl -o nodelocaldns.yaml -L https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml
The manifest contains a service account, service, daemonset and config map. The container that is spun up on each node runs CoreDNS, but in caching mode. The caching feature is enabled with the following configuration block, which is part of the config map that was installed above:
cache {
success 9984 30
denial 9984 5
prefetch 500 5
}
The cache block tells CoreDNS how many queries to cache, as well as how long to keep them (TTL). You can also configure CoreDNS to prefetch frequently queried items prior to them expiring! Next, we need to replace three PILLAR variables in the manifest:
$ export localdns="169.254.20.10"
$ export domain="cluster.local"
$ export kubedns="10.96.0.10"
$ sed "s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml > nodedns.yml
The localdns variable contains the IP address you want your caching CoreDNS instance to listen on for queries. The documentation uses a link-local address, but you can use anything you want as long as it doesn’t overlap with existing IPs. The domain variable contains the Kubernetes domain you set “clusterDomain” to. And finally, kubedns is the service IP that sits in front of your primary CoreDNS pods. Once the manifest is applied:
$ kubectl apply -f nodedns.yml
You will see a new daemonset, and one caching DNS pod per host:
$ kubectl get ds -n kube-system node-local-dns
NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-local-dns   4         4         4       4            4           <none>          3h43m
$ kubectl get po -o wide -n kube-system -l k8s-app=node-local-dns
NAME                   READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
node-local-dns-24knq   1/1     Running   0          3h40m   172.18.0.4   test-worker          <none>           <none>
node-local-dns-fl2zf   1/1     Running   0          3h40m   172.18.0.3   test-worker2         <none>           <none>
node-local-dns-gvqrv   1/1     Running   0          3h40m   172.18.0.5   test-control-plane   <none>           <none>
node-local-dns-v9hlv   1/1     Running   0          3h40m   172.18.0.2   test-worker3         <none>           <none>
One thing I found interesting is how DNS queries get routed to the caching DNS pods. Given a pod with a ClusterFirst policy, the nameserver value in /etc/resolv.conf will get populated with the service IP that sits in front of your in-cluster CoreDNS pods:
$ kubectl get svc kube-dns -o wide -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5d18h   k8s-app=kube-dns
$ kubectl exec -it nginx-f89759699-2c8zw -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
But under the covers, node-local-dns adds rules to the iptables OUTPUT chain so DNS requests destined for your CoreDNS cluster service IP are handled by the local instance listening on the localdns address. We can view these rules with the iptables command:
$ iptables -L OUTPUT
Chain OUTPUT (policy ACCEPT)
target         prot opt source         destination
ACCEPT         udp  --  10.96.0.10     anywhere      udp spt:53
ACCEPT         tcp  --  10.96.0.10     anywhere      tcp spt:53
ACCEPT         udp  --  169.254.20.10  anywhere      udp spt:53
ACCEPT         tcp  --  169.254.20.10  anywhere      tcp spt:53
KUBE-SERVICES  all  --  anywhere       anywhere      ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere       anywhere
Pretty neat! Now to test this out. If we exec into a pod:
$ kubectl exec -it nginx-f89759699-59ss2 -- sh
And query the local caching instance:
$ dig +short @169.254.20.10 prefetch.net
67.205.141.207
$ dig +short @169.254.20.10 prefetch.net
67.205.141.207
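To see what the caching instance is actually doing, you can tail the node-local-dns pod running on the same node as the client (the pod name comes from the daemonset listing above, and you will need to add the CoreDNS log plugin to the node-local-dns config map if you want individual queries logged):
$ kubectl logs -n kube-system node-local-dns-24knq --tail=5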
Both dig queries return the same result. But if you check the logs, the first request hits the local caching server and is forwarded to your primary CoreDNS service IP. When the second query comes in, the cached entry is returned to the requester, reducing load on your primary CoreDNS servers. And if your pods are configured to point to the upstream CoreDNS servers, iptables will ensure the query hits the local DNS cache. Pretty sweet! And this all happens through the magic of CoreDNS, iptables, and some awesome developers! This feature rocks!