Blog O' Matty


Collecting Nginx metrics with the Prometheus nginx_exporter

This article was posted on 2020-07-07 01:00:00 -0500

Over the past year I’ve rolled out numerous Prometheus exporters to provide visibility into the infrastructure I manage. Exporters are server processes that interface with an application (HAProxy, MySQL, Redis, etc.), and make their operational metrics available through an HTTP endpoint. The nginx_exporter is an exporter for Nginx, and allows you to gather the stub_status metrics in a super easy way.

To use this exporter, you will first need to download the nginx_exporter binary from the project's GitHub releases page. Once it's downloaded and extracted, you should create a dedicated user to run the process as (this isn't required, but it helps enhance security):

$ useradd -s /bin/false -c 'nginx_exporter service account' -r prometheus

Next, you will need to create a systemd unit file to start and stop the nginx_exporter process:

$ cat /etc/systemd/system/nginx_exporter.service

[Unit]
Description=Prometheus Nginx Exporter
After=network.target

[Service]
ExecStart=/usr/local/bin/nginx-prometheus-exporter -nginx.scrape-uri https://example.com/metrics -web.listen-address=127.0.0.1:9113
Restart=always
User=prometheus
Group=prometheus

[Install]
WantedBy=multi-user.target

$ systemctl daemon-reload

$ systemctl enable nginx_exporter && systemctl start nginx_exporter

The example above assumes that you are running the nginx_exporter process on the same server Nginx is running on. To allow the nginx_exporter to scrape metrics, you will need to add a location stanza similar to the following to your Nginx configuration:

location /metrics {
    stub_status on;
    access_log   off;
    allow 127.0.0.1;
    deny all;
}

That’s it. You can now test the metrics endpoint with your favorite web utility:

$ curl -s localhost:9113/metrics | grep ^nginx

nginx_connections_accepted 77
nginx_connections_active 6
nginx_connections_handled 77
nginx_connections_reading 0
nginx_connections_waiting 5
nginx_connections_writing 1
nginx_http_requests_total 1513
nginx_up 1
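
To actually collect these metrics, Prometheus needs a scrape job pointed at the exporter. A minimal scrape_config stanza might look like the following (the job name and 15s interval are assumptions, and since the exporter above binds to 127.0.0.1, this assumes Prometheus runs on the same host):

```yaml
scrape_configs:
  - job_name: 'nginx'
    scrape_interval: 15s
    static_configs:
      - targets: ['127.0.0.1:9113']
```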

If you are using NGINX Plus, there are a TON more metrics exposed. But for basic website monitoring, the metrics above can prove useful.

Improving my Linux diff experience with icdiff

This article was posted on 2020-07-07 00:00:00 -0500

I recently came across icdiff. This little gem lets you see the differences between two files, and what makes it special is its ability to highlight those differences (sdiff, which was my go-to diff tool, doesn't have this feature):

$ icdiff --cols 80 -U 1 -N node_groups.tf node_groups_new.tf

node_groups.tf                          node_groups_new.tf                     
      10     # `depends_on` causes a re       10     # `depends_on` causes a re
fresh on every run so is usele          fresh on every run so is usele         
ss here.                                ss here.                               
      11     # [Re]creating or removing       11     # [Re]creating or removing
 these resources will trigger            these resources will trigger          
recreation of Node Group resou          recreation of Node Group resou         
rces                                    rces ***something***                   
      12     aws_auth         = coalesc       12     aws_auth         = coalesc
elist(kubernetes_config_map.aw          elist(kubernetes_config_map.aw         
s_auth[*].id, [""])[0]                  s_auth[*].id, [""])[0]    

In the example above, icdiff highlighted the keyword “something” on line 11 in column 2. I really dig the highlighting, as well as the ability to print a configurable number of context lines before and after each change (the -U option). You can also set the output column width with --cols, which is helpful when you are working on the command line.

Using Kubernetes server side validation to validate your deployment manifests

This article was posted on 2020-06-29 01:00:00 -0500

Kubernetes server-side validation recently landed, and it's a super useful feature. Prior to server-side validation, you could use kubectl's dry-run feature to validate your deployment manifests:

$ kubectl apply --dry-run -f nginx.yaml

pod/nginx created (dry run)

When this command runs, the validation occurs on the machine that hosts the kubectl binary. While useful, there are a few cases where a manifest would validate locally, but wouldn't apply when you sent it to the API server. One example is if your kubectl binary was older than 1.16, and you tried to send a payload that used deprecated APIs to a 1.16+ API server. With the new server-side dry-run feature, you can have the API server itself validate the manifest.

To use server-side validation, you pass the value “server” to the “--dry-run” option:

$ kubectl apply --dry-run=server -f nginx.yaml

error: error validating "nginx.yaml": error validating data: ValidationError(Pod.spec.containers[0].resources): unknown field "requestss" in io.k8s.api.core.v1.ResourceRequirements; if you choose to ignore these errors, turn validation off with --validate=false

If the API server detects an issue, kubectl will note that in the output. Super useful feature, and definitely one you should add to your CI/CD pipeline if kubectl is your deployment tool of choice.
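
Since kubectl exits non-zero when server-side validation fails, wiring this into CI can be as simple as a small shell wrapper. Here's a minimal sketch (the manifests/ directory is an assumption, and the guard that skips when kubectl is absent is there purely for illustration):

```shell
#!/bin/sh
# Server-side dry-run every manifest; exit non-zero on the first failure.
set -e
if command -v kubectl >/dev/null 2>&1; then
    for manifest in manifests/*.yaml; do
        echo "validating ${manifest}"
        kubectl apply --dry-run=server -f "${manifest}"
    done
else
    echo "kubectl not found, skipping validation"
fi
```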

Using mkcert to quickly create certificates for testing and development environments

This article was posted on 2020-06-29 00:00:00 -0500

As a developer, operator, and architect, I am always evaluating technological solutions. A fair number of these solutions use TLS, which requires minting new certificates. I recently came across mkcert, which makes it SUPER easy to provision new certificates for development and testing. To get started with mkcert, you will need to run it with the “-install” option:

$ mkcert -install

Created a new local CA at "/home/vagrant/.local/share/mkcert" 💥
The local CA is now installed in the system trust store! ⚡️
The local CA is now installed in the Firefox and/or Chrome/Chromium trust store (requires browser restart)!

This will create a new CA certificate in $HOME/.local/share/mkcert, and update your trust stores so curl, Firefox, etc. won’t complain when they connect to a TLS endpoint that uses a mkcert minted certificate. To actually create a certificate, you can run mkcert with the common name you want assigned to the certificate:

$ mkcert localhost

Using the local CA at "/home/vagrant/.local/share/mkcert" ✨

Created a new certificate valid for the following names 📜
 - "localhost"

The certificate is at "./localhost.pem" and the key at "./localhost-key.pem" ✅

That’s it! You now have a RootCA, a private key, and an X.509 certificate to use for testing. It takes seconds to create them, and you can fire up your favorite service with the generated certs:

$ openssl s_server -cert localhost.pem -key localhost-key.pem -www -accept 8443 &

$ curl -D - https://localhost:8443

ACCEPT
HTTP/1.0 200 ok
...
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: XXXX
    Session-ID-ctx: 01000000
    Master-Key: XXXX
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1593446296
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
...

The certificate and private key are created in your current working directory, and the RootCA certificate is placed in $HOME/.local/share/mkcert by default. All of these files are PEM encoded, so openssl and company can be used to print their contents. In addition, mkcert will populate the nssdb file in your home directory with the RootCA:

$ ls -la /home/vagrant/.pki/nssdb

total 32
drwxrw----. 2 vagrant vagrant    55 Jun 29 15:45 .
drwxrw----. 3 vagrant vagrant    19 Mar 31 18:20 ..
-rw-------. 1 vagrant vagrant 10240 Jun 29 15:45 cert9.db
-rw-------. 1 vagrant vagrant 13312 Jun 29 15:45 key4.db
-rw-------. 1 vagrant vagrant   436 Jun 29 15:45 pkcs11.txt

$ certutil -L -d sql:/home/vagrant/.pki/nssdb

Certificate Nickname                                         Trust Attributes
                                                             SSL,S/MIME,JAR/XPI

mkcert development CA 196291963499902809203365320023044568657 C,,
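
Since everything is PEM encoded, openssl's x509 subcommand can print a certificate's contents. The sketch below mints a throwaway self-signed pair with openssl (so the example is self-contained), then prints the fields you'd typically check; the same x509 invocation works against mkcert's ./localhost.pem:

```shell
# Mint a throwaway self-signed pair so this sketch is self-contained;
# substitute mkcert's localhost.pem to inspect its output instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=localhost" -keyout demo-key.pem -out demo-cert.pem 2>/dev/null

# Print the subject, issuer, and validity window.
openssl x509 -in demo-cert.pem -noout -subject -issuer -dates
```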

While I still love OpenSSL and cfssl, mkcert is my new go-to for quickly minting certificates for development and testing. Amazing stuff!

Enabling HashiCorp Vault auto-completion

This article was posted on 2020-06-16 00:00:00 -0500

HashiCorp Vault has become one of my favorite technologies over the past year. Secrets management is a non-trivial undertaking, and I'm routinely blown away by how easy Vault makes it. One nifty thing I recently learned is that vault ships with shell auto-completion, which you can enable with the “-autocomplete-install” option:

$ vault -autocomplete-install && source $HOME/.bashrc
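
Under the hood, “-autocomplete-install” simply appends a completion hook to your shell's rc file. For bash, the line it adds looks something like the following (the binary path is an assumption; check your .bashrc to see what was actually written):

```shell
# Tell bash to ask the vault binary itself for completion candidates.
complete -C /usr/local/bin/vault vault
```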

Once enabled, you can type vault followed by a tab to see all of the available options:

$ vault <tab>

agent      delete     login      plugin     secrets    token
audit      kv         namespace  policy     server     unwrap
auth       lease      operator   print      ssh        write
debug      list       path-help  read       status

This also works for subcommands, so typing “vault audit” followed by a tab will display the options that can be passed to the audit subcommand. I'm a huge fan of auto-completion, and try to use it whenever I can to improve my efficiency.