Using external-dns to manage DNS entries in Kubernetes clusters

Kubernetes provides the Service resource to distribute traffic across one or more pods. I won’t go into detail on what a Service is, since it’s covered in depth elsewhere. For Internet-facing applications, this Service will typically be of type LoadBalancer. If you are running in the “cloud,” creating a Service of type LoadBalancer will trigger cloud-provider-specific logic to provision an external load balancer (either private or public) with your Service as the target. Once the load balancer is provisioned, your cloud provider will return a long DNS name to represent the load balancer endpoint:

$ kubectl get svc

NAME            TYPE           CLUSTER-IP       EXTERNAL-IP                                                             PORT(S)          AGE
matty-service   LoadBalancer   1.2.3.4          XXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXX.us-east-1.elb.amazonaws.com         8081:32759/TCP   3m16s

If you are hosting services for customers, you don’t want them to use that long string to access your service. The DNS name also won’t work when you are using name-based virtual hosting or an Ingress that routes on hostnames. What you want is to hand your customer a vanity name (e.g., example.com) that maps to the long load balancer FQDN through DNS. You could manually go into your provider’s DNS service to create this mapping, or you can let external-dns take care of that for you!
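
Under the hood, that vanity mapping is just a DNS record pointing your name at the load balancer FQDN. In zone-file notation it would look something like this (the TTL and record type are illustrative; with Route53 ALIAS records the mechanics differ slightly, as we’ll see below):

matty.prefetch.net.    300    IN    CNAME    XXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXX.us-east-1.elb.amazonaws.com.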

If you are still reading, external-dns may sound enticing to you. To get started, you will first need to grab the deployment manifests for your DNS provider. In my case, I am using Route53, so I installed the RBAC manifests after reviewing them. External-dns also needs to make API calls to your cloud provider to list and update DNS records. This requires permissions, which the installation guides cover in depth.

External-dns is highly configurable and has numerous flags to control how it manages DNS updates. I would suggest amending the defaults based on your risk tolerance and operational practices. At a minimum, I would suggest reviewing the following flags:

--aws-prefer-cname          - Create an ALIAS or CNAME record in Route53.
--namespace=""              - Limit the namespaces external-dns looks for annotations in.
--publish-internal-services - Publish DNS for ClusterIP services.
--provider=aws              - The provider you need external-dns to work with.
--policy=upsert-only        - Controls how records are synchronized.
--txt-owner-id="default"    - The name to assign to this external-dns instance.
--txt-prefix=""             - Custom prefix prepended to each DNS ownership (TXT) record name.
--domain-filter=domains     - The list of domains external-dns should operate on.
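
To give a sense of how these flags fit together, here is a minimal sketch of the container spec from an external-dns Deployment. The image tag, owner id, and domain are illustrative values for this example, not a drop-in configuration:

      containers:
      - name: external-dns
        image: registry.opensource.zalan.do/teapot/external-dns:latest   # use the image/tag from your provider's install guide
        args:
        - --source=service              # watch Service objects for annotations
        - --provider=aws                # manage records in Route53
        - --policy=upsert-only          # add and update records, never delete
        - --txt-owner-id=my-cluster     # illustrative owner id written to TXT ownership records
        - --domain-filter=prefetch.net  # only operate on this zone
        - --namespace=default           # only watch this namespace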

One flag to think through is “--policy”. This controls how external-dns will manage records when services are added and removed. External-dns has three modes of operation: sync, upsert-only, and create-only. These are described in policy.go:

// Policies is a registry of available policies.
var Policies = map[string]Policy{
    "sync":        &SyncPolicy{},
    "upsert-only": &UpsertOnlyPolicy{},
    "create-only": &CreateOnlyPolicy{},
}

// SyncPolicy allows for full synchronization of DNS records.
type SyncPolicy struct{}

// Apply applies the sync policy which returns the set of changes as is.
func (p *SyncPolicy) Apply(changes *Changes) *Changes {
    return changes
}

// UpsertOnlyPolicy allows everything but deleting DNS records.
type UpsertOnlyPolicy struct{}
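
The excerpt ends at the UpsertOnlyPolicy type declaration. Its Apply method (a few lines further down in policy.go) carries the create and update changes forward while dropping deletions, which is why upsert-only can never remove a record. It looks roughly like this:

// Apply applies the upsert-only policy which strips out any deletions.
func (p *UpsertOnlyPolicy) Apply(changes *Changes) *Changes {
    return &Changes{
        Create:    changes.Create,
        UpdateOld: changes.UpdateOld,
        UpdateNew: changes.UpdateNew,
    }
}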

I was concerned about entries being removed when I first started using external-dns, so I wanted to point this out (FWIW: a working backup and recovery solution eased my fears). To get external-dns to create DNS records for your service, you need to add an “external-dns.alpha.kubernetes.io/hostname” annotation containing the DNS entry to create. Here is an example annotated service which will trigger the creation of matty.prefetch.net:

apiVersion: v1
kind: Service
metadata:
  name: matty-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: matty.prefetch.net
spec:
  selector:
    run: nginx-matty
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
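
If the service already exists, you can also bolt the annotation on imperatively with kubectl (using the service name from the manifest above):

$ kubectl annotate service matty-service external-dns.alpha.kubernetes.io/hostname=matty.prefetch.net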

After the service is created, the external-dns pod will log messages indicating the records were created:

$ kubectl logs external-dns-XXXXX

time="2020-01-27T20:28:35Z" level=info msg="Desired change: CREATE matty.prefetch.net A [Id: /hostedzone/XXXXXXXXXXX]"
time="2020-01-27T20:28:35Z" level=info msg="Desired change: CREATE matty.prefetch.net TXT [Id: /hostedzone/XXXXXXXXXXXX]"
time="2020-01-27T20:28:35Z" level=info msg="2 record(s) in zone prefetch.net. [Id: /hostedzone/XXXXXXXXX] were successfully updated"

In the example above, external-dns created a Route53 ALIAS record pointing matty.prefetch.net to the load balancer DNS name returned by the cloud provider. It also created a TXT ownership record to indicate that external-dns owns the entry:

$ dig +short matty.prefetch.net txt

"heritage=external-dns,external-dns/owner=my-hostedzone-identifier,external-dns/resource=service/default/matty-service"

To verify the entry resolves, you can run dig:

$ dig +short matty.prefetch.net

34.204.233.20
52.87.68.17

The IPs returned should match the ones you get when resolving the load balancer DNS name directly:

$ dig +short XXXXXXXXXXXXXXXXXXXXXXXX-XXXXXXXXXX.us-east-1.elb.amazonaws.com

34.204.233.20
52.87.68.17

Now the big question! Would I run this in production? I’m not sure yet. Currently, I’m using it to provision minimized EKS clusters for developers, and that is working well. There are some large organizations using it, but there are a few GitHub issues that concern me. Once I get a bit more comfortable with it, I won’t hesitate to use it in production. The code is readable and well organized, and the community is active. Those are always good signs!

This article was posted on 2020-01-28 11:36:59 -0500.