Over the past few months, I’ve been updating various Terraform modules to utilize the new features in 0.12. Among these is the ability to generate nested blocks dynamically with for_each inside a dynamic block. Utilizing this new feature has allowed me to reduce the size of my security group definitions while making them more readable. To show this feature in action, I will create a new map variable with the port as the key, and a list of CIDR blocks to allow in as the value:
variable "ec2_ingress_ports_default" {
  description = "Allowed EC2 ingress ports and their permitted source CIDR blocks"
  type        = map(list(string))
  default = {
    "22"  = ["192.168.1.0/24", "10.1.1.0/24"]
    "443" = ["0.0.0.0/0"]
  }
}
To populate the ingress rules, you can define a dynamic block and use for_each to iterate through the map, populating one ingress stanza per entry:
resource "aws_security_group" "ec2_default_rules" {
  name = "ecs_default_sg_rules"

  dynamic "ingress" {
    for_each = var.ec2_ingress_ports_default
    content {
      from_port   = ingress.key
      to_port     = ingress.key
      cidr_blocks = ingress.value
      protocol    = "tcp"
    }
  }
}
The final result will be one or more ingress rules, each defining the CIDR block source IPs that are allowed to connect to the port:
+ {
+ cidr_blocks = [
+ "192.168.1.0/24",
+ "10.1.1.0/24",
]
+ description = ""
+ from_port = 22
+ ipv6_cidr_blocks = []
+ prefix_list_ids = []
+ protocol = "tcp"
+ security_groups = []
+ self = false
+ to_port = 22
},
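The same pattern extends naturally to per-rule metadata. Here is an illustrative sketch, not from my actual modules, that attaches a description to each rule by looking it up in a second, hypothetical map (var.ec2_ingress_descriptions) keyed by the same port numbers:

```hcl
# Illustrative only: assumes a hypothetical var.ec2_ingress_descriptions
# map keyed by the same ports as var.ec2_ingress_ports_default.
resource "aws_security_group" "ec2_described_rules" {
  name = "ec2_described_sg_rules"

  dynamic "ingress" {
    for_each = var.ec2_ingress_ports_default
    content {
      from_port   = ingress.key
      to_port     = ingress.key
      cidr_blocks = ingress.value
      protocol    = "tcp"
      # lookup() falls back to an empty string when no description exists
      description = lookup(var.ec2_ingress_descriptions, ingress.key, "")
    }
  }
}
```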
I’ve been able to drastically reduce the amount of HCL in my custom modules, which is always a good thing. There are tons of uses for this, and I will continue to refine things as I clean up old Terraform .tf files.
If you’ve worked with the various cloud providers, you’ve probably realized the value that comes with tagging resources. For billing and searching, I like to create a default set of tags that are applied to every resource. These include the group that owns the resource, the application type, and one or more operational tags. To keep things DRY, I keep a tags.tf file with entries similar to the following:
variable "default_ec2_tags" {
  description = "Default set of tags to apply to EC2 instances"
  type        = map(string)
  default = {
    Environment = "Production"
    SupportTeam = "PlatformEngineering"
    Contact     = "group@example.com"
  }
}
This file then becomes a one-stop-shop for defining tags that apply to everything in a project. When I create resources, I use merge to combine the defaults with resource specific tags:
resource "aws_instance" "nodes" {
  count                       = var.kafka_broker_count
  ami                         = var.kafka_ami_image
  instance_type               = var.kafka_instance_type
  vpc_security_group_ids      = var.kafka_security_group_list
  availability_zone           = element(var.availability_zones, count.index)
  associate_public_ip_address = var.associate_public_ip
  subnet_id                   = element(aws_subnet.public-subnet.*.id, count.index)
  tags = merge(
    { Name = "Kafka-Broker-${format("%02d", count.index)}" },
    var.default_ec2_tags,
  )
}
This will produce a union of both, resulting in the following plan output:
$ terraform plan
+ tags = {
    + "Environment" = "Production"
    + "Name"        = "Kafka-Broker-01"
    + "SupportTeam" = "PlatformEngineering"
    + "Contact"     = "group@example.com"
  }
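One thing worth knowing about merge() is its precedence: when the same key appears in more than one argument, the value from the later argument wins. A minimal sketch (the local name here is just for illustration):

```hcl
# merge() resolves duplicate keys in favor of later arguments, so if
# var.default_ec2_tags ever contained a "Name" key, it would override
# the literal Name below. Order the arguments accordingly.
locals {
  broker_tags = merge(
    { Name = "Kafka-Broker-00" },
    var.default_ec2_tags,
  )
}
```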
This has proven to be super useful, though it took me some time to get the tags just right. When you are trying to track down a billing issue, or locate an application owner in a sea of resources, tags will quickly become your best friend.
I am a long-time Terraform user. The number of providers available for Terraform, with a resource for pretty much every cloud service, makes it super appealing. But even with several years of production usage, I still find myself scratching my head at times when I’m writing my interpolations. Terraform provides a really nice shell to assist with this, and it can be accessed with the terraform “console” option:
$ terraform console
Once you are in the shell, typing an expression will produce immediate feedback:
> list("a","b","c")[0]
a
The expression above creates a list, and then displays the first element in it. A more realistic example would be grabbing a given subnet from a list:
> aws_subnet.vpc-foo-public-subnets-proxies.*.id[0]
subnet-0a211c324068d847e
In this example I use the splat operator to get all subnets in a list, and then display the subnet in index 0. The console is also super useful for playing with data sources. To see what a data source contains, you can type it into the console:
> data.aws_availability_zones.available
{
"group_names" = [
"us-east-1",
]
"id" = "2020-04-26 14:08:51.531061491 +0000 UTC"
"names" = [
"us-east-1a",
"us-east-1b",
"us-east-1c",
"us-east-1d",
"us-east-1e",
"us-east-1f",
]
"state" = "available"
"zone_ids" = [
"use1-az2",
"use1-az4",
"use1-az6",
"use1-az1",
"use1-az3",
"use1-az5",
]
}
Once you see the structure (maps, lists, strings, etc.), you can use the built-in functions to massage the data to your liking:
> slice(data.aws_availability_zones.available.names,2,4)
[
"us-east-1c",
"us-east-1d",
]
One other super useful feature is the ability to play with maps. You can retrieve map values:
> lookup(map("a","avalue","b","bvalue"), "a")
avalue
Or the keys that comprise a map:
> keys(map("a","avalue","b","bvalue"))
[
"a",
"b",
]
Or both the keys and their values:
> { for k, v in map("a", "one", "b", "two"): v => k }
{
  "one" = "a"
  "two" = "b"
}
As of Terraform 0.12, you now have the ability to use for and for_each in your HCL, along with conditional logic (x ? y : z, which existed previously). Testing logic operations in the console is as easy as:
> keys(map("a","avalue","b","bvalue"))[0] == "a" ? "true" : "false"
true
> keys(map("a","avalue","b","bvalue"))[0] == "ab" ? "true" : "false"
false
The same goes for testing looping logic:
> [for string in ["one", "two", "three"] : upper(string)]
[
"ONE",
"TWO",
"THREE",
]
> [ for key in keys(map("a", "one", "b", "two")) : key ]
[
"a",
"b",
]
> [ for value in map("a", "one", "b", "two") : value ]
[
"one",
"two",
]
The console has become my best friend when writing Terraform code. I can test my code in the shell, then commit it once I know it will produce the result I want. This also reduces the number of times I need to come back to my code after reviewing my terraform plans. Viva la Terraform!
The past couple of weeks I have been digging into gRPC and HTTP2 in my spare time. I needed a way to review the requests and responses, and an easy way to explore gRPC servers. I also wanted something to dump protocol buffers in a human readable format. It turns out grpcurl was written for just this purpose, and has been super useful for grokking gRPC.
grpcurl has a number of useful options. The list option can be used to display the service interfaces your gRPC server exposes:
$ grpcurl -v prefetch.net:443 list
build.stack.fortune.FortuneTeller
grpc.reflection.v1alpha.ServerReflection
In the previous example, I listed the service interfaces in the fortune teller sample application. In order for this to work, the server you are curling must support reflection. Once you have the list of service interfaces, you can list the methods they expose by passing the service name to list:
$ grpcurl -v prefetch.net:443 list build.stack.fortune.FortuneTeller
build.stack.fortune.FortuneTeller.Predict
To get additional information on the methods returned, you can pass a method name to describe:
$ grpcurl -v prefetch.net:443 describe build.stack.fortune.FortuneTeller.Predict
build.stack.fortune.FortuneTeller.Predict is a method:
rpc Predict ( .build.stack.fortune.PredictionRequest ) returns ( .build.stack.fortune.PredictionResponse );
This will describe the type of the symbol, and print the request and response types. The describe option can also be used to print additional detail about the request and response objects:
$ grpcurl -v prefetch.net:443 describe .build.stack.fortune.PredictionResponse
build.stack.fortune.PredictionResponse is a message:
message PredictionResponse {
string message = 1;
}
In the output above, we can see that the server returns a message with a string. To invoke a gRPC method, you can pass the method as an argument:
$ grpcurl -v prefetch.net:443 build.stack.fortune.FortuneTeller.Predict
Resolved method descriptor:
rpc Predict ( .build.stack.fortune.PredictionRequest ) returns ( .build.stack.fortune.PredictionResponse );
Request metadata to send:
(empty)
Response headers received:
access-control-allow-credentials: true
content-type: application/grpc
date: Mon, 20 Apr 2020 15:24:34 GMT
server: nginx
x-content-type-options: nosniff
x-xss-protection: 1; mode=block
Response contents:
{
"message": "Life is a POPULARITY CONTEST! I'm REFRESHINGLY CANDID!!"
}
Response trailers received:
(empty)
Sent 0 requests and received 1 response
With the verbose flag, you can dump the response headers and the response payload as a JSON object. This is a super cool utility, and it has been incredibly helpful for exploring and debugging gRPC code.
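Methods that expect request fields can also be fed a JSON request body with the -d flag. An illustrative sketch against the same fortune teller service (an empty object works here since we never inspected PredictionRequest's fields; real services will want actual field values, which you can discover with describe):

```shell
# Illustrative only: -d passes the request message as JSON. The exact
# fields depend on the method's request type.
grpcurl -d '{}' prefetch.net:443 build.stack.fortune.FortuneTeller.Predict

# The body can also be read from stdin with -d @, which is handy for
# piping in larger request payloads:
echo '{}' | grpcurl -d @ prefetch.net:443 build.stack.fortune.FortuneTeller.Predict
```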
When you support large Kubernetes clusters, you need efficient methods to list pods, nodes, and deployments when you are troubleshooting issues. Kubectl has a number of built-in ways to do this: you can use jsonpath, selectors, and sort-by statements to return the exact data you need. In addition, you can use the kubectl “-l” option to list objects that contain (or don’t contain) a label. To illustrate this, let’s assign the label “env=staging” to a node:
$ kubectl label node test-worker env=staging
node/test-worker labeled
Once that label is assigned, you can list all nodes with that label by passing it as an argument to the list option:
$ kubectl get nodes -l env=staging
NAME STATUS ROLES AGE VERSION
test-worker Ready <none> 2m15s v1.17.0
This gets even more useful when you add in logic operations:
$ kubectl get nodes -l 'env!=staging'
NAME STATUS ROLES AGE VERSION
test-control-plane Ready master 10m v1.17.0
test-control-plane2 Ready master 10m v1.17.0
test-worker2 Ready <none> 9m31s v1.17.0
test-worker3 Ready <none> 9m41s v1.17.0
The example above will list all nodes that don’t contain the label ‘env=staging’. Super useful stuff!
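Equality selectors are just the start; kubectl also supports set-based selectors, which come in handy when a label can take one of several values. A few illustrative commands (the env label is carried over from the example above; “production” is a made-up second value):

```shell
# List nodes whose env label matches any of the listed values
kubectl get nodes -l 'env in (staging, production)'

# List nodes that have an env label at all, regardless of its value
kubectl get nodes -l 'env'

# List nodes that do NOT have an env label
kubectl get nodes -l '!env'
```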