Create Infrastructure as Code with Terraform

March 27, 2020 · by Ian Cass and Anthony Hogg ·

This blog is part of the Akamai Platform Release, where we’re giving you all the details about what Akamai has added and improved for developers! You can view all of our updates here.  

You’ve probably already read our introductory blog post on the new Akamai Terraform Provider, but if not, go check it out to see our Terraform demo video and to review some of the basics.

This blog will dive a bit deeper than our introductory post. We’ll walk through three advanced use cases to show you how you can strengthen your continuous integration and continuous deployment (CI/CD) pipeline with Akamai and Terraform. 

You’ll find code samples for some of the examples below in our Terraform community examples Git repository. We love collaborating with our users, so feel free to open a pull request with your unique implementation!

Leverage Terraform to Set Up Property Snippets

If you want to use the Akamai Terraform provider, you’ll need to provide Property Manager rules as raw JSON. This is configured via the "rules" argument, as described in the akamai_property docs.

The argument expects a string of valid JSON rather than a filename. Terraform makes this easy by providing a "local_file" data source that you can use to load the file.

Define the "local_file" data source and reference it like this:

data "local_file" "rules" {
  filename = "${path.module}/rules.json"
}
resource "akamai_property" "example" {
  ....
  rules = "${data.local_file.rules.content}"
}

As you can see above, defining a "local_file" data source loads the file so you can use it in the "akamai_property" resource. The example simply takes the content of rules.json and provides it to the property as a string (by invoking .content). In a DevOps environment, however, we want the flexibility to inject variables into the JSON rather than hard-coding values.

This concept of utilizing a property value from one data/resource definition in another resource allows Terraform to work out the relationship between different objects. It can then determine the order in which things need to be processed.

The FAQ section of our documentation shows how to do this by using the Terraform "template_file" data source rather than "local_file".

You can define the rules.json with the variables you see below.

data "template_file" "rules" {
  template = "${file("${path.module}/rules.json")}"
  vars = {
    origin = "${var.origin}"
  }
}
resource "akamai_property" "example" {
  ....
  rules = "${data.template_file.rules.rendered}"
}

{
  "name": "origin",
  "options": {
    "hostname": "${origin}",
    ...
  }
},
...

Now, your JSON is templated, but it's still one large monolithic blob. That would be fine if all your properties were exactly the same, but often different properties require different rule sets. This is getting closer to how we'd like to do things in a DevOps world, but it's not quite there yet.

Create a base rule template 

Creating a base rule template allows you to import rule sets individually and gives you more flexibility.

First, you’ll need to create a directory structure.

rules/rules.json
rules/snippets/default.json
rules/snippets/performance.json

The "rules" directory contains a single file, "rules.json", and a sub-directory that contains all of the rule snippets you’ll need. Below, you can see a basic template for your JSON.

{
  "rules": {
    "name": "default",
    "children": [
      ${file("${snippets}/default.json")},
      ${file("${snippets}/performance.json")}
    ],
    "options": {
      "is_secure": true
    }
  },
  "ruleFormat": "v2018-02-27"
}

And, of course, you can always add more. Each snippet should be a JSON fragment for one section of the rule tree (you can also reference a central repo). Here we pull in two snippets: one for the default rules and another for the performance rule.
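For illustration, a snippet such as snippets/performance.json could look something like the following. The behavior name and options here are placeholders for whatever your rule set actually needs, not a definitive configuration:

```
{
  "name": "Performance",
  "criteria": [],
  "behaviors": [
    { "name": "sureRoute", "options": { "enabled": true } }
  ],
  "children": []
}
```

Because each snippet is a self-contained fragment of the rule tree, it can be reviewed and versioned independently of the base template.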

To make this work, we need to remove the "template_file" section we added earlier and replace it with the following:

data "template_file" "rule_template" {
  template = "${file("${path.module}/rules/rules.json")}"
  vars = {
    snippets = "${path.module}/rules/snippets"
  }
}
data "template_file" "rules" {
  template = "${data.template_file.rule_template.rendered}"
  vars = {
    tdenabled = var.tdenabled
  }
}

Note: This tells Terraform to process rules.json and pull in each referenced fragment, then pass the output through another template_file section to process it a second time. The first pass assembles the entire JSON, and the second pass replaces the variables that each fragment needs (such as tdenabled above).
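To see why the second pass matters, note that a snippet can itself contain a variable. A hypothetical fragment using the tdenabled variable might look like this (the behavior shown is illustrative, not prescribed):

```
{
  "name": "tieredDistribution",
  "options": {
    "enabled": "${tdenabled}"
  }
}
```

The first pass inlines this fragment into the rule tree; the second pass then substitutes the actual value of tdenabled.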

Finally, use the rendered output in your property definition:

resource "akamai_property" "example" {
  ....
  rules = "${data.template_file.rules.rendered}"
}

Easier management for SaaS Providers

SaaS providers often want to maintain Property Manager configurations that are identical other than a few specific parameters.

Usually, it’s best to keep a separate configuration for each service instance, but this means you can end up with thousands of configs. Terraform lets you maintain all of your configs in a scalable way and creates an easy onboarding experience.

In this next example, I’ll show you how to use Terraform’s "for_each" meta-argument to iterate over a map and provision infrastructure for each entry.

You can define a complex object type variable that describes each of your customers' SaaS instances like so:

variable "customers" {
  type = map(object({
    username = string
    password = string
  }))
}

Then, you can populate this variable in your terraform.tfvars:

customers = {
  "foreach1.example.com" = {
    username = "test"
    password = "test"
  },
  "foreach2.example.com" = {
    username = "test2"
    password = "test2"
  }
}

Next, you can modify your main.tf to include the "for_each" logic in each resource that needs to be individually provisioned for each instance.

For example:

data "template_file" "rules" {
  for_each = var.customers

  template = data.template_file.rule_template.rendered
  vars = {
    username = each.value.username
    password = each.value.password
  }
}

Given the configuration for the "customers" variable, Terraform will create two instances of "template_file.rules". Each instance is referenced by its map key: one will be data.template_file.rules["foreach1.example.com"] and the other data.template_file.rules["foreach2.example.com"].

The key value itself isn’t important, only that it exists. In practical terms, this means you need to supply the key when you reference this resource from another resource.

resource "akamai_property" "property" {
  for_each = var.customers

  name        = each.key
  cp_code     = akamai_cp_code.cpcode[each.key].id
  contact     = [""]
  contract    = data.akamai_contract.contract.id
  group       = data.akamai_group.group.id
  product     = "prd_Site_Accel"
  rule_format = "v2018-02-27"

  hostnames = {
    "${each.key}" = akamai_edge_hostname.edge_hostname.edge_hostname
  }

  rules     = data.template_file.rules[each.key].rendered
  is_secure = true
}

You can see this in action when you do a "terraform apply".

akamai_property_activation.activation["foreach1.example.com"]: Modifying... [id=atv_8064436]
akamai_property_activation.activation["foreach2.example.com"]: Modifying... [id=atv_8064438]
akamai_property_activation.activation["foreach1.example.com"]: Modifications complete after 5s [id=atv_8064436]
akamai_property_activation.activation["foreach2.example.com"]: Still modifying... [id=atv_8064438, 10s elapsed]
...
akamai_property_activation.activation["foreach2.example.com"]: Modifications complete after 2m19s [id=atv_8064592]

You can also reference a particular instance to target something specific, for example:

terraform destroy -target='akamai_property_activation.activation["foreach1.example.com"]'

...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # akamai_property_activation.activation["foreach1.example.com"] will be destroyed
  - resource "akamai_property_activation" "activation" {
   - activate = true -> null
   - contact  = [
       - "myemail@akamai.com",
     ] -> null
   - id   = "atv_8064436" -> null
   - network  = "STAGING" -> null
   - property = "prp_594500-0aa0a95548921426a2d416e373c7354a48218ffa" -> null
   - status   = "ACTIVE" -> null
   - version  = 1 -> null
}

Build a Golden Pipeline using Terraform and Concourse CI

You may wish to automate your Akamai configuration management without sacrificing the ability to edit configurations in Property Manager. The Golden Pipeline is a good starting point for implementing this.


In this scenario, the master configuration is managed by human operators using Property Manager. Concourse CI tracks activations of the master configuration on the production network. When one occurs, the associated property version rule tree is retrieved and propagated to the QA, Preprod and Prod environments.


Triggering The Pipeline

We want to trigger the pipeline when a new version of the master is deployed to the Akamai production network. In Concourse, a very natural way of accomplishing this is by implementing the activation as a Concourse resource.

An example implementation of a Concourse property activation resource can be found here: https://github.com/ynohat/concourse-akamai/tree/master/property-activation-resource.

In addition to triggering the pipeline, the activation resource also retrieves the property rule tree and makes it available to the pipeline tasks that follow.

A different strategy would be to retrieve the rule tree directly using a Terraform data source. In this context, that would be less efficient, since every Terraform run would make an API call to retrieve the rule tree from the master. We avoid this by treating the rule tree as an immutable input to the entire pipeline.

Managing State

Terraform stores the state of the infrastructure as it knows it. When we run `terraform apply`, the configuration is compared to the stored state and the changes are orchestrated. When we run `terraform refresh`, the local state is reconciled with the actual state.

By default, Terraform stores state in a directory on the host system where the command is run. Since Concourse will run each task in an isolated, ephemeral container, local state will be lost between runs.

We recommend using one of Terraform’s alternate backends such as Consul, Etcd or S3 when using Terraform with automation - or indeed any situation involving concurrent execution!
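As a sketch, a minimal S3 backend configuration could look like the following. The bucket, key, and table names here are placeholders, and the DynamoDB table is an optional addition that enables state locking for concurrent runs:

```
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"       # placeholder bucket name
    key            = "akamai/terraform.tfstate" # path to the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"          # optional: enables state locking
  }
}
```

With a remote backend in place, every Concourse task container reads and writes the same shared state.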

Staying DRY

Because all environments are essentially clones of the master, there is no sense in maintaining each one of them as separate resources in the Terraform configuration.

Instead, we want to use one Terraform configuration, and inject variables specific to each environment. To make this work, we need to split the state by environment, otherwise Terraform would try to *replace* the QA environment with the pre-production environment and so forth.

This is accomplished by leveraging Terraform workspaces, one each for QA, Preproduction, and Production. Workspaces are a built-in mechanism for precisely this purpose: one configuration, multiple parallel versions of the state.

An example leveraging this workflow can be found here: https://github.com/terraform-providers/terraform-provider-akamai/tree/master/examples/community-examples/property/workspaces-test
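As a sketch of how this can work, the current workspace name is available as terraform.workspace, so environment-specific values can be looked up from a map. The settings below are hypothetical placeholders:

```
# Hypothetical per-environment settings, keyed by workspace name
locals {
  env_settings = {
    qa      = { hostname = "qa.example.com" }
    preprod = { hostname = "preprod.example.com" }
    prod    = { hostname = "www.example.com" }
  }
  # Select the settings for the active workspace
  settings = local.env_settings[terraform.workspace]
}
```

You would create and switch workspaces with "terraform workspace new qa" and "terraform workspace select qa" before each apply.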

Dealing With Activations

The Terraform configuration specifies two activation resources, one for staging and one for production. Here is the staging resource:

resource "akamai_property_activation" "staging" {
  property = akamai_property.default.id
  network  = "STAGING"
  activate = var.staging
  contact  = var.email
}
The pipeline first activates to staging by running the equivalent of:

terraform apply -var staging=true -var production=false

It then activates to production by running:

terraform apply -var staging=true -var production=true
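For reference, the production counterpart to the staging activation resource could look like this, assuming the var.production variable used in the commands above:

```
resource "akamai_property_activation" "production" {
  property = akamai_property.default.id
  network  = "PRODUCTION"       # activate on the production network
  activate = var.production     # toggled by the pipeline's second apply
  contact  = var.email
}
```

Driving both activations from variables lets a single configuration stage the rollout: staging first, then production.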