Terraform Pt. 3 - the backend...

Once you have convinced (or bribed) your team to adopt Terraform, it's time to find a better place for your tfstate files than a local backend (i.e. your machine). To enable collaboration without breaking things or stepping on each other's toes, you need a backend that offers remote storage, versioning, and file locking.

Terraform supports a number of backends, two of the more popular being:

  • an AWS S3 bucket (for storage and versioning) along with a DynamoDB table (for file locking)
  • Terraform Cloud

S3 and DynamoDB

Let's make a bucket:

provider "aws" {  
    profile = "default"
    region  = "us-east-1" 
    }
    resource "aws_s3_bucket" "s3terraformbucket" {
      bucket = "s3terrabucket1002"
     }
    resource "aws_s3_bucket_versioning" "versioning_example" {
    bucket = "s3terrabucket1002"
    versioning_configuration {
      status = "Enabled"
     }
    }
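
Since tfstate files routinely contain secrets, it's worth encrypting the state bucket at rest as well. A minimal sketch using the provider v4-style aws_s3_bucket_server_side_encryption_configuration resource (SSE-S3 here; swapping in KMS is your call):

    resource "aws_s3_bucket_server_side_encryption_configuration" "state_encryption" {
      bucket = aws_s3_bucket.s3terraformbucket.id

      rule {
        apply_server_side_encryption_by_default {
          sse_algorithm = "AES256" # SSE-S3; use "aws:kms" for a KMS-managed key
        }
      }
    }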

...and now the DynamoDB table:

Terraform aws_dynamodb_table resource reference

Note: the key must be named exactly "LockID" - even the capitalization matters. If you give it a different name, things will error out.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.2"
    }
  }
}

resource "aws_dynamodb_table" "terra-lock" {
  name           = "tfstate"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name = "tf state lock table"
  }
}
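
If you'd rather not think about read/write capacity for a table that only ever holds short-lived lock entries, on-demand billing is an alternative; a hedged sketch of the same table with billing_mode switched over:

    resource "aws_dynamodb_table" "terra-lock" {
      name         = "tfstate"
      billing_mode = "PAY_PER_REQUEST" # on-demand: no capacity settings to manage
      hash_key     = "LockID"

      attribute {
        name = "LockID"
        type = "S"
      }
    }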

Popping and locking

Now that we have an S3 bucket with versioning to store the tfstate file in and a DynamoDB table to help with locking, we can migrate state to its new home by adding the following block:


terraform {
  backend "s3" {
    bucket         = "s3terrabucket1002" # S3 bucket names must be globally unique
    key            = "vm-state/terraform.tfstate"
    dynamodb_table = "tfstate"
    region         = "us-east-1"
  }
}
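
One gotcha: the backend block cannot reference variables or expressions, so values either stay hard-coded or get supplied via partial configuration. A sketch of the latter, assuming a file named backend.hcl (the filename is my choice) passed in with terraform init -backend-config=backend.hcl, while the block in your .tf shrinks to an empty backend "s3" {}:

    # backend.hcl - supplied via: terraform init -backend-config=backend.hcl
    bucket         = "s3terrabucket1002"
    key            = "vm-state/terraform.tfstate"
    dynamodb_table = "tfstate"
    region         = "us-east-1"
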
A .tf file that spins up an EC2 instance while storing its state remotely could, for example, look like this:

terraform {
  backend "s3" {
    bucket         = "s3terrabucket1002"
    key            = "vm-state/terraform.tfstate"
    dynamodb_table = "tfstate"
    region         = "us-east-1"
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "terra_test" {
  ami           = "ami-07d02ee1eeb0c996c"
  instance_type = "t2.micro"
  subnet_id     = "subnet-005d0d592aab263bf"

  tags = {
    Name = "terravm"
  }
}
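
If you want to see where the instance landed without digging through the console, an output block is handy; a minimal sketch (the output name is my own):

    output "instance_private_ip" {
      description = "Private IP of the test VM"
      value       = aws_instance.terra_test.private_ip
    }
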
After running terraform init, Terraform will detect that you want to rehome state to S3 ("Terraform has detected that the configuration specified for the backend has changed") and will copy the file over to the bucket.
P.S. You can also bring the state back to a local backend simply by removing the block and rerunning terraform init.

    Initializing the backend...
    Backend configuration changed!

    Terraform has detected that the configuration specified for the backend
    has changed. Terraform will now check for existing state in the backends.

    Successfully configured the backend "s3"! Terraform will automatically
    use this backend unless the backend configuration changes.

How you can tell it's working in the AWS console

Locking: (screenshot of lock entries in the DynamoDB table)

Versioning: (screenshot of object versions in the S3 bucket)

Terraform Cloud as a backend

To use Terraform Cloud as a backend you need to:

  • Sign up for Terraform Cloud (there's a free tier).
  • Have the Terraform CLI installed (which, if you are reading this, you probably already do).
  • Enter your AWS keys in the workspace's Variables section (and mark them as sensitive).
  • Add the code block below and run terraform init, changing organization to the one you defined when creating your Terraform Cloud account and the tags to whatever you like.

terraform {
  cloud {
    organization = "organization"

    workspaces {
      tags = ["resource:vm"]
    }
  }
}
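
If tag-based workspace selection feels like overkill, the workspaces block also accepts a single named workspace instead; a sketch (the workspace name is made up):

    terraform {
      cloud {
        organization = "organization"

        workspaces {
          name = "vm-workspace" # hypothetical name; replaces the tags argument
        }
      }
    }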

 

Run terraform init - notice the "Migrating from backend "s3" to Terraform Cloud" message.

In my case it errored out because I needed to log into Terraform Cloud first:

Run terraform login (which will ask you to visit the highlighted URL and grab a token).

Upon successful authentication you are greeted with cool ASCII art :) 

Now comes the cool stuff - this is what terraform plan and terraform apply look like from space when observed in the Terraform Cloud UI.

Lessons learned

Pinning your provider version is important.

In my brief experiments I ran into two issues, both related to Terraform provider changes around versioning. To solve one, I had to either pin the provider to an older version or adopt the newer way of declaring an S3 bucket with versioning enabled (the standalone aws_s3_bucket_versioning resource shown above). In another instance, I again had to pin the provider version in order to successfully spin up a VM (discussion on this here).

Given the above, if you adopt Terraform, you will have to operationalize a process to regularly validate and revise your .tf files with each Terraform and provider release.
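
To make that process less painful, pin both the Terraform core version and the provider version so upgrades happen on purpose rather than by surprise; a minimal sketch (the version ranges are examples, not recommendations):

    terraform {
      required_version = "~> 1.1" # pin the Terraform CLI version range

      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 4.2"
        }
      }
    }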

Additional Resources

S3 and DynamoDB 

Terraform Cloud Configuration

You should also read:

First Steps in Terraforming

Why Terraform - cloud agnostic: skills learned deploying to AWS are transferable when working with other hyperscalers like Azure or GCP...and even applicable…