First Steps in Terraforming

Why Terraform

  • Cloud agnostic - skills learned deploying to AWS transfer to other hyperscalers like Azure or GCP...and are even applicable to on-prem solutions like VMware vSphere.
  • Mindshare - a large and engaged user base; chances are someone already has a solution for your problem.
  • HCL (HashiCorp Configuration Language) is fairly easy to pick up (yes, even for me).
  • You can see the changes you are about to make before you actually make them (personal favorite).
  • A large number of providers (think AWS, Azure, GCP, Kubernetes, Oracle Cloud, Alibaba Cloud, etc...).


Configure AWS CLI

Head over to AWS IAM and create a user with admin rights, but instead of provisioning a password, just create access keys. You'll need these so Terraform can authenticate to AWS and manage resources.

Once you are in possession of said keys:

aws configure  
    AWS Access Key ID [****************634P]:
    AWS Secret Access Key [****************/fXa]:
    Default region name [us-east-1]:
    Default output format [None]:

In my case, since I've already gone through this step, the values have already been populated.
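Under the hood, aws configure just writes these values to ~/.aws/credentials in INI format. A sketch of that layout, written to a temp path here with obviously fake placeholder keys:

```shell
# aws configure stores keys in ~/.aws/credentials; this writes the same
# layout to a temp path using placeholder values (never commit real keys).
mkdir -p /tmp/aws-demo
cat > /tmp/aws-demo/credentials <<'EOF'
[default]
aws_access_key_id     = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretKey
EOF
grep -c '^aws_' /tmp/aws-demo/credentials   # prints 2
```

Named profiles get their own [profile-name] sections in the same file, which is what the profile argument in the provider block below selects.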


provider "aws" {
    profile = "default"
    region  = "us-east-1"
}

/* The AMI IDs are region specific. The same AMI in us-east-1 will have a
different id in us-west-1. */
resource "aws_instance" "terra_test" {
    ami           = "ami-07d02ee1eeb0c996c"
    instance_type = "t2.micro"
    subnet_id     = "subnet-005d0d592aab263bf"

    tags = {
        Name = "terravm"
    }
}
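Since AMI IDs are region specific, one way to avoid hard-coding them is to look the image up with the aws_ami data source. A sketch - the name filter for Amazon Linux 2 is an assumption about which image you actually want:

```hcl
# Look up the most recent Amazon Linux 2 AMI in the provider's region.
data "aws_ami" "amazon_linux" {
    most_recent = true
    owners      = ["amazon"]

    filter {
        name   = "name"
        values = ["amzn2-ami-hvm-*-x86_64-gp2"]
    }
}

resource "aws_instance" "terra_test" {
    # Resolves to the right ID whether you deploy to us-east-1 or us-west-1.
    ami           = data.aws_ami.amazon_linux.id
    instance_type = "t2.micro"
}
```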

On with the show...

terraform init 👍

Initializes the directory and pulls the necessary provider plugins. 

terraform init

terraform plan ✍

A fantastic feature - Terraform lets you preview changes before applying them.

+ create | - destroy | ~ update in-place | -/+ replace (destroy then create, or create then destroy)

terraform apply 🤞

The command produces the same output as terraform plan, giving you one last chance to examine the changes you are about to introduce before actually committing them by typing "yes".


terraform destroy 🤘😈 "...there's no undo"


Things to avoid like the plague (or COVID)

Thou shalt not mess with infra deployed via Terraform through the web UI, CLI, API, or any other out-of-band means. It defeats the purpose and things will break. Also, don't edit the .tfstate file directly; use terraform state rm and terraform state mv instead.

Provider versioning matters. I ran into an issue where deploying an S3 bucket kept failing with this message:

terraform apply
    │ Error: Value for unconfigurable attribute
    │   with aws_s3_bucket.s3terraformbucket,
    │   on line 9, in resource "aws_s3_bucket" "s3terraformbucket":
    │    9: resource "aws_s3_bucket" "s3terraformbucket" {
    │ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.

The issue ended up being that since I had not declared which provider (aws in this case) version to use in my .tf file, when I ran terraform init, it pulled the latest - aws v4.2.0. This 4.x version, however, "introduces significant changes to the aws_s3_bucket resource". One way to avoid such surprises is to explicitly declare the provider version in your .tf file - the line aws = "~> 3.74":

terraform {
    required_providers {
        aws = "~> 3.74"
    }
}

Note: you will need to re-run terraform init after making this change so it will reinitialize the directory and pull the correct provider plugins for the respective version. 
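On Terraform 0.13 and later, the documented style is the fuller map form with an explicit source address - a sketch using the same constraint as above:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}
```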

Best practices

Settle on, and use consistent file and folder structure. 

For initial testing and hands-on work, one monolithic .tf file is OK; however, for production, readability, and your own sanity, break it up into a few smaller, purpose-specific files.
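One common split - the filenames are pure convention rather than anything Terraform enforces, since it loads every .tf file in the directory:

```
.
├── main.tf       # resource definitions
├── variables.tf  # input variables
├── outputs.tf    # output values
└── providers.tf  # provider config and version constraints
```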

Use remote storage for the file that maintains state (literally called terraform.tfstate). For experimenting and POC-type work, storing that file locally is fine, but if you transition to production workloads, and especially if you work with a team, storing it on a remote backend is a must. It helps avoid race conditions (through state locking) - two or more people running terraform apply at the same time - and ensures everyone always works from the latest version. Terraform supports a number of backends, but the two I plan on experimenting with are an S3 bucket and Terraform Cloud.
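A minimal S3 backend sketch - the bucket and table names are placeholders, and the DynamoDB table is what provides the state locking mentioned above:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"      # placeholder bucket name
    key            = "prod/terraform.tfstate" # path to the state object
    region         = "us-east-1"
    dynamodb_table = "tf-state-lock"          # placeholder; enables locking
    encrypt        = true
  }
}
```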


Beyond the likes of AWS CloudFormation and Azure ARM templates, there is a relative newcomer to this space that has been gaining momentum - Pulumi. Another one is Crossplane.

Additional Resources

Terraform Pt.2 - Importing and stuff...

Terraform Pt.3 - the backend...