Terraform State Management with S3 and DynamoDB: The Production-Ready Approach
Originally published on LinkedIn; expanded for the website version.
The Problem with Local State
When you first use Terraform, you store state locally in terraform.tfstate. That works for individuals and experiments — but as soon as a second team member joins or a CI/CD pipeline takes over deployments, conflicts, data loss, and inconsistent infrastructure follow.
The solution: a remote backend. On AWS, the combination of S3 (state storage) and DynamoDB (state locking) is the de facto standard.
S3 Bucket for State
The S3 bucket needs several important configurations:
```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-project-tfstate"

  # Prevents accidental deletion
  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

Versioning is critical: it allows rolling back to a previous state file if an apply goes wrong.
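With versioning enabled, old state revisions accumulate indefinitely. A lifecycle rule can cap the retention; a sketch, assuming 90 days of noncurrent-version history is enough for your rollback window:

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    id     = "expire-old-state-versions"
    status = "Enabled"

    # Applies to every object in the bucket
    filter {}

    # Deletes superseded state versions after 90 days;
    # the current version is never touched
    noncurrent_version_expiration {
      noncurrent_days = 90
    }
  }
}
```

State files are small, so this is about hygiene more than cost, but it keeps the version history to a size you can actually reason about during an incident.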
DynamoDB Table for State Locking
State locking prevents two concurrent terraform apply runs from modifying the same state simultaneously:
```hcl
resource "aws_dynamodb_table" "terraform_lock" {
  name         = "my-project-tflock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

PAY_PER_REQUEST is the right choice here — the table is written infrequently, and on-demand billing avoids unnecessary costs.
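The lock table deserves the same deletion guard as the bucket; a sketch of the two additions (the `deletion_protection_enabled` argument requires a reasonably recent AWS provider):

```hcl
resource "aws_dynamodb_table" "terraform_lock" {
  # ... arguments as above ...

  # Rejects DeleteTable calls at the DynamoDB API level
  deletion_protection_enabled = true

  # Makes terraform destroy fail at plan time
  lifecycle {
    prevent_destroy = true
  }
}
```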
Backend Configuration
```hcl
terraform {
  backend "s3" {
    bucket         = "my-project-tfstate"
    key            = "prod/terraform.tfstate"
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "my-project-tflock"
  }
}
```

The key path allows multiple state files in the same bucket — useful for workspaces or multiple environments.
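Because the backend block cannot interpolate variables, per-environment values are usually supplied via Terraform's partial backend configuration; a sketch, assuming a hypothetical staging.s3.tfbackend file:

```hcl
# staging.s3.tfbackend (hypothetical file name)
bucket         = "my-project-tfstate"
key            = "staging/terraform.tfstate"
region         = "eu-central-1"
encrypt        = true
dynamodb_table = "my-project-tflock"
```

The main configuration then declares only an empty `backend "s3" {}` block, and `terraform init -backend-config=staging.s3.tfbackend` selects the environment at init time.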
The Bootstrapping Problem
Here lies the classic chicken-and-egg question: the S3 bucket and DynamoDB table must exist before Terraform can use the backend. Approaches:
- Create manually — once via AWS CLI or console, then import into Terraform
- Separate bootstrap module — a small Terraform project with local state that only provisions the backend
- Terraform Cloud — sidesteps the problem entirely
I prefer option 2: the bootstrap module is small, rarely changed, and local state for this one module is acceptable.
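For completeness, option 1 pairs well with import blocks (Terraform 1.5+), so resources created by hand still end up under Terraform management; a sketch:

```hcl
# Adopts the manually created bucket and table into state
# on the next apply, instead of requiring terraform import CLI calls
import {
  to = aws_s3_bucket.terraform_state
  id = "my-project-tfstate"
}

import {
  to = aws_dynamodb_table.terraform_lock
  id = "my-project-tflock"
}
```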
IAM Permissions for CI/CD
The CI/CD pipeline needs minimal permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-project-tfstate/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-project-tfstate"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:eu-central-1:*:table/my-project-tflock"
    }
  ]
}
```

Least privilege applies here too: the pipeline does not need dynamodb:Scan or dynamodb:Query permissions.
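One easy-to-miss detail: because the bucket uses aws:kms encryption, the pipeline role also needs permissions on the KMS key that encrypts the state objects, or reads and writes will fail with access-denied errors. A sketch of the extra statement (the key ARN is a placeholder):

```json
{
  "Effect": "Allow",
  "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
  "Resource": "arn:aws:kms:eu-central-1:123456789012:key/REPLACE-WITH-STATE-KEY-ID"
}
```

kms:GenerateDataKey is needed to upload encrypted state, kms:Decrypt to read it back.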
Conclusion
S3 + DynamoDB as a Terraform backend is battle-tested, cost-effective, and easy to operate. Setup takes 30 minutes — and saves hours of debugging state conflicts.