Securing AWS secrets using HashiCorp Vault with Terraform?
To manage cloud resources (EC2, S3 buckets) on AWS, you need to supply AWS secrets (region, access_key, secret_key) inside your Terraform file. You can use plain-text AWS secrets inside your Terraform file and it will work, but from a security standpoint it is strongly discouraged to provision your infrastructure with plain-text AWS secrets.
Also, AWS credentials are long-lived by default, which makes them more vulnerable to attack. If you store long-lived AWS secrets as plain text, you are putting your complete AWS cloud infrastructure at risk through bad security practice.
What are the best practices for managing the AWS Secrets?
- Generate short-lived AWS secrets
- Scope AWS secrets with an IAM role. Assign the least-privileged IAM role possible and avoid granting broad IAM roles
- Never store AWS secrets as plain text
How do you implement these best practices for managing AWS secrets?
There are several secure alternatives for managing AWS secrets, for example -
- Hashicorp Vault
- AWS Secrets Manager
- Ansible Vault
But if you are using Terraform for provisioning infrastructure on AWS, then HashiCorp Vault is a good option for securing your AWS secrets. In this blog post we will start from scratch by installing HashiCorp Vault and then write the Terraform code for securing as well as dynamically generating the AWS secrets -
- Install HashiCorp Vault
- Start HashiCorp Vault
- Export AWS Secrets, HashiCorp VAULT_ADDR, and HashiCorp VAULT_TOKEN
- Add AWS Secrets inside HashiCorp Vault
- Provision AWS S3 Bucket using secured and dynamically generated AWS Secrets
- Delete AWS S3 Bucket
- Conclusion
1. Install HashiCorp Vault
As we are starting from scratch, let's begin by installing HashiCorp Vault on the development machine. Please choose the installation instructions for your operating system -
Ubuntu/Debian
#Step-1 : Install gpg, which is needed to verify the package signing key
sudo apt update && sudo apt install gpg

#Step-2 : Add the HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null

#Step-3 : Verify the key's fingerprint
gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

#Step-4 : Add the official HashiCorp Linux repository
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

#Step-5 : Update and install
sudo apt update && sudo apt install vault
CentOS
#Step-1 : Install yum-config-manager to manage your repositories
sudo yum install -y yum-utils

#Step-2 : Use yum-config-manager to add the official HashiCorp Linux repository
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

#Step-3 : Install Vault
sudo yum -y install vault
Fedora
#Step-1 : Install dnf config-manager to manage your repositories
sudo dnf install -y dnf-plugins-core

#Step-2 : Use dnf config-manager to add the official HashiCorp Linux repository
sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo

#Step-3 : Install Vault
sudo dnf -y install vault
Amazon Linux
#Step-1 : Install yum-config-manager to manage your repositories
sudo yum install -y yum-utils

#Step-2 : Use yum-config-manager to add the official HashiCorp Linux repository
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

#Step-3 : Install Vault
sudo yum -y install vault
macOS/Homebrew
brew tap hashicorp/tap

brew install hashicorp/tap/vault
1.1 Verify the HashiCorp Vault Installation
After installing HashiCorp Vault, run the following command to check the installed version -
vault -v

Vault v1.9.0
2. Start HashiCorp Vault
After successfully installing HashiCorp Vault, we can start the Vault server in one of two modes -
- Dev Server Mode
- Server Mode
2.1 Starting the HashiCorp Vault Dev Server
If you are using HashiCorp Vault in development mode, then start it with the following command -
vault server -dev
After running the above command you should see the following logs on your console -
==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Go Version: go1.17.2
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.9.0

==> Vault server started! Log data will stream in below:
...
WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token is displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: Xuu8LGToiPe8AADhtpO6zAwxw2Ly3Fa8GHHXLIznY6s=
Root Token: s.sOUFttsBbfnQMNZCT99Mo7nA

Development mode should NOT be used in production installations!
After starting the Vault server in development mode, keep an eye on the log for the Root Token and the Vault address, because we will need the Root Token for managing the AWS secrets.
(*Note - Never run the server in dev mode in a production environment)
2.2 Starting the HashiCorp Vault in Server Mode
If you are working in a staging or production environment, then use Server Mode to start HashiCorp Vault. Follow the steps below -
- Before starting the HashiCorp Vault server, create a config file at a suitable location (ex. /home/vagrant/vault-config/config/file)
- Add the following configuration to the file -
storage "file" {
  path = "vault/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}

ui = true
- Start the HashiCorp Vault server using the config file which we have created
vault server -config=/home/vagrant/vault-config/config/file
You should see the following server startup logs -
==> Vault server configuration:

                     Cgo: disabled
              Go Version: go1.17.2
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: true
           Recovery Mode: false
                 Storage: file
                 Version: Vault v1.9.0

==> Vault server started! Log data will stream in below:

2021-12-04T07:28:32.011Z [INFO]  proxy environment: http_proxy="" https_proxy="" no_proxy=""
2021-12-04T07:28:32.011Z [WARN]  no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2021-12-04T07:28:32.282Z [INFO]  core: Initializing VersionTimestamps for core
- Apart from that, you also need to initialize and unseal the Vault before using it
vault operator init
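The init command prints a set of unseal key shares and an initial root token. A typical first-run sequence looks like the sketch below; the actual key and token values come from your own init output -

```shell
# Initialize Vault - prints 5 unseal key shares and an initial root token
vault operator init

# Unseal by supplying the threshold number of key shares (3 of 5 by default);
# each command prompts for one unseal key
vault operator unseal
vault operator unseal
vault operator unseal

# Authenticate the CLI - prompts for the initial root token
vault login
```

Unlike dev mode, the server starts sealed every time it restarts, so the unseal step has to be repeated (or automated via auto-unseal) after each restart.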
3. Export AWS Secrets, HashiCorp VAULT_ADDR, and HashiCorp VAULT_TOKEN
In the previous steps we installed the HashiCorp Vault server, but to work with AWS secrets we need to set some environment variables -
- TF_VAR_aws_access_key - The AWS access key
- TF_VAR_aws_secret_key - The AWS secret key
- VAULT_ADDR - The HashiCorp Vault server address (i.e. http://127.0.0.1:8200)
- VAULT_TOKEN - The Root Token which was generated when starting the HashiCorp Vault server
Here are the commands for exporting the environment variables -
export TF_VAR_aws_access_key=<YOUR_AWS_ACCESS_KEY>

export TF_VAR_aws_secret_key=<YOUR_AWS_SECRET_KEY>

export VAULT_ADDR=http://127.0.0.1:8200

export VAULT_TOKEN=s.sOUFttsBbfnQMNZCT99Mo7nA
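Before moving on, it is worth confirming that the Vault CLI can actually reach the server with the exported address and token. A quick sanity check, assuming the server from step 2 is still running -

```shell
# Should report Initialized: true and, for the dev server, Sealed: false
vault status

# Should print the details of the token held in VAULT_TOKEN
vault token lookup
```

If either command fails, re-check VAULT_ADDR and VAULT_TOKEN before running any Terraform against Vault.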
4. Add AWS Secrets inside HashiCorp Vault
Let's write some Terraform code to generate secure, dynamic credentials. For this blog post we are going to create an S3 bucket using the dynamically generated AWS credentials -
4.1 Setup the AWS secrets engine to generate AWS secrets which are valid for 2 minutes
As mentioned earlier in the post, we should generate short-lived AWS secrets. So let's create the resource vault_aws_secret_backend.aws, in which we define -
- default lease time : 120 seconds (2 min)
- max lease time : 240 seconds (4 min)
Here is the code block for vault_aws_secret_backend -
resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}
4.2 Setup IAM roles only for S3 Bucket
The next best practice for securing AWS credentials is to assign the least-privileged IAM policy possible. As we are going to create an S3 bucket, the policy for the generated credentials should be scoped down accordingly.
Here is the Terraform resource block defining the policy -
resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
4.3 Apply the Terraform Configuration containing AWS Secrets and IAM Roles
- Below you will find the complete Terraform file containing the AWS secret backend and IAM role. Let's save this file with the name main.tf inside a directory tf_aws_secrets_roles -
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "name" { default = "dynamic-aws-creds-vault-admin" }

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

provider "vault" {}

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "ec2:*", "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

output "backend" {
  value = vault_aws_secret_backend.aws.path
}

output "role" {
  value = vault_aws_secret_backend_role.admin.name
}
- Apply the changes by running the following terraform commands -
Initialize the terraform workspace -
terraform init
Apply the terraform changes -
terraform apply
You should see that 2 resources (the secret backend and the role) have been created in Vault -
vault_aws_secret_backend.aws: Creating...
vault_aws_secret_backend.aws: Creation complete after 0s [id=dynamic-aws-creds-vault-admin-path]
vault_aws_secret_backend_role.admin: Creating...
vault_aws_secret_backend_role.admin: Creation complete after 0s [id=dynamic-aws-creds-vault-admin-path/roles/dynamic-aws-creds-vault-admin-role]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

backend = "dynamic-aws-creds-vault-admin-path"
role = "dynamic-aws-creds-vault-admin-role"
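You can also request a set of dynamic credentials directly from the CLI, which is a nice way to watch the 2-minute lease in action. The path below is built from the backend and role outputs shown above -

```shell
# Each read creates a brand-new IAM user; Vault revokes it automatically
# once the 120-second lease expires
vault read dynamic-aws-creds-vault-admin-path/creds/dynamic-aws-creds-vault-admin-role
```

The response includes a lease_id, the lease_duration, and a freshly generated access_key and secret_key pair.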
5. Provision AWS S3 Bucket using secured and dynamically generated AWS Secrets
In the previous step we generated the secret backend along with the role. Now in this step we are going to set up an S3 bucket using the dynamically generated secrets.
For setting up the S3 bucket we first need to create a separate directory, i.e. setup_s3_bucket, and create a main.tf file inside it.
But first, let's try to understand each Terraform block which we will need for setting up the S3 bucket -
5.1 Retrieve the terraform state file
In Step 4 we generated the dynamic AWS secret backend and role. Let's retrieve that configuration's state file so that its outputs can be used for setting up the S3 bucket -
data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}
5.2 Retrieve dynamic short-lived AWS credentials from Vault
Next, we retrieve the dynamic short-lived AWS credentials using the following Terraform data block -
data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}
5.3 Setup S3 bucket using terraform aws_s3_bucket, aws_s3_bucket_object, aws_s3_bucket_public_access_block
Now we need one provider and three more resources for setting up the S3 bucket -
- aws provider - Configured with the dynamically fetched AWS secrets (access key, secret key) and the region
- aws_s3_bucket - For creating the bucket
- aws_s3_bucket_object - For uploading objects into the bucket
- aws_s3_bucket_public_access_block - For blocking public access to the bucket
Apart from creating the S3 bucket, we are also uploading some test files (test1.txt, test2.txt) from the uploads directory -
provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "jhooq-s3-bucket" {
  bucket = "jhooq-s3-bucket"
  acl    = "private"
}

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")
  bucket   = aws_s3_bucket.jhooq-s3-bucket.id
  key      = each.value
  source   = "uploads/${each.value}"
}

resource "aws_s3_bucket_public_access_block" "app" {
  bucket = aws_s3_bucket.jhooq-s3-bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
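The fileset("uploads/", "*") expression above expects an uploads directory next to main.tf. If you don't have one yet, you can create it with a couple of sample files; the file names here are just the test files used in this post -

```shell
# Create the uploads directory with two sample files for the S3 bucket
mkdir -p uploads
echo "sample content 1" > uploads/test1.txt
echo "sample content 2" > uploads/test2.txt

# Verify - lists test1.txt and test2.txt
ls uploads/
```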
Let's apply the above Terraform configuration and create the S3 bucket. But before that, put this Terraform script inside a file main.tf in the setup_s3_bucket directory.
Run the following command to initialize the workspace -
terraform init
Then apply the changes -
terraform apply
After running terraform apply you should see the following on your console -
Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.jhooq-s3-bucket: Creating...
aws_s3_bucket.jhooq-s3-bucket: Creation complete after 2s [id=jhooq-s3-bucket]
aws_s3_bucket_object.object1["test2.txt"]: Creating...
aws_s3_bucket_public_access_block.app: Creating...
aws_s3_bucket_object.object1["test1.txt"]: Creating...
aws_s3_bucket_public_access_block.app: Creation complete after 1s [id=jhooq-s3-bucket]
aws_s3_bucket_object.object1["test2.txt"]: Creation complete after 1s [id=test2.txt]
aws_s3_bucket_object.object1["test1.txt"]: Creation complete after 1s [id=test1.txt]
5.4 Verify S3 bucket
Now that we have successfully executed the Terraform script, let's go to the AWS Console and verify the S3 bucket.
On the AWS Console you can simply search for S3 and look for the bucket which you have created. In my case the name of the bucket is jhooq-s3-bucket.
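If you prefer the terminal over the console, the same verification can be done with the AWS CLI, assuming it is installed and configured with credentials that can read the bucket -

```shell
# List the objects uploaded to the bucket
aws s3 ls s3://jhooq-s3-bucket
```

You should see the two test files that Terraform uploaded from the uploads directory.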
6. Delete AWS S3 Bucket
After successfully testing and verifying the S3 bucket, let's destroy it using the terraform destroy command.
terraform destroy
It should destroy your S3 bucket along with the content inside it -
Terraform will perform the following actions:

  # aws_s3_bucket.jhooq-s3-bucket will be destroyed
  - resource "aws_s3_bucket" "jhooq-s3-bucket" {
      - acl                         = "private" -> null
      - arn                         = "arn:aws:s3:::jhooq-s3-bucket" -> null
      - bucket                      = "jhooq-s3-bucket" -> null
      - bucket_domain_name          = "jhooq-s3-bucket.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "jhooq-s3-bucket.s3.eu-central-1.amazonaws.com" -> null
      - force_destroy               = false -> null
      - hosted_zone_id              = "Z21DNDUVLTQW6Q" -> null
      - id                          = "jhooq-s3-bucket" -> null
      - region                      = "eu-central-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {} -> null

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }
7. Conclusion
This blog post highlighted how to secure your AWS secrets, which are among the most valuable pieces of information for maintaining your AWS infrastructure. After reading this blog post you should be able to -
- Dynamically generate short-lived AWS secrets while using Terraform
- Integrate Terraform with Vault-managed AWS secrets and IAM policies
- Manage AWS secrets using HashiCorp Vault