Securing AWS secrets using HashiCorp Vault with Terraform?


For managing cloud resources (EC2, S3 buckets) on AWS you need to supply the AWS secrets (region, access_key, secret_key) inside the terraform file. You can use plain-text AWS secrets inside your terraform file and it will work fine. But from a security standpoint, it is strongly discouraged to use plain-text AWS secrets for provisioning your infrastructure with terraform.

Also, AWS credentials are typically long-lived and broadly scoped, which makes them more vulnerable to security attacks. If you store long-lived AWS secrets as plain text, you are putting your complete AWS cloud infrastructure at risk because of bad security practices.

What are the best practices for managing the AWS Secrets?

  1. Generate short-lived AWS secrets.
  2. Scope AWS secrets with an IAM role. Assign the least-privilege IAM role and avoid granting broad ones.
  3. Never store AWS secrets as plain text.

Then how do we implement these best practices for managing AWS Secrets?

There are many secure alternatives for managing AWS secrets, for example -

  1. Hashicorp Vault
  2. AWS Secrets Manager
  3. Ansible Vault

But if you are using Terraform for provisioning infrastructure on AWS, then HashiCorp Vault can be a better option for securing your AWS secrets. In this blog post we will start from scratch by installing HashiCorp Vault and then writing the terraform code for securing as well as dynamically generating the AWS secrets -

  1. Install HashiCorp Vault
  2. Start HashiCorp Vault
  3. Export AWS Secrets, HashiCorp VAULT_ADDR, and HashiCorp VAULT_TOKEN
  4. Add AWS Secrets inside HashiCorp Vault
  5. Provision AWS S3 Bucket using secured and dynamically generated AWS Secrets
  6. Delete AWS S3 Bucket
  7. Conclusion

1. Install HashiCorp Vault

As we are starting from scratch, let's begin by installing HashiCorp Vault onto the development machine. Please choose the installation instructions for the operating system of your choice -

Ubuntu/Debian

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -

sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"

sudo apt-get update && sudo apt-get install vault

CentOS

sudo yum install -y yum-utils

sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

sudo yum -y install vault

Fedora

sudo dnf install -y dnf-plugins-core

sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo

sudo dnf -y install vault

Amazon Linux

sudo yum install -y yum-utils

sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

sudo yum -y install vault

MacOS/Homebrew

brew tap hashicorp/tap

brew install hashicorp/tap/vault

1.1 Verify the HashiCorp Vault Installation

After installing HashiCorp Vault, run the following command to check the installed version -

vault -v

Vault v1.9.0


2. Start HashiCorp Vault

After successful installation of the HashiCorp Vault, we can start the vault server in two modes -

  1. Dev Server Mode
  2. Server Mode

2.1 Starting the HashiCorp Vault Dev Server

If you are using HashiCorp Vault in development mode, then start it with the following command -

vault server -dev

After running the above command you should see the following logs on your console -

==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Go Version: go1.17.2
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.9.0

==> Vault server started! Log data will stream in below:

...

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token is displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: Xuu8LGToiPe8AADhtpO6zAwxw2Ly3Fa8GHHXLIznY6s=
Root Token: s.sOUFttsBbfnQMNZCT99Mo7nA

Development mode should NOT be used in production installations!
After starting the vault server in development mode, keep an eye on the log for the Root Token and the VAULT_ADDR because we will need the Root Token for managing the AWS secrets.

(*Note - Never run the server in Dev mode in a production environment)


2.2 Starting the HashiCorp Vault in Server Mode

If you are working in a staging or production environment, then use Server Mode to start HashiCorp Vault. Follow the steps below -

  1. Before starting the HashiCorp Vault server, create a config file at a suitable location (ex. /home/vagrant/vault-config/config/file)

  2. Add the following configuration to the file -

storage "file" {
  path = "vault/data"
}

listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = 1
}

ui = true
  3. Start the HashiCorp Vault server using the config file which we have created -

vault server -config=/home/vagrant/vault-config/config/file

You should see the following server startup logs -

==> Vault server configuration:

                     Cgo: disabled
              Go Version: go1.17.2
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: true
           Recovery Mode: false
                 Storage: file
                 Version: Vault v1.9.0

==> Vault server started! Log data will stream in below:

2021-12-04T07:28:32.011Z [INFO]  proxy environment: http_proxy="\"\"" https_proxy="\"\"" no_proxy="\"\""
2021-12-04T07:28:32.011Z [WARN]  no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2021-12-04T07:28:32.282Z [INFO]  core: Initializing VersionTimestamps for core

  4. Apart from that, you also need to initialize and unseal the Vault before using it -

vault operator init
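The init command prints a set of unseal key shares and an initial root token. A typical first-run flow looks roughly like this (`<initial-root-token>` is a placeholder, and the default of 3-of-5 key shares is assumed):

```shell
# Initialize Vault: prints 5 unseal key shares and an initial root token
vault operator init

# Unseal by supplying 3 of the 5 key shares (the default threshold);
# each invocation prompts for one key share
vault operator unseal
vault operator unseal
vault operator unseal

# Authenticate the CLI with the initial root token
vault login <initial-root-token>
```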

3. Export AWS Secrets, HashiCorp VAULT_ADDR, and HashiCorp VAULT_TOKEN

In the previous step, we installed and started the HashiCorp Vault server, but to work with AWS secrets we need to set some environment variables -

  1. TF_VAR_aws_access_key - The AWS Access Key
  2. TF_VAR_aws_secret_key - The AWS Secret Key
  3. VAULT_ADDR - The HashiCorp Vault server address (e.g. http://127.0.0.1:8200)
  4. VAULT_TOKEN - The Root Token which was generated when starting the HashiCorp Vault server.

Here are the commands for exporting the environment variables -

export TF_VAR_aws_access_key=AKIATQ37NXB2BTW6BENX

export TF_VAR_aws_secret_key=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

export VAULT_ADDR=http://127.0.0.1:8200

export VAULT_TOKEN=s.sOUFttsBbfnQMNZCT99Mo7nA
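With VAULT_ADDR and VAULT_TOKEN exported, you can quickly confirm that the CLI can reach the server before writing any terraform (this is an optional sanity check, not part of the original workflow):

```shell
# Prints the seal status, storage type, and version of the server
# that VAULT_ADDR points at; a connection error here means the
# exports above are wrong or the server is not running
vault status
```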

4. Add AWS Secrets inside HashiCorp Vault

Let’s write some terraform code to implement securely and dynamically generated credentials. For this blog post, we are going to create an S3 Bucket using the dynamically generated AWS credentials -

4.1 Setup AWS Engine to generate AWS Secrets which are valid for 2 minutes

As mentioned earlier in the post, we should generate short-lived AWS secrets. So let's create the resource vault_aws_secret_backend.aws in which we are going to define -

  1. default lease time: 120 seconds (2 min)
  2. max lease time: 240 seconds (4 min)

Here is the code block for vault_aws_secret_backend -

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}

4.2 Setup IAM roles only for S3 Bucket

The next best practice for securing AWS credentials is to assign the least-privilege IAM policy. As we are going to create an S3 Bucket, the policy should only grant the S3 (and supporting IAM) actions.

Here is the terraform resource block with the scoped policy -

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
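As a quick sanity check (a hypothetical extra step, not part of the Vault or Terraform workflow), you can confirm that the policy heredoc above is valid JSON before running terraform, so a stray comma fails fast here rather than at apply time:

```shell
# Pipe the policy document through a JSON parser; json.tool exits
# non-zero on a syntax error, so the message only prints for valid JSON
cat <<'EOF' | python3 -m json.tool > /dev/null && echo "policy JSON is valid"
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:*", "s3:*"],
      "Resource": "*"
    }
  ]
}
EOF
# prints: policy JSON is valid
```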

4.3 Apply the Terraform Configuration containing AWS Secrets and IAM Roles

  1. Below you will find the complete terraform file containing the AWS secret backend and IAM role. Let's save this file with the name main.tf inside a directory tf_aws_secrets_roles -
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "name" { default = "dynamic-aws-creds-vault-admin" }

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

provider "vault" {}

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "ec2:*", "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

output "backend" {
  value = vault_aws_secret_backend.aws.path
}

output "role" {
  value = vault_aws_secret_backend_role.admin.name
}
  2. Apply the changes by running the following terraform commands -

Initialize terraform workspace -

terraform init

Apply terraform changes -

terraform apply

You should see that 2 resources (the secret backend and the role) have been created in Vault -

vault_aws_secret_backend.aws: Creating...
vault_aws_secret_backend.aws: Creation complete after 0s [id=dynamic-aws-creds-vault-admin-path]
vault_aws_secret_backend_role.admin: Creating...
vault_aws_secret_backend_role.admin: Creation complete after 0s [id=dynamic-aws-creds-vault-admin-path/roles/dynamic-aws-creds-vault-admin-role]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

backend = "dynamic-aws-creds-vault-admin-path"
role = "dynamic-aws-creds-vault-admin-role"
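At this point you can also ask Vault directly for a set of dynamic credentials using the backend and role names from the outputs above. Each read mints a fresh IAM user whose lease expires after the configured 120 seconds (the access key, secret key, and lease id in your output will of course differ):

```shell
# Read a short-lived credential from the AWS secrets engine;
# the path is <backend-path>/creds/<role-name>
vault read dynamic-aws-creds-vault-admin-path/creds/dynamic-aws-creds-vault-admin-role
```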


5. Provision AWS S3 Bucket using secured and dynamically generated AWS Secrets

In the previous step, we generated the secret backend along with the role. Now in this step, we are going to set up an S3 Bucket using the same dynamically generated secrets and roles.

For setting up the S3 Bucket we first need to create a separate directory (e.g. setup_s3_bucket) and create a main.tf file inside it.

But first, let's try to understand each terraform block which we will need for setting up the S3 Bucket.


5.1 Retrieve the terraform state file

In Step 4 we generated the dynamic AWS secret backend and role. Let's retrieve that workspace's state file so it can be used for setting up the S3 bucket -

data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}

5.2 Retrieve dynamic short-lived AWS credentials from Vault

Next, we retrieve the dynamic short-lived AWS credentials from Vault using the following terraform data block -

data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}

5.3 Setup S3 bucket using terraform aws_s3_bucket, aws_s3_bucket_object, aws_s3_bucket_public_access_block

Now we need a provider block and three more resources for setting up the S3 bucket -

  1. aws (provider) - Supplies the AWS credentials (access key, secret key) fetched from Vault, along with the region
  2. aws_s3_bucket - Creates the bucket
  3. aws_s3_bucket_object - Uploads objects into the bucket
  4. aws_s3_bucket_public_access_block - Blocks public access to the bucket

Apart from creating the S3 bucket, we are also uploading some test files (test1.txt, test2.txt) from the uploads directory -

provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "jhooq-s3-bucket" {
  bucket = "jhooq-s3-bucket"
  acl    = "private"
}

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")
  bucket   = aws_s3_bucket.jhooq-s3-bucket.id
  key      = each.value
  source   = "uploads/${each.value}"
}

resource "aws_s3_bucket_public_access_block" "app" {
  bucket = aws_s3_bucket.jhooq-s3-bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
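Note that the configuration above references var.region and var.path without declaring them, so the same main.tf also needs variable blocks along these lines (the default values here are assumptions - point path at the state file produced in Step 4 and pick your own region):

```hcl
variable "region" { default = "eu-central-1" }

# Path to the local state file written by the tf_aws_secrets_roles
# workspace in Step 4 (adjust to your directory layout)
variable "path" { default = "../tf_aws_secrets_roles/terraform.tfstate" }
```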

Let’s apply the above terraform configuration and create the S3 bucket. But before that, save this terraform script in a file main.tf inside the setup_s3_bucket directory.

Run the following command to apply the above changes -

terraform init

Then run terraform apply

terraform apply

After running terraform apply you should see the following output on your console -

Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.jhooq-s3-bucket: Creating...
aws_s3_bucket.jhooq-s3-bucket: Creation complete after 2s [id=jhooq-s3-bucket]
aws_s3_bucket_object.object1["test2.txt"]: Creating...
aws_s3_bucket_public_access_block.app: Creating...
aws_s3_bucket_object.object1["test1.txt"]: Creating...
aws_s3_bucket_public_access_block.app: Creation complete after 1s [id=jhooq-s3-bucket]
aws_s3_bucket_object.object1["test2.txt"]: Creation complete after 1s [id=test2.txt]
aws_s3_bucket_object.object1["test1.txt"]: Creation complete after 1s [id=test1.txt]


5.4 Verify S3 bucket

Now that the terraform script has executed successfully, let's go to the AWS Console and verify the S3 Bucket.

On the AWS Console you can simply search for S3 and look for the bucket which you have created. In my case the name of the bucket is jhooq-s3-bucket.
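If you prefer the command line over the console, the same check can be done with the AWS CLI (assuming it is installed and configured with credentials that can read the bucket):

```shell
# List the uploaded objects in the bucket created above
aws s3 ls s3://jhooq-s3-bucket
```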


6. Delete AWS S3 Bucket

After successfully testing and verifying the S3 bucket let’s destroy the S3 bucket using the terraform destroy command.

terraform destroy

It should destroy your S3 bucket along with the content present inside it -

Terraform will perform the following actions:

  # aws_s3_bucket.jhooq-s3-bucket will be destroyed
  - resource "aws_s3_bucket" "jhooq-s3-bucket" {
      - acl                         = "private" -> null
      - arn                         = "arn:aws:s3:::jhooq-s3-bucket" -> null
      - bucket                      = "jhooq-s3-bucket" -> null
      - bucket_domain_name          = "jhooq-s3-bucket.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "jhooq-s3-bucket.s3.eu-central-1.amazonaws.com" -> null
      - force_destroy               = false -> null
      - hosted_zone_id              = "Z21DNDUVLTQW6Q" -> null
      - id                          = "jhooq-s3-bucket" -> null
      - region                      = "eu-central-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {} -> null

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }


7. Conclusion

This blog post highlighted how to secure your AWS secrets, which are among the most sensitive pieces of information in your AWS infrastructure. After reading this blog post you should be able to -

  1. Dynamically generate AWS secrets while using terraform
  2. Integrate Terraform with AWS secrets and IAM policies
  3. Manage AWS secrets using HashiCorp Vault

Read More -

  1. Install terraform on Ubuntu 20.04, CentOS 8, MacOS, Windows 10, Fedora 33, Red hat 8 and Solaris 11
  2. How to setup Virtual machine on Google Cloud Platform using terraform
  3. Create EC2 Instance on AWS using terraform
  4. How to use Terraform Input Variables
  5. What is variable.tf and terraform.tfvars?
  6. How to use Terraform locals?
  7. How to use Terraform output values?
  8. Understanding terraform count, for_each and for loop?
  9. Cloud-nuke : How to nuke AWS resources and save additional AWS infrastructure cost?
  10. How to use Terraform Dynamic blocks?
  11. How to use Terraform resource meta arguments?
  12. How to use Terraform Data sources?
  13. What is terraform provisioner?
  14. Terraform how to do SSH in AWS EC2 instance?
  15. How Terraform modules works?
  16. How to run specific terraform resource?
  17. How to use Workspaces in Terraform?
  18. Securing AWS secrets using HashiCorp Vault with Terraform?
  19. Managing Terraform states?
  20. Terraform state locking using DynamoDB (aws_dynamodb_table)