Securing AWS Secrets using HashiCorp Vault with Terraform

For managing cloud resources (EC2, S3 buckets) on AWS you need to supply the AWS secrets (region, access_key, secret_key) inside the Terraform file. You can use plain-text AWS secrets inside your Terraform file and it will work fine, but from a security standpoint it is strongly discouraged to use plain-text AWS secrets for provisioning your infrastructure with Terraform.

AWS credentials are also long-lived, which makes them more vulnerable to security attacks. If you store long-lived AWS secrets as plain text, you are putting your complete AWS cloud infrastructure at risk because of bad security practices.

What are the best practices for managing the AWS Secrets?

  1. You should generate short-lived AWS Secrets
  2. AWS Secrets should be scoped with an IAM role. Assign the least-privilege IAM role and avoid granting overly broad IAM roles
  3. Never store AWS Secrets as plain text.

Then how do we implement the best practices for managing the AWS Secrets?

There are many tools for implementing secure practices for managing AWS secrets, for example -

  1. Hashicorp Vault
  2. AWS Secrets Manager
  3. Ansible Vault

But if you are using Terraform for provisioning infrastructure on AWS, then HashiCorp Vault can be a better option for securing your AWS secrets. In this blog post we will start from scratch by installing HashiCorp Vault and then write the Terraform code for securing as well as dynamically generating the AWS secrets -

  1. Install HashiCorp Vault
  2. Start HashiCorp Vault
  3. Export AWS Secrets, HashiCorp VAULT_ADDR, and HashiCorp VAULT_TOKEN
  4. Add AWS Secrets inside HashiCorp Vault
  5. Provision AWS S3 Bucket using secured and dynamically generated AWS Secrets
  6. Delete AWS S3 Bucket
  7. Conclusion

1. Install HashiCorp Vault

As we are starting from scratch, let's begin by installing HashiCorp Vault onto the development machine. Please choose the installation instructions suitable for your operating system -


Ubuntu/Debian

#Step-1 : Install gpg, which is used to verify the package signing key
sudo apt update && sudo apt install gpg

#Step-2 : Add the HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg >/dev/null

#Step-3 : Verify the key's fingerprint
gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint

#Step-4 : Add the official HashiCorp Linux repository
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

#Step-5 : Update and install
sudo apt update && sudo apt install vault


CentOS/RHEL

#Step-1 : Install yum-config-manager to manage your repositories
sudo yum install -y yum-utils

#Step-2 : Use yum-config-manager to add the official HashiCorp Linux repository
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo

#Step-3 : Install vault
sudo yum -y install vault


Fedora

#Step-1 : Install dnf config-manager to manage your repositories
sudo dnf install -y dnf-plugins-core

#Step-2 : Use dnf config-manager to add the official HashiCorp Linux repository
sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo

#Step-3 : Install Vault
sudo dnf -y install vault

Amazon Linux

#Step-1 : Install yum-config-manager to manage your repositories
sudo yum install -y yum-utils

#Step-2 : Use yum-config-manager to add the official HashiCorp Linux repository
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

#Step-3 : Install vault
sudo yum -y install vault


macOS (Homebrew)

brew tap hashicorp/tap
brew install hashicorp/tap/vault

1.1 Verify the HashiCorp Vault Installation

After installing HashiCorp Vault, run the following vault -v command to check the installed version -

vault -v
Vault v1.9.0

hashicorp vault version after installation

2. Start HashiCorp Vault

After successful installation of the HashiCorp Vault, we can start the vault server in two modes -

  1. Dev Server Mode
  2. Server Mode

2.1 Starting the HashiCorp Vault Dev Server

If you want to run HashiCorp Vault in development mode, use the following command to start the server -

vault server -dev

After issuing the above command you should see the following logs on your console -

==> Vault server configuration:

             Api Address: http://127.0.0.1:8200
                     Cgo: disabled
         Cluster Address: https://127.0.0.1:8201
              Go Version: go1.17.2
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: false
           Recovery Mode: false
                 Storage: inmem
                 Version: Vault v1.9.0

==> Vault server started! Log data will stream in below:

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: Xuu8LGToiPe8AADhtpO6zAwxw2Ly3Fa8GHHXLIznY6s=
Root Token: s.sOUFttsBbfnQMNZCT99Mo7nA

Development mode should NOT be used in production installations!

After starting the vault server in development mode, keep an eye on the log for the Root Token and VAULT_ADDR, because we will need the Root Token for managing the AWS secrets.

hashicorp vault root token

(*Note - Never run the server in the Dev server mode in the production environment)
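Since the dev server prints the root token only once at startup, it can help to save the startup log and pull the token out programmatically. Here is a minimal sketch; the sample values below are the ones from the log above (yours will differ), and the log path /tmp/vault-dev.log is just an illustration -

```shell
# In practice you would capture the log with something like:
#   vault server -dev > /tmp/vault-dev.log 2>&1 &
# Here we write the sample values from above to illustrate the extraction.
cat > /tmp/vault-dev.log <<'EOF'
Unseal Key: Xuu8LGToiPe8AADhtpO6zAwxw2Ly3Fa8GHHXLIznY6s=
Root Token: s.sOUFttsBbfnQMNZCT99Mo7nA
EOF

# Grab the third whitespace-separated field of the "Root Token:" line
ROOT_TOKEN=$(grep 'Root Token:' /tmp/vault-dev.log | awk '{print $3}')
echo "$ROOT_TOKEN"
```

The extracted value can then be exported as VAULT_TOKEN instead of copy-pasting it from the console.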

2.2 Starting the HashiCorp Vault in Server Mode

If you are working in a staging or production environment, use the Server Mode of HashiCorp Vault to start your server. Follow the steps below for starting HashiCorp Vault -

  1. Before starting the HashiCorp Vault server, create a config file at a suitable location (ex. /home/vagrant/vault-config/config/file)

  2. Add the following configuration to the file -

storage "file" {
  path = "vault/data"
}

listener "tcp" {
  address     = "127.0.0.1:8200" # example address; adjust for your environment
  tls_disable = 1
}

ui = true
  3. Start the HashiCorp Vault server using the config file which we have created -

vault server -config=/home/vagrant/vault-config/config/file

You should see the following server startup logs -

==> Vault server configuration:

                     Cgo: disabled
              Go Version: go1.17.2
              Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
               Log Level: info
                   Mlock: supported: true, enabled: true
           Recovery Mode: false
                 Storage: file
                 Version: Vault v1.9.0

==> Vault server started! Log data will stream in below:

2021-12-04T07:28:32.011Z [INFO]  proxy environment: http_proxy="\"\"" https_proxy="\"\"" no_proxy="\"\""
2021-12-04T07:28:32.011Z [WARN]  no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2021-12-04T07:28:32.282Z [INFO]  core: Initializing VersionTimestamps for core

hashicorp vault root token

  4. Apart from that, you also need to initialize and unseal the vault before using it -

vault operator init

3. Export AWS Secrets, HashiCorp VAULT_ADDR, and HashiCorp VAULT_TOKEN

In the previous section, we installed and started the HashiCorp Vault server, but to work with AWS secrets we need to set some environment variables -

  1. TF_VAR_aws_access_key - The AWS Access Key
  2. TF_VAR_aws_secret_key - The AWS Secret Key
  3. VAULT_ADDR - The HashiCorp Vault server address (i.e. http://127.0.0.1:8200 for the dev server)
  4. VAULT_TOKEN - The Root Token which we have generated when starting the HashiCorp Server.

Here are the commands for exporting the environment variables -

export TF_VAR_aws_access_key=AKIATQ37NXB2BTW6BENX

export TF_VAR_aws_secret_key=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

export VAULT_ADDR=http://127.0.0.1:8200

export VAULT_TOKEN=s.sOUFttsBbfnQMNZCT99Mo7nA
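Before running Terraform it is worth failing fast when one of these variables is missing. A small sketch (it reuses the example values from this post; the VAULT_ADDR value assumes the dev server default, and in real use you would substitute your own credentials) -

```shell
# Example values from this post; substitute your own credentials.
export TF_VAR_aws_access_key=AKIATQ37NXB2BTW6BENX
export TF_VAR_aws_secret_key=aIdpeGeuIbpg/8FvTvgbbU9KpIe+UZW0+3x4O0V5
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN=s.sOUFttsBbfnQMNZCT99Mo7nA

# Abort early if any required variable is empty or unset.
for v in TF_VAR_aws_access_key TF_VAR_aws_secret_key VAULT_ADDR VAULT_TOKEN; do
  val=$(eval echo "\$$v")
  if [ -n "$val" ]; then echo "ok: $v"; else echo "missing: $v"; exit 1; fi
done
```

Dropping such a check into a wrapper script keeps a half-configured shell from producing confusing Terraform or Vault errors later.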

4. Add AWS Secrets inside HashiCorp Vault

Let's write some Terraform code to implement securely and dynamically generated credentials. For this blog post, we are going to create an S3 bucket using the dynamically generated AWS credentials -

4.1 Setup AWS Engine to generate AWS Secrets which are valid for 2 minutes

As mentioned earlier in the post, we should generate short-lived AWS secrets. So let's create the Vault AWS secrets engine resource, in which we are going to define -

  1. default lease time : 120 seconds (2 min)
  2. max lease time : 240 seconds (4 min)

Here is the code block for vault_aws_secret_backend -

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}
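If different environments need different lease windows, the TTLs can be lifted into variables. Here is a sketch; the variable names default_lease_ttl and max_lease_ttl are illustrative, not part of the original configuration -

```hcl
variable "default_lease_ttl" { default = 120 } # seconds (2 min)
variable "max_lease_ttl"     { default = 240 } # seconds (4 min)

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = var.default_lease_ttl
  max_lease_ttl_seconds     = var.max_lease_ttl
}
```

A staging workspace could then pass a longer default_lease_ttl without touching the resource block itself.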

4.2 Setup IAM roles only for S3 Bucket

The next best practice for securing AWS credentials is to assign the least-privilege IAM policy. As we are going to create an S3 bucket, the IAM policy should be scoped accordingly.

Here is the terraform resource block with only S3 IAM roles -

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
4.3 Apply the Terraform Configuration containing AWS Secrets and IAM Roles

  1. Below you will find the complete Terraform file containing the AWS secrets and IAM roles. Save this file inside a directory named tf_aws_secrets_roles -
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "name" { default = "dynamic-aws-creds-vault-admin" }

terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}

provider "vault" {}

resource "vault_aws_secret_backend" "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  path       = "${var.name}-path"

  default_lease_ttl_seconds = "120"
  max_lease_ttl_seconds     = "240"
}

resource "vault_aws_secret_backend_role" "admin" {
  backend         = vault_aws_secret_backend.aws.path
  name            = "${var.name}-role"
  credential_type = "iam_user"

  policy_document = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:*", "ec2:*", "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

output "backend" {
  value = vault_aws_secret_backend.aws.path
}

output "role" {
  value = vault_aws_secret_backend_role.admin.name
}
  2. Apply the changes by running the following Terraform commands -

Initialize terraform workspace -

terraform init

Apply terraform changes -

terraform apply

You should see at least 2 resources (backend, role) being created -

vault_aws_secret_backend.aws: Creating...
vault_aws_secret_backend.aws: Creation complete after 0s [id=dynamic-aws-creds-vault-admin-path]
vault_aws_secret_backend_role.admin: Creating...
vault_aws_secret_backend_role.admin: Creation complete after 0s [id=dynamic-aws-creds-vault-admin-path/roles/dynamic-aws-creds-vault-admin-role]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs:

backend = "dynamic-aws-creds-vault-admin-path"
role = "dynamic-aws-creds-vault-admin-role"

aws backend role

Note - Read more on How to manage AWS IAM user, Roles and Policies with Terraform

5. Provision AWS S3 Bucket using secured and dynamically generated AWS Secrets

In the previous step, we generated the secrets backend along with the role. Now, in this step, we are going to set up an S3 bucket using the same dynamically generated secrets and roles.

For setting up the S3 bucket, we first need to create a separate directory, i.e. setup_s3_bucket, and create a file inside it.

But first, let's try to understand each Terraform resource block which we will need for setting up the S3 bucket.

5.1 Retrieve the terraform state file

In Step 4 we generated dynamic AWS secrets and roles. Let's retrieve that workspace's state file so that its outputs can be used for setting up the S3 bucket -

data "terraform_remote_state" "admin" {
  backend = "local"

  config = {
    path = var.path
  }
}
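The var.path referenced above must point at the state file written by the Step-4 workspace. A sketch of the companion variable declaration (the default below is an assumption that the tf_aws_secrets_roles directory from Step 4 sits next to this one) -

```hcl
variable "path" {
  description = "Path to the Terraform state file written by the Step-4 (vault admin) workspace"
  default     = "../tf_aws_secrets_roles/terraform.tfstate"
}
```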

5.2 Retrieve dynamic short-lived AWS credentials from Vault

Next, we retrieve the dynamic short-lived AWS credentials using the following Terraform data block -

data "vault_aws_access_credentials" "creds" {
  backend = data.terraform_remote_state.admin.outputs.backend
  role    = data.terraform_remote_state.admin.outputs.role
}

5.3 Setup S3 bucket using terraform aws_s3_bucket, aws_s3_bucket_object, aws_s3_bucket_public_access_block

Now we need 4 more blocks for setting up an S3 bucket -

  1. aws provider - supplies the AWS secrets (access key, secret key) and the region
  2. aws_s3_bucket - creates the bucket
  3. aws_s3_bucket_object - uploads objects into the bucket
  4. aws_s3_bucket_public_access_block - manages the bucket's public access settings

Apart from creating the S3 bucket, we are also uploading some test files (test1.txt, test2.txt) from the uploads directory -

provider "aws" {
  region     = var.region
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}

resource "aws_s3_bucket" "jhooq-s3-bucket" {
  bucket = "jhooq-s3-bucket"
  acl    = "private"
}

resource "aws_s3_bucket_object" "object1" {
  for_each = fileset("uploads/", "*")
  bucket   = aws_s3_bucket.jhooq-s3-bucket.id
  key      = each.value
  source   = "uploads/${each.value}"
}

resource "aws_s3_bucket_public_access_block" "app" {
  bucket = aws_s3_bucket.jhooq-s3-bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
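Note that on AWS provider v4 and later, aws_s3_bucket_object and the inline acl argument are deprecated. A roughly equivalent sketch with the newer resources would be -

```hcl
resource "aws_s3_bucket" "jhooq-s3-bucket" {
  bucket = "jhooq-s3-bucket"
}

# acl moved to its own resource in AWS provider v4+
resource "aws_s3_bucket_acl" "jhooq-s3-bucket-acl" {
  bucket = aws_s3_bucket.jhooq-s3-bucket.id
  acl    = "private"
}

# aws_s3_object replaces aws_s3_bucket_object
resource "aws_s3_object" "object1" {
  for_each = fileset("uploads/", "*")
  bucket   = aws_s3_bucket.jhooq-s3-bucket.id
  key      = each.value
  source   = "uploads/${each.value}"
}
```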

Let's apply the above Terraform configuration and create the S3 bucket. But before that, put this Terraform script inside a file in the directory aws_s3_bucket.

Run the following command to apply the above changes -

terraform init

Then run terraform apply

terraform apply

After running terraform apply you should see the following on your console -

Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket.jhooq-s3-bucket: Creating...
aws_s3_bucket.jhooq-s3-bucket: Creation complete after 2s [id=jhooq-s3-bucket]
aws_s3_bucket_object.object1["test2.txt"]: Creating...
aws_s3_bucket_object.object1["test1.txt"]: Creating...
aws_s3_bucket_object.object1["test2.txt"]: Creation complete after 1s [id=test2.txt]
aws_s3_bucket_object.object1["test1.txt"]: Creation complete after 1s [id=test1.txt]

Create S3 bucket using terraform and securely generated aws credentials

5.4 Verify S3 bucket

Now that we have successfully executed the Terraform script, let's go to the AWS Console and verify the S3 bucket.

On the AWS console you can simply search for S3 and look for the bucket which you have created. In my case the name of the bucket is - jhooq-s3-bucket.

S3 Bucket has been created using terraform with dynamically generated AWS secrets

6. Delete AWS S3 Bucket

After successfully testing and verifying the S3 bucket, let's destroy it using the terraform destroy command.

terraform destroy

It should destroy your S3 bucket along with the content present inside it -

Terraform will perform the following actions:

  # aws_s3_bucket.jhooq-s3-bucket will be destroyed
  - resource "aws_s3_bucket" "jhooq-s3-bucket" {
      - acl                         = "private" -> null
      - arn                         = "arn:aws:s3:::jhooq-s3-bucket" -> null
      - bucket                      = "jhooq-s3-bucket" -> null
      - bucket_domain_name          = "jhooq-s3-bucket.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "jhooq-s3-bucket.s3.eu-central-1.amazonaws.com" -> null
      - force_destroy               = false -> null
      - hosted_zone_id              = "Z21DNDUVLTQW6Q" -> null
      - id                          = "jhooq-s3-bucket" -> null
      - region                      = "eu-central-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {} -> null

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }

Terraform destroy S3 Bucket

7. Conclusion

This blog post highlighted how to secure your AWS secrets, which are among the most valuable pieces of information for maintaining your AWS infra. After reading this blog post you should be able to -

  1. Dynamically generate AWS secrets while using Terraform
  2. Integrate Terraform with AWS secrets and IAM policies
  3. Manage AWS secrets using HashiCorp Vault

Read More - Terragrunt -

  1. How to use Terragrunt?
