Copy files to EC2 and S3 bucket using Terraform

This blog post will guide you on how to use Terraform provisioners to copy/upload files to an EC2 instance as well as an S3 bucket.

Table of Contents

  1. Prerequisites
  2. Set up AWS credentials in the Terraform file
  3. Set up an EC2 instance, a security group, and SSH key pair resources
  4. Use the file provisioner to upload the file to EC2
  5. Upload files to the S3 bucket using Terraform
  6. Initialize and apply the Terraform configuration
  7. Conclusion

Let's take a look at the prerequisites:

1. Prerequisites

  1. AWS Account - You must have a registered AWS account with active billing. If you are working in a corporate AWS environment, you must have the necessary permissions to create and manage EC2 instances and S3 buckets.
  2. Terraform installed - You should have Terraform installed on your working machine. Please refer to this guide on how to install Terraform.
  3. AWS CLI installed - The last thing you need is the AWS CLI installed on your working machine. Although the AWS CLI is not mandatory, it is recommended for troubleshooting as well as for using the credentials file inside the Terraform file. You can verify both installations as shown below.
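
A quick sanity check for both tools - each command simply prints the installed version:

# Verify the installations
terraform version
aws --version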

2. Set up AWS credentials in the Terraform file

After completing the prerequisites, you must set up the AWS credentials correctly so that your Terraform code can authenticate and communicate with your AWS environment.

There are a few ways to set up your AWS credentials inside your Terraform file. Please choose one of the following:

  1. Using the credentials file - To use the AWS credentials file inside your Terraform file, you must install and configure the AWS CLI (aws configure) beforehand, since that is what creates the credentials file.

Here is a Terraform code snippet:

# Note - Please replace the path with your credentials file

provider "aws" {
  region                   = "eu-central-1"
  shared_credentials_files = ["/<path>/<to-aws-credentials>/.aws/credentials"]
}
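
For reference, the credentials file generated by running aws configure follows the standard INI layout (the values below are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id     = your_access_key
aws_secret_access_key = your_secret_key
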
  2. Hard-code the access key and secret access key - The second way is to hard-code the access key and secret key, but I would not recommend this approach because your AWS credentials might end up in the version control system.

Here is the example code snippet -

# Replace the values with your AWS credentials

provider "aws" {
  region     = "eu-central-1"
  access_key = "<PLACE-YOUR-ACCESS-KEY>"
  secret_key = "<PLACE-YOUR-SECRET-KEY>"
}
  3. Export AWS credentials as environment variables - The third way is to export the AWS credentials as environment variables.

Use the following commands to export the AWS credentials -

# Replace the values with your AWS credentials

export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
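
With this approach, the provider block no longer needs any credential arguments. The AWS provider also reads the region from the environment, so you can export it the same way:

# Optional - supply the region via the environment as well
export AWS_REGION="eu-central-1"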

3. Set up an EC2 instance, a security group, and SSH key pair resources

Let's set up the EC2 instance along with the security group so that the same EC2 instance can be used later for copying the files.

Here is the Terraform code for the same:

  1. Step 1 - Resource block for an EC2 instance
  2. Step 2 - Resource block for the security group
  3. Step 3 - Set up the SSH key pair
# Step 1 - Resource block for the EC2 instance
resource "aws_instance" "ec2_example" {

  ami                    = "ami-0767046d1677be5a0"
  instance_type          = "t2.micro"
  key_name               = "aws_key"
  vpc_security_group_ids = [aws_security_group.main.id]
}

# Step 2 - Resource block for the security group
resource "aws_security_group" "main" {
  # Allow all outbound traffic
  egress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = ""
      from_port        = 0
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "-1"
      security_groups  = []
      self             = false
      to_port          = 0
    }
  ]
  # Allow inbound SSH (port 22) from anywhere
  ingress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = ""
      from_port        = 22
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "tcp"
      security_groups  = []
      self             = false
      to_port          = 22
    }
  ]
}

# Step 3 - Set up the SSH key pair
# To generate an SSH key, refer to - https://jhooq.com/terraform-generate-ssh-key
resource "aws_key_pair" "deployer" {
  key_name   = "aws_key"
  public_key = "<PLACE-YOUR-PUBLIC-KEY>"
}
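
To easily find the instance's public IP after the apply, you can optionally add an output block to the same stack (this output is my own addition, not part of the original configuration):

# Optional - print the instance's public IP after terraform apply
output "instance_public_ip" {
  value = aws_instance.ec2_example.public_ip
}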

4. Use the file provisioner to upload the file to EC2

In the previous steps, we set up the EC2 instance. Now we need to use the file provisioner to copy/upload the files to the EC2 instance.

Let's update the code snippet from Step 3 and add the file provisioner to the same Terraform code stack.

Here is the code snippet:

# Step 1 - Resource block for the EC2 instance
# File provisioner - added to the resource block
# To use the file provisioner, you need to specify the following:
#   source      - the local file to be copied
#   destination - the path on the instance where the file should be copied

resource "aws_instance" "ec2_example" {

  ami                    = "ami-0767046d1677be5a0"
  instance_type          = "t2.micro"
  key_name               = "aws_key"
  vpc_security_group_ids = [aws_security_group.main.id]

  # File provisioner with source and destination
  provisioner "file" {
    source      = "/home/rahul/Jhooq/keys/aws/test-file.txt"
    destination = "/home/ubuntu/test-file.txt"
  }

  # A connection block is necessary for the file provisioner to work
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    private_key = file("/home/rahul/Jhooq/keys/aws/aws_key")
    timeout     = "4m"
  }
}

# Step 2 - Resource block for the security group
resource "aws_security_group" "main" {
  # Allow all outbound traffic
  egress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = ""
      from_port        = 0
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "-1"
      security_groups  = []
      self             = false
      to_port          = 0
    }
  ]
  # Allow inbound SSH (port 22) from anywhere
  ingress = [
    {
      cidr_blocks      = ["0.0.0.0/0"]
      description      = ""
      from_port        = 22
      ipv6_cidr_blocks = []
      prefix_list_ids  = []
      protocol         = "tcp"
      security_groups  = []
      self             = false
      to_port          = 22
    }
  ]
}

# Step 3 - Set up the SSH key pair
# To generate an SSH key, refer to - https://jhooq.com/terraform-generate-ssh-key
resource "aws_key_pair" "deployer" {
  key_name   = "aws_key"
  public_key = "<PLACE-YOUR-PUBLIC-KEY>"
}
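
If you want to confirm during the apply that the file actually landed on the instance, you can add a remote-exec provisioner next to the file provisioner. This verification step is my own addition, not part of the original stack; it goes inside the aws_instance resource and reuses the connection block shown above:

# Optional - verify the uploaded file on the instance
provisioner "remote-exec" {
  inline = [
    "ls -l /home/ubuntu/test-file.txt"
  ]
}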

5. Upload files to the S3 bucket using Terraform

Uploading files to an S3 bucket is relatively easy compared to an EC2 instance. There are a couple of key parameters you need to keep in mind while uploading files to an S3 bucket:

  1. key - the path (object key) under which the file will be stored in the bucket
  2. source - the path of the local file you want to upload

Here is the code snippet -

# Note - on AWS provider v4.0 and later, the same resource is available as "aws_s3_object"
resource "aws_s3_bucket_object" "example" {
  bucket       = aws_s3_bucket.example.bucket
  key          = "path/to/remote/file"
  source       = "path/to/local/file"
  etag         = filemd5("path/to/local/file") # re-uploads the object when the local file changes
  content_type = "text/plain"
}
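
The snippet references aws_s3_bucket.example, which is assumed to already exist in your stack. A minimal sketch of that bucket resource (the bucket name is a placeholder and must be globally unique):

# Assumed bucket resource referenced above
resource "aws_s3_bucket" "example" {
  bucket = "<your-unique-bucket-name>"
}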

5.1 Uploading multiple files to an S3 bucket

Taking the previous example where we uploaded only a single file to the S3 bucket, let's modify the same code to upload multiple files to the S3 bucket.

for_each - For uploading more than one file, we must use the for_each meta-argument inside the aws_s3_bucket_object resource block.

resource "aws_s3_bucket_object" "object1" {
  # Upload every file found in the local uploads/ directory
  for_each = fileset("uploads/", "*")

  bucket = aws_s3_bucket.example.id
  key    = each.value
  source = "uploads/${each.value}"
  etag   = filemd5("uploads/${each.value}")
}
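
Note that the "*" pattern only matches files directly under uploads/. If you also want to pick up files in subdirectories, fileset supports the recursive "**" pattern; replace the for_each line with:

# Recurse into subdirectories of uploads/ as well
for_each = fileset("uploads/", "**")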

6. Initialize and apply the Terraform configuration

Once you have completed your Terraform stack, it is time to initialize and apply the Terraform configuration.

Use the following Terraform commands from the terminal:

# Initialize terraform
terraform init

# Plan your changes
terraform plan

# Apply the changes
terraform apply
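
Once you are done testing, you can tear down the EC2 instance and the S3 bucket so they do not keep accruing charges:

# Clean up all resources created by this configuration
terraform destroy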

7. Conclusion

By following the above steps, you should be able to copy and upload files to EC2 and AWS S3 buckets using Terraform. This method is particularly useful for automating the deployment of static assets and configuration files in cloud environments.
