What is terraform provisioner?
Terraform provisioners are used to perform certain custom actions and tasks either on the local machine or on a remote machine.
The custom actions can vary in nature, for example -
- Running a custom shell script on the local machine
- Running a custom shell script on the remote machine
- Copying a file to the remote machine
Also there are two types of provisioners -
- Generic Provisioners (file, local-exec, and remote-exec)
- Vendor Provisioners (chef, habitat, puppet, salt-masterless)
Generic Provisioners - These are vendor independent and can be used with any cloud vendor (GCP, AWS, Azure).
Vendor Provisioners - These can only be used with their respective vendor. For example, the chef
provisioner can only be used with Chef for automating and provisioning the server configuration. (Note - the vendor provisioners were deprecated in Terraform 0.13.4 and removed in Terraform 0.15, so they are only available in older Terraform versions.)
Table of Contents
- file provisioner
- local-exec provisioner
- remote-exec provisioner
- chef provisioner
- habitat provisioner
- puppet provisioner
- salt-masterless provisioner
(Note - All provisioners should be used in moderation; it is not advisable to use provisioners in excess.)
1. file provisioner
As the name suggests, the file provisioner can be used for transferring and copying files from one machine to another.
Not only files, it can also be used for transferring/uploading directories.
When we talk about copying files or directories from one machine to another, the transfer has to be secure. The file provisioner supports the ssh and winrm connection types, which help you achieve a secure file transfer between the source machine and the destination machine.
Let us take an example to understand how to implement the Terraform file provisioner. The following code snippet shows -
- How to write your file provisioner
- How to specify source and destination for copying/transferring the file
1provisioner "file" {
2 source = "/home/rahul/Jhooq/keys/aws/test-file.txt"
3 destination = "/home/ubuntu/test-file.txt"
4}
In the above code snippet, we are trying to copy the file test-file.txt from its source /home/rahul/Jhooq/keys/aws/test-file.txt to its destination /home/ubuntu/test-file.txt.
Here is the complete Terraform script which demonstrates how to use the Terraform file provisioner -
1provider "aws" {
2 region = "eu-central-1"
3 access_key = "AKIATQ37NXBxxxxxxxxx"
4 secret_key = "JzZKiCia2vjbq4zGGGewdbOhnacmxxxxxxxxxxxx"
5
6}
7
8resource "aws_instance" "ec2_example" {
9
10 ami = "ami-0767046d1677be5a0"
11 instance_type = "t2.micro"
12 key_name= "aws_key"
13 vpc_security_group_ids = [aws_security_group.main.id]
14
15 provisioner "file" {
16 source = "/home/rahul/Jhooq/keys/aws/test-file.txt"
17 destination = "/home/ubuntu/test-file.txt"
18 }
19 connection {
20 type = "ssh"
21 host = self.public_ip
22 user = "ubuntu"
23 private_key = file("/home/rahul/Jhooq/keys/aws/aws_key")
24 timeout = "4m"
25 }
26}
27
28resource "aws_security_group" "main" {
29 egress = [
30 {
31 cidr_blocks = [ "0.0.0.0/0", ]
32 description = ""
33 from_port = 0
34 ipv6_cidr_blocks = []
35 prefix_list_ids = []
36 protocol = "-1"
37 security_groups = []
38 self = false
39 to_port = 0
40 }
41 ]
42 ingress = [
43 {
44 cidr_blocks = [ "0.0.0.0/0", ]
45 description = ""
46 from_port = 22
47 ipv6_cidr_blocks = []
48 prefix_list_ids = []
49 protocol = "tcp"
50 security_groups = []
51 self = false
52 to_port = 22
53 }
54 ]
55}
56
57resource "aws_key_pair" "deployer" {
58 key_name = "aws_key"
59 public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbvRN/gvQBhFe+dE8p3Q865T/xTKgjqTjj56p1IIKbq8SDyOybE8ia0rMPcBLAKds+wjePIYpTtRxT9UsUbZJTgF+SGSG2dC6+ohCQpi6F3xM7ryL9fy3BNCT5aPrwbR862jcOIfv7R1xVfH8OS0WZa8DpVy5kTeutsuH5FMAmEgba4KhYLTzIdhM7UKJvNoUMRBaxAqIAThqH9Vt/iR1WpXgazoPw6dyPssa7ye6tUPRipmPTZukfpxcPlsqytXWlXm7R89xAY9OXkdPPVsrQA0XFQnY8aFb9XaZP8cm7EOVRdxMsA1DyWMVZOTjhBwCHfEIGoePAS3jFMqQjGWQd rahul@rahul-HP-ZBook-15-G2"
60}
Here is one thing to note - You need to generate SSH keys to connect to your EC2 instance running in the AWS cloud. You can use a command such as ssh-keygen -t rsa -f aws_key to generate the key pair. You can also read this blog post - Terraform how to do SSH in AWS EC2 instance?
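Instead of pasting the public key inline as in the script above, you can also point the aws_key_pair resource at the .pub file produced by ssh-keygen. Here is a minimal sketch, assuming the key pair was generated at /home/rahul/Jhooq/keys/aws/aws_key (the path is an assumption):

resource "aws_key_pair" "deployer" {
  key_name   = "aws_key"
  # read the public key from disk instead of hard-coding it in the configuration
  public_key = file("/home/rahul/Jhooq/keys/aws/aws_key.pub")
}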
Supporting arguments for file provisioners
1. source - The source argument is used to specify the location from where you want to pick the file. The source location can be relative to your project structure.
Here is an example where I have used a relative path for the source argument -
1provisioner "file" {
2 source = "../../../Jhooq/keys/aws/test-file.txt"
3 destination = "/home/ubuntu/test-file.txt"
4}
2. content - The content argument is useful when you do not want to copy or transfer a file but instead only want to copy some content or a string to the destination file.
Here is an example of a content resource argument -
1provisioner "file" {
2 content = "I want to copy this string to the destination file server.txt"
3 destination = "/home/ubuntu/server.txt"
4}
The above provisioner script will copy the string "I want to copy this string to the destination file server.txt" to the destination file /home/ubuntu/server.txt.
3. destination - As the name suggests, you need to input the final destination path where you want your file to be.
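The examples in this section use an ssh connection block inside the resource. If the destination machine is a Windows host, a winrm connection block can be used instead. Here is a rough sketch; the destination path, user, and password variable are placeholders and not taken from the script above:

provisioner "file" {
  source      = "test-file.txt"
  destination = "C:/App/test-file.txt"
}

connection {
  # winrm is typically used for Windows hosts instead of ssh
  type     = "winrm"
  host     = self.public_ip
  user     = "Administrator"
  password = var.admin_password
}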
Read More - How to Copy files to EC2 and S3 bucket using Terraform
2. local-exec provisioner
The next provisioner we are going to talk about is the local-exec provisioner. Basically, this provisioner is used when you want to perform some tasks on the local machine where you have installed Terraform.
So the local-exec provisioner is never used to perform any kind of task on a remote machine. It is always used to perform operations on your local machine.
Example - Consider the following example where we are trying to create a file hello-jhooq.txt on the local machine -
1provisioner "local-exec" {
2 command = "touch hello-jhooq.txt"
3}
In the command section, we can write a bash script. In the above example, I am trying to create a hello-jhooq.txt file on the local machine.
Here is the complete terraform script for the above example -
1provider "aws" {
2 region = "eu-central-1"
3 access_key = "AKIATQ37NXBxxxxxxxxx"
4 secret_key = "JzZKiCia2vjbq4zGGGewdbOhnacmxxxxxxxxxxxx"
5
6}
7
8resource "aws_instance" "ec2_example" {
9
10 ami = "ami-0767046d1677be5a0"
11 instance_type = "t2.micro"
12 tags = {
13 Name = "Terraform EC2"
14 }
15
16 provisioner "local-exec" {
17 command = "touch hello-jhooq.txt"
18 }
19}
Supporting arguments for local-exec provisioners
1. command - Here are the key facts about the command argument -
- This is a mandatory argument which you always need to pass whenever you are implementing the local-exec provisioner.
- Always consider command as a shell script executor, because whatever you pass in command will be executed as a bash shell script.
- You can even mention the relative path of your shell script and pass it to command.
1provisioner "local-exec" {
2 command = "touch hello-jhooq.txt"
3}
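If your commands live in a shell script, command can point at that script instead of listing the commands inline. A minimal sketch, assuming a hypothetical scripts/bootstrap.sh kept next to your Terraform configuration:

provisioner "local-exec" {
  # run a local shell script via its relative path (the script name is only an example)
  command = "bash ./scripts/bootstrap.sh"
}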
2. working_dir - Here are the key facts about the working_dir argument -
- It is an optional argument and you do not necessarily need to pass it along with the command argument.
- It is a supporting argument for command, because once you specify working_dir you are explicitly telling Terraform to execute the command at that particular location.
- You can mention a relative path for working_dir.
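Here is a short sketch of working_dir in action; the directory name is only an assumed example:

provisioner "local-exec" {
  # the command runs inside ./scripts instead of the current working directory
  working_dir = "./scripts"
  command     = "touch hello-jhooq.txt"
}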
3. interpreter - With the help of interpreter you can explicitly specify in which environment (bash, PowerShell, perl, etc.) the command should be executed.
- It is an optional argument.
- If you do not specify the interpreter argument, a default is chosen based on the operating system.
Example 1: Here I am trying to specify the interpreter as perl, so anything which I mention inside the command argument will be executed as a perl command -
1resource "null_resource" "example1" {
2 provisioner "local-exec" {
3 command = "open WFH, '>hello-world.txt' and print WFH scalar localtime"
4 interpreter = ["perl", "-e"]
5 }
6}
Example 2: In this example I will be using the PowerShell interpreter to write a string to a file -
1resource "null_resource" "example2" {
2 provisioner "local-exec" {
3 command = "This will be written to the text file> completed.txt"
4 interpreter = ["PowerShell", "-Command"]
5 }
6}
4. environment - This is again an optional argument that can be passed alongside the command argument.
- With the help of environment you can define or set environment variables that are then accessible inside the command during your Terraform execution.
- environment arguments are key-value pairs and you can define as many variables as you want.
Here is an example of the environment argument -
provisioner "local-exec" {
  command = "echo $VAR1 $VAR2 $VAR3 >> my_vars.txt"

  environment = {
    VAR1 = "my-value-1"
    VAR2 = "my-value-2"
    VAR3 = "my-value-3"
  }
}
3. remote-exec provisioner
As the name suggests, remote-exec always works on the remote machine. With the help of remote-exec you can specify the commands or shell scripts that you want to execute on the remote machine.
As with the file provisioner, the ssh and winrm connection types are supported, so all the communication and file transfer here is also done securely.
Let us take an example of how to implement the remote-exec provisioner -
1provisioner "remote-exec" {
2 inline = [
3 "touch hello.txt",
4 "echo helloworld remote provisioner >> hello.txt",
5 ]
6}
In the above example -
- First we are going to create a file named hello.txt
- Then we are going to write the message helloworld remote provisioner inside the hello.txt file
- Everything will happen on the remote machine
Here is the complete example of remote-exec -
1provider "aws" {
2 region = "eu-central-1"
3 access_key = "AKIATQ37NXBxxxxxxxxx"
4 secret_key = "JzZKiCia2vjbq4zGGGewdbOhnacmxxxxxxxxxxxx"
5
6}
7
8resource "aws_instance" "ec2_example" {
9
10 ami = "ami-0767046d1677be5a0"
11 instance_type = "t2.micro"
12 key_name= "aws_key"
13 vpc_security_group_ids = [aws_security_group.main.id]
14
15 provisioner "remote-exec" {
16 inline = [
17 "touch hello.txt",
18 "echo helloworld remote provisioner >> hello.txt",
19 ]
20 }
21 connection {
22 type = "ssh"
23 host = self.public_ip
24 user = "ubuntu"
25 private_key = file("/home/rahul/Jhooq/keys/aws/aws_key")
26 timeout = "4m"
27 }
28}
29
30resource "aws_security_group" "main" {
31 egress = [
32 {
33 cidr_blocks = [ "0.0.0.0/0", ]
34 description = ""
35 from_port = 0
36 ipv6_cidr_blocks = []
37 prefix_list_ids = []
38 protocol = "-1"
39 security_groups = []
40 self = false
41 to_port = 0
42 }
43 ]
44 ingress = [
45 {
46 cidr_blocks = [ "0.0.0.0/0", ]
47 description = ""
48 from_port = 22
49 ipv6_cidr_blocks = []
50 prefix_list_ids = []
51 protocol = "tcp"
52 security_groups = []
53 self = false
54 to_port = 22
55 }
56 ]
57}
58
59
60resource "aws_key_pair" "deployer" {
61 key_name = "aws_key"
62 public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbvRN/gvQBhFe+dE8p3Q865T/xTKgjqTjj56p1IIKbq8SDyOybE8ia0rMPcBLAKds+wjePIYpTtRxT9UsUbZJTgF+SGSG2dC6+ohCQpi6F3xM7ryL9fy3BNCT5aPrwbR862jcOIfv7R1xVfH8OS0WZa8DpVy5kTeutsuH5FMAmEgba4KhYLTzIdhM7UKJvNoUMRBaxAqIAThqH9Vt/iR1WpXgazoPw6dyPssa7ye6tUPRipmPTZukfpxcPlsqytXWlXm7R89xAY9OXkdPPVsrQA0XFQnY8aFb9XaZP8cm7EOVRdxMsA1DyWMVZOTjhBwCHfEIGoePAS3jFMqQjGWQd rahul@rahul-HP-ZBook-15-G2"
63}
Supporting arguments for remote-exec provisioners
1. inline - With the help of the inline argument you can specify multiple commands which you want to execute in an ordered fashion.
Here is an example in which I have added two separate commands -
1provisioner "remote-exec" {
2 inline = [
3 "touch hello.txt",
4 "echo helloworld remote provisioner >> hello.txt",
5 ]
6}
2. script - It can be used to copy a script from the local machine to the remote machine and execute it there, and the path can be relative to your project.
With script you cannot specify multiple scripts; you can only mention one script which needs to be copied to the remote machine.
3. scripts - Here you can specify multiple local scripts which you want to copy or transfer to the remote machine and execute over there.
Always remember the order of the files will not change; they are going to execute in the same order you have mentioned them (a combined sketch of script and scripts follows below).
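Here is a minimal sketch of both arguments; the script file names are hypothetical and not taken from the article:

# copy and run a single local script on the remote machine
provisioner "remote-exec" {
  script = "./scripts/install-nginx.sh"
}

# copy and run several local scripts, executed in the order listed
provisioner "remote-exec" {
  scripts = [
    "./scripts/install-nginx.sh",
    "./scripts/configure-firewall.sh",
  ]
}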
4. chef provisioner
Now we are talking about the vendor-specific provisioners, and the first one on our list is chef.
Chef is an open-source infrastructure automation tool for managing and configuring servers remotely. Terraform has really good integration with Chef.
If you are using Chef for managing your infrastructure then you can take advantage of the chef provisioner, but always use provisioners in moderation and only when necessary.
As with Chef we are handling remote servers, so every communication happens over either ssh or winrm.
Here is an example code snippet of the chef provisioner -
1resource "aws_instance" "chef_provisioner_example" {
2
3 provisioner "chef" {
4 attributes_json = <<EOF
5 {
6 "key": "value",
7 "app": {
8 "cluster1": {
9 "nodes": [
10 "webapp1",
11 "webapp2"
12 ]
13 }
14 }
15 }
16 EOF
17
18 environment = "_default"
19 client_options = ["chef_license 'accept'"]
20 run_list = ["cookbook::recipe"]
21 node_name = "webapp1"
22 secret_key = "${file("../encrypted_data_bag_secret")}"
23 server_url = "https://chef.company.com/organizations/org1"
24 recreate_client = true
25 user_name = "bork"
26 user_key = "${file("../bork.pem")}"
27 version = "15.10.13"
28 # If you have a self signed cert on your chef server change this to :verify_none
29 ssl_verify_mode = ":verify_peer"
30 }
31}
Note: Reference taken from terraform.io
5. habitat provisioner
Chef Habitat is also an open-source automation framework which can help you define, package, and deliver an application to any environment of your choice, regardless of the operating system.
Terraform provides the habitat provisioner for this, and it also uses ssh for communication and file transfer.
Here is an example code snippet for the same -
1resource "aws_instance" "habitat_provisioner" {
2 count = 1
3
4 provisioner "habitat" {
5 peers = [aws_instance.habitat_provisioner[0].private_ip]
6 use_sudo = true
7 service_type = "systemd"
8 accept_license = true
9
10 service {
11 name = "core/redis"
12 topology = "leader"
13 user_toml = file("conf/redis.toml")
14 }
15 }
16}
Note: Reference taken from terraform.io
6. puppet provisioner
With the help of the puppet provisioner, you can install and configure the Puppet agent on the remote machine, and it also supports ssh and winrm.
But there are a few things which you need to consider before using the puppet provisioner -
- You should make sure that ssh and cURL are available on the remote host
- In case you are using winrm, you should make sure that PowerShell 2.0 is available on the remote host
- You need to install Bolt on your workstation or local machine before you start using the puppet provisioner
Here is an example of a puppet provisioner -
1resource "aws_instance" "web" {
2 provisioner "puppet" {
3 server = aws_instance.puppetmaster.public_dns
4 server_user = "ubuntu"
5 extension_requests = {
6 pp_role = "webserver"
7 }
8 }
9}
Note: Reference taken from terraform.io
7. salt-masterless provisioner
salt-masterless provisions the machine built by Terraform using Salt states. I have not used the salt-masterless provisioner myself, so I cannot add more details here.
But if you are interested, here is an example code snippet that I found on terraform.io -
provisioner "salt-masterless" {
  local_state_tree = "/srv/salt"
}
Read More - How to use Terragrunt?
Posts in this Series
- Securing Sensitive Data in Terraform
- Boost Your AWS Security with Terraform : A Step-by-Step Guide
- How to Load Input Data from a File in Terraform?
- Can Terraform be used to provision on-premises infrastructure?
- Fixing the Terraform Error creating IAM Role. MalformedPolicyDocument Has prohibited field Resource
- In terraform how to handle null value with default value?
- Terraform use module output variables as inputs for another module?
- How to Reference a Resource Created by a Terraform Module?
- Understanding Terraform Escape Sequences
- How to fix private-dns-enabled cannot be set because there is already a conflicting DNS domain?
- Use Terraform to manage AWS IAM Policies, Roles and Users
- How to split Your Terraform main.tf File into Multiple Files
- How to use Terraform variable within variable
- Mastering the Terraform Lookup Function for Dynamic Keys
- Copy files to EC2 and S3 bucket using Terraform
- Troubleshooting Error creating EC2 Subnet InvalidSubnet Range The CIDR is Invalid
- Troubleshooting InvalidParameter Security group and subnet belong to different networks
- Managing strings in Terraform: A comprehensive guide
- How to use terraform depends_on meta argument?
- What is user_data in Terraform?
- Why you should not store terraform state file(.tfstate) inside Git Repository?
- How to import existing resource using terraform import comand?
- Terraform - A detailed guide on setting up ALB(Application Load Balancer) and SSL?
- Testing Infrastructure as Code with Terraform?
- How to remove a resource from Terraform state?
- What is Terraform null Resource?
- In terraform how to skip creation of resource if the resource already exist?
- How to setup Virtual machine on Google Cloud Platform
- How to use Terraform locals?
- Terraform Guide - Docker Containers & AWS ECR(elastic container registry)?
- How to generate SSH key in Terraform using tls_private_key?
- How to fix-Terraform Error acquiring the state lock ConditionalCheckFiledException?
- Terraform Template - A complete guide?
- How to use Terragrunt?
- Terraform and AWS Multi account Setup?
- Terraform and AWS credentials handling?
- How to fix-error configuring S3 Backend no valid credential sources for S3 Backend found?
- Terraform state locking using DynamoDB (aws_dynamodb_table)?
- Managing Terraform states?
- Securing AWS secrets using HashiCorp Vault with Terraform?
- How to use Workspaces in Terraform?
- How to run specific terraform resource, module, target?
- How Terraform modules works?
- Secure AWS EC2s & GCP VMs with Terraform SSH Keys!
- What is terraform provisioner?
- Is terraform destroy needed before terraform apply?
- How to fix terraform error Your query returned no results. Please change your search criteria and try again?
- How to use Terraform Data sources?
- How to use Terraform resource meta arguments?
- How to use Terraform Dynamic blocks?
- Terraform - How to nuke AWS resources and save additional AWS infrastructure cost?
- Understanding terraform count, for_each and for loop?
- How to use Terraform output values?
- How to fix error configuring Terraform AWS Provider error validating provider credentials error calling sts GetCallerIdentity SignatureDoesNotMatch?
- How to fix Invalid function argument on line in provider credentials file google Invalid value for path parameter no file exists
- How to fix error value for undeclared variable a variable named was assigned on the command line?
- What is variable.tf and terraform.tfvars?
- How to use Terraform Variables - Locals,Input,Output
- Terraform create EC2 Instance on AWS
- How to fix Error creating service account googleapi Error 403 Identity and Access Management (IAM) API has not been used in project before or it is disabled
- Install terraform on Ubuntu 20.04, CentOS 8, MacOS, Windows 10, Fedora 33, Red hat 8 and Solaris 11