Launching Infrastructure using EFS via Terraform

B.V.Rohan Bharadwaj
7 min read · Jul 21, 2020

In this article, we will work with Elastic File System (EFS) as the storage layer of our infrastructure. In a previous article, we used Elastic Block Storage (EBS) for the same task.

AWS’s Storage Services:

Amazon Web Services offers three types of storage, each with its own use cases:

i) Block Storage: Elastic Block Storage (EBS)

ii) Object Storage: Simple Storage Service (S3)

iii) File Storage: Elastic File System (EFS)

Why pick EFS instead of EBS or S3?

EBS volumes are tied to a single Availability Zone, and a volume can be attached to only one instance at a time, so sharing data between instances isn't really an option here.

That brings us to S3, which can do things EBS can't, like sharing data between instances and being accessible across regions. Its disadvantage is that objects uploaded to an S3 bucket cannot be edited in place; they have to be replaced entirely. That makes S3 best suited for serving static content rather than data we actively work on.

So our last option is EFS, which is simple and elegant because it does what S3 and EBS can't: it can be shared by multiple instances across Availability Zones, and it is elastic enough to grow to petabytes of data without provisioning any storage in advance.

It works through a simple mount onto a directory in the instance, where the data we work on lives. Any change made to that content is reflected on the other instances too, since every instance mounts the same file system over the Network File System (NFS) protocol, which runs on port 2049.

Task Objectives:

1. Create a key pair and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one volume using the EFS service and attach it to your VPC, then mount that volume onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Procedure:

1] As Terraform is a plugin-based application, we have to install the plugins for the cloud provider we work with. This is done via HCL code in a file with a ".tf" extension.

provider "aws" {
region = "ap-south-1"
profile = " <profile_name> "
}

Start off by creating an IAM user and downloading its access key and secret access key.

aws configure --profile "<profile_name>"

Once you run the above command, you will be prompted to enter the Access Key ID, the Secret Access Key, a default region name, and a default output format. Enter the details.

Next, initialize the required plugins by entering:

terraform init 
Initialization is done.
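
Optionally, if you are on Terraform 0.13 or newer, the AWS provider can also be pinned to a version range so that terraform init always fetches a compatible release. This block is an optional addition of mine, not part of the original setup, and the version constraint shown is only an example:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # example constraint; any provider release that supports the resources below will do
      version = "~> 2.70"
    }
  }
}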

2] Generating a Key Pair:

resource "tls_private_key" "<key_name>" {
algorithm = "RSA"
}
resource "aws_key_pair" "generated_key" {
key_name = "<key_name>"
public_key = "${tls_private_key.<key_name>.public_key_openssh}"
depends_on = [
tls_private_key.<key_name>
]
}
resource "local_file" "key" {
content = "${tls_private_key.<key_name>.private_key_pem}"
filename = "<key_name>.pem"
depends_on = [
tls_private_key.<key_name>
]
}

Once that's run, a key pair is generated and the private key is saved into the working directory with the assigned name and a ".pem" extension.

My key is called t2key.
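
One thing to note: the security group, instance, and mount target below reference a VPC (aws_vpc.my_vpc) and a subnet (aws_subnet.public) that are not shown in this article. A minimal sketch of what those resources might look like is given here; the CIDR ranges and Availability Zone are placeholders of my own, not the original values, and a complete setup would also need an internet gateway and a route table for the subnet:

resource "aws_vpc" "my_vpc" {
  cidr_block           = "10.0.0.0/16"   # placeholder CIDR
  enable_dns_support   = true
  enable_dns_hostnames = true            # needed so the EFS DNS name resolves inside the VPC
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = "10.0.1.0/24"   # placeholder CIDR
  availability_zone       = "ap-south-1a"   # matches the provider region used above
  map_public_ip_on_launch = true            # the instance needs a public IP for SSH provisioning
}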

3] Creating a Security Group:

A security group is essential for an instance, as it acts like a virtual firewall that controls all incoming and outgoing traffic.

resource "aws_security_group" "sg" {
name = "sg"
description = "This firewall allows SSH,HTTP and NFS"
vpc_id = "${aws_vpc.my_vpc.id}"

ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
description = "NFS"
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}

tags = {
Name = "sg"
}
}

4] Launching an EC2 Instance:

resource "aws_instance" "<instance_name>" {
ami = "<ami>"
instance_type = "t2.micro"
key_name = "${aws_key_pair.generated_key.key_name}"
vpc_security_group_ids = [ "${aws_security_group.sg.id}" ]
subnet_id = "${aws_subnet.public.id}"

connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.<key>.private_key_pem
host = aws_instance.<instance_name>.public_ip
}

provisioner "remote-exec" {
inline = [
"sudo yum update -y",
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}

tags = {
Name = "<instance_name>"
}
}

In our case, we need the httpd web server and git on the instance. We declare these commands in a remote-exec provisioner in Terraform, which connects to the instance over SSH and executes the listed commands, installing all the required tools.
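
As a small optional addition (not in the original code), an output block can print the instance's public IP after terraform apply, which makes it easy to check the web server in a browser. The instance is referred to as aws_instance.ec2 in the later steps, so the sketch uses that name:

output "instance_public_ip" {
  value = aws_instance.ec2.public_ip   # "ec2" is the instance resource name used in the later steps
}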

5] Creating the EFS File System:

resource "aws_efs_file_system" "efs" {
depends_on = [ aws_security_group.sg, aws_instance.ec2, ]
creation_token = "efs"

tags = {
Name = "my_efs"
}
}
resource "aws_efs_mount_target" "efsmount" {
file_system_id = aws_efs_file_system.efs.id
subnet_id = aws_subnet.public.id
security_groups = [aws_security_group.sg.id]
depends_on = [ aws_efs_file_system.efs,]
}

6] Mounting the EFS onto the /var/www/html directory:

resource "null_resource" "nulre" {
depends_on = [ aws_efs_mount_target.efsmount, ]

connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.t2key.private_key_pem
host = aws_instance.ec2.public_ip
}

provisioner "remote-exec" {
inline = [
"sudo echo ${aws_efs_file_system.efs.dns_name}:/var/www/html efs defaults,_netdev 0 0 >> sudo /etc/fstab",
"sudo mount ${aws_efs_file_system.efs.dns_name}:/ /var/www/html",
"sudo curl https://raw.githubusercontent.com/rohan6820/SampleFile/blob/master/page.html > page.html",
"sudo cp page.html /var/www/html/",
]
}
}

7] Creating an S3 Bucket:

resource "aws_s3_bucket" "my_s3" {
depends_on = [ null_resource.nulre, ]
bucket = "t2-my-s3b"
force_destroy = true
acl = "public-read"

}
locals {
s3_origin_id = "aws_s3_bucket.my_s3.id"
}

To create an S3 bucket, we use the "aws_s3_bucket" resource. Make sure the bucket name is globally unique, since it becomes part of the public address used to share the data. The Access Control List (or "acl" for short) is important because it determines which users can access the bucket and its contents, and what kind of access they get. In my case, I set it to public-read.

8] Uploading content to our S3 Bucket:

resource "aws_s3_bucket_object" "myobj" {
depends_on = [ aws_s3_bucket.my_s3,
null_resource.nulre,
]
bucket = aws_s3_bucket.my_s3.id
key = "one"
source = "E:/proj/AWS/awstask2/izuku.jpg"
etag = "E:/proj/AWS/awstask2/izuku.jpg"
acl = "public-read"
content_type = "image/jpg"
}

9] Creating a CloudFront Distribution:

resource "aws_cloudfront_origin_access_identity" "o" {
comment = "any comment"
}
resource "aws_cloudfront_distribution" "task_2_s3_distribution" {
origin {
domain_name = aws_s3_bucket.my_s3.bucket_regional_domain_name
origin_id = local.s3_origin_id
s3_origin_config {
origin_access_identity = aws_cloudfront_origin_access_identity.o.cloudfront_access_identity_path
}
}
enabled = true
is_ipv6_enabled = true
comment = "Some comment"
default_root_object = "izuku.jpg"
logging_config {
include_cookies = false
bucket = aws_s3_bucket.my_s3.bucket_domain_name

}
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
# Cache behavior with precedence 0
ordered_cache_behavior {
path_pattern = "/content/*"
allowed_methods = ["GET", "HEAD", "OPTIONS"]
cached_methods = ["GET", "HEAD", "OPTIONS"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
min_ttl = 0
default_ttl = 86400
max_ttl = 31536000
compress = true
viewer_protocol_policy = "redirect-to-https"
}
price_class = "PriceClass_200"
restrictions {
geo_restriction {
restriction_type = "whitelist"
locations = ["US", "IN","CA", "GB", "DE"]
}
}
tags = {
Environment = "production"
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
To keep things simple, I named it web.
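
Similarly, an optional output (again, an addition of mine rather than part of the original code) exposes the distribution's domain name, which is the CloudFront URL written into page.html in the next step:

output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.task_2_s3_distribution.domain_name
}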

10] Updating the page.html file:

resource "null_resource" "nulre2" {
depends_on = [ aws_cloudfront_distribution.task_2_s3_distribution, ]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.t2key.private_key_pem
host = aws_instance.ec2.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo su << EOF",
"echo \"<img src='https://${aws_cloudfront_distribution.task_2_s3_distribution.domain_name}/${aws_s3_bucket_object.myobj.key }'>\" >> /var/www/html/page.html",
"EOF"
]
}
}

And that’s it!

All that's left is to bring up the whole infrastructure by applying the configuration:

terraform apply 
I ran into an issue midway, so I applied again right after a small fix.

Once the work with the infrastructure is done we can destroy it by entering:

terraform destroy
There, it's destroyed.

And that's how easy it is to launch infrastructure using EFS via Terraform!

Thank you for your time!
