
Release with a Pipeline: Continuous Delivery to AWS with GitHub Actions


This is the final part of a three-part article about a Web Application Project. The first part covered building a private infrastructure, and the second covered building a deployment pipeline with AWS CodePipeline, AWS' cloud-native continuous delivery service. In this part, we finalize the infrastructure so it is accessible on a public domain and build a continuous deployment pipeline using a third-party CD tool, GitHub Actions.

Starting from the private infrastructure built previously, we will update the S3 bucket policy with a statement that allows the CloudFront resource to access the bucket. As a best practice, this statement will be added to the infrastructure's Terraform script so it is reusable when deploying across environments or accounts. Before it can be added, however, the CloudFront Origin Access Control (OAC) resource has to be created first.

{
   "Sid": "AllowCloudFrontServicePrincipalReadWrite",
   "Effect": "Allow",
   "Principal": {
       "Service": "cloudfront.amazonaws.com"
   },
   "Action": [
       "s3:GetObject",
       "s3:PutObject"
   ],
   "Resource": "arn:aws:s3:::s3-reactapp-bucket/*",
   "Condition": {
       "StringEquals": {
           "AWS:SourceArn": "arn:aws:cloudfront::123456789012
:distribution/A1BCDEFGH23IJ"
       }
   }
}

Amazon CloudFront will sit in front of the website endpoint, using it as its origin. Its main purpose is to reduce latency when serving the web application, since it caches resources at the edge locations of the AWS global network. Once the distribution is in place, an A record will be added to Route 53 to direct traffic to Amazon CloudFront. We will again use ACM to issue the web application's public certificate. The public infrastructure to be built is shown below.

[Diagram: public infrastructure with Route 53, ACM, CloudFront, and S3]
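
The certificate itself is not built by this article's modules; it is referenced later as var.ACMcertificateARN. If you still need to issue one, a minimal sketch is shown below, keeping in mind that certificates used by CloudFront must be requested in us-east-1 (the domain name is a placeholder).

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"   # CloudFront only accepts ACM certificates from us-east-1
}

resource "aws_acm_certificate" "webapp" {
  provider          = aws.us_east_1
  domain_name       = "app.example.com"   # placeholder public domain
  validation_method = "DNS"
}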

Building a Terraform Script for the Public Infrastructure

The Terraform code structure will look something like the diagram below: resources are logically grouped into S3, S3 policy, and Amazon CloudFront modules, each kept in its own folder and called from an environment-level configuration. The A record for the public domain will then be added manually using the AWS Management Console.

[Diagram: Terraform code structure for the public infrastructure]

S3 Module

First, let's build the S3 bucket resource. We will also add a lifecycle block that ignores changes to the bucket's website attribute, since the website configuration is managed by a separate resource.

resource "aws_s3_bucket" "tutorialsdojo-s3" {
 bucket = "${var.s3name}"
 lifecycle {
   ignore_changes = [
     website
   ]
 }
}

Since S3 is serving website content, we also need to configure the index and error documents.

resource "aws_s3_bucket_website_configuration" "tutorialsdojo-s3websiteconfig" {
 bucket = aws_s3_bucket.tutorialsdojo-s3.bucket
 index_document {
   suffix = "index.html"
 }
 error_document {
   key = "index.html"
 }
}

Other S3 settings also need to be configured, both as a security best practice (blocking public access and enforcing bucket owner controls) and to support a file versioning strategy.

resource "aws_s3_bucket_public_access_block" "tutorialsdojo-s3publicaccess" {
 bucket                  = aws_s3_bucket.tutorialsdojo-s3.id
 block_public_acls       = true
 block_public_policy     = true
 ignore_public_acls      = true
 restrict_public_buckets = true
}


resource "aws_s3_bucket_ownership_controls" "tutorialsdojo-s3bucketowner" {
 bucket = aws_s3_bucket.tutorialsdojo-s3.id
 rule {
   object_ownership = "BucketOwnerEnforced"
 }
}


resource "aws_s3_bucket_versioning" "versioning_example" {
 bucket = aws_s3_bucket.tutorialsdojo-s3.id
 versioning_configuration {
   status = "${var.s3versioning}"
 }
}
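
The module composition at the end of this section references outputs from this S3 module (for example module.tutorialsdojo-s3bucket.s3BucketARN and module.tutorialsdojo-s3bucket.domainname). The article does not show the module's outputs file; a minimal sketch that would satisfy those references, with the output names assumed, is:

output "s3name" {
  value = aws_s3_bucket.tutorialsdojo-s3.bucket
}

output "s3BucketARN" {
  value = aws_s3_bucket.tutorialsdojo-s3.arn
}

output "domainname" {
  # Regional bucket domain name, used as the CloudFront origin
  value = aws_s3_bucket.tutorialsdojo-s3.bucket_regional_domain_name
}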

S3 Policy

For the S3 policy, we only need two things: an aws_iam_policy_document data source and an aws_s3_bucket_policy resource. The first holds the actual policy statements, and the second attaches that policy document to the S3 bucket.

resource "aws_s3_bucket_policy" "tutorialsdojo-s3bucketpolicy" {
 bucket = "${var.s3BucketARN}"
 policy = data.aws_iam_policy_document.tutorialsdojo-s3bucketpolicydata.json
}

data "aws_iam_policy_document" "tutorialsdojo-s3bucketpolicydata" {
 statement {
   sid = "S3Access"
   principals {
     type        = "Service"
     identifiers = ["cloudfront.amazonaws.com"]
   }
   actions = [
     "s3:GetObject",
      "s3:PutObject"
   ]
   resources = [
     "${var.s3BucketARN}/*"
   ]
   condition {
      test     = "StringEquals"
      variable = "aws:SourceArn"
      values   = ["arn:aws:cloudfront::123456789012:distribution/A1BCDEFGH23IJ"]
   }
 }
 statement {
   sid = "Deployment"
   principals {
     type        = "AWS"
     identifiers = ["${var.cicdIAMRole}"]
   }
   actions = [
     "s3:ListBucket",
     "s3:GetObject",
     "s3:PutObject"
   ]
   resources = [
     "${var.s3BucketARN}",
     "${var.s3BucketARN}/*"
   ]
 }
}
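
The policy module above references var.s3name, var.s3BucketARN, and var.cicdIAMRole, and the module composition later also passes an iamARN input. The variable declarations are not shown in the article; a minimal sketch, with descriptions assumed, is:

variable "s3name" {
  description = "Name of the S3 bucket the policy is attached to"
  type        = string
}

variable "s3BucketARN" {
  description = "ARN of the S3 bucket"
  type        = string
}

variable "cicdIAMRole" {
  description = "ARN of the IAM role used by the deployment pipeline"
  type        = string
}

variable "iamARN" {
  description = "ARN of the CloudFront distribution (passed by the module composition; not used by the snippets shown here)"
  type        = string
}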

CloudFront

Next is the Amazon CloudFront resource. We only need two resources to build this out: aws_cloudfront_distribution and aws_cloudfront_origin_access_control. It will be built with the minimum recommended configuration for a standard CloudFront distribution serving a website. We will assume that you already have a standard WAF web ACL that we can simply reference and attach to CloudFront.

resource "aws_cloudfront_distribution" "tutorialsdojo-distribution" {
 enabled             = "${var.cfenabled}"
 aliases             = [ "${var.distributionendpoint}" ]
 default_root_object = "index.html"


 origin {
   domain_name = "${var.publicdomain}"
   origin_id   = "${var.publicdomain}"
   origin_access_control_id = aws_cloudfront_origin_access_control.tutorialsdojo-oac.id
 }


 default_cache_behavior {
   allowed_methods         = ["GET","HEAD"]
   cached_methods          = ["GET","HEAD"]
   target_origin_id        = "${var.publicdomain}"
   viewer_protocol_policy  = "redirect-to-https"


   forwarded_values {
     headers       = []
     query_string  = true
    
     cookies {
       forward     = "all"
     }
   }
 }
 restrictions {
   geo_restriction {
     restriction_type = "none"
   }
 }


 viewer_certificate {
   acm_certificate_arn = "${var.ACMcertificateARN}"
   ssl_support_method = "sni-only"
   minimum_protocol_version = "TLSv1.2_2021"
 }
 custom_error_response {
   error_caching_min_ttl = 10
   error_code = 403
   response_page_path = "/index.html"
   response_code = 200
 }


 custom_error_response {
   error_code = 404
   response_page_path = "/index.html"
   response_code = 200
 }


 web_acl_id = "${var.WAF}"
}

resource "aws_cloudfront_origin_access_control" "tutorialsdojo-oac" {
 name                              = "Tutorials Dojo OAC"
 description                       = "Origin Access for Tutorials Dojo"
 origin_access_control_origin_type = "s3"
 signing_behavior                  = "always"
 signing_protocol                  = "sigv4"
}
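
As with the S3 module, the CloudFront module's outputs are not shown in the article. A minimal sketch that exposes the values the rest of the setup needs (the names are assumptions, except tutorialsdojo-iamarn, which the module composition below references) is:

output "tutorialsdojo-iamarn" {
  # Despite the name used in the module composition, this is the distribution's ARN,
  # which the S3 policy can use in its aws:SourceArn condition
  value = aws_cloudfront_distribution.tutorialsdojo-distribution.arn
}

output "distribution_id" {
  # Needed for the cache invalidation step in the deployment pipeline
  value = aws_cloudfront_distribution.tutorialsdojo-distribution.id
}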

To put everything together, we will use modules whose source points to the folder where each module's Terraform scripts reside. Note that some module inputs require a prior module to be created first, because their values come from that module's outputs. One example is the s3arn input of the tutorialsdojo-cloudfront module, which takes its value from the module.tutorialsdojo-s3bucket.s3BucketARN output that is generated once the S3 bucket is created.

module "tutorialsdojo-s3bucket" {
 source                = "../../modules/s3"
 aws_profile           = var.aws_profile
 aws_region            = var.aws_region
 s3name                = var.s3BucketARN
 s3versioning          = var.s3versioning
}
module "tutorialsdojo-cloudfront" {
 source                = "../../modules/cloudfront"
 aws_profile           = var.aws_profile
 aws_region            = var.aws_region
 cfenabled             = var.cfenabled
 distributionendpoint  = var.publicdomain
 acmcertificatearn     = var.ACMcertificateARN
 publicdomain          = module.tutorialsdojo-s3bucket.domainname
 s3arn                 = module.tutorialsdojo-s3bucket.s3BucketARN
}
module "tutorialsdojo-s3policy" {
 source                = "../../modules/s3policy"
 aws_profile           = var.aws_profile
 aws_region            = var.aws_region
 s3name                = module.tutorialsdojo-s3bucket.tutorialsdojo-s3name
 s3arn                 = module.tutorialsdojo-s3bucket.tutorialsdojo-s3arn
 cicdIAMRole           = var.cicdIAMRole
 iamARN                = module.tutorialsdojo-cloudfront.tutorialsdojo-iamarn
}

You can then run the script with the terraform apply command to build out the infrastructure.

Deploy with Continuous Delivery to AWS

Now that the public infrastructure is built out, we can create the pipeline that continuously delivers the compiled code to S3 every time a developer checks in a code change. We also need to add an invalidation command to the pipeline so that CloudFront always serves the latest code from S3 after an update. We will use GitHub Actions to do this.

The pipeline will look like the image below.

[Diagram: GitHub Actions deployment pipeline]

Initial Part and Setting up OIDC

To start the pipeline, create a blank .yaml file under the .github/workflows folder of the repository where your code resides, and add the following code:

name: WebApplication Deployment Pipeline


# Controls when the workflow will run
on:
 # Triggers the workflow on push or pull request events but only for the "main" branch
 push:
   branches: [ "main" ]


 # Allows you to run this workflow manually from the Actions tab
 workflow_dispatch:

From the code above, there are two events that will trigger the workflow. The first is a push of code to the "main" branch. The second is workflow_dispatch, a manual trigger in the GitHub Actions tab that becomes available once the .yaml file is checked in.

Since the GitHub Actions workflow will interact with AWS, we also need to set up an OpenID Connect (OIDC) identity provider and an IAM role with sufficient access to deploy to AWS. You can follow the steps in the GitHub documentation linked in the references to set this up.
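
If you prefer to manage this trust relationship with Terraform instead of following the console steps, a minimal sketch could look like the following. The role name cicdrole matches the role referenced in the workflow below; the repository path and the OIDC thumbprint are placeholders that you should verify against the GitHub documentation.

resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  # Thumbprint of GitHub's OIDC provider; verify the current value before use
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

data "aws_iam_policy_document" "github_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = ["sts.amazonaws.com"]
    }
    condition {
      test     = "StringLike"
      # Restrict the role to workflows from your repository's main branch (placeholder repo)
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:your-org/your-repo:ref:refs/heads/main"]
    }
  }
}

resource "aws_iam_role" "cicdrole" {
  name               = "cicdrole"
  assume_role_policy = data.aws_iam_policy_document.github_trust.json
}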

Deployment Job Steps

For the steps, we will follow the code below:

jobs:
 deploy:
   name: Upload to Amazon S3
   runs-on: ubuntu-latest
   # These permissions are needed to interact with GitHub's OIDC Token endpoint.
   permissions:
     id-token: write
     contents: read
   steps:
   - name: Checkout
     uses: actions/checkout@v2


   - name: Configure AWS credentials
     uses: aws-actions/configure-aws-credentials@v1
     with:
       role-to-assume: arn:aws:iam::112358132134:role/cicdrole
       aws-region: us-east-1


    - name: Add Build/Compile Steps Here
      run: echo "Replace this step with your build and compile commands (for example, npm ci && npm run build)"


    - name: Copy files to the test website with the AWS CLI
      run: |
        aws s3 sync ./staticwebsite s3://s3-reactapp-bucket
        aws cloudfront create-invalidation --distribution-id A1BCD2EFGHI3JK --paths "/*"

The actual code deployment uses AWS CLI commands. The first command (aws s3 sync) copies the compiled files from the ./staticwebsite directory to the s3://s3-reactapp-bucket bucket in the web application infrastructure. The second command (aws cloudfront create-invalidation) invalidates the cached objects in Amazon CloudFront so that the newly checked-in code is served on subsequent requests to the website.

Add an A record in Route 53

Once everything is ready, we can add an A record for the public domain that points to the Amazon CloudFront distribution, completing the infrastructure.
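
The article adds this record manually in the AWS Management Console. If you would rather codify it, a minimal Terraform sketch could look like the following; the hosted zone ID variable and domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias records.

resource "aws_route53_record" "webapp" {
  zone_id = var.hosted_zone_id   # placeholder: your public hosted zone
  name    = var.publicdomain     # e.g. app.example.com
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.tutorialsdojo-distribution.domain_name
    zone_id                = "Z2FDTNDATAQYW2" # CloudFront's fixed hosted zone ID for alias records
    evaluate_target_health = false
  }
}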

Choosing the Right DevOps Tools

When preparing for a CI/CD project, there are several tools to choose from, each with its pros and cons. It is important to consider your requirements when choosing the right tool: cost, ease of use, maintainability, speed, and security all factor into your CI/CD strategy. In a software development project, the code and website infrastructure are not the only things to think about; deployment matters too, especially when designing a pipeline for a critical application.

References:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteEndpoints.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

https://registry.terraform.io/providers/hashicorp/aws/3.75.1/docs/resources/s3_bucket_website_configuration

https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#create-oac-overview-s3

Written by: Kaye Alvarado

Kaye is a DevOps Engineer and the offshore lead of the API and Integration Management Team at Asurion. She is an AWS Community Builder and a core member of AWSUG BuildHers+. She holds multiple AWS certifications and volunteers to mentor others through DevOps skills training and certification review sessions, both inside and outside the company. In her free time, she creates comic strips about funny encounters in IT titled GIRLWHOCODES.
