Enable GZIP Compression for Web Container on Amazon ECS


The performance of a web app plays a vital role in user experience and directly impacts engagement. Web apps need to be fast because users access them on desktop and mobile devices and expect pages to render within a short span of time.

Websites usually comprise static text assets such as .js, .css, and .json files. Enabling GZIP compression on the web server is one of the simplest and most efficient ways to shrink these assets. It reduces the bandwidth required and speeds up website rendering.

All modern browsers support GZIP compression by default. However, to serve compressed resources to users without hiccups, we must configure the server properly.
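A quick way to check whether a server is returning compressed responses is to request an asset with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers (the URL below is just a placeholder):

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" https://example.com/static/app.js | grep -i content-encoding
# content-encoding: gzip   -> the asset was served compressed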

In this post, I will discuss how to enable GZIP compression for AWS ECS-based web containers by adding an NGINX reverse proxy sidecar, so your site loads blazing fast.

Excited? Let’s decompress!



As the architecture shows, users send requests to the Application Load Balancer, which distributes the traffic to the NGINX reverse proxy sidecars. Each NGINX reverse proxy then forwards the request to its application container and returns the response to the client via the load balancer.

With this network design, we can enable GZIP compression on the proxy and also filter unwanted traffic before it reaches the web application. A small example of such filtering follows.
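For example, alongside the gzip settings shown below, the sidecar's server block can simply refuse requests for hidden files that should never reach the app (a minimal sketch; adjust the pattern to your application):

location ~ /\. {
    # drop requests for dotfiles such as .env or .git paths
    return 404;
}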

NGINX configuration.

worker_processes  1;

events {
  worker_connections  1024;
}

http {

  gzip on;
  gzip_min_length 1000;
  gzip_proxied expired no-cache no-store private auth;
  gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
  
  upstream apps {
    server app-container:3000; # Same Name as App Container
  }
  
  server {
    listen 80;
    server_name  localhost;

    
    location / {
      proxy_pass         http://apps;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_cache_bypass $http_upgrade;
    }
  }
}

upstream apps: the server entry must match the application container's name (app-container in the container definition below), since the NGINX sidecar reaches it by that hostname.

GZIP is a file format that uses DEFLATE internally, along with some interesting blocking, filtering heuristics, a header, and a checksum. In general, the additional blocking and heuristics that GZIP uses give it better compression ratios than DEFLATE alone.

gzip_types: the MIME types that should be compressed. These are all text formats that compress very well.
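To get a rough feel for the savings on a typical text asset, you can compress one locally (the file name is only an example):

gzip -k -9 bundle.js            # -k keeps the original file, -9 uses maximum compression
ls -lh bundle.js bundle.js.gz   # compare the original and compressed sizes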

Note: I always deploy environments using IaC to make my life easier. So, I have developed Terraform code to deploy ECS on Fargate (NGINX and App), an ALB, and a Target Group for the ALB.


Container Definition

 [
    {
        "name": "app-container",
        "secrets": [],
        "image": "Your App ECR URL Here",
        "cpu": 512,
        "memory": 512,
        "portMappings": [
            {
                "containerPort": ${app_port},
                "protocol": "tcp",
                "hostPort": 0
            }
        ],
        "essential": true
    },
    {
        "name": "nginx-container",
        "image": "Your Nginx ECR URL Here",
        "memory": 256,
        "cpu": 256,
        "essential": true,
        "portMappings": [
            {
                "containerPort": ${web_port},
                "protocol": "tcp",
                "hostPort": 0
            }
        ],
        "links": [
            "app-container:app-container"
        ],
        "logConfiguration": {
            "logDriver": "awslogs",
            "secretOptions": null,
            "options": {
                "awslogs-group": "/ecs/nginx-log",
                "awslogs-region": "${region}",
                "awslogs-stream-prefix": "ecs"
            }
        }
    }
]

Main.tf

data "template_file" "template" {
    template = file("./templates/container-definition.json")
    vars = {
        region            = var.AWS_REGION
        app_port          = var.container_port
        web_port          = var.nginx_port
    }
}

As you can see in main.tf, the container ports are passed in as variables: var.container_port = 3000 and var.nginx_port = 80. A sketch of the corresponding variable definitions follows.
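The variable names above come from the template; a minimal variables.tf that matches them might look like this (the defaults are just the values mentioned in the text):

variable "AWS_REGION" {
  description = "Region the ECS resources are deployed into"
  type        = string
}

variable "container_port" {
  description = "Port the application container listens on"
  type        = number
  default     = 3000
}

variable "nginx_port" {
  description = "Port the NGINX sidecar listens on"
  type        = number
  default     = 80
}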

Task Definition

resource "aws_ecs_task_definition" "task_definition" {
    container_definitions = data.template_file.template.rendered
    family = "SideCar"
    requires_compatibilities = ["FARGATE"]
    network_mode = "bridge"
    execution_role_arn = "arn:aws:iam::YourAccountID:role/ecsTaskExecutionRole"
    task_role_arn = "arn:aws:iam::YourAccountID:role/ecsTaskExecutionRole"
}

requires_compatibilities = ["FARGATE"] - declares that the task is meant to run on Fargate.

execution_role_arn = "arn:aws:iam::YourAccountID:role/ecsTaskExecutionRole" - the AWS-managed role used for task execution. Please replace YourAccountID with your AWS account ID.

task_role_arn = "arn:aws:iam::YourAccountID:role/ecsTaskExecutionRole" - the role assumed by the task itself; here the same execution role is reused. Please replace YourAccountID with your AWS account ID.
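One caveat worth flagging: Fargate tasks only support the awsvpc network mode and need cpu and memory set at the task level; with awsvpc the containers in a task share localhost, so the links and dynamic hostPort in the container definition are not used, and the NGINX upstream would point at 127.0.0.1:3000 instead of app-container:3000. A minimal sketch of the task definition under those assumptions (the CPU/memory values are placeholders):

resource "aws_ecs_task_definition" "task_definition" {
  container_definitions    = data.template_file.template.rendered
  family                   = "SideCar"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"   # required for Fargate
  cpu                      = "1024"     # task-level CPU, required for Fargate
  memory                   = "2048"     # task-level memory, required for Fargate
  execution_role_arn       = "arn:aws:iam::YourAccountID:role/ecsTaskExecutionRole"
  task_role_arn            = "arn:aws:iam::YourAccountID:role/ecsTaskExecutionRole"
}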

ECS Service

resource "aws_ecs_service" "ecs_service" {
  cluster = aws_ecs_cluster.cluster.id
  name = "WebApp"
  task_definition = aws_ecs_task_definition.task_definition.arn
  iam_role = "arn:aws:iam::YourAccountID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
  load_balancer {
    target_group_arn = aws_alb_target_group.target_group.arn
    container_name = "nginx-container"
    container_port = var.nginx_port
  }
  desired_count = var.desired_count
  lifecycle {
    ignore_changes = [desired_count]
  }
}

iam_role = "arn:aws:iam::YourAccountID:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS" - the AWS-managed service-linked role for ECS. Please replace YourAccountID with your AWS account ID.

cluster = aws_ecs_cluster.cluster.id - I have created the ECS cluster elsewhere in the Terraform code and reference it here.

load_balancer - I have created an Application Load Balancer and mapped it to the NGINX container port, so requests arriving at the ALB are forwarded to the NGINX container. A sketch of the target group follows.
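The target group referenced above (aws_alb_target_group.target_group) is created elsewhere in the full code; a minimal sketch of what it could look like (the VPC reference and health-check path are assumptions):

resource "aws_alb_target_group" "target_group" {
  name     = "webapp-nginx-tg"
  port     = var.nginx_port
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id   # assumes the VPC is defined as aws_vpc.main

  health_check {
    path    = "/"
    matcher = "200-399"
  }
}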

Load Balancer Listener Rule

resource "aws_alb_listener" "listener" {
    load_balancer_arn = aws_alb.fargate.id
    port              = "80"
    protocol          = "HTTP"
    default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.target_group.arn
    }

    condition {
    host_header {
      values = ["webapp.com"]
    }
  }
    condition {
        path_pattern {
        values = var.alb_rule
        }

    }
}

I have added a port 80 listener to the Application Load Balancer and defined the listener rule as follows.

When a client request matches the host webapp.com or the request path contains /api, it is forwarded to the NGINX container.
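The path patterns come from var.alb_rule; based on the /api example above, its definition might look like this (the default value is only illustrative):

variable "alb_rule" {
  description = "Path patterns forwarded to the NGINX container"
  type        = list(string)
  default     = ["/api/*"]
}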


Find the Full code here


Conclusion

GZIP compression reduces bandwidth usage, lowers hosting charges, and brings a better user experience.



CI/CD Jenkins Pipeline With AWS - DevOps - Part 02

Hello Everyone,

In my previous tutorial, I discussed how to deploy the Jenkins server (master) and build server (node/slave) on EC2 instances. I also mentioned that I would write about how to build a pipeline and deploy the application, so this blog is a continuation of the previous tutorial. To read my previous tutorial, click the link below.

CI/CD Jenkins Pipeline With AWS - DevOps - Part 01

In this tutorial, I will show how to build a pipeline, connect GitHub, and set up continuous deployment of a web application on Elastic Beanstalk.

Let's jump into the tutorial

How to build a pipeline, connect GitHub, and set up continuous deployment of a web application on Elastic Beanstalk.

In this case, I am using GitHub as the version control system, and I am going to build and deploy a Node.js application on Elastic Beanstalk.

1. Install the NodeJS Plugin on Jenkins

Navigate to Manage Plugins and click on the Available tab, then search for NodeJS.

Manage Jenkins > Plugin Manager > Install NodeJS plugin. Now the plugin is successfully installed.

Now go to the Global Tool Configuration and set a compatible Node version.

Manage Jenkins > Global Tool Configuration > NodeJS
At the bottom of the Global Tool Configuration page, you will find the NodeJS installations section.
Click Add NodeJS and configure it as follows:
Name: NodeJs
Version: the Node.js version your application needs, installed on your Jenkins slave # 12.18.3
Global npm packages to install: <optional - list package names to install globally, or leave empty>
Leave everything else as default and click Save.

2. Install the GitHub Plugin
Note: I installed the GitHub plugin during the Jenkins post-installation setup. However, you can install plugins the same way as above.
Manage Jenkins > Plugin Manager > Install GitHub
Once the GitHub plugin is installed, connect your GitHub account to Jenkins.
Note: We can connect our GitHub account in several ways. In my case, I am going to use an SSH key, therefore I am adding the credential to the global configuration - the reason being that my GitHub account is secured with 2FA.
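If you do not already have a key pair for this, you can generate one on the Jenkins master and add the public key to your GitHub account under Settings > SSH and GPG keys (the file path and email below are just placeholders):

ssh-keygen -t ed25519 -C "jenkins@example.com" -f ~/.ssh/jenkins_github
cat ~/.ssh/jenkins_github.pub   # paste this into GitHub; the private key goes into the Jenkins credential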
Navigate to Manage Jenkins > Manage Credentials > Click Jenkins under "Stores scoped to Jenkins" Table > Global credentials (unrestricted).
Configuration as follows,
Kind: SSH Username with Private Key,
ID: GitHub,
Description: GitHub Credential,
Username: <your GitHub username>
Private Key > Enter directly: Paste your private key
Passphrase: <add the passphrase if your key is protected with one>

3. Set up Jenkins continuous integration for each activity in your GitHub repository.
Now that we have configured the basics of Jenkins, let's create a GitHub repository, push our code, and then create a webhook to build our code on each change to the repository.

3.1. Create a private GitHub repository
In most cases, enterprises use private repositories so that they can keep their code away from outsiders, so let's practice our CI/CD with industry standards.
Note: I skip the repo creation since it does not involve technical stuff.

3.2. Create a webhook and add it to Jenkins.
Once the GitHub repository is created, navigate to your repository and select Settings > Webhooks > Add webhook.
Payload URL: <paste your Jenkins environment URL>/github-webhook/ (make sure the URL ends with /github-webhook/).
Content type: application/json
Secret: leave the field empty.

On the page "Which events would you like to trigger this webhook?" choose "Let me select individual events." Then, check "Pull Requests" and "Pushes". At the end of this option, make sure that the "Active" option is checked and click on "Add webhook".
Note: Often organizations manage their repositories by creating a "New Organization", and from there they create repositories and teams to work on a project. Team members then pull and push their code to the repository for new features, issues, and releases. This procedure allows the project manager to maintain and keep track of the project status.

Later in part 3 of this tutorial, I will talk in more detail about maintaining repository and release management to achieve Continuous Integration and Continuous Deployment.

With the above hook, Jenkins continuous integration is triggered whenever a pull request or push happens on our repository.

3.3. Push our code to the repository
I have already created my portfolio using React, which works as a static front-end web app; let's use it as our code. The code can be found here.
Download the repo, modify it as you like, and then push it to the repository that you created earlier, as sketched below.
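The usual sequence looks roughly like this (the repository URL is a placeholder for the private repo you created):

git clone git@github.com:<your-user>/<your-repo>.git
cd <your-repo>
# ...copy in or edit the portfolio code...
git add .
git commit -m "Initial portfolio import"
git push origin master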

3.4. Set up Jenkins for continuous integration.
As discussed earlier, I am going to deploy the code to Elastic Beanstalk on each pull request and push. To deploy on Elastic Beanstalk, we need to install the AWSEB Deployment and AWS Elastic Beanstalk Publisher plugins.
Navigate to Manage Jenkins > Plugin Manager > Available tab > search for Elastic Beanstalk > click Install without restart.
This installs the necessary modules for the post-build process.
Once the installation is done, let's set up AWS IAM credentials in Jenkins so it can deploy our application to AWS Elastic Beanstalk.
Navigate to Manage Jenkins > Configure System > scroll to the bottom > Deploy into AWS Elastic Beanstalk.
Once you have entered the IAM credentials, click Save and create our first Jenkins job to build and deploy the application.

On your Jenkins Dashboard, click New Item > enter a name for the project > select Freestyle project > press OK.

Now configure the project as follows. From the General tab:
Description: <your description>
GitHub project: <select GitHub project and paste your project URL>
Leave everything else empty.

Under Source Code Management
Select Git
Repository URL: <SSH URL>
Credentials: <The SSH GitHub credential which you have created earlier>
Note: I am using SSH-based GitHub credentials instead of a password or token; you can choose whichever you prefer.

For this project, I maintain a master branch and a main branch. The master branch is where I test my code, and I later merge master into main. The main branch acts as the common branch for other branches such as feature branches, todo branches, etc.
So, select the branch that matches your SCM setup.
I select the GitHub hook trigger as the build trigger, since we configured GitHub to trigger Jenkins on each pull request and push.
Under the Build section, select the AWS IAM credentials which you added earlier for the Elastic Beanstalk application, and select the region of your choice.
Note: Please take note of the region; it will be used later in this tutorial when storing our Beanstalk application versions in S3.
Application Name: <your choice>
Environment Names: <depends on your environment>
Version Label Format
We deploy the application to Elastic Beanstalk on every change, and AWS automatically creates a version for each deployment. This lets us manage multiple versions of the same application and roll back to a previous version if the latest one fails.
I have configured my version label as <app name>-<commit id>-<build tag>, as shown below.
As mentioned earlier, Elastic Beanstalk stores the different application versions in an S3 bucket, so let's configure where the versions are uploaded:
S3 Bucket Name: <the bucket to store versions in>
S3 Key Prefix: <the object prefix under which application versions are stored>
S3 Bucket Region: <same as the Beanstalk region>

Leave everything as default and click Save.
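For reference, here is one way the version label format described above could be written, assuming the standard Jenkins environment variables GIT_COMMIT and BUILD_TAG are available to the plugin (the app name is a placeholder):

portfolio-${GIT_COMMIT}-${BUILD_TAG}

which would expand to something like portfolio-9f3c1ab...-jenkins-WebApp-42 for build 42 of a job named WebApp.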

Once the project is saved, Jenkins will build and deploy your application on the next trigger (or you can start one manually with Build Now).
Navigate to your build history and click Console Output to check whether the build succeeded.



Voila! The application is now deployed to Elastic Beanstalk. Navigate to AWS Elastic Beanstalk and check whether the application is deployed.

Click on the URL of the Beanstalk application page and verify the application is running.

Now we have successfully deployed our portfolio to Elastic Beanstalk using Jenkins continuous integration.

This is the foundation of a simple CI/CD setup. While we are building and deploying the application to AWS, this solution is not production-ready.

I really want to talk in much more detail about a CI/CD pipeline with code review, code quality, test, and security stages, plus additional environments and tests to ensure the artifact fulfills all requirements before it is deployed to production.

In part 03, we will cover source control management, continuous testing, release management, and application performance monitoring (APM).

That's pretty much it for this tutorial. PEACE!

CI/CD Jenkins Pipeline With AWS - DevOps - Part 01

 

Hello Everyone,

As you know, I have been reading about AWS services, automating the deployment of AWS services using Terraform, Ansible, PowerShell, and bash, and often talking about DevOps.

DevOps is a hot topic right now, particularly CI/CD for the development and release of software. Every development iteration and software release has to happen as fast as possible.

So, in this tutorial, I will show you how to make the deployment and release of a web application as agile as possible using Jenkins, GitHub, AWS EC2 instances, and AWS Elastic Beanstalk, creating a deployment pipeline that updates the application automatically every time you change your code.

This tutorial is divided into three parts:

In part 01, I will be explaining how to deploy the Jenkins server (master) and Build server (node/slave).

In part 02, I will explain meticulously how to build a pipeline, connect GitHub, and set up continuous deployment of a web application on Elastic Beanstalk.

In part 03, we will cover continuous testing, release management, and application performance monitoring (APM).
----------------------------------------------------------------------------------------------------------

How to deploy the Jenkins server (master) and Build server (node/slave) on EC2 Instances.

As we know, Jenkins is a continuous integration tool that pulls the latest code from the repository whenever a commit is made, then builds it, runs the tests, and produces test reports. In a real environment, Jenkins also uses multiple slaves, because different test suites may need to run in different environments once code is committed.

With that in mind, I am going to deploy a Jenkins master (master instance) to pull code, and a slave to perform builds and tests (node instance).

The deployed architecture is as follows:


Note: In this tutorial, I won't be explaining VPC, subnet, or route table creation. This tutorial is only about Jenkins master/slave installation and configuration on the AWS platform.

Prerequisites,
1. Launch 2 EC2 instances under the same VPC/Subnets.
2. Name 1st instance as Jenkins Master and 2nd Instance as Jenkins Slave

Let's Dive into the Master/Slave Configurations.

1. Jenkins Master Instance Configuration.
SSH to your master instance, switch to the root user, and install Jenkins.

Jenkins requires a Java runtime environment. Java packages are available in the system upstream repositories.
We will install OpenJDK 11 on the server.
Note: Hit the y key on your keyboard when asked before installation commences.

ubuntu@ip-10.0.0.172:~$ sudo -s
root@ip-10.0.0.172:~# amazon-linux-extras install java-openjdk11
...
Transaction Summary
======================================
Install  1 Package (+31 Dependent packages)

Total download size: 46 M
Installed size: 183 M
Is this ok [y/d/N]: y
root@ip-10.0.0.172:~#

Once the installation is over, confirm it by checking the Java version.

root@ip-10.0.0.172:~# java --version
openjdk 11.0.7 2020-04-14 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.7+10-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.7+10-LTS, mixed mode, sharing)
root@ip-10.0.0.172:~#

Add the Jenkins repository to the Jenkins master instance. We'll use the package installation method, therefore a package repository is required to install Jenkins.

root@ip-10.0.0.172:~# tee /etc/yum.repos.d/jenkins.repo<<EOF
[jenkins]
name=Jenkins
baseurl=http://pkg.jenkins.io/redhat
gpgcheck=0
EOF


Import GPG repository key.

root@ip-10.0.0.172:~# rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key

Update the list of repositories to confirm it is working.

root@ip-10.0.0.172:~# yum repolist

Install Jenkins Server

root@ip-10.0.0.172:~# yum install jenkins

Start the Jenkins service and enable it to start at OS boot.

root@ip-10.0.0.172:~# systemctl start jenkins
root@ip-10.0.0.172:~# systemctl enable jenkins
root@ip-10.0.0.172:~# systemctl status jenkins
● jenkins.service - LSB: Jenkins Automation Server
   Loaded: loaded (/etc/rc.d/init.d/jenkins; bad; vendor preset: disabled)
   Active: active (running) since Wed 2020-12-09 17:52:04 UTC; 9s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 2911 ExecStart=/etc/rc.d/init.d/jenkins start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/jenkins.service
           └─2932 /etc/alternatives/java -Dcom.sun.akuma.Daemon=daemonized -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenki...

Dec 09 17:52:03 amazon-linux systemd[1]: Starting LSB: Jenkins Automation Server...
Dec 09 17:52:03 amazon-linux runuser[2916]: pam_unix(runuser:session): session opened for user jenkins by (uid=0)
Dec 09 17:52:04 amazon-linux jenkins[2911]: Starting Jenkins [  OK  ]
Dec 09 17:52:04 amazon-linux systemd[1]: Started LSB: Jenkins Automation Server.

By default, Jenkins runs on port 8080. Confirm Jenkins is listening on the default port by executing the command below.
root@ip-10.0.0.172:~# ss -tunelp | grep 8080
tcp  LISTEN 0      50                                     *:8080              *:*          users:(("java",pid=2932,fd=139)) uid:996 ino:26048 sk:f v6only:0 <->

Everything is basically done. But accessing the Jenkins console over port 8080 is kinda weird for me.

Let's install NGINX as a reverse proxy so we can access the Jenkins master over port 80 (like normal browsing).

root@ip-10.0.0.172:~# yum install nginx
root@ip-10.0.0.172:~# service nginx start

Configure the /etc/nginx/conf.d/jenkins.conf file to set an upstream group for the Jenkins application.
root@ip-10.0.0.172:~# vim /etc/nginx/conf.d/jenkins.conf
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen      80 default;
    #server_name your_jenkins_site.com;#

    access_log  /var/log/nginx/jenkins.access.log;
    error_log   /var/log/nginx/jenkins.error.log;

    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass  http://jenkins;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;

        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto http;
    }

}

Once you have edited the configuration, reload NGINX.
root@ip-10.0.0.172:~# systemctl reload nginx
Now you can access the Jenkins server without specifying port 8080 in your browser.

Access Jenkins Server
When you navigate to the Jenkins server, you will be prompted to enter the Jenkins default password.

The Jenkins default login password is stored in this file: /var/lib/jenkins/secrets/initialAdminPassword
root@ip-10.0.0.172:~# cat /var/lib/jenkins/secrets/initialAdminPassword
7c893b9829dd4ba08244ad77fae9fe4f

Configure the Jenkins Dashboard
Once you enter the Jenkins default password, you will be prompted to configure the dashboard.

Once the plugins are installed, create the first admin user.


Click Start Using Jenkins and you will be redirected to the Jenkins Dashboard.

We have now completed the Jenkins installation successfully. Moreover, Jenkins provides an email notification service through which we can report build status and test results to the team.

Now let's configure Jenkins to send email notifications.

Note: There are two ways to configure email notifications in Jenkins: using the Email Extension Plugin, or using the default Email Notifier.

I am gonna use the Default Email Notifier for this tutorial.

Since we are using AWS services for the Jenkins server, let's configure AWS SES as our email (SMTP) server.

Navigate to the SES services and click Email Addresses

Fill in your email address and verify it. Then navigate to SMTP Settings, make a note of your SMTP server name, and click Create My SMTP Credentials.
The Create My SMTP Credentials page redirects to the IAM page; click Create to get your SMTP credentials for the Jenkins configuration.

Now make a note of your SMTP credentials, go back to your Jenkins master node, and configure SMTP for email notifications under the Jenkins configuration.

Under the Jenkins URL section, add the SES email address which you have verified as the system admin email address.


Scroll down to the bottom of the configuration page and fill in the SMTP server and SMTP authentication information.



2. Jenkins Slave Instance Configuration
As discussed above, the slave instance acts as the build server. Later in this tutorial (part 2), we will build and deploy the application on Elastic Beanstalk. With that in mind, let's create an IAM role that gives the slave instance full access to Elastic Beanstalk.

Navigate to IAM, click Create Role, choose EC2 as the use case, and attach the Elastic Beanstalk full access policy.
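Equivalently, the role can be created from the AWS CLI. This is only a rough sketch: the role and profile names are placeholders, ec2-trust-policy.json is assumed to contain the standard EC2 trust policy, and you should check the current name of the Elastic Beanstalk full-access managed policy before attaching it.

# create the role and allow EC2 to assume it
aws iam create-role --role-name jenkins-slave-eb-role \
  --assume-role-policy-document file://ec2-trust-policy.json
# attach the Elastic Beanstalk full-access managed policy (name is a placeholder)
aws iam attach-role-policy --role-name jenkins-slave-eb-role \
  --policy-arn arn:aws:iam::aws:policy/<elastic-beanstalk-full-access-policy>
# wrap the role in an instance profile so it can be attached to the EC2 instance
aws iam create-instance-profile --instance-profile-name jenkins-slave-eb-profile
aws iam add-role-to-instance-profile --instance-profile-name jenkins-slave-eb-profile \
  --role-name jenkins-slave-eb-role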


Now launch an instance with the role you created, and configure the security group to allow inbound SSH only from your Jenkins master instance.

Note: Again, I am not going to provide a tutorial on how to launch an instance or configure the security group. Please configure it yourself.

Disclaimer: To access your slave server from Jenkins master you need to add the private key of your slave instance to Jenkins master. Please take note of the SSH keys.

Go back to your Jenkins master, navigate to Credentials > Global credentials (unrestricted), and add the SSH private key.





Now start your slave instance and install the following software.
1. Install Java
ubuntu@ip-10.0.0.168:~$ sudo -s
root@ip-10.0.0.168:~# amazon-linux-extras install java-openjdk11
2. Install Git
root@ip-10.0.0.168:~# yum -y install git
3. Install the Elastic Beanstalk CLI
root@ip-10.0.0.168:~# /usr/bin/easy_install awsebcli
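As a quick sanity check, the EB CLI installs an eb command you can query to confirm the installation worked:

root@ip-10.0.0.168:~# eb --version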
Once you are done with the installation, take note of the slave instance's private DNS; we will add this slave instance to the Jenkins master.
MyPrivateDNS: ip-10-0-0-168.us-west-2.compute.internal

Now go back to your Jenkins master and click Manage Jenkins > Manage Nodes and Clouds.
You will see the master on the node page. Click its settings and set the number of executors to 0. This disables builds on the Jenkins master itself, because we have deployed a slave instance to do the builds.
Once you save it, click New Node to add the slave instance as the worker node that will build your application during continuous integration, and define the following fields in the menu.




Description: As you want
Remote Root Directory: Slave instance User's Directory, # /home/ec2-user
Usage: Use this node as much as possible
Launch method: Launch via SSH
Host: Your Private DNS, #ip-10-0-0-168.us-west-2.compute.internal
Credentials: the credentials that you created previously
Host Key Verification Strategy: non verifying verification strategy

Then click Save and you will be redirected to the node page, where you can view the node you created.

Now click on the Node which you have created and click the Launch Agent button.

The Jenkins master instance tries to connect to your slave instance via SSH. You can see the connectivity log, and on success it will report that the agent has been launched.



Now you have successfully connected your Jenkins slave node to the master node, and you can fully utilize the slave node to build the application.

That's it for the CI/CD Jenkins Pipeline With AWS - DevOps - Part 01
In this blog, we have discussed how to deploy the Jenkins server (master) and Build server (node/slave).

I know it was a bit wild to configure back and forth. In the end, we have achieved our goal.

In part two, let's discuss how to build the pipeline, connect GitHub, and set up continuous deployment of the web application on Elastic Beanstalk with a real example.

That's pretty much it, catch you guys in part two.