Launch EC2 Instances in an Auto Scaling Group with Load Balancing

I have been learning AWS services and writing about my experience on this blog. I have already discussed how to launch an instance using IaC, but I never got around to trying out AWS Auto Scaling with load balancers.

In this tutorial, I am going to show how to launch EC2 instances in an Auto Scaling group behind a load balancer.

What are Load Balancers and Auto Scaling?
Load balancer: An Elastic Load Balancer (ELB) manages and controls the flow of inbound requests to a group of targets by distributing these requests evenly across the target group.

Auto Scaling: Auto Scaling is a mechanism that automatically scales EC2 resources up or down to meet demand, based on custom-defined metrics and thresholds.

Steps
1. Launch an instance
2. Deploy the application on the instance
3. Build the AMI
4. Create a Launch Configuration
5. Create an Elastic Load Balancer
6. Create and configure an Auto Scaling Group

Find the full code here

Let's dive in,
1. Launch an instance
A prerequisite for autoscaling involves building an AMI containing your working application, which will be used to launch new instances. We'll start by launching a new instance onto which we can deploy our application. Create the following files:
---
# group_vars/all.yml

default_region: us-east-1
az1: us-east-1a
az2: us-east-1b
az3: us-east-1c
group_id: sg-abcd1234
instance_type: t2.micro
instance_name: template_ami
volumes:
  - device_name: /dev/sda1
    device_type: gp2
    volume_size: 20
    delete_on_termination: true
key_name: ec2_key
---
# site.yml

- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - luanch_ec2_instance
---
#luanch_ec2_instance/tasks/main.yml
---
# tasks file for luanch_ec2_instance
  - name: Find the base Ubuntu AMI for the template
    ec2_ami_facts:
      owners: "099720109477"
      filters:
        name: "ubuntu/images/ubuntu-zesty-17.04-*"
    register: ami_result

  - name: Launch an instance
    ec2:
      region: "{{ default_region }}"
      keypair: "{{ key_name }}"
      zone: "{{ az1 }}"
      group_id: "{{ group_id }}"
      image: "{{ (ami_result.images | sort(attribute='creation_date') | last).image_id }}"
      instance_type: "{{ instance_type }}"
      instance_tags:
        Name: "{{ instance_name }}"
      volumes: "{{ volumes }}"
      wait: yes
    register: ec2

  - name: Add new instances to host group
    add_host:
      name: "{{ item.public_dns_name }}"
      groups: "{{ instance_name }}"
      ec2_id: "{{ item.id }}"
    with_items: "{{ ec2.instances }}"

  - name: Wait for instance to boot
    wait_for:
      host: "{{ item.public_dns_name }}"
      port: 22
      delay: 30
      timeout: 300
      state: started
    with_items: "{{ ec2.instances }}"

2. Deploy the application on the instance
Let's use Ansible to deploy our application to the instance and start it.
---
# site.yml

# Install Nginx on the launched instance
  - hosts: template_ami
    roles:
      - nginx

---
# nginx/tasks/main.yml
  - name: Install Nginx
    apt:
      pkg: nginx
      state: present
    become: yes

  - name: Configure Nginx
    copy:
      src: nginx.conf
      dest: /etc/nginx/sites-enabled/default
    become: yes

  - name: Enable and start Nginx
    service:
      name: nginx
      enabled: yes
      state: restarted
    become: yes
# nginx/files/nginx.conf

server {
  listen 80 default_server;
  location / {
    proxy_pass http://127.0.0.1:8000;
  }
}
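The Nginx config above proxies to port 8000, so the instance needs something listening there before requests will succeed. As a placeholder, you could add one more task to the role; this sketch is my own addition (not part of the original role) and uses Python's built-in server purely for testing:

```yaml
# nginx/tasks/main.yml (hypothetical extra task, for testing only)
- name: Run a placeholder app on port 8000
  shell: nohup python3 -m http.server 8000 &
  args:
    chdir: /var/www/html
  become: yes
```

In a real deployment this task would be replaced by your application's own service unit.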

Running the playbook each time will launch another instance, deploy our application, and set up Nginx as a reverse proxy in front of it. If you browse to the newest instance at its public hostname, you should see the response from whatever is listening on port 8000 behind Nginx.
3. Build the AMI
Now that the application is deployed and running, we can use the newly launched instance to build an AMI.
---
# site.yml
  - hosts: control
    connection: local
    gather_facts: no
    roles:
      - create_ami

---
# create_ami/tasks/main.yml
# tasks file for create_ami
- name: Create AMI
  ec2_ami:
    region: "{{ default_region }}"
    instance_id: "{{ ec2_id }}"
    name: "webapp-{{ ansible_date_time.iso8601 | regex_replace('[^a-zA-Z0-9]', '-') }}"
    wait: yes
    state: present
  register: ami
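If you rebuild often, old AMIs (and their backing snapshots) pile up too. A minimal cleanup sketch, where `old_ami_id` is a hypothetical variable you would set to the previous image's ID:

```yaml
# create_ami/tasks/main.yml (optional cleanup sketch; old_ami_id is a
# hypothetical variable holding the previous image's ID)
- name: Deregister the previous AMI
  ec2_ami:
    region: "{{ default_region }}"
    image_id: "{{ old_ami_id }}"
    delete_snapshot: yes
    state: absent
  when: old_ami_id is defined
```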
Note: Each time the playbook is run, Ansible launches a new instance. At this rate we would keep accumulating instances we don't need, so we will add another role with a task that locates the old instances and terminates them immediately after the AMI has been built.
Terminate old instances
---
# site.yml
  - name: Find existing instance(s)
    hosts: "tag_Name_template_ami"
    gather_facts: false
    tags: find
    tasks:
      - name: Add to old-ami-build group
        group_by:
          key: old-ami-build

  - hosts: old-ami-build
    roles:
      - terminate_instance

---
# terminate_instance/tasks/main.yml
- name: Terminate old instance(s)
  ec2:
    instance_ids: ["{{ ec2_id }}"]
    region: "{{ default_region }}"
    state: absent
    wait: yes

4. Create a Launch Configuration
Now we want to create a new Launch Configuration to describe the new instances that should be launched from this AMI.
---
# site.yml
  - hosts: control
    connection: local
    gather_facts: no
    roles:
      - create_ami
      - luanch_config

---
# luanch_config/tasks/main.yml
- name: Create Launch Configuration
  ec2_lc:
    region: "{{ default_region }}"
    name: "webapp-{{ ansible_date_time.iso8601 | regex_replace('[^a-zA-Z0-9]', '-') }}"
    image_id: "{{ ami.image_id }}"
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    security_groups: "{{ group_id }}"
    volumes: "{{ volumes }}"
    instance_monitoring: yes
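Because each run creates a freshly timestamped Launch Configuration, old ones accumulate as well. A minimal cleanup sketch, assuming you track the previous configuration's name (`old_lc_name` is a hypothetical variable, not part of the original roles):

```yaml
# luanch_config/tasks/main.yml (optional cleanup sketch)
- name: Remove the previous Launch Configuration
  ec2_lc:
    region: "{{ default_region }}"
    name: "{{ old_lc_name }}"
    state: absent
  when: old_lc_name is defined
```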


5. Create an Elastic Load Balancer
Clients will connect to an Elastic Load Balancer which will distribute incoming requests among the instances we have launched into our upcoming Auto Scaling Group.
---
# site.yml
  - hosts: control
    connection: local
    gather_facts: no
    roles:
      - create_ami
      - luanch_config
      - lb

---
# lb/tasks/main.yml
- name: Configure Elastic Load Balancer
  ec2_elb_lb:
    region: "{{ default_region }}"
    name: webapp
    state: present
    zones:
      - "{{ az1 }}"
      - "{{ az2 }}"
      - "{{ az3 }}"
    connection_draining_timeout: 60
    listeners:
      - protocol: http
        load_balancer_port: 80
        instance_port: 80
    health_check:
      ping_protocol: http
      ping_port: 80
      ping_path: "/"
      response_timeout: 10
      interval: 30
      unhealthy_threshold: 6
      healthy_threshold: 2
  register: elb_result
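Once the ELB exists, its public DNS name is what clients (and any DNS CNAME you create) should point at. You can surface it straight from the registered result; this assumes `ec2_elb_lb` returns the load balancer details under the `elb` key, which it does in the Ansible versions I have used:

```yaml
# lb/tasks/main.yml (optional)
- name: Show the load balancer's DNS name
  debug:
    msg: "ELB is reachable at {{ elb_result.elb.dns_name }}"
```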

6. Create and configure an Auto Scaling Group
Let's create an Auto Scaling Group and configure it to use the Launch Configuration we previously created. Within the boundaries that we define, AWS will launch instances into the ASG dynamically based on the current load across all instances. Equally when the load drops, some instances will be terminated accordingly. Exactly how many instances are launched or terminated is defined in one or more scaling policies, which are also created and linked to the ASG.
---
# site.yml
  - hosts: control
    connection: local
    gather_facts: no
    roles:
      - create_ami
      - luanch_config
      - lb
      - asg

---
# asg/tasks/main.yml
---
# tasks file for asg
- name: Configure Auto Scaling Group and perform rolling deploy
  ec2_asg:
    region: "{{ default_region }}"
    name: webapp
    launch_config_name: "webapp-{{ ansible_date_time.iso8601 | regex_replace('[^a-zA-Z0-9]', '-') }}"
    availability_zones:
      - "{{ az1 }}"
      - "{{ az2 }}"
      - "{{ az3 }}"
    health_check_type: ELB
    health_check_period: 300
    desired_capacity: 5
    replace_all_instances: yes
    min_size: 2
    max_size: 10
    load_balancers:
      - webapp
    state: present
  register: asg_result

- name: Configure Scaling Policies
  ec2_scaling_policy:
    region: "{{ default_region }}"
    name: "{{ item.name }}"
    asg_name: webapp
    state: present
    adjustment_type: "{{ item.adjustment_type }}"
    min_adjustment_step: "{{ item.min_adjustment_step }}"
    scaling_adjustment: "{{ item.scaling_adjustment }}"
    cooldown: "{{ item.cooldown }}"
  with_items:
    - name: "Increase Group Size"
      adjustment_type: "ChangeInCapacity"
      scaling_adjustment: +1
      min_adjustment_step: 1
      cooldown: 180
    - name: "Decrease Group Size"
      adjustment_type: "ChangeInCapacity"
      scaling_adjustment: -1
      min_adjustment_step: 1
      cooldown: 300
  register: sp_result


- name: Determine Metric Alarm configuration
  set_fact:
    metric_alarms:
      - name: "webapp-ScaleUp"
        comparison: ">="
        threshold: 50.0
        alarm_actions:
          - "{{ sp_result.results[0].arn }}"
      - name: "webapp-ScaleDown"
        comparison: "<="
        threshold: 20.0
        alarm_actions:
          - "{{ sp_result.results[1].arn }}"

- name: Configure Metric Alarms and link to Scaling Policies
  ec2_metric_alarm:
    region: "{{ default_region }}"
    name: "{{ item.name }}"
    state: present
    metric: "CPUUtilization"
    namespace: "AWS/EC2"
    statistic: "Average"
    comparison: "{{ item.comparison }}"
    threshold: "{{ item.threshold }}"
    period: 60
    evaluation_periods: 5
    unit: "Percent"
    dimensions:
      AutoScalingGroupName: webapp
    alarm_actions: "{{ item.alarm_actions }}"
  with_items: "{{ metric_alarms }}"
  register: ma_result
There's a lot going on here. We have configured our CloudWatch alarms to trigger based on aggregate CPU usage within our Auto Scaling group. When the average CPU utilization across your instances exceeds 50% for 5 consecutive samples taken every 60 seconds (i.e. 5 minutes), a scaling event is triggered that launches a new instance to relieve the load. A corresponding CloudWatch alarm triggers a scaling event to terminate an instance from the Auto Scaling group when the average CPU utilization drops below 20% for the same sample period.

The minimum and maximum sizes for the Auto Scaling group are set to 2 and 10 respectively. It's important to get these values right for your application's workload, and it's good practice to keep at least 2 instances in service.
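Rather than hard-coding these boundaries in the role, it may be cleaner to keep them alongside the other settings in `group_vars/all.yml` and reference them from the `ec2_asg` task (the variable names below are my own suggestion, not part of the original code):

```yaml
# group_vars/all.yml (suggested additions)
asg_min_size: 2
asg_max_size: 10
asg_desired_capacity: 5
```

The `ec2_asg` task would then use `min_size: "{{ asg_min_size }}"` and so on, making it easy to tune capacity per environment.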

Find the full code here

Next tutorial: scaling the Auto Scaling group up/down for cost optimization.

That's Pretty much it! PEACE
