Optimize the Cost by Shutting down EC2 Instances using Lambda
Recently, I wrote an article about launching instances in an Auto Scaling Group, in which I configured CloudWatch alarms to launch EC2 instances based on aggregate CPU usage within the group (using a scaling policy).
Find the previous article here.
Then I realized that when EC2 instances are managed through Auto Scaling Groups, it is easy to schedule their startup and shutdown. That raises a question: cloud infrastructure always comes at a cost, so how do we optimize the cost of our whole AWS architecture?
Scenario: Developers are working on the testing phase of their project. To reduce cost, the manager decided to shut down the AWS infrastructure after office hours (stopping the instances at night and launching them back on working mornings). Doing this by hand makes it cumbersome for the cloud team to re-provision the whole environment on a daily basis.
So I decided to automate this process using a Lambda function and CloudWatch rules.
Note: As usual, I used IaC (Terraform) for the whole process.
Let's dive in,
Steps:
1. Create AWS IAM Role and Policies for Lambda
2. Create AWS Lambda function to start and stop instances
3. Use cron-based AWS CloudWatch rules to trigger the Lambda function
Find the full code here
1. Create AWS IAM Role and Policies for Lambda
I created an IAM role that Lambda can assume, and then attached an inline policy granting access to the Auto Scaling group.
# /iam.tf
# create a role that allows Lambda to access AWS resources
resource "aws_iam_role" "lamda-iam" {
  # role name
  name = "lamda-asg"

  # trust policy letting the Lambda service assume this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

# inline policy allowing the role to manage Auto Scaling
resource "aws_iam_role_policy" "autoscaling-lambda-policy" {
  name   = "autoscaling_lambda_policy"
  role   = "${aws_iam_role.lamda-iam.id}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
2. Create AWS LAMBDA function to start and stop instances
The resource below creates the AWS Lambda function and attaches the autoscaling.py code (packaged as a zip archive) to it.
# /scale-up-daown-lamda.tf
# create the lambda function
# that updates the autoscaling group
resource "aws_lambda_function" "lambda_test" {
  function_name = "lambda_stop_start_autoscaling_instances"
  filename      = "./autoscaling.zip"
  role          = "${aws_iam_role.lamda-iam.arn}"
  handler       = "autoscaling.lambda_handler"
  runtime       = "python2.7"
}
Lambda function:
# /autoscaling.py
import boto3

# the region, autoscaling group name, and min/desired/max capacity
# are passed in through the CloudWatch event
def lambda_handler(event, context):
    asg = boto3.client('autoscaling', region_name=event['aws_region'])
    response = asg.update_auto_scaling_group(
        AutoScalingGroupName=event['asg_name'],
        MinSize=int(event['min']),
        DesiredCapacity=int(event['desired']),
        MaxSize=int(event['max']))
    return response
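Since the capacities arrive as strings in the event payload, it is worth sanity-checking the update logic locally before wiring everything up. The sketch below mirrors the update_auto_scaling_group call with a fake client standing in for boto3; FakeASG and the inline handle function are my own illustrative stand-ins, not part of the deployed code.

```python
# A local sanity check of the capacity-update logic.
# FakeASG stands in for the real boto3 autoscaling client.
class FakeASG:
    def __init__(self):
        self.calls = []

    def update_auto_scaling_group(self, **kwargs):
        # record the call instead of talking to AWS
        self.calls.append(kwargs)
        return {"ResponseMetadata": {"HTTPStatusCode": 200}}

def handle(event, asg):
    # min/desired/max arrive as strings in the event, so coerce to int
    return asg.update_auto_scaling_group(
        AutoScalingGroupName=event["asg_name"],
        MinSize=int(event["min"]),
        DesiredCapacity=int(event["desired"]),
        MaxSize=int(event["max"]))

asg = FakeASG()
stop_event = {"aws_region": "eu-west-1", "asg_name": "webapp",
              "min": "0", "desired": "0", "max": "10"}
handle(stop_event, asg)
print(asg.calls[0]["MinSize"], asg.calls[0]["MaxSize"])  # prints: 0 10
```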
Now the Lambda function has been created; let's create both the start and stop CloudWatch rules. Add the following to the scale-up-daown-lamda.tf file.
To stop the Auto Scaling EC2 instances
# /scale-up-daown-lamda.tf
# ...
# _______ stop instances at 10pm every day _______
resource "aws_cloudwatch_event_rule" "stop" {
  name                = "everyday-10"
  description         = "stop instances everyday at 10 pm"
  schedule_expression = "cron(0 22 * * ? *)"
}

resource "aws_cloudwatch_event_target" "lambda-stop" {
  rule      = "${aws_cloudwatch_event_rule.stop.name}"
  target_id = "lambda"
  arn       = "${aws_lambda_function.lambda_test.arn}"

  # values passed to the lambda function in the event
  input = <<JSON
{
  "aws_region" : "eu-west-1",
  "asg_name" : "webapp",
  "min" : "0",
  "desired" : "0",
  "max" : "10"
}
JSON
}

resource "aws_lambda_permission" "lamda-permision-stop" {
  statement_id  = "AllowExecutionByCloudWatchStopRule"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.lambda_test.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.stop.arn}"
}
# _______ end of stop instances at 10pm _______
As you can see from the code above, schedule_expression takes a cron expression that specifies on which day(s) and at what time the rule should fire (in my case, every day at 10 pm). See this website for a handy cron calculator.
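Note that AWS expects schedule expressions in a six-field cron(...) form: minutes, hours, day-of-month, month, day-of-week, year. As a quick sketch, this small helper (my own, not an AWS API) splits such an expression into named fields:

```python
# Split an AWS-style cron expression into named fields.
# Field order (an AWS convention): minutes hours day-of-month month day-of-week year
def parse_aws_cron(expr):
    assert expr.startswith("cron(") and expr.endswith(")")
    fields = expr[len("cron("):-1].split()
    assert len(fields) == 6, "AWS cron expressions have six fields"
    names = ["minutes", "hours", "day_of_month", "month", "day_of_week", "year"]
    return dict(zip(names, fields))

stop = parse_aws_cron("cron(0 22 * * ? *)")  # every day at 10 pm
print(stop["hours"])  # prints: 22
```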
To stop the instances, we set MinSize and DesiredCapacity to 0, so I pass min and desired as 0 to my Lambda function in the JSON format below.
# values passed to the lambda function when the stop rule fires
input = <<JSON
{
  "aws_region" : "eu-west-1",
  "asg_name" : "webapp",
  "min" : "0",
  "desired" : "0",
  "max" : "10"
}
JSON
Now the stop rule has been created. The aws_cloudwatch_event_target "lambda-stop" resource selects the target to invoke when the rule fires; it literally triggers the Lambda function on the CloudWatch rule event.
To start the Auto Scaling EC2 instances
The code is exactly the same as for stopping the instances; only the cron expression changes, scheduling the rule to start the Auto Scaling group at 7 am every morning.
# /scale-up-daown-lamda.tf
# ...
# _______ start instances at 7am every day _______
resource "aws_cloudwatch_event_rule" "start" {
  name                = "everyday-7am"
  description         = "start instances everyday at 7am"
  schedule_expression = "cron(0 7 * * ? *)"
}

resource "aws_cloudwatch_event_target" "lambda-start" {
  rule      = "${aws_cloudwatch_event_rule.start.name}"
  target_id = "lambda1"
  arn       = "${aws_lambda_function.lambda_test.arn}"

  # values passed to the lambda function in the event
  input = <<JSON
{
  "aws_region" : "eu-west-1",
  "asg_name" : "webapp",
  "min" : "2",
  "desired" : "5",
  "max" : "10"
}
JSON
}

resource "aws_lambda_permission" "lamda-permision-start" {
  statement_id  = "AllowExecutionByCloudWatchStartRule"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.lambda_test.function_name}"
  principal     = "events.amazonaws.com"
  source_arn    = "${aws_cloudwatch_event_rule.start.arn}"
}
# _______ end of start instances at 7am _______
MinSize and DesiredCapacity are set to the same values we used when we configured the Auto Scaling group in the first place, meaning the rule brings the EC2 instances back exactly as they were at the original launch.
# launch instances as we configured autoscaling from the beginning
input = <<JSON
{
  "aws_region" : "eu-west-1",
  "asg_name" : "webapp",
  "min" : "2",
  "desired" : "5",
  "max" : "10"
}
JSON
Now we can easily stop our EC2 instances at night to save money on the cloud bill, with the whole process automated by Lambda. To make sure the scaling behaves properly, we should monitor the EC2 instances with CloudWatch.
Find the full repo here
That's pretty much it guys! Thank you
Launch EC2 Instances in Autoscaling group with Load balancing
I have been learning AWS services and writing up my experience throughout this blog. I have already discussed how to launch an instance using IaC, but I had not yet tried out AWS Auto Scaling with load balancers.
In this tutorial, I am going to show how to launch EC2 instances in an Auto Scaling group with load balancing.
What are load balancers and Auto Scaling?
Load balancer: Often referred to as an ELB (Elastic Load Balancer), it manages and controls the flow of inbound requests to a group of targets by distributing these requests evenly across the targeted resource group.
Auto Scaling: A mechanism that automatically scales EC2 resources up or down to meet demand, based on custom-defined metrics and thresholds.
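As a toy illustration of the "distributing evenly" part, this is what round-robin distribution over a target group looks like (the instance IDs here are made up):

```python
# Toy round-robin distribution across a target group (instance IDs are made up).
from itertools import cycle

targets = cycle(["i-aaa", "i-bbb", "i-ccc"])
assignments = [next(targets) for _ in range(6)]
print(assignments)  # prints: ['i-aaa', 'i-bbb', 'i-ccc', 'i-aaa', 'i-bbb', 'i-ccc']
```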
Steps
1. Launch an instance
2. Deploy the application on the instance
3. Build the AMI
4. Create a Launch Configuration
5. Create an Elastic Load Balancer
6. Create and configure an Auto Scaling Group
Find the full code here
Let's dive in,
1. Launch an instance
A prerequisite for autoscaling involves building an AMI containing your working application, which will be used to launch new instances. We'll start by launching a new instance onto which we can deploy our application. Create the following files:
---
# group_vars/all.yml
default_region: us-east-1
az1: us-east-1a
az2: us-east-1b
az3: us-east-1c
group_id: sg-abcd1234
instance_type: t2.micro
instance_name: template_ami
volumes:
  - device_name: /dev/sda1
    device_type: gp2
    volume_size: 20
    delete_on_termination: true
key_name: ec2_key
---
# site.yml
- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - luanch_ec2_instance
---
# luanch_ec2_instance/tasks/main.yml
# tasks file for luanch_ec2_instance
- name: find the base ami for the template instance
  ec2_ami_facts:
    owners: "099720109477"
    filters:
      name: "ubuntu/images/ubuntu-zesty-17.04-*"
  register: ami_result

- name: launch an instance
  ec2:
    region: "{{ default_region }}"
    keypair: "{{ key_name }}"
    zone: "{{ az1 }}"
    group: "{{ group_id }}"
    image: "{{ ami_result.images[0].image_id }}"
    instance_type: "{{ instance_type }}"
    instance_tags:
      Name: "{{ instance_name }}"
    volumes: "{{ volumes }}"
    wait: yes
  register: ec2

- name: Add new instances to host group
  add_host:
    name: "{{ item.public_dns_name }}"
    groups: "{{ instance_name }}"
    ec2_id: "{{ item.id }}"
  with_items: "{{ ec2.instances }}"

- name: Wait for instance to boot
  wait_for:
    host: "{{ item.public_dns_name }}"
    port: 22
    delay: 30
    timeout: 300
    state: started
  with_items: "{{ ec2.instances }}"
2. Deploy the application on the instance
Let's use Ansible to deploy our application and start it on the instance.
---
# site.yml
# install nginx on the launched instance
- hosts: template_ami
  roles:
    - nginx
---
# nginx/tasks/main.yml
- name: Install Nginx
  apt:
    pkg: nginx
    state: present
  sudo: yes

- name: Configure Nginx
  copy:
    src: nginx.conf
    dest: /etc/nginx/sites-enabled/default
  sudo: yes

- name: Enable and start Nginx
  service:
    name: nginx
    enabled: yes
    state: restarted
  sudo: yes
# nginx/files/nginx.conf
server {
    listen 80 default_server;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
Running the playbook each time will launch another instance, deploy our application, and set up Nginx as our web server. If you browse to the newest instance at its hostname, you should see an "Nginx Home" page.
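Rather than checking in a browser each time, a small poll script can confirm the app is answering. The function below is my own sketch, and the hostname in the comment is hypothetical:

```python
# Minimal HTTP health check: returns True only if the URL answers with 200.
import urllib.request

def is_up(url, timeout=3):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # connection refused, DNS failure, timeout, or non-2xx HTTP error
        return False

# e.g. is_up("http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com/")  # hypothetical hostname
```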
3. Build the AMI
Now that the application is deployed and running, we can use the newly launched instance to build an AMI.
---
# site.yml
- hosts: control
  connection: local
  gather_facts: no
  roles:
    - create_ami
---
# create_ami/tasks/main.yml
# tasks file for create_ami
- name: Create AMI
  ec2_ami:
    region: "{{ default_region }}"
    instance_id: "{{ ec2_id }}"
    name: "webapp-{{ ansible_date_time.iso8601 | regex_replace('[^a-zA-Z0-9]', '-') }}"
    wait: yes
    state: present
  register: ami
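The regex_replace filter in the AMI name simply replaces every non-alphanumeric character in the timestamp with a dash, since characters like colons are not valid in AMI names. The same transform in plain Python (ami_name is my own illustrative helper):

```python
# Mirror of the Jinja2 filter regex_replace('[^a-zA-Z0-9]', '-') used in the AMI name.
import re
from datetime import datetime, timezone

def ami_name(ts=None, prefix="webapp"):
    # default to the current UTC time in ISO 8601 form, like ansible_date_time.iso8601
    ts = ts or datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return prefix + "-" + re.sub(r"[^a-zA-Z0-9]", "-", ts)

print(ami_name(ts="2019-05-01T10:30:00Z"))  # prints: webapp-2019-05-01T10-30-00Z
```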
Note: Each time the playbook is run, Ansible launches a new instance. At this rate, we would keep accumulating instances we don't need, so we will add another role and a new task to locate the old instances and terminate them immediately afterward.
Terminate old instances
---
# site.yml
- name: Find existing instance(s)
  hosts: "tag_Name_ami-build"
  gather_facts: false
  tags: find
  tasks:
    - name: Add to old-ami-build group
      group_by:
        key: old-ami-build
- hosts: old-ami-build
  roles:
    - terminate_instance
---
# terminate_instance/tasks/main.yml
- name: Terminate old instance(s)
  ec2:
    instance_ids: "{{ ec2_id }}"
    region: "{{ default_region }}"
    state: absent
    wait: yes
4. Create a Launch Configuration
Now we want to create a new Launch Configuration to describe the new instances that should be launched from this AMI.
---
# site.yml
- hosts: control
  connection: local
  gather_facts: no
  roles:
    - create_ami
    - luanch_config
---
# luanch_config/tasks/main.yml
- name: Create Launch Configuration
  ec2_lc:
    region: "{{ default_region }}"
    name: "webapp-{{ ansible_date_time.iso8601 | regex_replace('[^a-zA-Z0-9]', '-') }}"
    image_id: "{{ ami.image_id }}"
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    security_groups: "{{ group_id }}"
    volumes: "{{ volumes }}"
    instance_monitoring: yes