
Auto-Scaling Infrastructure as Code – Elastic Compute on K5 IaaS

In today’s article we’ll walk through how to deploy and test Fujitsu’s K5 IaaS AutoScaling feature.

Overview

So, what do I mean by the K5 auto-scaling feature? Well, imagine that you have a virtual server with peaks and troughs in demand. Take the Ticketmaster webserver example: 99% of the time the server sits there idling at 5% utilisation. However, when Westlife regroup (and they will) and launch ticket sales for their new tour, the Ticketmaster server becomes overloaded as the once-teenies, now cash-rich thirty-somethings, all try to log in and buy tickets at the same time. [Replace Westlife with the iPhone X launch if that makes more sense to you.]

Along comes auto-scaling to the rescue, also often referred to as elastic compute. If the Ticketmaster server was created using K5's autoscaling feature, then when the load on the server begins to rise and crosses a predefined threshold, K5 automagically builds you a new server and adds it into your load-balanced instance pool, all with zero effort on your part.

And it gets even better: once everyone has their tickets and the load-balanced servers are no longer heavily utilised, K5 will scale the infrastructure back in to match the current demand, automatically.

Sounds like a lot of work? The good news is that K5 has already enhanced HEAT and leverages K5's OpenStack Ceilometer to do all this work for you; these enhancements are documented in the K5 HEAT Guide.

All you need to do is configure and deploy a heat stack such as this one:


# Basic K5 template to demonstrate Fujitsu's HEAT Autoscaling enhancements
# Author: Graham J Land
# Date: 10/10/2017
heat_template_version: 2013-05-23

description:
  Fujitsu Cloud Service K5 IaaS AutoScaling Example Template.

# The prerequisites for a successful deployment
parameters:
  # target availability zone
  az:
    type: string
    default: uk-1b
  # server to be scaled - simple nodejs app in this demo
  param_image_id:
    type: string
    default: bc4d2c64-1694-4488-80e2-e089bd18fc42
  # t-shirt size to use
  param_flavor:
    type: string
    default: S-1
  # ssh keys to be injected into scaled servers
  key_name:
    type: string
    description: SSH key to connect to the servers
    default: LEMP-KP-AZ2
  # existing router in project with external gateway configured
  autoscale_router:
    type: string
    default: 5b29b682-df94-4178-b1b4-9bf487055787

# what actually gets built
resources:
  # create a private network
  autoscale_private_net_az:
    type: OS::Neutron::Net
    properties:
      availability_zone: { get_param: az }
      name: "autoscale_private_net"

  # create a new subnet on the private network above
  autoscale_private_subnet_az:
    type: OS::Neutron::Subnet
    depends_on: autoscale_private_net_az
    properties:
      availability_zone: { get_param: az }
      name: "autoscale_private_subnet_az"
      network_id: { get_resource: autoscale_private_net_az }
      cidr: "192.168.200.0/24"
      gateway_ip: "192.168.200.254"
      allocation_pools:
        - start: "192.168.200.100"
          end: "192.168.200.150"
      dns_nameservers: ["62.60.42.9", "62.60.42.10"]

  # connect an interface on the network's subnet to the existing router
  az_router_interface:
    type: OS::Neutron::RouterInterface
    depends_on: [autoscale_private_subnet_az]
    properties:
      router_id: { get_param: autoscale_router }
      subnet_id: { get_resource: autoscale_private_subnet_az }

  # create a new security group for your PC's access
  # just google "what's my ip" to determine your public NAT address
  # mine was 31.53.253.24 during the demo below
  security_group_01:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Add security group rules for server
      name: AutoScaleServer
      rules:
        # allow ssh (port 22) connections from my pc
        - remote_ip_prefix: 31.53.253.24/32
          protocol: tcp
          port_range_min: 22
          port_range_max: 22
        # allow ping packets from my pc
        - remote_ip_prefix: 31.53.253.24/32
          protocol: icmp

  # create an open security group so everyone can reach the public LBaaS
  security_group_02:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Add security group rules for server
      name: AutoScaleLBaaS
      rules:
        # allow http (port 80) traffic from the whole internet
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
          port_range_min: 80
          port_range_max: 80

  # define the scaling server pool
  web_server_group:
    depends_on: [az_router_interface]
    type: FCX::AutoScaling::AutoScalingGroup
    properties:
      AvailabilityZones: [{ get_param: az }]
      LaunchConfigurationName: { get_resource: launch_config }
      MinSize: '1'
      MaxSize: '3'
      VPCZoneIdentifier: [{ get_resource: autoscale_private_subnet_az }]
      LoadBalancerNames: [{ get_resource: eLBint }]

  # this is the actual scalable unit of deployment - the web server
  launch_config:
    type: FCX::AutoScaling::LaunchConfiguration
    depends_on: [security_group_01, az_router_interface]
    properties:
      ImageId: { get_param: param_image_id }
      InstanceType: { get_param: param_flavor }
      KeyName: { get_param: key_name }
      SecurityGroups: [{ get_resource: security_group_01 }, { get_resource: security_group_02 }]
      BlockDeviceMappingsV2: [{ source_type: 'image', destination_type: 'volume', boot_index: '0', device_name: '/dev/vda', volume_size: '3', uuid: { get_param: param_image_id }, delete_on_termination: true }]
      UserData: |
        #!/bin/bash
        sudo hostname `hostname`
        echo "Rebooting Hack"
        sudo reboot

  # create the load balancer that will be used to
  # manage the scaling instances
  eLBint:
    type: FJ::ExpandableLoadBalancer::LoadBalancer
    depends_on: [security_group_01, az_router_interface]
    properties:
      Subnets: [{ get_resource: autoscale_private_subnet_az }]
      Listeners:
        - { LoadBalancerPort: '80', InstancePort: '80', Protocol: 'HTTP', InstanceProtocol: 'HTTP' }
      HealthCheck: { Target: 'HTTP:80/', HealthyThreshold: '2', UnhealthyThreshold: '3', Interval: '5', Timeout: '5' }
      Version: 2014-09-30
      Scheme: public
      LoadBalancerName: autoscaler
      SecurityGroups: [{ get_resource: security_group_02 }]

  # create the scale-out policy
  web_server_scaleout_policy:
    type: FCX::AutoScaling::ScalingPolicy
    properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: { get_resource: web_server_group }
      Cooldown: '10'
      ScalingAdjustment: '1'

  # create the scale-in policy
  web_server_scalein_policy:
    type: FCX::AutoScaling::ScalingPolicy
    properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: { get_resource: web_server_group }
      Cooldown: '10'
      ScalingAdjustment: '-1'

  # create the ALARM event which triggers when
  # the server is overloaded
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-out if the average CPU > 50% for 1 minute
      meter_name: fcx.compute.cpu_util
      statistic: avg
      period: '60'
      evaluation_periods: '1'
      threshold: '50'
      alarm_actions:
        - { get_attr: [web_server_scaleout_policy, AlarmUrl] }
      matching_metadata: { 'metadata.user_metadata.groupname': { get_resource: web_server_group } }
      comparison_operator: gt

  # create the 'reset' ALARM event when services return to normal workloads
  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-in if the average CPU < 15% for 1 minute
      meter_name: fcx.compute.cpu_util
      statistic: avg
      period: '60'
      evaluation_periods: '1'
      threshold: '15'
      alarm_actions:
        - { get_attr: [web_server_scalein_policy, AlarmUrl] }
      matching_metadata: { 'metadata.user_metadata.groupname': { get_resource: web_server_group } }
      comparison_operator: lt
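
Before launching, it can be worth sanity-checking the template syntax from the CLI. This is just a sketch: it assumes you've saved the template locally as autoscale.yaml (a filename I've made up) and that the openstack client's orchestration plugin (python-heatclient) is installed and authenticated against your K5 project:

# validate the template before deploying
# (assumes an authenticated openstack CLI with the heat plugin installed,
#  and that the template above is saved as autoscale.yaml)
openstack orchestration template validate -t autoscale.yaml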

So let's try this now.

Template Prerequisites:

az: The target availability zone; in my case, uk-1b.

param_image_id: The id of the image to scale. If you don't have your own server to test with, the build image I'm using is a simple Ubuntu server with Node.js installed and the following application saved as /var/helloworld/helloworld.js (don't forget to run "npm install --save express"):


// Fujitsu K5 IaaS AutoScaling demo - minimal Express "hello world"
// that reports the hostname of the instance serving the request
const express = require('express')
const app = express()
const serverName = require('os').hostname();
const messageTop = '<div class="middle">\
<img src="https://www.fujitsu.com/uk/Images/K5-climber-580x224_tcm23-2619235.jpg" alt="K5 Autoscale">\
<h1>Hello from '
const messageTail = '</h1>\
<hr>\
</div>'

app.get('/', function (req, res) {
  res.send(messageTop + serverName + messageTail)
})

app.listen(80, function () {
  console.log('Example app listening on port 80')
})

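Before baking the image, it's worth a quick smoke test of the app itself. A minimal check, assuming Node.js and npm are already installed on the build server:

# install the express dependency and run the app in the foreground
cd /var/helloworld
npm install --save express
sudo nodejs helloworld.js &   # binding to port 80 requires root
curl http://localhost/        # should return the HTML including the hostname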

To run this application as a service on Ubuntu, copy the following helloworld.conf file to /etc/init on the server. Be sure to adjust the filename and path in the file below to match the name and location where you stored the script above:


description "Fujitsu K5 IaaS AutoScaling Demo Node.js Server"
author "Graham J Land"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/root"
exec /usr//bin/nodejs /var/helloworld/helloworld.js >> /var/log/node.log 2>&1
end script
post-start script
echo "Started HelloWorld Demo NodeJS Webserver"
end script

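With the conf file in place, upstart manages the app for you. As a quick check, something like this should work (a sketch, assuming you saved the file as /etc/init/helloworld.conf, since upstart derives the job name from the filename):

sudo initctl reload-configuration   # make upstart re-read /etc/init
sudo service helloworld start       # start the node app as an upstart job
sudo service helloworld status      # confirm it is running
tail /var/log/node.log              # check the app's output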

param_flavor: The t-shirt server size; I'm using S-1.

key_name: Your public ssh key; I've used LEMP-KP-AZ2.

autoscale_router: The id of an existing router in your project that has an external gateway configured.
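
If you're unsure of the router id, the OpenStack CLI can list the routers in your project. A sketch, assuming an authenticated CLI session against your K5 project:

# find a router with an external gateway configured
openstack router list
openstack router show <router-id>   # external_gateway_info should be populated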

Launch Template:

Once you've satisfied the above prerequisites and modified the template with your local configuration, it's time to deploy the infrastructure. I'll use the K5 portal in this example; however, you could also use the native K5 APIs or even the OpenStack client.
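
For reference, the CLI route would look something like this (a sketch, assuming the template is saved as autoscale.yaml and your OS_* authentication variables are exported for the K5 region):

# launch the stack and watch it build
openstack stack create -t autoscale.yaml autoscale-demo
openstack stack list                            # wait for CREATE_COMPLETE
openstack stack resource list autoscale-demo    # inspect the resources built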

Watch the movie, free, to see how it all comes together.
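
Alternatively, if you'd rather trigger the scaling yourself instead of waiting for real load, ssh to one of the pool instances and burn some CPU. A sketch, assuming the private key for LEMP-KP-AZ2 is to hand and the instance is reachable (the stress package is not installed by default):

# drive average CPU above the 50% scale-out threshold for over a minute
ssh -i LEMP-KP-AZ2.pem ubuntu@<instance-ip>
sudo apt-get install -y stress
stress --cpu 1 --timeout 180   # pin one vCPU for three minutes
# once the load stops and CPU falls below 15%, the scale-in alarm fires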


Happy Stacking!

Graham.
