Auto-Scaling Infrastructure as Code – Elastic Compute on K5 IaaS

In today’s article we’ll walk through how to deploy and test Fujitsu’s K5 IaaS AutoScaling feature.

Overview

So, what do I mean by the K5 auto-scaling feature? Well, imagine that you have a virtual server with peaks and troughs in demand. Take the Ticketmaster web server example: 99% of the time the server sits there idling at 5% utilisation. However, when Westlife regroup (and they will) and launch ticket sales for their new tour, the Ticketmaster server becomes overloaded while the once-teenies, now cash-rich thirty-somethings, all try to log in and buy tickets at the same time. [Replace Westlife with an iPhone X launch if that makes more sense to you.]

Along comes auto-scaling to the rescue, also often referred to as elastic compute. If the Ticketmaster server was created using K5's auto-scaling feature, then when the load on the server begins to rise and crosses a predefined threshold, K5 automagically builds you a new server and adds it to your load-balanced instance pool, all with zero effort on your part.

And it gets even better: once everyone has their tickets and the load-balanced servers are no longer heavily utilised, K5 scales the infrastructure back in to match the current demand, automatically.

Sounds like a lot of work? The good news is that K5 has already enhanced Heat and leverages OpenStack Ceilometer to do all of this for you; these enhancements are documented in the K5 Heat Guide, which is located here.

All you need to do is configure and deploy a Heat stack such as this one:
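Here is a minimal sketch of such a stack, assuming the standard OpenStack resource types (OS::Heat::AutoScalingGroup, OS::Heat::ScalingPolicy and OS::Ceilometer::Alarm) that the K5 enhancements build on. The parameter names match the prerequisites below, but the group sizes, thresholds and periods are illustrative rather than taken from the actual template:

```yaml
heat_template_version: 2013-05-23
description: Illustrative auto-scaling sketch (not the exact K5 template)

parameters:
  az:
    type: string
    description: Target availability zone, e.g. uk-1b
  param_image_id:
    type: string
    description: ID of the image used to build the scaled servers
  param_flavor:
    type: string
    default: S-1
    description: T-shirt server size
  key_name:
    type: string
    description: Name of an existing SSH keypair
  autoscale_router:
    type: string
    description: ID of an existing router with an external gateway

resources:
  # The pool of servers that grows and shrinks with demand
  web_scaling_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          availability_zone: { get_param: az }
          image: { get_param: param_image_id }
          flavor: { get_param: param_flavor }
          key_name: { get_param: key_name }

  # Policies the alarms fire to add or remove one server at a time
  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: web_scaling_group }
      cooldown: 60
      scaling_adjustment: 1

  scale_in_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: web_scaling_group }
      cooldown: 60
      scaling_adjustment: -1

  # Ceilometer alarms watching average CPU across the group
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_out_policy, alarm_url] }

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 10
      comparison_operator: lt
      alarm_actions:
        - { get_attr: [scale_in_policy, alarm_url] }
```

The template you actually deploy will also need to wire the scaled servers into a network and the load-balanced pool described above; I've left those resources out to keep the sketch readable.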

So let's try this now.

Template Prerequisites:

az: the target availability zone; in my case uk-1b
param_image_id: the ID of the image from which the scaled servers are built

If you don't have your own server to test with, the build image that I'm using is a simple Ubuntu server with Node.js installed and the following application at /var/helloworld/helloworld.js (don't forget to "npm install --save express"):
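Here is a minimal sketch of such an application, assuming Express (any tiny HTTP responder would do); port 3000 is an assumption rather than a requirement:

```javascript
// /var/helloworld/helloworld.js
// Tiny Express app so the scaled servers have something to serve.
// Returning the hostname makes it easy to see which pool member
// answered once the load balancer is in front. Port 3000 is an
// assumption; match whatever your listener expects.
var os = require('os');
var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello World from ' + os.hostname() + '\n');
});

app.listen(3000, function () {
  console.log('helloworld listening on port 3000');
});
```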

To run this application as a service on Ubuntu, copy the following helloworld.conf file to /etc/init on the server. Be sure to adjust the filename and path in the file to match the name and location where you stored the application above:
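A sketch of the Upstart job; the paths below assume the application location given above, so adjust to taste:

```
# /etc/init/helloworld.conf
# Upstart job: start the Node.js app at boot and respawn it if it dies.
description "helloworld node.js service"

start on runlevel [2345]
stop on runlevel [016]

respawn

exec /usr/bin/node /var/helloworld/helloworld.js
```

Start it with "sudo service helloworld start" and sanity-check it with "curl localhost:3000".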

param_flavor: the t-shirt server size; I've used S-1

key_name: the name of your public SSH key; I've used LEMP-KP-AZ2
autoscale_router: the ID of an existing router in your project that has an external gateway configured (you can look it up as shown below)
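If you're unsure of the router ID, the OpenStack client can list the routers in your project (this assumes your K5 credentials are already sourced into the environment):

```
$ openstack router list -c ID -c Name
```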

Launch Template:

Once you've satisfied the above prerequisites and modified the template with your local configuration, it's time to deploy the infrastructure. I'll use the K5 portal in this example; however, you could also use the native K5 APIs or even the OpenStack client.
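If you'd rather script it, something along these lines should work with the OpenStack client (the stack and file names are illustrative, and this assumes the Heat plugin, python-heatclient, is installed alongside your K5 credentials):

```
$ openstack stack create --template autoscale.yaml \
    --parameter az=uk-1b \
    --parameter key_name=LEMP-KP-AZ2 \
    autoscale-demo

# watch the stack come up, then the scaling events as load changes
$ openstack stack list
$ openstack stack event list autoscale-demo
```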

To see how it all comes together, watch the movie, for free, here.

Happy Stacking!

Graham.
