Fujitsu K5 Infrastructure as Code (Cookie Cutter)

My latest challenge was to develop a process to deploy a predefined infrastructure model (a simple network with three nodes) in a consistent and repeatable fashion. The same model then needed to be deployed for every flavor type, each with a different disk configuration.

Basically, a customer wanted a repeatable mechanism to performance-test the different node flavors at scale.

We could simply build on the previous HEAT examples and write one massive YAML template containing all of the required infrastructure. However, that approach is error-prone, difficult to debug, and neither efficient nor flexible.

What we need is to separate the static and dynamic infrastructure components: define a template that captures the static components and allows the dynamic components to be passed in as parameters. Any coders out there will be familiar with the DRY principle, Don't Repeat Yourself; the same applies here.

As we were deploying into a project with existing infrastructure, I also passed some of those details into the heat stack as input parameters, e.g. routerId and kpName.
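For illustration, the sort of parameter set passed in for a single deployment might look like this. This is a hypothetical example: the values must match objects that already exist in the target project, and the keys correspond to the template's input parameters.

```python
# Hypothetical example parameter set for one stack deployment.
# routerId and kpName reference objects that already exist in the project;
# the remaining keys override the template defaults.
stack_parameters = {
    "routerId": "fcb1dddc-e0c8-4dd5-8a3f-4eee3b042912",  # existing router id
    "kpName": "k5-loadtest-az1",   # existing ssh key pair in the AZ
    "flavorName": "T-1",           # node flavor under test
    "dataVolume": "3",             # data disk size in GB
    "azName": "uk-1a",             # availability zone (same as the router)
}
```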

The basic infrastructure template looked like this:

heat_template_version: 2013-05-23
# Author: Graham Land
# Date: 08/03/2017
# Purpose: Fujitsu K5 OpenStack IaaS Heat Template that deploys 3 servers on a new network and attaches the network to a given router.
# Input parameters:
# routerId   - unique id of the router that the network should be attached to
# imageName  - the image OS that will be deployed
# flavorName - the vcpu and ram size of the servers
# dataVolume - the size of the data volume to be attached to the servers
# cidr       - private network ip address details
# azName     - the availability zone to deploy the servers in - this needs to be the same as the router's location
# kpName     - the name of an existing ssh key pair to use in the availability zone
#
# Output parameters: the ip addresses of the 3 servers
#
# Twitter: @allthingsclowd
# Blog: https://allthingscloud.eu
#
description: Fujitsu K5 OpenStack IaaS Heat Template that deploys 3 servers on a new network and attaches the network to a given router.
# Input parameters
parameters:
  imageName:
    type: string
    label: Image name or ID
    description: Image to be used for compute instance
    default: "Ubuntu Server 14.04 LTS (English) 02"
  flavorName:
    type: string
    label: Flavor
    description: X vCPU and XXXXMB RAM
    default: "T-1"
  kpName:
    type: string
    label: Key name
    description: Name of key-pair to be used for compute instance
    default: "k5-loadtest-az1"
  cidr:
    type: string
    label: ip address details
    description: network address range
    default: "10.99.99.0/24"
  dataVolume:
    type: string
    label: volume size
    description: size in GB of data volume to attach to server
    default: "3"
  osVolume:
    type: string
    label: volume size
    description: size in GB of OS volume to attach to server
    default: "20"
  azName:
    type: string
    label: Availability Zone
    description: Region AZ to use
    default: "uk-1a"
  securityGroup:
    type: string
    label: Existing K5 security group name
    description: Project Security Group
    default: "demosecuritygroup"
  routerId:
    type: string
    label: External Router
    description: Router with external access for global ip allocation
    default: "fcb1dddc-e0c8-4dd5-8a3f-4eee3b042912"
# K5 Infrastructure resources to be built
resources:
  ############################ Network Resources ####################
  # Create a private network in the availability zone
  demostack_private_net:
    type: OS::Neutron::Net
    properties:
      name: "private"
      availability_zone: { get_param: azName }
  # Create a new subnet on the private network
  demostack_private_subnet:
    type: OS::Neutron::Subnet
    depends_on: demostack_private_net
    properties:
      availability_zone: { get_param: azName }
      network_id: { get_resource: demostack_private_net }
      cidr: { get_param: cidr }
      dns_nameservers: ["62.60.39.9", "62.60.39.10"]
  # Connect an interface on the demostack network's subnet to the router
  router_interface:
    type: OS::Neutron::RouterInterface
    depends_on: [ demostack_private_subnet ]
    properties:
      router_id: { get_param: routerId }
      subnet_id: { get_resource: demostack_private_subnet }
  ################## Servers Resources ###########################
  ################################ create server demo-mgmt1-server ##############################
  # Create a data volume for use with the server
  demo-mgmt1-server-data-vol:
    type: OS::Cinder::Volume
    properties:
      availability_zone: { get_param: azName }
      description: data Storage
      size: { get_param: dataVolume }
      volume_type: "M1"
  # Create a system volume for use with the server
  demo-mgmt1-server-sys-vol:
    type: OS::Cinder::Volume
    properties:
      availability_zone: { get_param: azName }
      size: { get_param: osVolume }
      volume_type: "M1"
      image: { get_param: imageName }
  # Build a server using the system volume defined above
  demo-mgmt1-server:
    type: OS::Nova::Server
    depends_on: [ demostack_private_subnet ]
    properties:
      key_name: { get_param: kpName }
      image: { get_param: imageName }
      flavor: { get_param: flavorName }
      security_groups: [{ get_param: securityGroup }]
      block_device_mapping: [{"volume_size": { get_param: osVolume }, "volume_id": { get_resource: demo-mgmt1-server-sys-vol }, "delete_on_termination": True, "device_name": "/dev/vda"}]
      admin_user: "ubuntu"
      metadata: { "fcx.autofailover": True, "Example Custom Tag": "Multiple Server Build" }
      user_data:
        str_replace:
          template: |
            #cloud-config
            write_files:
              - content: |
                  #!/bin/bash
                  voldata_id=%voldata_id%
                  voldata_dev="/dev/disk/by-id/virtio-$(echo ${voldata_id} | cut -c -20)"
                  mkfs.ext4 ${voldata_dev}
                  mkdir -pv /mnt/appdata
                  echo "${voldata_dev} /mnt/appdata ext4 defaults 1 2" >> /etc/fstab
                  mount /mnt/appdata
                  chmod 0777 /mnt/appdata
                path: /tmp/format-disks
                permissions: '0700'
            runcmd:
              - /tmp/format-disks
          params:
            "%voldata_id%": { get_resource: demo-mgmt1-server-data-vol }
      user_data_format: RAW
      networks: [{ "uuid": { get_resource: demostack_private_net } }]
  # Attach the previously defined data volume to the server
  attach-demo-mgmt1-server-data-vol:
    type: OS::Cinder::VolumeAttachment
    depends_on: [ demo-mgmt1-server-data-vol, demo-mgmt1-server ]
    properties:
      instance_uuid: { get_resource: demo-mgmt1-server }
      mountpoint: "/dev/vdb"
      volume_id: { get_resource: demo-mgmt1-server-data-vol }
  ################################ create server demo-mgmt2-server ##############################
  # Create a data volume for use with the server
  demo-mgmt2-server-data-vol:
    type: OS::Cinder::Volume
    properties:
      availability_zone: { get_param: azName }
      description: data Storage
      size: { get_param: dataVolume }
      volume_type: "M1"
  # Create a system volume for use with the server
  demo-mgmt2-server-sys-vol:
    type: OS::Cinder::Volume
    properties:
      availability_zone: { get_param: azName }
      size: { get_param: osVolume }
      volume_type: "M1"
      image: { get_param: imageName }
  # Build a server using the system volume defined above
  demo-mgmt2-server:
    type: OS::Nova::Server
    depends_on: [ demostack_private_subnet ]
    properties:
      key_name: { get_param: kpName }
      image: { get_param: imageName }
      flavor: { get_param: flavorName }
      block_device_mapping: [{"volume_size": { get_param: osVolume }, "volume_id": { get_resource: demo-mgmt2-server-sys-vol }, "delete_on_termination": True, "device_name": "/dev/vda"}]
      admin_user: "ubuntu"
      security_groups: [{ get_param: securityGroup }]
      metadata: { "fcx.autofailover": True, "Example Custom Tag": "Multiple Server Build" }
      user_data:
        str_replace:
          template: |
            #cloud-config
            write_files:
              - content: |
                  #!/bin/bash
                  voldata_id=%voldata_id%
                  voldata_dev="/dev/disk/by-id/virtio-$(echo ${voldata_id} | cut -c -20)"
                  mkfs.ext4 ${voldata_dev}
                  mkdir -pv /mnt/appdata
                  echo "${voldata_dev} /mnt/appdata ext4 defaults 1 2" >> /etc/fstab
                  mount /mnt/appdata
                  chmod 0777 /mnt/appdata
                path: /tmp/format-disks
                permissions: '0700'
            runcmd:
              - /tmp/format-disks
          params:
            "%voldata_id%": { get_resource: demo-mgmt2-server-data-vol }
      user_data_format: RAW
      networks: [{ "uuid": { get_resource: demostack_private_net } }]
  # Attach the previously defined data volume to the server
  attach-demo-mgmt2-server-data-vol:
    type: OS::Cinder::VolumeAttachment
    depends_on: [ demo-mgmt2-server-data-vol, demo-mgmt2-server ]
    properties:
      instance_uuid: { get_resource: demo-mgmt2-server }
      mountpoint: "/dev/vdb"
      volume_id: { get_resource: demo-mgmt2-server-data-vol }
  ################################ create server demo-mgmt3-server ##############################
  # Create a data volume for use with the server
  demo-mgmt3-server-data-vol:
    type: OS::Cinder::Volume
    properties:
      availability_zone: { get_param: azName }
      description: data Storage
      size: { get_param: dataVolume }
      volume_type: "M1"
  # Create a system volume for use with the server
  demo-mgmt3-server-sys-vol:
    type: OS::Cinder::Volume
    properties:
      availability_zone: { get_param: azName }
      size: { get_param: osVolume }
      volume_type: "M1"
      image: { get_param: imageName }
  # Build a server using the system volume defined above
  demo-mgmt3-server:
    type: OS::Nova::Server
    depends_on: [ demostack_private_subnet ]
    properties:
      key_name: { get_param: kpName }
      image: { get_param: imageName }
      flavor: { get_param: flavorName }
      block_device_mapping: [{"volume_size": { get_param: osVolume }, "volume_id": { get_resource: demo-mgmt3-server-sys-vol }, "delete_on_termination": True, "device_name": "/dev/vda"}]
      admin_user: "ubuntu"
      security_groups: [{ get_param: securityGroup }]
      metadata: { "fcx.autofailover": True, "Example Custom Tag": "Multiple Server Build" }
      user_data:
        str_replace:
          template: |
            #cloud-config
            write_files:
              - content: |
                  #!/bin/bash
                  voldata_id=%voldata_id%
                  voldata_dev="/dev/disk/by-id/virtio-$(echo ${voldata_id} | cut -c -20)"
                  mkfs.ext4 ${voldata_dev}
                  mkdir -pv /mnt/appdata
                  echo "${voldata_dev} /mnt/appdata ext4 defaults 1 2" >> /etc/fstab
                  mount /mnt/appdata
                  chmod 0777 /mnt/appdata
                path: /tmp/format-disks
                permissions: '0700'
            runcmd:
              - /tmp/format-disks
          params:
            "%voldata_id%": { get_resource: demo-mgmt3-server-data-vol }
      user_data_format: RAW
      networks: [{ "uuid": { get_resource: demostack_private_net } }]
  # Attach the previously defined data volume to the server
  attach-demo-mgmt3-server-data-vol:
    type: OS::Cinder::VolumeAttachment
    depends_on: [ demo-mgmt3-server-data-vol, demo-mgmt3-server ]
    properties:
      instance_uuid: { get_resource: demo-mgmt3-server }
      mountpoint: "/dev/vdb"
      volume_id: { get_resource: demo-mgmt3-server-data-vol }
outputs:
  server1_ip:
    description: fixed ip assigned to server 1
    value: { get_attr: [ demo-mgmt1-server, networks, "private", 0 ] }
  server2_ip:
    description: fixed ip assigned to server 2
    value: { get_attr: [ demo-mgmt2-server, networks, "private", 0 ] }
  server3_ip:
    description: fixed ip assigned to server 3
    value: { get_attr: [ demo-mgmt3-server, networks, "private", 0 ] }

Now that we have our Infrastructure as Code, how do we deploy it at scale whilst changing the input parameters? Well, this is where the 'API Economy' comes to the forefront. Fujitsu K5 is based on OpenStack, which is an API-first platform. In English rather than marketing-speak, this means the platform can be driven 100% through API interaction alone. Still confused? I get to use more code!

I can send the heat template above, along with the different sets of parameters, to K5's orchestration engine using a Python script that posts the data to the orchestration endpoint.

import sys
import requests

# get_endpoint() returns the named service endpoint from the K5 token's
# service catalogue -- it is defined elsewhere in the linked repository.

def deploy_heat_stack(k5token, stack_name, stack_to_deploy, stack_parameters):
    """Summary: K5 HEAT API call to send a heat stack, wrapped in a string, to a K5 project.
    Returns:
        JSON object containing the new stack id, or error codes
    """
    orchestrationURL = get_endpoint(k5token, "orchestration") + "/stacks"
    print(orchestrationURL)
    token = k5token.headers['X-Subject-Token']
    try:
        response = requests.post(orchestrationURL,
                                 headers={'X-Auth-Token': token,
                                          'Content-Type': 'application/json',
                                          'Accept': 'application/json'},
                                 json={"files": {},
                                       "disable_rollback": True,
                                       "parameters": stack_parameters,
                                       "stack_name": stack_name,
                                       "template": stack_to_deploy,
                                       "timeout_mins": 60})
        return response
    except Exception:
        return ("\nUnexpected error:", sys.exc_info())


The advantage here is that the entire process is now defined in code, which can easily be version controlled, helping to guarantee consistent, repeatable deployments.
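To run the cookie cutter across all the flavors under test, something as simple as the following sketch will do. The flavor names, disk sizes, and stack-naming scheme here are illustrative assumptions, not values from the customer's project; the helper just builds one parameter dict per flavor to feed into deploy_heat_stack above.

```python
# Flavor/disk combinations to sweep -- example values only; substitute the
# flavors and sizes you actually want to performance test.
FLAVORS = ["T-1", "S-1", "S-2"]
DATA_VOLUME_GB = {"T-1": "3", "S-1": "10", "S-2": "20"}

def build_parameter_sets(flavors, router_id, kp_name):
    """Yield a (hypothetical) stack name plus Heat parameter dict per
    flavor, with keys matching the template's input parameters."""
    for flavor in flavors:
        yield {
            "stack_name": "cookiecutter-" + flavor.lower(),
            "parameters": {
                "flavorName": flavor,
                "dataVolume": DATA_VOLUME_GB[flavor],
                "routerId": router_id,
                "kpName": kp_name,
            },
        }

# Example driver -- needs a valid K5 token and the deploy_heat_stack
# function above, so it is commented out to keep this sketch standalone:
# template = open("K5_Cookie_Cutter.yaml").read()
# for job in build_parameter_sets(FLAVORS, "<router-id>", "k5-loadtest-az1"):
#     response = deploy_heat_stack(k5token, job["stack_name"],
#                                  template, job["parameters"])
#     print(job["stack_name"], response.status_code)
```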

The complete version of this solution can be checked out here: https://github.com/allthingsclowd/Fujitsu_OpenStack_K5_Heat_Cookie_Cutter
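Once the stacks are submitted you can watch their progress through the same orchestration endpoint, using the standard OpenStack Heat "show stack" call. A minimal sketch, assuming the endpoint URL and token are obtained the same way as in the deploy function; get_stack_status and extract_stack_status are names I have made up for illustration:

```python
def get_stack_status(orchestration_url, token, stack_name):
    """Look up a stack by name via the Heat API and return its status,
    e.g. CREATE_IN_PROGRESS or CREATE_COMPLETE."""
    import requests  # imported here so the parsing helper below stays dependency-free
    response = requests.get(orchestration_url + "/stacks/" + stack_name,
                            headers={"X-Auth-Token": token,
                                     "Accept": "application/json"})
    return extract_stack_status(response.json())

def extract_stack_status(body):
    """Pull the stack_status field out of a Heat 'show stack' response body."""
    return body["stack"]["stack_status"]
```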


Happy Stacking!

#withk5youcan

