So I’m back from the Boston OpenStack Summit almost two weeks now – what a fun week for a geek. Eating, drinking and sleeping technology… I love it. It was great to meet so many fellow geeks and to learn what customers are discovering on their various cloud journeys. If anyone wishes to see a re-run of my Fujitsu Cloud Service K5 presentation you can check it out on YouTube here – (I definitely need to shed a few lbs)
A prevalent theme at this OpenStack Summit was how Kubernetes and OpenStack are good companions, just like strawberry jam and cream – anyone for a cream tea? It’s almost summer.
However the great British cream tea debate lives on…
Some folks want to use Kubernetes to deploy OpenStack as a ‘kube’ application, whilst others see Kubernetes as the container orchestration layer on top of OpenStack. I can see merit in both routes but will sit on the fence until the community decides if there’s ever going to be a definitive winner. Either way, these will be complex solutions to debug when something goes wrong – whoever has the simplest self-healing tooling will win this race in my opinion.
Infrastructure as Code – OpenStack Heat Templates to Deploy Kubernetes
In the following example I’ve built a Heat stack that deploys Kubernetes on top of Fujitsu Cloud Service K5 IaaS. The Heat stack builds out the prerequisite infrastructure in K5, and then one of the deployed servers leverages the Kubernetes Kargo project, which uses an Ansible playbook to install and configure the Kubernetes PaaS layer.
I’ve provided you with a choice of two templates:
- Simple, non-HA 3+1 server deployment. Suitable to evaluate Kubernetes and deploy applications.
- Highly available, multi-AZ 6+1 server deployment. Suitable to evaluate Kubernetes with a clustered etcd and deploy applications.
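To give a flavour of what the stacks build, here is a minimal Heat fragment for a single Kubernetes node – note that the resource and parameter names below are illustrative only, not lifted from the actual templates:

```yaml
heat_template_version: 2015-04-30

description: Illustrative fragment - one Kubernetes node on K5

parameters:
  key_name:
    type: string
    description: Existing K5 keypair for SSH access
  image:
    type: string
    description: CentOS 7 image ID
  flavor:
    type: string
    default: S-2

resources:
  kube_node_port:
    type: OS::Neutron::Port
    properties:
      network: { get_resource: kube_network }
      security_groups: [ { get_resource: kube_security_group } ]

  kube_node:
    type: OS::Nova::Server
    properties:
      key_name: { get_param: key_name }
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - port: { get_resource: kube_node_port }
```

The real templates repeat this pattern per node (and, in the HA variant, spread the nodes across K5 availability zones), with one server’s user_data bootstrapping the Kargo Ansible run.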
Notes: I’m no Kubernetes guru – these are example Heat templates that will enable you to get Kubernetes up and running quickly on Fujitsu K5 for evaluation purposes. They have not been designed with production workloads in mind. Kubernetes is still a relatively young product that needs enhancements to its multi-tenancy and security components before it will be ready for the general enterprise production market – unless of course you have a team of developers who are keen to get coding – that’s the beauty of open source initiatives.
The Kubernetes LoadBalancer service type is API compatible with GCE (surprise, surprise) and also AWS. I have attempted to put a K5 load balancer in front of the minions, but it looks like it needs more debugging – I only get partial page returns and have run out of time to investigate further… I get to play with Kubernetes by night only at present. K5 doesn’t use the native OpenStack LBaaS v1 or v2 (these weren’t fit for purpose at K5 deployment time), so unfortunately I could not leverage the Kubernetes OpenStack LBaaS integration either.
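In the meantime, a NodePort service is a simple way to expose an application without any cloud load balancer integration – you point clients (or an external load balancer, once debugged) straight at the minions’ IPs. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend        # illustrative service name
spec:
  type: NodePort            # exposes the service on every node's IP
  selector:
    app: web-frontend       # matches pods labelled app=web-frontend
  ports:
    - port: 80              # cluster-internal service port
      targetPort: 8080      # container port the pods listen on
      nodePort: 30080       # port opened on each minion (30000-32767 range)
```

Apply it with kubectl and the application becomes reachable on port 30080 of any node in the cluster.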
If you do need a production-grade solution on K5, Kubernetes has been deliberately architected to facilitate plugins, so it wouldn’t be that difficult to ‘roll your own’ LBaaS API. And thanks to the community spirit, Concur has already done most of the hard work for us, providing a helper template here – https://github.com/concur/kubegowatcher.
Please pay close attention to the input parameters and ensure that you configure them to match your environment before deployment.
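One tidy way to do this is a Heat environment file, which keeps your site-specific values out of the template itself. The parameter names below are hypothetical examples – check the parameters section of whichever template you deploy for the real ones:

```yaml
# k5-kube-env.yaml - illustrative environment file; the parameter
# names are examples only, match them to the template's parameters section
parameters:
  key_name: my-k5-keypair        # existing keypair in your K5 project
  availability_zone: uk-1a       # K5 AZ to deploy into
  my_ip: 203.0.113.10/32         # your public IP, for the SSH security group rule
```

You would then pass it at deployment time with something like `heat stack-create -f kubernetes.yaml -e k5-kube-env.yaml kube-stack`.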
Once deployed, log in to the JumpBox; from there you can access all the kube nodes.