The number of OpenStack projects related to container technologies is growing fast, along with the confusion surrounding their differing use cases. Hopefully this post will help clarify when and why to use each of them.
First of all, though: why containers? What’s the big deal? Aren’t they just another developer-led “flavour of the month”? If you have yet to read about containers or use them in development or production, it is highly likely you work for an enterprise still running traditional, monolithic applications and deployment methods, whether in a private data centre or a public cloud.
Over the past five years a container company, Docker, released software and toolsets known by the same name (though the open source project was recently, and confusingly, renamed Moby) which vastly simplified the use of the kernel mechanisms that isolate processes and their resources from other processes within an operating system. These mechanisms (network namespaces, cgroups, etc.) have been around for many years, but you needed to be a rocket scientist (no pun intended) to comprehend the system calls necessary to invoke the isolation. Docker wrapped these system calls in a very easy-to-use API, which facilitated mass adoption of containers – people like me were suddenly empowered.
Initially Linux was the only supported host OS, but over the last two years Microsoft has gone all out to ‘mimic’ the container experience on Windows – it’s still rough around the edges but improving rapidly. Containers can now run on some mainframes too.
However, Docker didn’t stop there: the company very cleverly set up a free global repository for storing container images (layered filesystem snapshots, built from a simple text recipe called a Dockerfile). The software industry rapidly embraced this new technology along with the https://hub.docker.com repository, quickly filling it with distributions of their software offerings packaged as ready-made containers.
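To make the image idea concrete, here is a minimal, hypothetical Dockerfile (the base image tag, file name and registry are invented for illustration):

```dockerfile
# Hypothetical build recipe: package a small Python script as an image.
FROM python:3.11-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Something like `docker build -t myorg/myapp:1.0 .` followed by `docker push myorg/myapp:1.0` would then build the image and publish it to a registry such as https://hub.docker.com.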
So now we have a container engine with a fantastic library of applications that can be simply and, more importantly, consistently deployed with a standard set of commands.
As a result of this repository, and a great API, Docker has become the de facto standard for container engines – though thankfully we’ve got the Open Container Initiative (OCI), which Docker adheres to, and which should ensure we could switch container engines with minimal effort if required.
This revolution – and I see it as a revolution, not an evolution – both empowered and frustrated the developer community. The frustration comes when developers try to deploy their container-based applications into non-container-based production environments: they are back to long lead times for environment build-out, which results in long application cycle times. The following OpenStack container projects will hopefully help to solve many of these challenges.
So why use containers?
- Improved Server Utilisation, which results in reduced OS licensing costs. This is similar to the VM revolution of the last decade, when we P2V’ed (physical-to-virtual migration) many physical servers, reducing the number of physical servers required. However, every virtual machine still required its own operating system (OS), whereas containers can, and do, share a single host OS.
It’s possible to launch many containers within a single host OS – thus we again gain an order-of-magnitude improvement in both resource utilisation and OS licence requirements.
- Modernising Traditional Applications. Please don’t confuse this with application transformation. This is where you package your monolithic (usually stateful) application inside containers.
Such applications still suffer from the same traditional constraints – typically they require costly third-party services or solutions to manage their state from a high-availability perspective.
However, you do gain the benefits of IaaS abstraction (the same experience whether deployed to a private or public cloud) and consistent, codified deployments. You are effectively now playing in the Containers as a Service (CaaS) space with traditional applications.
- Cloud Native Application Development. These applications tend to make extensive use of micro-service patterns, and micro-services wrapped in containers are a very powerful and complementary combination.
- DevOps Pipeline Deployments. Think continuous integration and continuous deployment. By standardising on containers as your application deployment best practice, you effectively abstract away the unnecessary complexities of the underlying infrastructure. Developers can rely on the same deployment experience whether targeting their own laptop, a development server, or the Fujitsu K5, Amazon AWS or Microsoft Azure public clouds. These three public clouds are underpinned by very different infrastructure technologies and APIs, but we no longer care! All these environments simply need the ability to run the same container engine.
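As a sketch of such a pipeline, the fragment below is a hypothetical GitLab-style CI configuration (the registry address, image name and deploy script are all invented): every stage handles the same container image, regardless of where it will eventually run.

```yaml
stages: [build, test, deploy]

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

test-image:
  stage: test
  script:
    - docker run --rm registry.example.com/myapp:$CI_COMMIT_SHA pytest

deploy:
  stage: deploy
  script:
    # The same image runs unchanged on a laptop, K5, AWS or Azure.
    - ./deploy.sh registry.example.com/myapp:$CI_COMMIT_SHA
```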
What has OpenStack got to do with containers?
OpenStack’s various container initiatives aim to solve container deployment and management in production, on bare metal and on virtual machines built using the OpenStack software suite. The projects target infrastructure operators, who can leverage the benefits of containers for their own OpenStack deployments, but they are also designed with developers in mind, helping to alleviate pipeline deployment woes by ensuring a consistent, fast and agile process for swiftly moving applications into production.
Magnum
This is the container orchestration engine API. Let’s say you wish to deploy and manage a Kubernetes cluster AND/OR a Docker Swarm cluster AND/OR an Apache Mesos cluster: OpenStack Magnum provides a consistent API that wraps these different technologies and presents a single interface to infrastructure operators. This means operators should be able to use the same process and familiar API calls to deploy these very different products.
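As a minimal sketch of that single interface (assuming a cloud with Magnum and its OpenStack CLI plugin installed; the template, image, flavour and network names are invented), the commands are composed here as strings so they can be inspected before being run against a real cloud:

```shell
# Hypothetical Magnum workflow: define a template, then launch a cluster.
# Compose the commands first; run them with eval against a real cloud.
TEMPLATE_CMD="openstack coe cluster template create k8s-template \
  --coe kubernetes --image fedora-atomic --external-network public \
  --master-flavor m1.small --flavor m1.small"
CLUSTER_CMD="openstack coe cluster create demo-cluster \
  --cluster-template k8s-template --master-count 1 --node-count 3"
echo "$TEMPLATE_CMD"
echo "$CLUSTER_CMD"
# Swapping --coe kubernetes for --coe swarm or --coe mesos keeps the
# same workflow while targeting a different orchestration engine.
```

The point is the workflow, not the flags: the same two calls cover all three orchestration engines.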
Zun
From the project website: “Zun (ex. Higgins) is the OpenStack Containers service. It aims to provide an API service for running application containers without the need to manage servers or clusters.” Basically, Zun is to containers what Nova is to virtual machines and Ironic is to bare-metal servers. The Nova-Docker service was an early attempt to manage containers through the Nova compute API; Zun is not bound by the Nova API.
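A hedged sketch of what that looks like in practice (this assumes a cloud running Zun with its OpenStack CLI plugin; the container name and image are invented), again composed as a string for inspection rather than executed here:

```shell
# Hypothetical Zun usage: launch one container directly on the cloud,
# with no cluster and no server to manage first.
RUN_CMD="openstack appcontainer run --name web nginx:latest"
echo "$RUN_CMD"
# Compare with Nova, where you would boot a whole VM instead:
#   openstack server create --flavor m1.small --image ubuntu web-vm
```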
Kuryr
Once again the project website provides a good summary: “The idea behind Kuryr, is to be able to leverage the abstraction and all the hard work that was put in Neutron and its plugins and services and use that to provide production grade networking for containers use cases. Instead of each independent Neutron plugin or solution trying to find and close the gaps, we can concentrate the efforts and focus in one spot – Kuryr. Kuryr aims to be the “integration bridge” between the two communities, Docker (or another container engine) and Neutron and propose and drive changes needed in Neutron (or in Docker) to be able to fulfill the use cases needed specifically to containers networking. It is important to note that Kuryr is NOT a networking solution by itself nor does it attempt to become one. The Kuryr effort is focused to be the courier that delivers Neutron networking and services to Docker.”
Kolla
This is production-grade OpenStack services delivered as ready-made containers. Any devs reading this will immediately understand how empowering this is – OpenStack deployed as containers! For anyone like myself who went through the pain of large bash installations of OpenStack in customer data centres, as the D:Ream song goes, “Things can only get better”… and they have. Containers are indeed the future of application deployment: they provide a consistent, codified and thus controllable mechanism for rapid deployments (or rollbacks, should those be required). These Kolla container images can be used in combination with the LOCI project.
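With the kolla-ansible deployment tooling, an OpenStack deployment is largely described by a single configuration file; a hypothetical fragment of its globals.yml (all values invented) might look like this:

```yaml
# Hypothetical globals.yml fragment for a kolla-ansible deployment.
kolla_base_distro: "ubuntu"
openstack_release: "queens"
kolla_internal_vip_address: "10.10.10.254"
network_interface: "eth0"
neutron_external_interface: "eth1"
```

The actual roll-out of the containerised services then reduces to a handful of kolla-ansible commands, which is a world away from hand-crafted bash installations.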
LOCI
“The goal of LOCI is to provide CI/CD friendly, lightweight OCI compliant images and tooling for OpenStack services. You can build LOCI images in an air-gapped environment with no modification (assuming you have a git, package, and pypi mirror).” The telecommunications sector, a big adopter of OpenStack, envisages LOCI as a vital component in driving the future of OpenStack in the edge computing space. Imagine your BT, Virgin Media or AT&T home router or set-top box running a container. This would give these companies AND home users much greater agility and flexibility. You could potentially swap providers in seconds without the need for new hardware – a win-win all around. For small businesses and edge data centres this can also remove the need for single-task dedicated hardware devices – IoT, big data and government regulation will rapidly drive growth in the edge compute space in the coming years.
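For a feel of how lightweight this is, a LOCI image build is a single docker invocation straight from the project’s git repository (the image tag is invented here, and a Docker daemon is assumed); once more the command is composed as a string for inspection:

```shell
# Hypothetical LOCI build: produce a lightweight OCI image for a single
# OpenStack service directly from the LOCI git repository.
BUILD_CMD="docker build https://git.openstack.org/openstack/loci.git \
  --build-arg PROJECT=keystone --tag loci-keystone:queens"
echo "$BUILD_CMD"
# Pointing --build-arg PROJECT at nova, neutron, etc. yields images for
# the other services; with git/package/pypi mirrors the same build can
# run air-gapped.
```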
There have been many other significant enhancements to OpenStack in the Queens release, particularly focused on GPU virtualisation, so for the full details see https://www.openstack.org/software/queens/