
What containers are, how they relate to Kubernetes, and why all this matters to OpenStack

Containers and Kubernetes are hotter than hot because they let developers focus on their applications without worrying about the underlying infrastructure that delivers them. And while OpenStack didn’t replace AWS, it is clearly a success story in the open infrastructure space. Here’s what you need to know about all three, and why they matter to each other.

What’s up with containers?

If you’ve been in IT for a long time, you may have been hearing about containers since the early 2000s. However, the concept only really began gaining traction around 2014 with the release of Docker 1.0 – a buzz that has since become a roar.

In a nutshell, containers are a technology that allows developers to quickly create ready-to-run, self-contained applications, broken down into components that can be deployed, tested and updated independently of each other. Containers also give developers a fully functional development environment to work in, isolated from other application and system components.

To better understand the essence of this “new” technology, I’ve found it helpful to compare it to virtual machines (VMs), so bear with me.

While a hypervisor virtualizes the entire machine, hardware included, containers share the host’s operating system kernel and isolate only the application’s processes. This means containers don’t need a full guest operating system per instance, which allows for much lower resource consumption and much better cost effectiveness – one of the major differences between containers and VMs.

Creating and running containers was possible well before Docker appeared – think chroot, cgroups and LXC – but it required tons of manual plumbing and was, frankly, a nightmare. The beauty of Docker is that it made containerization easy, so it all happens with a few commands – hence the big roar around containers.
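To make “a few commands” concrete, here’s a minimal sketch of packaging an app with Docker. The file and image names (app.py, myapp) are illustrative placeholders, not from a real project:

```dockerfile
# Dockerfile -- packages a (hypothetical) Python script into a self-contained image
FROM python:3.12-slim        # start from a small official base image
WORKDIR /app
COPY app.py .                # add the application code
CMD ["python", "app.py"]     # what runs when the container starts
```

Building and running it is then just `docker build -t myapp .` followed by `docker run --rm myapp` – no manual namespace or cgroup wrangling required.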

Benefits of using containers

  1. Ease of use: Containers let developers, system admins, architects and practically anyone package applications on their laptop and run them unmodified on any public cloud, private cloud, or even bare metal. This accelerates the DevOps lifecycle, enables super-fast deployment of new services anywhere, and ultimately makes life easier for everyone involved.
  2. Speed and efficiency: Since containers are isolated processes sharing the host kernel, they are very lightweight. Ergo, they take up fewer resources. So, when developers want to create and run a new Docker container, they can do it in seconds, while creating and running a VM takes longer because it must boot a full virtual operating system every time.
  3. Modularity and scalability: Last, but not least, containers make it easy to break an application’s functionality down into individual components. For example, a developer might want to run a MongoDB database in one container and a RabbitMQ server in another, while the Ruby app lives in a third. Docker networking connects these containers to form the application, making it easy to scale or update components independently in the future.
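As a sketch of that modularity, a Docker Compose file can describe those three containers and wire them together. The image tags and environment variables below are assumptions for illustration:

```yaml
# docker-compose.yml -- one service per application component
services:
  db:
    image: mongo:7                   # MongoDB in its own container
  queue:
    image: rabbitmq:3                # RabbitMQ in its own container
  app:
    build: .                         # the Ruby app, built from its own Dockerfile
    depends_on: [db, queue]
    environment:
      MONGO_URL: mongodb://db:27017  # services reach each other by name
      AMQP_URL: amqp://queue:5672
```

Because each component lives in its own container, you can scale or update `db`, `queue` and `app` independently.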

It’s no wonder that everyone is rushing to adopt Docker as fast as possible. But however useful containers may be, without a proper management system their benefits will not be entirely realized.

Welcome to Kubernetes.

What’s up with Kubernetes?

Originally created by Google, Kubernetes 1.0 was released in 2015. Shortly thereafter Google partnered with the Linux Foundation to create the Cloud Native Computing Foundation (CNCF), and donated Kubernetes as a seed technology to the organization. The primary purpose of the CNCF is to promote container technology.

Kubernetes, aka K8s, is open-source cluster management software for deploying, running and managing Docker containers at scale. It lets developers focus on their applications, and not worry about the underlying infrastructure that delivers them. And the beauty of it: Kubernetes can run on a multitude of cloud providers, such as AWS, GCE and Azure, on top of the Apache Mesos framework, and even locally on Vagrant (VirtualBox).

But what’s the point of Kubernetes?

To better understand the essence of a cluster manager software, imagine you have an important business application running on multiple nodes with hundreds of containers. In a world without Kubernetes, you’d need to manually update hundreds of containers every time your team releases a new application feature. Doing it manually takes a lot of time, is error prone, and errors are bad for your business.

Kubernetes is designed to automate deploying, scaling, and operating application containers. It groups an application’s closely related containers into functional units called “pods” for easy management and discovery. On top of the pod infrastructure, Kubernetes provides another layer that handles scheduling and service management for containers.

How does a container know which machine to run on? Kubernetes checks with the scheduler. What if a container crashes? Kubernetes creates a new one. Whenever you need to roll out a new version of your app, Kubernetes has you covered. It automates and simplifies your daily work with containers.
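A minimal Kubernetes Deployment shows all three ideas at once – scheduling, self-healing and rollouts. The names and image below are placeholders for illustration:

```yaml
# deployment.yaml -- ask Kubernetes for three identical pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # Kubernetes schedules 3 pods and replaces any that crash
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # change this tag and re-apply to trigger a rolling update
          ports:
            - containerPort: 80
```

`kubectl apply -f deployment.yaml` creates the pods; editing the `image` tag and re-applying rolls out the new version with no manual container juggling.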

Benefits of using Kubernetes

  1. It’s portable: The philosophy of cloud-native application development can be summarized in one word: “portability”. As the flagship project of the CNCF, Kubernetes puts portability front and center: it eliminates infrastructure lock-in and gives developers complete flexibility to run Kubernetes on any infrastructure, in any cloud they want.
  2. It’s extensible: Because it’s designed to be extensible, Kubernetes offers freedom of choice of operating systems, container runtimes, storage engines, processor architectures, and cloud platforms. It also lets developers integrate their own applications with the Kubernetes API, as well as scale and roll out new features through the Kubernetes tooling.
  3. It’s self-healing: Kubernetes continuously performs repairs, guarding your containerized application against any failures that might affect reliability. Thus, it reduces the burden on operators and improves the overall reliability of the system. It also improves developer velocity because the time and energy a developer might otherwise have spent on troubleshooting can instead be spent on developing new features.

Hello, my name is OpenStack, and I am not easy to work with.

If you are a regular follower of this blog, you might have already read about what OpenStack is, what the most common OpenStack monitoring tools are, or how we approach OpenStack monitoring beyond the Elastic (ELK) Stack.

If not, here’s a short recap: OpenStack is an open-source cloud operating system used to develop private- and public-cloud environments. It consists of multiple interdependent microservices, and provides a production-ready IaaS layer for your applications and virtual machines.

Still getting dinged for its complexity, OpenStack currently has around 60 components, also referred to as “services”, six of which are core components controlling the most important aspects of the cloud. There are components for compute, networking and storage management, and for identity and access management. With these, the OpenStack project aims to provide an open alternative to giant cloud providers like AWS, Google Cloud, Microsoft Azure or DigitalOcean.

The reasons behind the explosive growth in OpenStack’s popularity are quite straightforward: it offers open-source software to companies looking to deploy their own private cloud infrastructure, so it’s strong where most public cloud platforms are weak. Perhaps the biggest advantage of using OpenStack is its vendor-neutral API. An open API removes the concern of proprietary, single-vendor lock-in and gives companies maximum flexibility in the cloud.

Since they solve similar problems, but on different layers of the stack, OpenStack and Kubernetes can be a great combination. By using them together, DevOps teams can have more freedom to create cloud-native applications than ever before.

However…

Containers + Kubernetes + OpenStack: the platform of the future?

What we see with our customers is that, however important security and control are to them, they don’t necessarily want to use OpenStack alone. They want much more:

  • ease of deployment (expected from public cloud providers)
  • control (expected from private clouds)
  • cost efficiency (expected everywhere)
  • flexibility to choose the best place to run any given application
  • scalability
  • reliability
  • security

More often than not, companies bigger than a startup want to enjoy “hybrid” possibilities: they want to control their on-premises infrastructure, but at the same time scale out to the public cloud when necessary. But we also find that it’s not always easy to fully enjoy the benefits of hybrid scenarios. Unfortunately, moving workloads between infrastructures is still a rather difficult task.

This is where Kubernetes comes in very handy. Because it powers both private and public clouds, Kubernetes users can unlock the real power of a hybrid infrastructure.

Back where we started: containers

Remember containers from the beginning of the article? Their beauty is that they let you:

  • run containerized applications on OpenStack, or
  • containerize your own OpenStack services by using Docker.

Either way, you can benefit from Kubernetes.

Benefits of running OpenStack on Kubernetes

Due to its great support for cloud-native applications, Kubernetes can make OpenStack cool again: it can enable rolling updates, versioning, and deployments of new OpenStack components and features, thus improving the overall OpenStack lifecycle management. Also, OpenStack users can benefit from self-healing infrastructure, making OpenStack more resilient to the failure of core services and individual compute nodes. Last, but not least, by running OpenStack on Kubernetes, users can also benefit from the resource efficiencies that come with a container-based infrastructure. OpenStack’s Kolla project can be of great help here: it provides production-ready containers and deployment tools for operating OpenStack clouds that are scalable, fast, and reliable.
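To give a feel for Kolla, here is roughly what configuring a Kolla-Ansible deployment looks like; the values below are illustrative placeholders:

```yaml
# /etc/kolla/globals.yml -- a few representative Kolla-Ansible settings
kolla_base_distro: "ubuntu"              # base OS image for the OpenStack service containers
network_interface: "eth0"                # interface carrying the OpenStack API traffic
kolla_internal_vip_address: "10.0.0.250" # VIP in front of the containerized control plane
enable_haproxy: "yes"
```

A subsequent `kolla-ansible -i ./multinode deploy` then brings up each OpenStack service as a set of containers.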

Benefits of running Kubernetes on OpenStack

On the other hand, by deploying K8s on top of OpenStack, Kubernetes users get access to a robust framework for deploying and managing applications. As more and more enterprises embrace the cloud-native model, they face the challenge of managing hybrid architectures spanning public and private clouds, containers, and virtual machines. OpenStack has never been famous for its interoperability – which might be good news for some, but bad news for most. By bringing in containers and Kubernetes, users have the freedom to choose the best cloud environment to run any given application, or part of an application, while still enjoying scalability, control, and security. Kubernetes can be deployed via Magnum, an OpenStack API service that makes container orchestration engines available as first-class resources in OpenStack. This gives Kubernetes pods all the benefits of shared infrastructure.
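With Magnum, spinning up a Kubernetes cluster boils down to roughly two CLI calls. The sketch below is abbreviated – real templates also need a keypair, flavors and so on, and the image and network names are placeholders:

```shell
# 1. Define a cluster template that uses Kubernetes as its orchestration engine
openstack coe cluster template create k8s-template \
  --image fedora-coreos \
  --external-network public \
  --coe kubernetes

# 2. Create a cluster from that template
openstack coe cluster create my-k8s \
  --cluster-template k8s-template \
  --node-count 3
```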

Wrapping it up

Enterprises today want many things, but “using a single cloud infrastructure and being locked into it eventually” is not very high on their list. Instead, they want to reap the benefits of public clouds (e.g. ease of deployment), those of private clouds (e.g. security) and, today more than ever, they want faster time-to-market. Therefore, they increasingly move towards cloud-native technologies and practices.

But however wonderful in theory, setting up and effectively using hybrid architectures is still a difficult task. Most cloud infrastructures – including OpenStack – were not designed to make moving workloads between one another easy.

But there is Docker, who came and made containerization easy for everyone.

And there is Kubernetes, who came and automated working with containers.

And there is OpenStack, who came and offered a vendor-neutral, secure, production-ready IaaS layer for containerized applications.

By combining the three, enterprises have the chance to fully realize the benefits of hybrid architectures, be more agile, and deliver innovation faster.

But wait, there’s more!

Check out this cool infographic to understand the most important layers of a cloud native stack, as well as the different tools and technologies to build, run and manage cloud native applications.

We also illustrate three typical cloud environments that we see our customers running on OpenShift, Azure, and Cloud Foundry.

Are you already using Docker and Kubernetes, maybe even combined with OpenStack? How do you find this combination? Share your thoughts in the comments section below, as I learn just as much from you as you do from me.