Getting on track: Kubernetes container engine options

The enterprise just can't seem to get enough of Kubernetes, and that has led to all sorts of confusion around the open source project, the commercial distributions, and the services being built on top of it. To allay any suspicion that Kubernetes is riding on the coattails of Docker's success, or maybe just to give users more choice of container runtime, the new CRI (Container Runtime Interface) lets you plug in any container runtime in place of Docker or rkt. To refresh memories: the rkt container engine from CoreOS was enabled as an alternative to the Docker container engine in Kubernetes version 1.3, and the Container Runtime Interface was introduced as an alpha feature in version 1.5 and moved to beta in version 1.6.

Containers and their engines

There's definitely confusion because terms like container runtime, container engine, and runtime environment are used interchangeably, but in practice they all refer to more or less the same thing. The runtime is the actual container implementation that manages namespace isolation and resource allocation at the operating system level.

To shed a bit more light on the subject, a runtime environment is responsible for managing container lifecycle, execution, supervision, image distribution, and storage. All of these functions are built around a piece of code like libcontainer or runC that talks to the kernel, using facilities such as cgroups and namespaces to build containers on top of the operating system. So if you take runC on its own, you have a runtime that still needs a lot of work around it, but it is a runtime all the same.
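
To make that concrete, here is a minimal Go sketch (Linux only, and it needs root) of the kind of kernel facility a runtime wraps: it launches a shell in new UTS, PID, and mount namespaces, the same raw material that libcontainer and runC build on. It is only an illustration; there is no image handling, no cgroup limit, and no root filesystem isolation here.

//go:build linux

package main

import (
    "os"
    "os/exec"
    "syscall"
)

func main() {
    // Launch a shell in new UTS, PID, and mount namespaces. These are the
    // kernel facilities (along with cgroups) that libcontainer/runC wrap to
    // build full containers. Run as root; this sketch does no privilege
    // handling, no cgroup limits, and no rootfs pivot.
    cmd := exec.Command("/bin/sh")
    cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    cmd.SysProcAttr = &syscall.SysProcAttr{
        Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
    }
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}

Everything a full engine adds, such as pulling and verifying images, wiring up networking, and supervising the process, sits on top of calls like these.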

As you go up the stack from the runtime, you add layers of management around runC, and it's much the same story the higher you go: more features and services, all built around a small core piece of code that spawns containers.

Engine swapping

The CRI is an API developed by and for Kubernetes to let any container engine interact with the Kubelet. The Kubelet is the piece of Kubernetes that sits on every node (physical or virtual) and acts as a supervisor, making sure that all pods are running as specified in their PodSpec or pod manifest, a JSON or YAML object used to describe a pod.
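
For reference, here is a minimal sketch of the kind of object such a manifest describes, built with the published Kubernetes Go API types (k8s.io/api and k8s.io/apimachinery) and printed as JSON; the pod name and image below are purely illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A minimal pod: one container, one image. The names here are
    // illustrative; any valid PodSpec would do.
    pod := corev1.Pod{
        TypeMeta: metav1.TypeMeta{
            APIVersion: "v1",
            Kind:       "Pod",
        },
        ObjectMeta: metav1.ObjectMeta{
            Name: "nginx-demo",
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {
                    Name:  "nginx",
                    Image: "nginx:1.13",
                },
            },
        },
    }

    // The Kubelet consumes this same structure, serialized as JSON or YAML.
    manifest, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(manifest))
}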

To understand how it works, it's important to know that in a traditional Kubernetes deployment, the Kubelet is the local agent on each host that talks to the container runtime. With the CRI, however, the Kubelet communicates over gRPC (an open source RPC framework) with a CRI "shim," which in turn relays orders to the runtime.

A shim is typically something written specifically to maintain backwards compatibility and is often used to support a new API in an old environment or an old API in a new environment. In simple terms, you can think of it as an adapter that sits between the Kubelet and the runtime, transparently intercepting API calls, changing arguments, and redirecting operations as required.

Or, to put it more simply, in the words of Dan Gillespie of CoreOS: "It [CRI] is a generic way to address a container agent, this allows you to swap out container engines."
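
To make that "generic way to address a container agent" a bit more concrete, here is a deliberately simplified Go sketch of the sort of contract the CRI defines. The method names mirror a few of the real CRI RuntimeService operations, but the interface and types below are illustrative stand-ins rather than the actual protobuf-generated gRPC code.

package cri

import "context"

// RuntimeShim is a simplified, illustrative view of what a CRI shim exposes
// to the Kubelet over gRPC. The real CRI is defined as protobuf services
// (RuntimeService and ImageService); the method names below mirror a few of
// its operations, but the Go types here are hypothetical.
type RuntimeShim interface {
    // Pod sandbox lifecycle: the isolated environment a pod runs in.
    RunPodSandbox(ctx context.Context, config *PodSandboxConfig) (sandboxID string, err error)
    StopPodSandbox(ctx context.Context, sandboxID string) error

    // Container lifecycle inside a sandbox.
    CreateContainer(ctx context.Context, sandboxID string, config *ContainerConfig) (containerID string, err error)
    StartContainer(ctx context.Context, containerID string) error
    RemoveContainer(ctx context.Context, containerID string) error
}

// PodSandboxConfig and ContainerConfig stand in for the much richer
// CRI protobuf messages.
type PodSandboxConfig struct {
    Name      string
    Namespace string
}

type ContainerConfig struct {
    Name  string
    Image string
}

Whichever engine sits behind the shim, whether Docker, rkt, CRI-O, or Frakti, the Kubelet only ever speaks this one contract.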

Since the bottom line is that container runtime environments are swappable in Kubernetes, let's take a look at why someone would want to swap out the Docker engine and what the advantages of doing so would be.

runC

runC is a low-level container runtime and an implementation of the Open Container Initiative (OCI) specification. It expects the user to understand low-level details of the host operating system and its configuration, and it requires the user to separately download or cryptographically verify container images and prepare the container filesystem. This is therefore an option only for people who are very proficient with containers, since building on runC takes real technical know-how; containerd is probably the easier choice, although, being a component of Docker itself, it may not offer any significant advantages over Docker. runC has no centralized daemon and, given a properly configured OCI bundle, can be integrated with init systems such as upstart and systemd.
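
As a rough sketch of how bare-bones that is, the following Go snippet simply drives the runc command line against an OCI bundle the operator has already prepared, that is, a directory containing a config.json and an unpacked root filesystem; the bundle path and container ID below are placeholders.

package main

import (
    "log"
    "os/exec"
)

func main() {
    // An OCI bundle is a directory the user prepares themselves (or with
    // other tooling): it must contain a config.json plus an unpacked rootfs.
    // runC does none of the image pulling or verification for you.
    bundle := "/var/lib/demo-bundle" // placeholder path
    containerID := "demo"            // placeholder container ID

    // "runc run" creates and starts a container from the bundle. There is no
    // daemon involved; the runc process is the container's parent, which is
    // what lets init systems like systemd supervise it directly.
    cmd := exec.Command("runc", "run", "--bundle", bundle, containerID)
    out, err := cmd.CombinedOutput()
    if err != nil {
        log.Fatalf("runc run failed: %v\n%s", err, out)
    }
    log.Printf("container exited cleanly:\n%s", out)
}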

rkt

Originally developed by CoreOS and now part of the CNCF, rkt is emerging as the No. 1 alternative to Docker in Kubernetes. Though rkt support was implemented in Kubernetes 1.3, it wasn't plugged in through the CRI but was wired directly into the Kubelet, and integrating through the CRI brings significant advantages. By integrating with the CRI, rkt gains better support for Kubernetes features, which not only makes it easier to develop and maintain but also helps validate the CRI itself. One disadvantage of Docker, because it uses a centralized daemon to communicate with and manage containers, is that init systems are unable to directly track the life of the actual container process. rkt, on the other hand, has a unique architecture with no centralized "init" daemon and instead launches containers directly from client commands.

CRI-O

Project Atomic contributors who work for Red Hat, together with contributors from many of the top Linux, open source, and container companies, started working on CRI-O, formerly named OCID. CRI-O is a Kubernetes incubator project meant to provide an integration path between OCI runtimes and the Kubelet. While CRI-O uses runC by default, you're not locked in: it supports any container runtime that conforms to the OCI specification. The ultimate aim is to be able to plug in any OCI runtime with only minor tweaks to CRI-O itself.

Clear Containers

Clear Containers, which recently announced version 2.1.1, is Intel's attempt to get the best of both worlds: the security of virtual machines and the deployment advantages of containers. Since Clear Containers are compatible with the Open Container Initiative (OCI) specification, they're not a bad option for someone looking for added security. Under the hood, the project relies on the Kernel-based Virtual Machine (KVM) and the QEMU hypervisor, in conjunction with system and kernel optimizations, to minimize memory consumption while maximizing performance. Recent improvements include improved host-guest communication, support for Docker exec and Docker run, additional workload isolation via namespaces, better TTY handling, support for Kubernetes pod semantics, and the ability to start Clear Containers via the Container Runtime Interface.

Frakti

Frakti is definitely a dark horse when it comes to container engines, though its inclusion in the CRI is clearly mentioned. Described as a hypervisor-based container runtime for Kubernetes, Frakti lets Kubernetes run pods and containers directly inside hypervisors via HyperContainer.

Frakti features full pod/container/image lifecycle management, streaming interfaces (exec/attach/port-forward), CNI network plugin integration, and hybrid Docker and HyperContainer runtimes to fully support both regular and privileged pods.

Although on the surface this update means container engines are swappable, it actually means a lot more than that, since it in essence changes the way we look at the stack. Until now it was safe to say that the Docker engine was the heart of the stack and you built around it with orchestration, monitoring and alerting tools, storage, and so on. With the CRI, what Kubernetes is saying is that orchestration is the heart of the stack. Kelsey Hightower, a Kubernetes engineer, put it this way in an interview with The New Stack: "We don't really need much from any container runtime — whether it's Docker or rkt, they need to do very little, mainly, just give us an API to the kernel."
