Tackling Kubernetes networking woes with CNI

Microservices are changing the way software is built, managed, and shipped today. While microservices simplify application development, getting these services to communicate with each other is a challenge. As Kubernetes grows in popularity, its networking woes are cropping up as well. Let’s take a look at some major developments and products that can help alleviate these problems.

Kubernetes usage is growing

A recent survey showed that Kubernetes is the most widely used orchestration platform. It’s not surprising considering the momentum Kubernetes has in the open source community.

In fact, the other orchestration tools are far behind in adoption, and none of them has much of a lead over the others.

The survey points to many other positive trends driving the growth of Kubernetes. It shows that many users are taking Kubernetes into production and running far more containers than they were a year ago. Kubernetes also tops the list of Cloud Native Computing Foundation (CNCF) projects being evaluated and used. These are very promising signs, and Kubernetes is clearly on its way to becoming a household name in enterprise IT.

However, the survey also highlights the key problems that Kubernetes users face.

Topping the list is storage, followed by security. Third on the list, and the focus of this post, is networking.

Kubernetes networking is complex

Kubernetes enables thousands of containers to be created and managed in the cloud, but those containers need to be networked before they can do useful work. In fact, most users adopt Kubernetes simply because of its ability to handle container workloads at scale. That ability comes from its pedigree of being battle tested at Google, and it is one of the key reasons why companies like Box have chosen to go with Kubernetes.

While Kubernetes is great at starting, stopping, and managing container pods and clusters, it is also an open-ended platform, with much work still to be done to bring it to its full potential. Networking is one area where Kubernetes is still in its early stages, and some of the groundwork is being done right now to ensure that networking for Kubernetes heads in the right direction. Having representatives from many organizations on the Technical Oversight Committee (TOC) helps ensure there’s no lock-in and that no single vendor has an outsized say in how Kubernetes networking takes shape.

Networking in Kubernetes needs low latency and high throughput, and it should be easy to configure and inexpensive. In today’s cloud-native world, an open source tool has the best chance of meeting all these expectations.

There are two Kubernetes networking approaches that are widely used: Docker’s container network model (CNM), and the container network interface (CNI) from CoreOS. The Kubernetes team had its eye on both projects, but showed a preference early on: the Kubernetes community, notably the Google engineers who play a key role in defining Kubernetes’ direction, preferred CNI over CNM.

Docker’s CNM

CNM grew out of libnetwork, which encapsulated Docker’s plan for container networking back in 2015. CNM has three components: a network sandbox, an endpoint, and a network. The goal of CNM is to make networking for Docker easily pluggable and extensible.
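
To make those three components concrete, here is a hedged illustration using the Docker CLI (the network and container names are made up): creating a user-defined network creates a CNM "network", and connecting a container to it creates an "endpoint" that joins the container's network sandbox to that network.

# Create a CNM "network" using the built-in bridge driver.
docker network create --driver bridge app-net

# Connecting a running container (here called "web") creates an
# "endpoint" that plugs the container's network sandbox into app-net.
docker network connect app-net web

# Inspect the network to see its endpoints.
docker network inspect app-net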

However, Kubernetes did not adopt CNM because it needs a networking solution that can work with multiple container runtimes, not just Docker. For this reason, it looked to CoreOS’ CNI, which could support multiple container runtimes like CoreOS’ rkt (originally called Rocket), Docker, and Hyper.

Enter CNI

CNI was first developed by CoreOS as a networking plugin. It was then spun off as an open source networking standard on which other organizations can build networking solutions for containers. It soon became the standard networking interface for Kubernetes, which was a big step for its adoption. A few months later it was adopted by Mesos as well, one of the big three orchestrators along with Kubernetes and Docker Swarm.

Like CNM, CNI works on a plugin model, since it can’t cater to every type of network out there on its own; it depends on plugins for extensibility. CNI sits between the container runtime and the networking plugins. For example, it can act as the bridge between rkt (the container runtime) and Flannel (the networking plugin).
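
As a rough sketch of what that looks like on disk, a CNI network configuration that delegates pod networking to Flannel might resemble the snippet below, placed in a file such as /etc/cni/net.d/10-flannel.conf. The name and field values here are illustrative, so check the CNI and Flannel documentation for the exact format your versions expect.

{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}

The runtime reads this configuration and invokes the flannel plugin, which in turn typically delegates the actual interface wiring to the bridge plugin.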

CNI is simpler than CNM, and defines only two operations: add a container to a network, and remove it from that network. It supports seven container runtimes and has 14 plugins, two of which are Flannel and Weave.
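
As a minimal sketch of what those two operations look like from a runtime’s point of view, the Go program below execs a plugin binary with the CNI_* environment variables set and the network configuration passed on stdin, which is the contract the CNI spec defines. The plugin path, network configuration, container ID, and namespace path are assumptions for illustration, not taken from any particular runtime.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// Illustrative network configuration handed to the plugin on stdin.
const netConf = `{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
}`

func invokePlugin(command, containerID, netnsPath string) error {
	// Hypothetical plugin location; real runtimes search CNI_PATH.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND="+command, // ADD or DEL
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS="+netnsPath, // e.g. /var/run/netns/<id>
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = bytes.NewBufferString(netConf)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s result: %s\n", command, out)
	return err
}

func main() {
	// ADD wires the container into the network.
	if err := invokePlugin("ADD", "demo-container", "/var/run/netns/demo"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Calling the same function with "DEL" instead of "ADD" tears the attachment back down, which is essentially all a CNI-compliant runtime has to know how to do.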

Flannel & Weave plugins

Flannel is a virtual network, also created by CoreOS, that acts as an alternative to software-defined networking solutions. Port mapping containers is a pain, and Kubernetes does away with it by giving each pod its own IP. Flannel supports this model by running an agent on each host and automatically assigning that host a subnet from which pod IPs are allocated. Part of CoreOS’ Tectonic platform, Flannel is well tested as a networking plugin for CNI.
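
As a hedged example of how the subnet-per-host model works: Flannel reads a cluster-wide network configuration (historically stored in etcd, or in a ConfigMap on Kubernetes) along the lines of the snippet below, carves out a per-host subnet such as 10.244.1.0/24 from that range, and records the result in /run/flannel/subnet.env for the CNI plugin to consume. The address range and backend shown are illustrative.

{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}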

Weave is another networking plugin that can work in place of Flannel. It is resilient, able to route around outages, and automates much of the networking setup. Unlike Flannel, Weave doesn’t rely on SkyDNS but supports DNS out of the box. However, most Kubernetes users have already adopted Flannel and are happy with its performance, so they see no reason to switch to Weave. Also, Kubernetes has an add-on (kube2sky) that supports SkyDNS, making Flannel a safe choice.

Being adopted by the CNCF is a big step for CNI, and great progress for Kubernetes and the container ecosystem in general. The CNCF will help bring the CNI project to v1.0 faster by testing the tool and writing documentation for it. Additionally, the TOC, which takes a hands-off approach to projects, is always available for guidance on important decisions when needed. This industry-wide support has been a key factor in CNI’s adoption.

Networking has been a problem for container orchestration, but with standardization in the form of CNI, attention can now turn to bringing it to maturity and making it as capable as legacy options like OpenStack Neutron or VMware NSX.

As the container ecosystem matures, it’s great to see that it’s not just one company calling the shots. Although Docker developed CNM, it has joined in supporting CNI and is happy to back projects that win industry consensus. This is a good sign of a healthy ecosystem. As container workloads increasingly move to production, and companies rely more and more on Docker and Kubernetes for their business-critical applications, they expect these products and projects to be moving in the right direction. They look to the CNCF to provide a stamp of approval on the various projects vying to be part of the modern container stack. While all these projects have good intentions and seem valuable, it’s important to dig into the details of how they work, how open and extensible they are, and how they’ll evolve over the years. CNI has passed this test and is now the de facto networking solution for Kubernetes. It’s not the end, though. In fact, the Kubernetes networking party has just gotten started.
