Top 3 Kubernetes networking tools and how they work

Kubernetes networking is arguably the hardest part of working with Kubernetes installations. One of the main reasons is that Kubernetes imposes quite specific requirements on the network architecture: every pod must be able to communicate with every other pod without network address translation (NAT), and each pod must see itself with the same IP address that others use to reach it. This makes getting containers to work in a consistent and secure manner especially difficult, since you can’t simply let all your pods talk to one another in an unrestricted, many-to-many fashion. Maintaining that connectivity between pods spread across multiple hosts is more complicated still and requires a fair amount of expertise, to say the least. But if you put these three Kubernetes networking tools in your tool belt, you will find your job is a lot easier.

Plug and play

Along with dictating specific networking requirements, Kubernetes also allows for a fair amount of flexibility in how they are implemented. This is why a number of projects have sprung up in this “sub-ecosystem” of Kubernetes, all with the intention of making cross-host container networking fast, consistent, secure, and, most importantly, easy to use. The general idea behind these tools is to create software-defined networks that satisfy the Kubernetes networking requirements while leaving enough control in the hands of cluster administrators to effectively secure and monitor the system. This is made possible by CNI plugins, where CNI stands for Container Network Interface, a standard for plugin-based networking in Linux containers.

In addition to Kubernetes, CNI also allows these plugins to work with OpenShift, Amazon ECS, and other platforms, where they typically operate by creating an overlay network on top of the existing infrastructure. These overlay networks transparently connect containers across multiple hosts and assign them unique IP addresses that they can use to communicate with other containers directly. Since all communication now flows through an overlay network that is much easier to monitor, there is no need to tediously set up networking rules on each host, which saves a huge amount of time. This added visibility also considerably improves security, as it lets you expose chosen containers to the internet while the rest are kept safely out of harm’s way.

The lumberjack

Out of all the networking tools available for Kubernetes today, Flannel is one of the earliest CNI plugins, as well as the most popular and easiest to use. This is probably because it features a pretty straightforward networking model that covers most cases when it’s the basics you’re after. Flannel, designed by CoreOS, is an overlay network that allows containers to communicate across multiple hosts without letting on that they’re using an overlay. It does this by giving each host a subnet carved out of a larger, cluster-wide address range, with every container receiving an IP from its host’s subnet regardless of where it resides. Flannel then uses packet encapsulation and the open source etcd key-value store to cover the entire span of hosts and to record the mappings between those addresses.

Flannel has a number of options for encapsulation and routing, but for multihost configurations the default and recommended backend is VXLAN, paired with a software switch of some sort, such as a Linux bridge. All traffic destined for another host is sent to that bridge and then forwarded at L2 to the VXLAN device. Flannel runs a daemon called flanneld on each host, which creates route rules in the kernel’s routing table; packets headed for the VXLAN device are then wrapped in UDP encapsulation for transport between hosts across the physical network. While this may sound complicated at first, this is by far the simplest version of an overlay network on Kubernetes.
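To make this concrete, here is a sketch of the configuration that Flannel’s standard Kubernetes manifest ships in a ConfigMap. The 10.244.0.0/16 range and the names shown are the common defaults rather than requirements, so treat this as illustrative: net-conf.json defines the cluster-wide address range flanneld carves per-host subnets out of and selects the VXLAN backend, while cni-conf.json is the CNI configuration the kubelet uses to hand pod networking over to the flannel plugin.

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg        # name used by the stock flannel manifest
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  # CNI configuration dropped onto each node so the kubelet delegates
  # pod networking to the flannel plugin.
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel", "delegate": { "isDefaultGateway": true } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
  # Cluster-wide pod address range; flanneld hands each host a slice of it
  # and encapsulates cross-host traffic with VXLAN.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": { "Type": "vxlan" }
    }
```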

The chain saw

If it’s more than just the basics that you’re after, however, it’s probably in your best interest to consider Project Calico. Unlike Flannel, which has to wrap packets in an encapsulation before sending them on their way, Calico configures an L3 network that uses the BGP routing protocol to route packets directly between hosts. This “pure” L3 approach is not only simpler but also scales further, performs better, and is more efficient. The absence of wrappers and “camouflage” also means packets are much easier to trace with Calico, and standard debugging tools and procedures can be used as well. That said, Calico isn’t really popular for its simplicity; people use it for its advanced features, such as network policy management and access control lists.
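As a rough illustration of that unencapsulated approach, Calico’s address pools can be configured to turn off both IP-in-IP and VXLAN so that pod traffic travels as plain routed L3 packets. The sketch below uses Calico’s projectcalico.org/v3 IPPool resource with an assumed 192.168.0.0/16 pod range and is applied with calicoctl (or kubectl, when the Calico CRDs are installed):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16   # assumed pod address range; pick one that suits your cluster
  ipipMode: Never        # no IP-in-IP encapsulation: rely on BGP-distributed routes
  vxlanMode: Never       # no VXLAN either, keeping packets plain L3
  natOutgoing: true      # SNAT traffic that leaves the pod network
  nodeSelector: all()
```

When hosts sit in different subnets without full BGP peering to the fabric, `ipipMode: CrossSubnet` is the usual compromise: only traffic that crosses a subnet boundary gets encapsulated.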

While the Kubernetes NetworkPolicy API lets users assign ingress policies to pods using ports and labels, other features, such as egress policies and CIDR-based rules, are not yet supported. Calico fills this gap quite nicely: it not only supports these as-yet-unsupported policy features but can also restrict outgoing connections from pods and match pods by namespace. Additionally, Calico integrates well with Istio, an open source service mesh built for distributed microservices architectures, which means Calico can enforce policies not only at the network infrastructure layer but at the service mesh layer as well. All in all, Project Calico is a pretty good choice if your priority is performance, and there is commercial support available if you have the money.
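As a hedged sketch of what that looks like in practice, the Calico-native NetworkPolicy below (projectcalico.org/v3, applied with calicoctl) allows ingress to hypothetical “api” pods only from “frontend” pods on port 8080 and restricts their egress to an assumed database CIDR. The labels, namespace, and addresses are placeholders, not anything prescribed by Calico:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: api-allow            # illustrative name
  namespace: production      # assumed namespace
spec:
  selector: app == 'api'     # pods this policy applies to
  types:
    - Ingress
    - Egress
  ingress:
    # Only frontend pods may reach the api pods, and only on TCP 8080.
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'frontend'
      destination:
        ports:
          - 8080
  egress:
    # Outgoing connections are limited to an assumed database subnet.
    - action: Allow
      protocol: TCP
      destination:
        nets:
          - 10.60.0.0/16
        ports:
          - 5432
```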

Kubernetes networking tools: A bit of both

If additional complexity and management are off the table but you still want power and performance from your Kubernetes networking tools, Weave Net by Weaveworks is a third option you should probably be taking a look at. Unlike Calico, which configures an entire L3 network from scratch, Weave Net creates a mesh overlay network that connects every node in the cluster. It also uses a DNS server called weaveDNS to provide features like automated service discovery, load balancing, name resolution, and fault tolerance. And while it encapsulates packets with VXLAN much like Flannel does, it does so directly in the kernel, so packets go straight to the destination pod without wasting time traveling in and out of userspace. This little detail gives it a considerable edge in performance.

In cases where the network topology doesn’t support this fast datapath, a slower fallback encapsulation mode called “sleeve” is used instead. Weave Net can also encrypt all traffic between hosts, using NaCl encryption for sleeve traffic and IPsec ESP for fast datapath traffic. Additionally, network policy support is set up automatically on installation, so all you need to do is define your rules and you’re good to go. Most importantly, Weave Net is relatively easy to set up and configure and comes with almost everything preconfigured for the user. If you’re looking for power and simplicity, this is probably your best bet.
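For example, encryption in Weave Net’s Kubernetes add-on is switched on by giving every peer a shared password through the WEAVE_PASSWORD environment variable. Below is a minimal sketch, assuming a Secret named weave-passwd (the name is a placeholder) and the stock weave-net DaemonSet from the Weave Net manifest:

```yaml
# Shared password that Weave Net peers use to encrypt traffic between hosts.
apiVersion: v1
kind: Secret
metadata:
  name: weave-passwd          # placeholder name; match it in the DaemonSet below
  namespace: kube-system
stringData:
  weave-passwd: "a-long-random-passphrase"
---
# Excerpt only: the env entry added to the 'weave' container of the
# weave-net DaemonSet so that WEAVE_PASSWORD enables encryption.
env:
  - name: WEAVE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: weave-passwd
        key: weave-passwd
```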

While Kubernetes networking is definitely a hard problem to tackle, there’s an entire community of developers working around the clock to make it easier as we speak. If you’re honest about your skills and limitations with regards to Kubernetes networking, and clear about your requirements and what you want to accomplish with a network plugin, the obvious choice should be staring you in the face. If you’re still not sure, remember that these are just the top three; there is a diverse set of CNI plugins suited to different needs. Lastly, today’s enterprise solutions aren’t about standalone tools anymore, and more often than not the answer lies in two or three different tools that work well together.
