5 ways to improve the network performance of containerized apps

Networking is the next frontier for containers today. As the world comes to grips with the idea of multicloud, networking is the hurdle to cross to get there. Container networking is complex for several reasons. Containers have a short lifespan, much shorter than VMs, which makes service discovery vitally important. Containers also result in distributed systems that require more complex communication patterns between containers, pods, services, and clusters. All containers need to be able to talk to all other containers. That idea would have made any network admin cringe five years ago, yet it is exactly the kind of massive change that containers and containerized apps have brought to the world of application networking.

1. Understand the basics about container namespaces

Key to understanding and operating container networking and containerized apps is the concept of namespaces. In one of the early presentations on Docker containers, Jérôme Petazzoni explained the role of namespaces in how containers function. In short, namespaces limit what a container can see. Each process that makes up a container has multiple namespaces (network, PID, mount, and so on), and together they restrict and allow access to the container.

Kubernetes follows a similar principle, using namespaces to isolate components such as pods, services, and replication controllers. Communication happens between containers, between pods, between services, and even externally with other services. Pod-to-pod traffic is east-west, and communication with external services is north-south. Ingress is traffic routed from an external source to an internal Kubernetes service; egress is the opposite, a Kubernetes service making a call to an external service. However complex the routing of traffic becomes, namespaces remain the basic control mechanism used to decide when to allow a request and when to deny it.
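
A quick way to make this concrete is to look at the namespace-related switches the standard Kubernetes pod spec exposes. The minimal sketch below (pod name and image are placeholders) deliberately weakens isolation: hostNetwork reuses the node's network namespace instead of giving the pod its own, and shareProcessNamespace puts every container in the pod into a single PID namespace.

```yaml
# Hypothetical pod that opts out of some namespace isolation.
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod                # placeholder name
spec:
  hostNetwork: true              # see the node's network interfaces directly
  shareProcessNamespace: true    # containers in this pod share one PID namespace
  containers:
    - name: shell
      image: busybox             # placeholder image
      command: ["sleep", "3600"]
```

Leaving both fields at their defaults (false) keeps the pod in its own network and PID namespaces, which is the isolation the rest of this article builds on.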

2. Use policy-based networking

Networking plays a key role in ensuring not just the functioning but also the security of containers, pods, and services. But for networking to function at the massive scale that is common with Kubernetes, it needs to be managed with policies. This is why network policies are critical to doing networking the right way.
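
As a first taste of what such a policy looks like, the sketch below is a namespace-wide default deny using the standard Kubernetes NetworkPolicy resource. The namespace name is only an example, and a policy-capable network plugin (covered next) must be installed for it to be enforced.

```yaml
# Default deny: selects every pod in the namespace and declares both
# policy types with no allow rules, so all ingress and egress is blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo        # example namespace
spec:
  podSelector: {}        # empty selector = every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```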

Kubernetes supports networking plugins like Flannel that manage network communication. When it comes to policy enforcement, the two key plugins to try are Calico and Weave. Calico in particular is gaining adoption among users and industry-wide support from cloud vendors because of its strength in policy-based networking. Docker announced that it will include Calico in Docker Enterprise 3.0 and its Windows Server offering. Similarly, Google Cloud announced that Calico will be part of its GKE service and help with hybrid cloud setups. The key challenge here is network communication across distributed systems. In the past this was a no-go for network admins, but today it is the norm for container workloads. Public cloud vendors are working with the best open source tools to enable this kind of distributed network communication via policies.

Network policies can be used to allow or deny all traffic to a Kubernetes service. They can also be used to allow only particular whitelisted sources to communicate with services, or to block only particular blacklisted sources. The best part is that once set, these policies can be changed at any time, and the changes take effect immediately, which makes it highly efficient to adjust a networking setup. In a dynamic container environment where container lifespans are short, having flexible and equally dynamic network communication is a must-have.
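
Building on a default-deny baseline, an allow rule whitelists a specific source. In this hypothetical sketch, only pods labeled app: frontend may reach pods labeled app: api on TCP port 8080; all other traffic to the api pods stays blocked. Applying an updated version of the policy with kubectl apply takes effect right away.

```yaml
# Whitelist: allow only frontend pods to reach api pods on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo              # example namespace
spec:
  podSelector:
    matchLabels:
      app: api                 # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # the whitelisted source pods
      ports:
        - protocol: TCP
          port: 8080
```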

3. Use a service mesh to separate concerns

Service mesh tools have taken center stage at Kubernetes conferences over the past year. Istio in particular has gained rapid adoption across the industry. The reason is its ability to separate two concerns: the network traffic itself (the data plane) and the management of that traffic (the control plane). Istio uses Envoy as a sidecar proxy to carry the traffic. There are other service mesh options like Linkerd and SuperGloo, but Istio enjoys a privileged position today. Azure recently introduced a Service Mesh Interface in collaboration with Buoyant (creators of Linkerd), Solo.io (creators of SuperGloo), and HashiCorp. Microsoft's idea is to play it safe and not bet on one horse, but rather follow the industry trend toward service meshes. It wants to give Azure customers the choice to pick whichever service mesh tool they'd like and not risk it all on any single one. Though much of the ecosystem is gung-ho about Istio, companies like Microsoft see the service mesh landscape as fragmented and would rather take the safer route of not going all in with the darling of the day, Istio.
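
As an illustration of the kind of control a mesh provides, the sketch below uses an Istio VirtualService to split traffic between two versions of a service. The Envoy sidecars carry the traffic (the data plane), while this one piece of configuration expresses the routing intent (the control plane). The service and subset names are hypothetical and assume a matching DestinationRule defines the v1 and v2 subsets.

```yaml
# Send 90% of requests to v1 of the reviews service and 10% to v2,
# enforced by the Envoy sidecars with no change to application code.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                  # in-mesh service name (example)
  http:
    - route:
        - destination:
            host: reviews
            subset: v1         # subsets come from a DestinationRule (not shown)
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```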

4. Multicloud needs multicluster connectivity

Since the acquisition of Red Hat by IBM, there has been increasing discussion about multicloud. Indeed, most enterprises today aren't loyal to a single cloud provider; they mix and match to suit the needs of each application and each team. Some run workloads on as many as five cloud platforms. This being the case, it is increasingly likely that Kubernetes clusters will need to communicate across cloud platforms, or that a cluster will need to be migrated from one platform to another. Knowing this, Rancher has introduced Submariner, an open source tool that enables multicluster networking. While Kubernetes already provides for inter-cluster communication, it requires ingress controllers and node ports to route traffic across clusters. Submariner eases this communication by creating tunnels and routing paths across clusters, even if they are on different public cloud platforms. This type of cross-cloud communication will become more commonplace in a future of multicloud applications, and the tools and solutions to support it are being built today.
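
For contrast, the sketch below shows the kind of plumbing the built-in approach implies: exposing a service on a node port so that a workload in another cluster can reach it at a node IP on port 30080. Submariner's tunnels aim to make this manual exposure unnecessary by letting pods and services in one cluster reach those in another directly. The service name, labels, and ports here are placeholders.

```yaml
# Expose an in-cluster service on every node's port 30080 so that
# workloads in another cluster can reach it via <node-ip>:30080.
apiVersion: v1
kind: Service
metadata:
  name: orders               # placeholder service name
spec:
  type: NodePort
  selector:
    app: orders              # placeholder pod label
  ports:
    - port: 80               # cluster-internal service port
      targetPort: 8080       # container port
      nodePort: 30080        # must fall in the node-port range (default 30000-32767)
```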

5. Keep an eye on lightweight VMs for the future

Interestingly, there is discussion of Kubernetes taking on the role of a hypervisor in the future. VMs aren't going away completely; their superior isolation is still valuable for security-focused networking, and there is a big need for that in industries like financial services. Lightweight VM solutions are emerging to bridge the gap, such as Firecracker from AWS and Kata Containers. It is still early days for these solutions, but given how fragmented the market is, there is a good chance they will find a permanent place.
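
On Kubernetes, these lightweight-VM runtimes typically plug in through a RuntimeClass. The sketch below assumes a node already has Kata Containers installed and configured under the handler name kata; the handler and class names vary by installation, and the pod is just an example workload.

```yaml
# Register the Kata runtime, then run a pod inside a lightweight VM.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata                # must match the runtime configured on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-workload    # placeholder name
spec:
  runtimeClassName: kata     # schedule this pod onto the Kata runtime
  containers:
    - name: app
      image: nginx           # placeholder image
```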

Containerized apps: Communication is key

Networking is key to containerized apps and container management. For containerized apps to be successful, containers need to communicate with one another, whether within the same pod or across different cloud platforms. Networking is what enables this kind of connectivity. It starts with understanding how namespaces define what containers can see and talk to. Beyond that, when network communication needs to scale, policy-based networking is the only way forward, and tools like Calico and Weave are up to the task. Alongside them, service mesh tools like Istio simplify this complexity, giving the network admin more control over the traffic that flows through the network and the ability to manage it with ease. The future holds more of the same: applications becoming even more distributed as they span multiple clouds, and instance types that blur the lines between containers and VMs. To thrive in a world of containers and multiple cloud platforms, networking is the critical component. Aware of this, the open source community and cloud vendors are busy building the most sophisticated, scalable, and open networking solutions we've ever used.
