Linkerd enables a modern service mesh in Kubernetes

Microservice applications are quickly becoming the building blocks of modern system architecture. Where there was once one application, there are now thousands of services, and it’s important to know how they communicate with each other. That’s harder than watching a soap opera and figuring out who goes with whom, because microservices communicate over a mesh. A mesh network is a topology in which each node relays data for the network, and all nodes cooperate to distribute that data. It’s like a classroom full of noisy children, and as the numbers go up, new challenges like load balancing, failure recovery, service discovery, and monitoring suddenly come into the picture. It wasn’t that long ago that CNI from CoreOS was the “in thing” in container networking, but all anyone’s talking about now is Istio and Linkerd. That’s because we’re moving up the stack: CNI is the container networking interface on top of which plugins like Weave, Calico, and Flannel operate. Moving up another layer is where we find Linkerd, Istio, and Docker’s routing mesh, which connect the microservices themselves.

Linkerd: Meshing the mess

The reason microservices are so agile and flexible is that they are decoupled from each other over the network and reintegrated through APIs and remote procedure calls. Without reliable communication, services are doomed to fail: a single request may need input from multiple services as it makes a complex journey across the network.

A service mesh makes those problems go away by adding an extra layer of infrastructure dedicated to microservice communication, taking responsibility for the delivery of requests across the application. A service mesh is typically an array of lightweight network proxies deployed alongside application code, without the application needing to be aware of them. It’s as if your teenage children were constantly taking your phone to text their friends, and you finally decided to get them a phone of their own, then installed an app to manage and restrict how they use it. A parent’s dream come true, isn’t it? That’s what a service mesh does for microservices.
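
To make that concrete, here is a minimal sketch, in Go, of the sidecar-proxy idea: the application only ever talks to a local proxy, and the proxy handles the actual hop across the network and records what happened. The upstream address and ports below are made up for illustration, and real mesh proxies like Linkerd or Istio’s Envoy do far more than this.

```go
package main

// A toy sidecar-style proxy: the application sends traffic to a local
// listener, and this process forwards it to the real upstream while
// recording latency. Addresses and ports are purely illustrative.

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Hypothetical upstream service this sidecar fronts.
	upstream, err := url.Parse("http://orders.internal:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		// Instrumentation lives here, in the proxy, not in the application.
		log.Printf("%s %s took %v", r.Method, r.URL.Path, time.Since(start))
	})

	// The application only ever talks to this local listener.
	log.Fatal(http.ListenAndServe("127.0.0.1:4140", handler))
}
```

The point is that instrumentation, routing, and delivery concerns live in the proxy process, so the application code stays blissfully unaware.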

A single application might consist of hundreds of services, each service might have thousands of instances, and each of those instances is constantly changing. Understanding the communication between microservices is not only difficult, it is vital to keeping the end-user experience up to the mark. In a microservice architecture, applications shouldn’t be managing their own load balancing, service discovery, or retry and timeout logic. As application architectures become more and more segmented, communication logic needs to move out of the application and into the underlying infrastructure.
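
As a rough illustration of the kind of communication logic that otherwise creeps into every service, here is a hand-rolled retry-with-timeout helper of the sort application teams end up writing when there is no mesh; the service URL is hypothetical. With a mesh in place, the proxy layer takes over this job and the application just makes a plain request.

```go
package main

// Without a mesh, every service ends up carrying code like this:
// per-attempt timeouts, retries, and backoff for each call it makes.
// A service mesh moves this logic into the proxy layer instead.

import (
	"fmt"
	"net/http"
	"time"
)

func getWithRetry(url string, attempts int) (*http.Response, error) {
	client := &http.Client{Timeout: 2 * time.Second} // per-attempt timeout

	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // crude backoff
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	resp, err := getWithRetry("http://inventory.internal:8080/stock", 3)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```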

New members of the CNCF

If you’re building a cloud-native application or looking to break your monolith down into microservices, you need a service mesh. As we progress through this saga of containers and microservice architecture, we venture further and further up the stack and encounter new obstacles. First it was orchestration, then monitoring, then storage, and now networking. If you want to know what the latest problem is, just keep an eye on what the CNCF is doing. Of late, it has been on a rampage adopting projects involved with Kubernetes networking.

Linker-DEE

One thing about microservice architecture is that you’re no longer just trying to get things to work. Unlike TCP, the explicit goal of a service mesh is to make service communication visible so it can be monitored, managed, and controlled. Installing Linkerd on a Kubernetes cluster is as simple as installing an app, and once it’s running you can use it to gain visibility into the health of that app’s services. Linkerd provides much more than visibility: latency-aware load balancing, automatic retries and circuit breaking, distributed tracing, and more.
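
For a taste of that visibility, the sketch below pulls metrics from a running Linkerd 1.x instance. It assumes the default admin port (9990) and the metrics.json endpoint exposed by Linkerd’s admin interface, so treat the exact port and path as assumptions to check against your own deployment.

```go
package main

// A small sketch of pulling metrics out of a running Linkerd 1.x instance.
// It assumes the default admin port (9990) and the /admin/metrics.json
// endpoint; both are assumptions to verify against your own setup.

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://localhost:9990/admin/metrics.json")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The endpoint returns a flat JSON object of metric name -> value.
	var metrics map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&metrics); err != nil {
		log.Fatal(err)
	}

	// Print only request-related metrics as a quick health check.
	for name, value := range metrics {
		if strings.Contains(name, "requests") {
			fmt.Printf("%s = %v\n", name, value)
		}
	}
}
```

Filtering on request-related names is just one way to slice the output; the endpoint returns a large flat map of counters and gauges.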

Linkerd is an open source, scalable service mesh for cloud-native applications that is supposed to give microservices “Twitter-style operability.” Linkerd, pronounced “linker-dee,” was developed by former Twitter infrastructure engineers William Morgan and Oliver Gould to solve the problems they encountered while operating large production systems at companies like Twitter, Yahoo, Google, and Microsoft. According to them, the toughest challenges have nothing to do with the services themselves but rather with the communication between those services. Linkerd addresses these problems not just by controlling the mechanics of this communication but by decoupling those mechanics from the application code. In this way, Linkerd gives you visibility and control over communication mechanics without requiring any changes to the application itself.

An advantage of using Linkerd is that you are free to choose whichever language is most appropriate for each service. Today, companies around the world use Linkerd in production to power communication between services; it takes care of the confusing parts like load balancing, connection pooling, TLS, instrumentation, and more. Recently, Buoyant, the company behind Linkerd, announced $10.5 million in Series A funding. Investors include Benchmark Capital and #Angels, a female-led group of current and former Twitter executives. In addition, Benchmark’s Peter Fenton, who recently stepped down from Twitter’s board, will be joining Buoyant’s board of directors.

Istio

Istio, on the other hand, is the result of a collaboration between IBM, Google, and Lyft that supports traffic-flow management, access-policy enforcement, and telemetry data aggregation between microservices. That’s a long definition, but it sounds a lot like what Linkerd does, especially since it also works without any changes to the application code.

Istio is a service-management tool that intercepts all network communication by adding a sidecar proxy to every service. This is one of Istio’s key strengths, because it makes setup extremely easy: the sidecar proxies attach themselves automatically, which makes them very user friendly. Once attached, they monitor the system, keep it secure, control traffic, and enforce policy. Sounds a lot like the police.

Istio can be split into four components: Envoy, Mixer, Pilot, and Istio-Auth, each with a different job. Envoy is the sidecar proxy, Mixer enforces policies and access control, Pilot manages traffic across services, and Istio-Auth secures service-to-service communication with mutual TLS. Istio’s big selling point is that it automatically detects new services and brings them into the mesh.

Like peanut butter and jelly

So, if you’re still looking for an Istio vs. Linkerd comparison, think again. In a recent blog post, Buoyant called the two peanut butter and jelly and announced the release of Linkerd 1.1.1, which features integration with the Istio project. They also point out the benefits of using Istio with Linkerd: beyond sharing many of the same goals, the two projects complement each other. Linkerd brings a widely deployed, production-tested service mesh that is intensely focused on cross-platform compatibility and consistent communication across services, while Istio adds excellent APIs, well-designed models, and an expansive, forward-thinking feature set. That’s probably why the CNCF had to have them both, and it’s a good indication that we’re going to see a lot more tools in this sector in the future.
