Envoy is likely the most important open-source project in the cloud-native networking space. Without it, we wouldn't have a service mesh like Istio. The Envoy team recently announced Envoy Mobile, which aims to bring the same level of networking control to mobile applications that Envoy provides in the datacenter. Another new project built on Envoy is Kuma, and that's the focus of this post. Kuma joins what is likely the hottest part of cloud-native computing: service meshes. This space has grown crowded of late, but Kong believes Kuma has something special that sets it apart from the available options. Let's take a look at Kuma.
Kong is one of the most well-known providers of an API management gateway that has both an open-source and a commercial variant. It competes with the likes of Apigee, Tyk, and AWS API Gateway.
The competition in the API space has heated up recently with the advent of service meshes like Istio and Linkerd. The reason is that there is a lot of overlap between the functionality of an API gateway and a service mesh. Both offer load balancing, service discovery, and monitoring of service requests. However, there are key differences between the two.
An API gateway operates at a high level (the application layer), often acting as a bridge between internal and external services, and it can transform payload data in requests. A service mesh, on the other hand, is a low-level component that embeds into the infrastructure layer and acts as a bridge between internal services. It helps manage and monitor service-to-service communication in a distributed microservices application. Service meshes were born out of the need for better visibility into workloads running in Kubernetes.
Kong has taken note of this overlap and these differences. While its API gateway solves issues at the application level, the company decided to dig deeper and tackle challenges at the infrastructure level with a new kind of service mesh it calls Kuma.
Kuma positions itself against other service meshes on many fronts, calling them "first-generation" service meshes that are complicated to set up and use. This is certainly true of Istio, which is very ambitious in its goals but complex to implement. Kuma also faults these meshes for being well suited to greenfield-only apps but not to modernizing existing ones. Kuma sets out to solve these problems, and when its creators say "modern service mesh," they mean one that is easy to run in production at scale and works for both greenfield and legacy applications.
Another notable distinction with Kuma is its support for VMs. Most other service meshes today focus exclusively on Kubernetes, which means they are meant for container-driven applications. However, Kuma targets the bigger piece of the pie — existing applications that are run on VMs.
Kuma is built on top of Envoy Proxy and is a service mesh for a multicloud world. It delivers multicloud and multizone connectivity for globally distributed applications. This means you can run one service on an AWS EC2 VM in the US, another in a GKE-powered container in Europe, and yet another on an Azure VM in Asia, and have these services talk to each other just like local services. This is powerful and reflects the shift underway in the cloud-native space, where infrastructure and applications are becoming increasingly distributed.
Despite this growing complexity, Kuma can simplify management of these resources with its "multi-mesh" capability: a single Kuma control plane can serve multiple isolated meshes, each with its own Envoy Proxy data planes, so one deployment can support many teams and applications without running a separate control plane for each.
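To sketch what multi-mesh looks like in practice, here is a hedged example using Kuma's universal (VM) mode, where meshes are declared as YAML resources and applied with kumactl. The mesh names `team-a` and `team-b` are hypothetical, and resource fields can vary between Kuma versions:

```shell
# Create two isolated meshes served by the same control plane.
# Workloads tagged into team-a cannot see workloads in team-b.
kumactl apply -f - <<EOF
type: Mesh
name: team-a
EOF

kumactl apply -f - <<EOF
type: Mesh
name: team-b
EOF

# List the meshes the control plane knows about
kumactl get meshes
```

Each data plane proxy is then registered against one of these meshes, which is how a single control plane keeps teams isolated from one another.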
The benefit is that teams can easily replicate common functionality across services. In doing so, they reduce the amount of manual configuration required and can focus on their applications rather than building connectivity plumbing between services.
Kuma enforces L4 and L7 policies for security, observability, service discovery, and routing. These policies are implemented at the Envoy Proxy data plane layer, but users do not need to be familiar with Envoy Proxy. Kuma conveniently abstracts away the Envoy parts of the solution, so users need only worry about managing their meshes.
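As a rough illustration of what such a policy looks like, here is a TrafficPermission resource applied in universal mode. The service names `frontend` and `backend` are placeholders, and the exact tag keys (e.g. `kuma.io/service` vs. the older `service`) depend on the Kuma version:

```shell
# Allow traffic only from the frontend service to the backend service.
# Under an mTLS-enabled mesh, connections not matched by a
# TrafficPermission are denied.
kumactl apply -f - <<EOF
type: TrafficPermission
name: allow-frontend-to-backend
mesh: default
sources:
  - match:
      kuma.io/service: frontend
destinations:
  - match:
      kuma.io/service: backend
EOF
```

Note that the user writes only high-level source/destination rules; Kuma translates them into Envoy configuration behind the scenes.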
Getting started and managing Kuma
With a simple setup, you can get up and running with Kuma in minutes. Kuma is a single executable written in Go, and installation takes just a few commands, whether that's in your Kubernetes cluster or on a Linux OS on your VMs or bare-metal servers. Once installed, Kuma injects Envoy sidecar proxies into every instance. Envoy comes packaged with every Kuma installation, which is a nice touch and shows that the creators are intentional about making Kuma very user-friendly.
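For a sense of how few commands this takes, here is a hedged sketch of both installation paths; exact subcommands and output may differ between Kuma releases:

```shell
# Kubernetes: render the control-plane manifests and apply them
kumactl install control-plane | kubectl apply -f -

# Universal (VMs / bare metal): run the control plane directly;
# data plane proxies are then started next to each service instance
kuma-cp run
```

On Kubernetes, sidecar injection happens automatically for annotated namespaces, while on VMs each service is paired with a `kuma-dp` process that embeds Envoy.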
When running Kuma, you can choose between two deployment models: standalone or multizone. With standalone, you run a single control plane with many data planes connecting to it. With multizone, you run multiple zone control planes, all managed by a single global Kuma control plane, and every proxy connects to the control plane of its own zone. Multizone is the ideal model for a hybrid setup with both Kubernetes and VM-based workloads.
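A minimal sketch of the multizone model, configured via environment variables. The zone name and global address are placeholders, and these variable names and ports are assumptions that vary across Kuma versions:

```shell
# Global control plane: holds policies and aggregates zone state
KUMA_MODE=global kuma-cp run

# Zone control plane: serves local data planes and syncs with global
KUMA_MODE=zone \
KUMA_MULTIZONE_ZONE_NAME=us-east \
KUMA_MULTIZONE_ZONE_GLOBAL_ADDRESS=grpcs://global-cp.example.com:5685 \
kuma-cp run
```

Policies are applied once at the global control plane and propagated down to each zone, which is what makes a mixed Kubernetes-plus-VM deployment manageable from one place.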
Another feature is that Kuma integrates natively with any API gateway, which is no surprise considering Kong is Kuma's creator.
Kuma was recently contributed to the Cloud-Native Computing Foundation (CNCF). Kuma hopes to gain prominence as part of the foundation, which hosts other top Kubernetes-related projects like Prometheus, Jaeger, and Linkerd.
Linkerd is a very popular service mesh and is also a CNCF project. Kuma's distinction is that it is the only CNCF service mesh built on Envoy. While Istio is open source, it is still actively managed by Google and isn't part of any foundation. This is a major concern for many organizations, as they wouldn't want to invest heavily in a tool that any one vendor completely controls. Google has promised to donate Istio to a foundation at some point, but for now, Kuma is the only option if you're particular about an Envoy-based service mesh with truly open governance.
The road ahead for Kuma will be heavily defined by its adoption in the open-source community and decisions made by the CNCF. Kuma hopes to climb the ranks among CNCF projects and eventually graduate. Its future in the Kubernetes ecosystem looks bright.
Kuma: The new kid has plenty of competition
Kuma already has considerable competition in the cloud-native space from the likes of Istio, Linkerd, and Consul. These tools have been battle-tested in production for a couple of years now; Kuma is the new kid on the block.
However, as with all things technology, new is not a disadvantage. In fact, newer solutions that solve real pain points can easily displace older ones. Whether the pain points Kuma solves are felt widely or resonate with only a few remains to be seen.
If you run cloud-native applications and have been less than impressed with the current breed of service mesh solutions out there, it’s time to give Kuma a try. It’s so easy to get started, and you’ll soon know if this is for you or not. The way things are going, what seems clear is that there will be many service meshes coexisting, and organizations will choose the flavor that appeals to them the most. That is good news for a new entrant like Kuma.
Featured image: Designed by Pikisuperstar / Freepik