In part 1 of this two-part series, we discussed the rise of multicloud as the new normal for corporate cloud setups. We also saw that an API gateway has essential features for managing multicloud applications: gateways are centrally managed but globally distributed, they connect applications and services through a common framework, and they speed up integration across the enterprise. We also looked at the top vendors in the space: Apigee, MuleSoft, Kong, Tyk, and AWS API Gateway. In this post, we discuss a powerful complementary solution to API gateways: service meshes. We also discuss how the two can work together to help you get the most out of a multicloud setup.
Service mesh: Communication at the infrastructure layer
Service meshes are a more recent development in the world of cloud-native computing, one sparked by the rise of Kubernetes. While an API gateway is placed “in front of” a service or application, a service mesh is placed “beside” the services in a cloud-native application. Service meshes offer a better way to observe and manage network communication for cloud-native applications.
According to William Morgan, co-creator of the open-source service mesh Linkerd, “A service mesh is a tool for adding observability, security, and reliability features to applications by inserting these features at the platform layer rather than the application layer.” Service meshes like Istio and Linkerd adopt a “sidecar” model, where they inject a proxy into every pod in Kubernetes. The sidecar of choice for many service mesh tools is Envoy. These sidecars handle communication between the microservices.
The sidecar proxies make up the data plane, while the service mesh also provides a higher-level control plane. Traffic flows and is routed through the data plane; the control plane configures and manages that behavior.
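To make the sidecar model concrete, here is a minimal sketch assuming Istio's automatic sidecar injection: labeling a namespace with `istio-injection: enabled` tells the control plane to add an Envoy proxy container to every pod created there. The namespace, deployment, and image names below are hypothetical.

```yaml
# Hypothetical namespace: the label tells Istio's control plane to
# inject an Envoy sidecar into every pod scheduled in it.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
---
# A plain Deployment; after injection, each pod runs two containers:
# the application container and the istio-proxy (Envoy) sidecar.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  namespace: payments
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: app
          image: example.com/payments-api:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```

Note that the deployment manifest itself never mentions the proxy: the sidecar is inserted at the platform layer, which is exactly Morgan's point.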
Service mesh: Pros and cons
Service mesh tools have benefits and drawbacks that need to be weighed. They are comprehensive networking solutions, able to handle the complexity of distributed applications, and they bring deeper visibility into communication between services.
However, they come with some downsides that can’t be ignored. First, service mesh tools are difficult to configure. Standing up a first Istio instance involves a steep learning curve and a good understanding of the entire system. The complex architecture makes it hard to manage at scale: the many components need to be regularly updated, kept secure, and adapted to the applications they support. That said, there are bright spots; some service mesh solutions, such as Linkerd, are easier to configure and operate than others.
Further, some overlap between a service mesh and an API gateway makes it confusing to know where a service mesh fits in the architecture.
Service mesh and API gateways working together
There are quite a few overlapping goals and functions between an API gateway and a service mesh. For example, they both handle network traffic, they both have routing capabilities, and they both improve the observability of the system. However, they also have their differences.
API gateways manage API requests, commonly between internal and external applications (or users). This is primarily north-south traffic, although gateways can also be used for east-west traffic between internal services. API gateways support request and response transformation, letting you insert a custom payload into any API request. They operate at a high level, closely following an application’s business logic.
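As an illustration of gateway-level routing and transformation, the sketch below uses Kong's declarative configuration format with its `request-transformer` plugin to add a header to every request hitting a public route. The service name, upstream URL, and path are assumptions for the example.

```yaml
# Hypothetical Kong declarative config: a public (north-south) route
# in front of an internal service, with a request transformation.
_format_version: "2.1"
services:
  - name: payments-api
    url: http://payments.internal:8080   # hypothetical upstream
    routes:
      - name: payments-route
        paths:
          - /api/payments                # public entry point
plugins:
  - name: request-transformer           # gateway-level transformation
    service: payments-api
    config:
      add:
        headers:
          - "x-api-version:v1"          # injected into every request
```

Notice how close this sits to business concerns: public paths, API versions, payload shaping.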
A service mesh, on the other hand, manages service-to-service communication within applications. This is primarily east-west traffic, though ingress/egress capabilities are being added to service meshes as they mature. A service mesh is a low-level solution, tightly integrated with Kubernetes, that operates at the platform layer. It can provide observability at a deeper level than an API gateway, and beyond observability, it lets you enforce policies that govern network traffic and security.
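A flavor of the east-west policies a mesh enforces: the hedged example below is an Istio `VirtualService` that splits internal traffic between two versions of a hypothetical `payments` service, a common canary-release pattern. It assumes a matching `DestinationRule` defines the `v1` and `v2` subsets.

```yaml
# Hypothetical Istio VirtualService: weighted routing for internal
# (east-west) traffic between two versions of the same service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments-canary
spec:
  hosts:
    - payments          # internal service name, never exposed publicly
  http:
    - route:
        - destination:
            host: payments
            subset: v1   # subsets assumed defined in a DestinationRule
          weight: 90
        - destination:
            host: payments
            subset: v2
          weight: 10     # 10% of internal traffic canaried to v2
```

No application code changes: the sidecars apply the split, which is what makes this a platform-layer capability.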
When deciding whether to use an API gateway or a service mesh, where you are in your cloud journey matters. Organizations may be running monolithic applications, microservices, or a combination of the two. A monolithic stack is simpler and can operate well with just an API gateway in front of the application. A cloud-native stack requires both an API gateway and a service mesh. The complexity and maturity of the networking layer increase as organizations progress from monoliths to facade services to microservices.
Service mesh tooling today
When it comes to implementing a service mesh, the options are fewer than for API gateways, but there is still a fair bit of choice. The open-source Istio and Linkerd are the most popular service mesh tools today. Linkerd is the older of the two and the tool that created the space. It has a smaller but strong open-source community and is quick to stand up and get running in production. A mature product, it is governed by the CNCF and well integrated with Kubernetes.
Istio burst onto the scene later but is enjoying explosive growth thanks to its backing from companies like Google and its wide range of features. That backing is also an area of concern: the industry would prefer that such an important tool not be controlled by a single company like Google but governed by a consortium like the CNCF. Elastic’s recent switch of Elasticsearch’s license from Apache 2.0 to SSPL is one example of how open-source projects can go wrong when they are not governed neutrally and responsibly.
A noteworthy new entrant in the space is Kuma, a control plane that can manage multiple underlying service meshes. Kuma was created by Kong, the API gateway company. It was recently adopted by the CNCF and is currently the only CNCF service mesh that runs on the Envoy proxy. Kuma calls itself a universal service mesh: it supports VMs alongside containers, and modern cloud-native applications alongside legacy ones. It aims to simplify service mesh operations compared to first-generation service meshes like Istio. Kuma’s multi-zone deployment model features multiple control planes and multiple data planes, which makes it ideal for multicloud scenarios. Kuma also integrates easily with any API gateway, which is not surprising considering it comes from the house of Kong. It is one service mesh to keep an eye on as it develops.
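To give a flavor of Kuma's Kubernetes-native configuration, this sketch enables mutual TLS across a mesh using Kuma's built-in certificate authority, so the Envoy data plane encrypts and authenticates service-to-service traffic. Treat it as illustrative rather than a production setup.

```yaml
# Hypothetical Kuma Mesh resource: turning on mTLS for all services
# in the mesh using the built-in certificate authority backend.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1     # which CA backend to use
    backends:
      - name: ca-1
        type: builtin        # Kuma generates and rotates the certs
```

One mesh-wide resource encrypting all east-west traffic is the kind of operational simplification Kuma is aiming for.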
All the above service mesh tools are open-source; there aren’t many commercial service mesh solutions on the market today. Google packs a managed Istio service into its Anthos offering. This is a reminder that these are still early days for service mesh tooling.
Today, organizations typically run a service mesh in a single public cloud location. However, they increasingly need to run multiple instances of a service mesh across multicloud locations — both private and public clouds.
Service meshes and API gateways: Better together in a multicloud world
The service mesh is a modern and necessary innovation built for a multicloud world. It works alongside an API gateway, handling internal communication between distributed services. With a clear separation of data plane and control plane, service meshes are made for large-scale deployments. While the space is nascent and evolving fast, there are capable options available today, and it helps that they are open-source and vendor-neutral. That fits perfectly in a multicloud world.
Featured image: Shutterstock