Microservices and containers are taking center stage as developers shift toward more agile development platforms. The microservices model is emerging as the basis of applications for years to come: developers build services that work independently of one another, enabling Agile development and swift rollouts of digital offerings. This is only possible when each application component has its own environment. VMs can provide that isolation, but inefficiently and expensively, because every VM requires its own operating system and running many of them puts a heavy load on the server. Enter containers, which provide lightweight environments for the different components of an application. Containers divide a physical server into separate execution environments where developers can build microservices without conflicting libraries and components. There is also no need for separate operating systems, since a single OS instance can support multiple containers, each with its own execution environment.
However, microservices running in different containers require proper management to control where they are placed and to avoid unwanted colocation. That's where container-management systems come into the picture: they let developers define policies that dictate container placement. Kubernetes, the leading container orchestration tool today, is one such system.
A service mesh is a configurable infrastructure layer for microservices applications. It ensures fast, secure, and reliable communication between the containerized components of an application. A service mesh is implemented through a proxy instance, called a sidecar (Envoy, in Istio's case), deployed alongside every instance of an application. Sidecars handle interservice communication, monitoring, and security concerns. With a service mesh in place, developers can focus on development and support while operations teams focus on maintaining the mesh and running the application.
Backed by Google, IBM, and Lyft, Istio has emerged as the most popular service mesh on the market. Until recently, Kubernetes was the only container orchestration framework to support Istio, but since Kubernetes is also the most popular orchestration tool around, a plethora of Kubernetes vendors are now adding Istio support to their managed Kubernetes services. This is indicative of how quickly the market is adopting Istio. Istio is not the only service mesh available, however: Buoyant, HashiCorp, and others have their own service mesh offerings on the market.
Founded in 2014, Vamp has come a long way since its conception. While containerization offers continuous delivery through identical, independent environments, a microservices infrastructure comes with increased complexity. Vamp reduces that complexity with a platform-agnostic DSL that provides simple A/B testing, canary releases, and auto-scaling. The original idea was to build an e-commerce platform on microservices and container infrastructure, but the platform proved to be much more than that. With its built-in canary-release and auto-scaling features, it attracted vendors who weren't necessarily looking for an e-commerce platform to begin with. Vamp soon became a beacon for vendors who wanted easier canary releases, since setting up coherent infrastructure around them is tricky.
Nico Vierhout, CEO of Vamp, says, “What’s interesting is that Vamp isn’t just for the savviest organizations that have gone cloud-native. Organizations at the very start of their microservices journey are also gravitating towards us as we can help them decompose their monolithic applications the right way, and implement the right foundation taking into account future complexity.” Vamp is vendor neutral and has built-in drivers that support platforms like Mesos/Marathon, Docker, and Kubernetes.
The idea of continuous releases can look appealing to many developers, but it can be cumbersome and complicated: integrating, deploying, and releasing updates requires automated scaling with minimum disruption. Canary releases help roll out changes in a regulated manner and let developers roll those changes back if need be. A canary release is a way of shipping software updates that reduces the risk usually involved in rolling them out: a new version is released to a small subset of users and, based on its reception, the vendor decides whether to make alterations or release the update universally. Just like canaries in coal mines, these releases let developers implement last-mile delivery and reduce the risk of stressful changes that could cause the application to fail. The technique is essential for teams practicing continuous delivery.
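The "small subset of users" idea can be sketched with a few lines of code. The following is a minimal illustration, not Vamp's or Istio's implementation: it hashes a user id into a stable bucket so that the same user always lands on the same version while the canary runs, with the function name and cohort size chosen here purely for illustration.

```python
import hashlib

def in_canary(user_id: str, percent: int) -> bool:
    """Deterministically place a user in the canary cohort.

    Hashing the user id keeps the same user on the same version
    across requests, so a session never flips between releases.
    """
    # Map the user id to a stable bucket in [0, 100).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# With a 10% canary, roughly one user in ten sees the new version.
cohort = sum(in_canary(f"user-{i}", 10) for i in range(10_000))
```

In a real mesh this decision is made by the proxy layer rather than application code, but the principle is the same: a stable, percentage-based split of live traffic.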
Canary releasing requires access to metrics to monitor the health of the new release and to control the phased deployment process. If the new release fails for some reason, the changes can be reverted with minimal damage. Vamp uses Istio to perform efficient canary releases and auto-scaling; conceptually, Istio is similar to Vamp's existing gateway architecture. Istio uses an intelligent proxy as its service mesh and route rules to control how requests are routed within the mesh. With Istio's help, Vamp supports a myriad of deployment policies, from basic manual canary releases to time-based gradual rollouts to metric-based multistep regional rollouts with automatic rollback functionality.
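A time-based gradual rollout of the kind described above boils down to stepping the canary's traffic weight on a schedule. The sketch below is an assumption-laden illustration of that policy, not Vamp's actual configuration: the step size and interval are arbitrary defaults.

```python
def rollout_weight(elapsed_min: float, step_min: float = 10, step_pct: int = 20) -> int:
    """Return the canary's traffic share for a time-based gradual rollout.

    Every `step_min` minutes the canary gains `step_pct` of the traffic,
    capped at 100%, mirroring a stepped weight-shifting policy.
    """
    steps = int(elapsed_min // step_min)
    return min(100, steps * step_pct)

# Traffic split over the first hour: (canary weight, stable weight).
schedule = [(rollout_weight(t), 100 - rollout_weight(t)) for t in range(0, 60, 10)]
# t=0 -> (0, 100), t=10 -> (20, 80), ..., t=50 -> (100, 0)
```

In Istio itself, each pair in this schedule would correspond to updating the weights on a route rule splitting traffic between the stable and canary service versions.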
Vamp started by creating route rules for percentage-based weighted traffic management, the basis of its simple canary-release functionality. It then added an automated rollback policy that suspends a rollout and reverts traffic to the original version of the application if the health of the newer version does not meet a predefined standard. The health metrics behind this policy are a user-defined combination of Kubernetes' health/readiness checks, current average response time versus nominal response time, and the ratio of 5xx errors to 2xx responses for HTTP traffic. With that in place, Istio let Vamp add conditional routing: conditions based on the values of specific HTTP headers can now be applied, for instance to differentiate traffic coming from mobile devices on public networks from IoT devices on a VPN.
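The three health signals and the header-based conditions described above can be combined as sketched below. This is a hypothetical illustration under assumed thresholds (the latency factor and error-ratio cutoff are invented defaults, and all function names are ours, not Vamp's API):

```python
def healthy(ready: bool, avg_ms: float, nominal_ms: float,
            errors_5xx: int, ok_2xx: int,
            latency_factor: float = 1.5, max_error_ratio: float = 0.02) -> bool:
    """Combine the three health signals into a single verdict:
    readiness, average vs. nominal response time, and 5xx/2xx ratio."""
    if not ready:                              # Kubernetes readiness check failed
        return False
    if avg_ms > latency_factor * nominal_ms:   # too slow vs. the nominal baseline
        return False
    ratio = errors_5xx / max(ok_2xx, 1)        # server errors per successful response
    return ratio <= max_error_ratio

def next_action(canary_ok: bool) -> str:
    # An unhealthy canary suspends the rollout and reverts traffic.
    return "continue-rollout" if canary_ok else "rollback-to-stable"

def matches_condition(headers: dict, header: str, expected: str) -> bool:
    """Conditional routing: route a request by the value of an HTTP header."""
    return headers.get(header, "").lower() == expected.lower()
```

A controller loop would evaluate `healthy` on each rollout step and act on `next_action`; in Istio, the header condition would instead live in a route rule's match clause.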
According to Nico Vierhout, “We have big plans for expansion and we already have a number of large enterprise organizations both in Europe and the U.S. that have seen great results from using our service.” To that end, Vamp is working on the enterprise-grade features its largest customers need, and it also plans to grow its open source community. For now, Vamp stands out from the crowd because it offers a vendor-neutral package, unlike the top five cloud vendors, whose services are designed to work only within the confines of their own clouds. Vamp also provides a layer of abstraction over container configurations that dictates how load balancers like HAProxy are configured.
Despite how important those features are, Vamp's claim to fame is its robust management of canary releases. Vamp lets development teams build applications and updates without worrying about the risks of releasing them: it ships updates to a small percentage of the user base and, if the reception is satisfactory, scales them up gradually. Restricting the blast radius of failures in this way yields stable, low-risk rollouts without major downtime. Enterprises are moving toward microservices to develop applications more efficiently, and canary releases are becoming a widely accepted risk-avoidance technique, which is why Vamp's solution is timely and promising.