Microservice architecture is the kind of next-generation technology that's widening the gap between the leaders and the laggards. As far as applications go, microservices are the edge you need to stay relevant in an age where big, slow updates and sloppy patches are simply unacceptable. Containers are the first commitment to microservice architecture, but they're only the "I do" part of the marriage. Managing those microservices in sickness and in health comes next, and that's where you can let Kubernetes do a lot of the heavy lifting for you.
If containers are first on your shopping list, Kubernetes comes next, and the fact that it's open source and continuously developed under the CNCF means a lot of people who know what they're talking about expect this technology to last. When personal computers were new, many companies hesitated to invest heavily in case a processor twice as fast came out the next day. That isn't the case with a living, breathing project like Kubernetes, which is forever being upgraded with the help of contributors from all across the planet. That's the kind of technology you want to bank on, and it's probably why everyone does.
The Kubernetes 1.7 release was all about showing us glimpses of how versatile and far-reaching the technology can be. The ecosystem of tools and supporting software surrounding it has grown exponentially, and the tools being built around Kubernetes only add to its already high level of customizability. Though a lot of people refer to the entire stack as "containers," there's a clear difference between containers and containers that are well managed, and Kubernetes is what makes that difference.
Docker was definitely the software that set the container era rolling, but it wasn't until people started using Kubernetes to manage their containers that we really started getting somewhere. Kubernetes makes it humanly possible to manage thousands of microservices running in containers, and it does that through a simple, easy-to-use set of APIs.
Kubernetes also makes it possible to utilize the vast ecosystem of specialized tools that have been built around it, which is quite a huge resource, to say the least. The fact that it’s open source and the darling of the CNCF is just icing on the cake, and most companies just go by the fact that if it’s good enough for Google and Netflix, it’s good enough for them.
Adopting microservice architecture doesn't just mean doing familiar things differently; it means doing a lot of things you weren't doing at all before. When thousands of microservices are communicating with each other, there's a lot of back-office work needed to keep them healthy and happy, and Kubernetes is the starting point for making that happen by giving you the tools to manage those microservices.
Things like service discovery, load balancing, and policy-based network security all come into play. If all you've got is a container engine, you're definitely in for a rough ride. Orchestration is more than just telling everything what to do, so before we look at some specialist tools in the ecosystem, let's look at the new territory that container orchestration and microservice architecture have opened up.
Service discovery
Microservices work best when it's easy for services to find one another and their dependencies, and this is where you really begin to appreciate Kubernetes and its well-designed service abstraction. To use a service, a client needs to dynamically discover the pods implementing it so they can be invoked. This is service discovery, and Kubernetes does it all for you through either DNS names or environment variables, the two ways a service can be discovered. DNS is the default: Kubernetes simply resolves a service's name to the service's IP address.
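As a concrete sketch, here's what a minimal Service manifest looks like (the `payments` name, labels, and ports are illustrative, not from any particular app):

```yaml
# A minimal Service; the name, labels, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  selector:
    app: payments        # traffic is routed to any pod carrying this label
  ports:
  - port: 80             # port clients connect to
    targetPort: 8080     # port the container actually listens on
```

Other pods in the same namespace can then reach this service simply as `payments` (or fully qualified as `payments.<namespace>.svc.cluster.local`), and Kubernetes also injects environment variables such as `PAYMENTS_SERVICE_HOST` into pods started after the service is created.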
Load balancing
Modern high-performing apps and websites cater to thousands or even millions of users demanding anything from games to videos to live chat. Load balancing is the practice of efficiently distributing incoming network traffic across a group of backend servers to increase overall performance, and again, Kubernetes handles this for you. There are two types of load balancing in Kubernetes: internal, which balances traffic across the pods backing a service, and external load balancers, which are mostly provisioned in the public cloud.
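Assuming a hypothetical `web` app running in a supported public cloud, requesting an external load balancer is as simple as setting the service's type:

```yaml
# Illustrative example: asks the cloud provider to provision an external
# load balancer in front of the pods labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer     # cloud provider allocates an external IP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Internal balancing, by contrast, needs no extra configuration: every ordinary ClusterIP service already spreads traffic across its healthy pods.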
Policy-based network security
In Kubernetes, all pods can communicate with each other by default, which is a bit like trying to teach a noisy class. A network policy lays down ground rules about who can communicate with whom: it defines how groups of pods are allowed to talk to each other and to other network endpoints, and it isolates pods so they reject any connection the policy doesn't allow. Since pods are non-isolated and accept all incoming traffic by default, if you want more control over who can talk to your containers, a network policy should be way up on your list of things to get.
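As a sketch (the pod labels here are hypothetical), a policy that isolates `backend` pods and admits traffic only from `frontend` pods looks like this:

```yaml
# Only pods labelled app=frontend may open connections to app=backend pods;
# everything else is rejected once the policy selects the backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend       # the pods this policy isolates
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # the only pods allowed in
```

Note that network policies only take effect if the cluster's network plugin enforces them, which is exactly where tools like the ones below come in.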
An advantage that comes with using Kubernetes to manage microservices is the plethora of ecosystem tools that you have access to. Calico is one that definitely makes life a lot easier.
Project Calico secures a Kubernetes network by creating a firewall around each and every workload, so attackers have to deal with many firewalls as opposed to just a couple. These micro-firewalls from Calico protect every workload individually, so there is no domino effect if one service goes down; each service is perfectly capable of carrying on on its own.
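Beyond enforcing standard Kubernetes network policies, Calico has a policy API of its own with richer selectors. As a rough sketch (the exact schema varies between Calico versions, and the labels and namespace are hypothetical), a Calico-native policy looks like this:

```yaml
# Calico-native policy: allow ingress to 'backend' workloads only from
# 'frontend' workloads. The selector syntax is Calico's own, not Kubernetes'.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  selector: app == 'backend'
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      selector: app == 'frontend'
```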
Calico can be integrated with CoreOS's open source container networking project, Flannel, whose name implies that it creates a "fabric" of connectivity across your microservices. The combination is dubbed "Canal" and effectively pairs the security of Calico with the connectivity options of Flannel. Canal is also an open source project and is, in essence, a more complete solution built from the integration of two stellar tools.
Something for everyone who needs to manage microservices
The beauty of technology and modern apps is that they're almost like paint: if you don't like blue or yellow, you can mix them up and make green! Weave is another policy-based security system that's been talked about lately, and with the amount of time, effort, and money being put into policy-based network security, it's fast becoming the standard way to secure and manage containerized microservices applications.
Like Calico and Weave, a number of really powerful tools and resources are springing up around Kubernetes, especially with regards to networking and security. Some of the newest arrivals in this space are service meshes like Istio and Linkerd, the latter already adopted by the CNCF.
Kubernetes isn't just helping people manage microservices; it's facilitating an entire ecosystem of tools and services that make managing microservices easier. The enterprise couldn't ask for more, and though Docker Swarm isn't far behind, Kubernetes is making this an exceptionally hard act to follow.