Kubernetes is one of the fastest-growing open-source software projects. Originally backed by over 15 years of Google’s R&D, Kubernetes has become the standard for managing containers at scale. And now a new version has been released. Version 1.6 is unique in that, for the first time, the release team was made up primarily of non-Googlers. The fact that this release can be attributed to the likes of CoreOS, Microsoft, Red Hat, Heptio, Mirantis, and Google is a testament to how far and wide the Kubernetes community has grown.
The main focus of version 1.6 is stability. Aparna Sinha, Google’s product manager for Kubernetes, said at the recent KubeCon in Berlin that this was a decision taken by the entire Kubernetes community, one that will carry over to future releases as well: focus on moving existing lower-level features from alpha to beta to stable, rather than introducing new ones. With version 1.6, more than 20 features have been promoted out of alpha, among more than 32 feature changes overall.
Kubernetes scalability is benchmarked against stringent SLOs (service-level objectives): 99 percent of all API calls return within one second, and 99 percent of pods and their containers start within five seconds. Kubernetes now meets these SLOs on clusters of 5,000 nodes (150,000 pods), a 150 percent increase in total cluster size and good news for large enterprises looking for proof of concept. For the uninitiated, a cluster is a set of nodes (physical or virtual machines) running Kubernetes agents.
The increase in cluster size is largely attributable to the move to etcd 3.0. Etcd, developed by the CoreOS team, is a lightweight key-value store that can be distributed across multiple nodes; Kubernetes uses it to store and replicate configuration data across the entire cluster, and it can recover from hardware failures and network partitions. Etcd 3.0 is the first stable release of the etcd3 data and API model, which CoreOS developed to facilitate the scale-up to 5,000 nodes, and it is now the default storage backend in Kubernetes 1.6.
Version 1.6 also features a number of upgrades related to storage automation. Dynamic storage provisioning automates and manages the lifecycle of storage and is especially beneficial for stateful applications, where you want to make sure that storage is always available. With version 1.6, users get the benefits of dynamic storage provisioning without having to do any of the manual setup.
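As a rough sketch of what this looks like in practice, an administrator defines a StorageClass, and a pod author simply requests storage from it; the cluster provisions the underlying disk automatically. The class name, claim name, and GCE parameters below are illustrative, not part of the release notes:

```yaml
# A StorageClass describes how volumes should be provisioned
# (the provisioner and its parameters are cloud-specific; GCE shown here)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
# A claim referencing that class; Kubernetes provisions the disk
# automatically, with no manually pre-created PersistentVolume
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: fast
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
```

Version 1.6 also promotes the `StorageClass` API itself to stable (`storage.k8s.io/v1`), which is why the claim can reference the class by name in its spec rather than through a beta annotation.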
Federation and multicloud
Yes, 5,000 nodes is a lot, and the consensus of the community seems to be that serving users where they are, at low latency, is a higher priority than growing any bigger. Enter multicloud, which not only reduces infrastructure costs but also increases availability, performance, security, and disaster resilience. For users who need to scale beyond 5,000 nodes, federation allows you to combine multiple clusters and address them with a single API. Among the federation updates in version 1.6 are the kubefed command-line utility and cascading deletion of federated resources. The kubefed command-line utility, which just hit beta, can now automatically configure kube-dns while joining clusters. Cascading deletion means that when you delete a resource from the federation, the corresponding resources in all member clusters are deleted as well.
Another development is that RBAC (role-based access control) has moved to beta, which means users can be granted specific permissions for accessing different parts of a cluster, giving administrators granular control over what people can do within it. Without RBAC, every pod in a cluster has roughly the same amount of authorization as every other pod, so it’s difficult to differentiate between workloads. The change is such a big deal that the development team compares it to the DOS-to-UNIX transition: everyone had equal access in DOS, but UNIX introduced user-specific permissions.
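A minimal sketch of what RBAC permissions look like, assuming a hypothetical user "jane" who should only be able to read pods in one namespace (the role and binding names are illustrative):

```yaml
# A Role granting read-only access to pods in a single namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# A RoleBinding ties the Role to a specific user
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, jane can list and watch pods in the `default` namespace but can’t delete them, create deployments, or touch any other namespace.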
No container lock-in
You can’t really think about Kubernetes without images of Docker flashing in your head — the two seem almost inseparable. But version 1.6 aims to change that. All previous versions of Kubernetes were tied to the Docker container runtime, but with 1.6, container runtimes are now pluggable and can be swapped out at will.
Customers can now use container runtimes other than Docker, such as rkt or CRI-O. This is a welcome change for an open-source community that hates vendor lock-in. It’s also great news for the few who don’t use Docker.
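Concretely, swapping the runtime is a kubelet configuration change rather than a rebuild. A rough sketch, assuming CRI-O is installed on the node and listening on its default socket (the socket path is an assumption and varies by runtime and install):

```shell
# Point the kubelet at a CRI-compatible runtime instead of the
# built-in Docker integration (illustrative CRI-O socket path)
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Because pods and deployments are described in terms of images and resources, not runtimes, workloads don’t need to change when the runtime underneath them does.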
Node-affinity, anti-affinity, and custom scheduling
A number of powerful scheduling tools have also moved to beta with this release, and node affinity is one of them. Node affinity allows you to define, on a per-pod basis, what kind of node you want each pod to be scheduled on: for example, only nodes with SSDs, nodes with GPUs, or nodes in a specific geographic location. The second part of this tool is called anti-affinity, and it lets you schedule pods relative to other pods, so they can be programmed to either attract or repel each other based on user definitions. A typical use of anti-affinity is spreading replicas of a service across nodes, or keeping antagonistic services from being co-scheduled on the same node.
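Both rules live in the pod spec. The sketch below assumes nodes have been labeled `disktype=ssd` by an administrator (the label and pod names are illustrative); it requires an SSD node and refuses to share a node with another pod carrying the same `app: web` label:

```yaml
# A pod that must land on an SSD node (node affinity) and repels
# other replicas of itself (pod anti-affinity)
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values: ["web"]
        topologyKey: kubernetes.io/hostname   # "one per node"
  containers:
  - name: web
    image: nginx
```

The `topologyKey` is what defines "apart": with `kubernetes.io/hostname` the pods spread across nodes, but a zone-level key would spread them across availability zones instead.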
Another feature that moves to beta is called “taints and tolerations,” and it allows you to keep pods off particular nodes. For example, a pod that doesn’t require GPU access can be kept off all nodes with GPUs, reserving them for workloads that do. Last but not least, multiple-scheduler support also moved to beta with this release. This not only lets you run custom schedulers alongside the default one but also lets you replace the default scheduler with one of your own.
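Taints invert the affinity model: instead of pods asking for nodes, nodes repel all pods except those that explicitly tolerate them. A hedged sketch, with an assumed taint key/value of `hardware=gpu` and illustrative names:

```yaml
# Assumes the GPU nodes were tainted first, e.g.:
#   kubectl taint nodes gpu-node-1 hardware=gpu:NoSchedule
# Only pods carrying a matching toleration may now schedule there.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-job
spec:
  tolerations:
  - key: "hardware"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: cuda
    image: nvidia/cuda
```

Ordinary pods, which carry no such toleration, are automatically excluded from the tainted nodes — part of what moved to beta in 1.6 is that tolerations like this became proper fields in the pod spec rather than annotations.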
Along with Kubernetes, the tools built around it are also growing, and the CoreOS team — having developed etcd3 for this release and effectively headed the release itself — is sitting in the Kubernetes VIP box. Alongside version 1.6, CoreOS announced that its Kubernetes solution Tectonic now supports bare metal and AWS, with preview support for both Azure and OpenStack. Another update is the extension of its container image registry, Quay, to manage and support complete Kubernetes applications.
In light of all the recent Kubernetes announcements, not just from CoreOS, Google, and Docker but also from Red Hat, SUSE, IBM, HPE, and many others, it is clear that the enterprise is interested in container orchestration in a big way. With evidence that DevOps and microservice architectures reduce risk and improve quality, it’s only a matter of time before companies are forced to adapt or become obsolete. The Internet is full of case studies, from startups in the States to Fortune 500 companies in China to banks in England, and there probably hasn’t been open-source software as big as Kubernetes since Linux.
Also like Linux, Kubernetes has gained unprecedented popularity and support, not just from the open-source community but from big enterprise names as well. Dan Gillespie from CoreOS led the 1.6 release team, and it’s a credit to Google that it allowed that kind of openness. Handing over the reins to someone else to head and manage a release is in the true spirit of open-source software and cultivates even more trust among users. With increased scalability, reduced costs, and multicloud support, it won’t be long before people start questioning why they haven’t made the move already.