Kubernetes has been evolving quickly, with new versions rolled out regularly to add to its already huge collection of features. However, back in November, during KubeCon + CloudNativeCon North America, Kubernetes SIG Release chair Stephen Augustus announced that the project would be slowing the frequency of its releases. The decision was influenced partly by the pandemic, but mostly by the fact that many organizations are still running Kubernetes 1.15. By slowing the release cadence, the project wants providers to catch up with its pace. This year, Kubernetes shipped only three minor releases (1.18, 1.19, and 1.20), and it plans to continue that tempo in 2021 as well. But despite the slower cadence, Kubernetes still rolled out several new features in 2020. Let's take a look at some of the most sought-after features and their graduation status.
Kubernetes Topology Manager
In many high-performance workloads, a combination of CPUs and hardware accelerators is used to provide parallel computation with high throughput. Getting the best performance out of such a workload requires optimizations like CPU isolation along with topology-aware memory and device allocation. In Kubernetes, however, these decisions are made by disjoint components: the CPU Manager and the Device Manager allocate resources independently of each other. This can lead to degraded performance and increased latency.
The K8s Topology Manager graduated to beta with Kubernetes 1.18. The Topology Manager is a kubelet component that helps reduce latency and improve performance in mission-critical applications. It acts as a single source of truth: components called Hint Providers expose topology hints through a common interface, and the kubelet uses those hints to make resource allocation decisions that are aligned with the node topology, delivering low latency and optimized performance for critical workloads.
To use this feature, the TopologyManager feature gate must be enabled; it comes enabled by default in version 1.18 and later.
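As a rough sketch, the Topology Manager's policy can be set in the kubelet configuration file. The field names below follow the KubeletConfiguration API; `single-numa-node` is just one of the supported policy values (`none`, `best-effort`, `restricted`, `single-numa-node`):

```yaml
# Illustrative KubeletConfiguration fragment. The feature-gate line is only
# needed on versions before 1.18, where TopologyManager was not on by default.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TopologyManager: true
# Require all of a pod's resources to come from a single NUMA node:
topologyManagerPolicy: single-numa-node
```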
Kubectl node debugging
Launched as alpha with Kubernetes 1.18, this feature has now graduated to beta in version 1.20. It allows end users to debug their workloads and nodes through kubectl. Users can debug running pods without restarting them and without having to exec into containers to perform debugging tasks like inspecting the filesystem, running additional debug utilities, or issuing network requests from the host namespace.
Users can now perform the following actions:
- Troubleshoot workloads that crash upon startup by creating a copy of the pod using a different container image or command.
- Troubleshoot distroless containers by adding a new container with debugging tools to the copy of the pod or to ephemeral containers.
- Troubleshoot nodes by creating a new container that runs in the host namespaces and can access the host's file system.
With this feature, Kubernetes aims to eliminate the need for SSH access for node debugging and maintenance. The kubectl debug command is available by default starting with Kubernetes 1.20, while kubectl alpha debug is deprecated and will be removed in an upcoming release.
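The workflows above map to kubectl debug invocations roughly like the following; the pod, node, and image names are hypothetical, and the commands assume a running cluster:

```shell
# Copy a crashing pod, overriding its command so you can inspect it interactively:
kubectl debug myapp -it --copy-to=myapp-debug --container=myapp -- sh

# Attach an ephemeral container with debugging tools to a running distroless pod:
kubectl debug -it myapp --image=busybox --target=myapp

# Debug a node via a pod running in the host namespaces:
kubectl debug node/my-node -it --image=busybox
```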
V1 Ingress API
This API had been available as beta since Kubernetes 1.1, picking up various enhancements along the way. Ingress emerged as a popular API among users and load-balancer providers, earning de facto GA status in the K8s community, and it formally graduated to general availability in Kubernetes 1.19. The Ingress API handles external access to services by exposing relevant HTTP and HTTPS routes, and performs tasks like load balancing, name-based virtual hosting, and SSL/TLS termination. Ingress resources rely on Ingress controllers to function; K8s currently supports various Ingress controllers, like GCE and NGINX, among several others. With version 1.18, Kubernetes made some key changes to the Ingress object: a new pathType field was introduced, set to ImplementationSpecific by default, and users can now specify path matching explicitly using the Exact and Prefix path types.
Previously, users specified which controller should handle an Ingress in a K8s cluster via the kubernetes.io/ingress.class annotation. Starting with Kubernetes 1.18, a new IngressClass resource and an ingressClassName field on the Ingress spec replace that now-deprecated annotation.
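Put together, a minimal Ingress using the networking.k8s.io/v1 schema (the GA form; 1.18 still served these fields under v1beta1) might look like this; the host, service, and class names are hypothetical:

```yaml
# Illustrative Ingress using the new pathType and ingressClassName fields.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix         # Exact and ImplementationSpecific are also valid
        backend:
          service:
            name: api-service
            port:
              number: 80
```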
Volume snapshot operations
Snapshots aren't the most reliable backup/restore solution out there when it comes to high-volume workloads. However, if you play your cards right, snapshots can provide zero-downtime backup and restore functionality. This enhancement was launched as beta with Kubernetes 1.17 and provides API support for Kubernetes Container Storage Interface (CSI) plugins to take snapshots of PersistentVolumes and restore them when needed. To ensure snapshots are reliable, users should ensure data consistency across the application, the host OS, and the storage system. If a snapshot is taken before all in-memory application data has been flushed to storage, the snapshot may be corrupt and won't be helpful when required.
With Kubernetes 1.20, the volume snapshot operations feature has moved to general availability. It allows users to take snapshots of a volume in a standardized manner, ensuring reliability. The snapshot operations are portable and can be used across various Kubernetes environments and supported storage providers. These snapshot operation primitives can be used to build advanced storage administration features for K8s, enabling cluster- and application-level backups. To use this feature, make sure the snapshot controller, the snapshot CRDs, and the validation webhook are bundled with Kubernetes by your distribution.
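Assuming a CSI driver and a VolumeSnapshotClass are installed, taking a snapshot and restoring from it might look roughly like this sketch (all names, sizes, and classes hypothetical):

```yaml
# Take a snapshot of an existing PVC.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: data-pvc
---
# Restore: create a new PVC that uses the snapshot as its data source.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restored
spec:
  storageClassName: csi-sc
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```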
Graceful node shutdown
This new feature launched as alpha with the latest release of K8s (1.20). It addresses a problem many users and cluster administrators face during node shutdown: pods don't always follow the expected termination lifecycle. When the node a pod is running on shuts down, the kubelet may not be aware of it, so running pods can be killed abruptly rather than terminating as expected. The GracefulNodeShutdown feature targets this issue by making the kubelet aware of an impending node shutdown, leading to graceful termination of running pods.
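A minimal sketch of enabling this in the kubelet configuration, assuming the alpha feature gate and the two grace-period fields from the KubeletConfiguration API (values here are arbitrary examples):

```yaml
# Illustrative KubeletConfiguration fragment for graceful node shutdown.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true        # alpha in 1.20, off by default
# Total time the node delays shutdown to let pods terminate:
shutdownGracePeriod: 30s
# Portion of that window reserved for critical pods:
shutdownGracePeriodCriticalPods: 10s
```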
Horizontal Pod Autoscaling Rate Controls
The Horizontal Pod Autoscaler (HPA) automatically scales the number of pod replicas in a workload based on observed metric values. This is quite helpful for applications subject to fluctuating traffic. Starting with Kubernetes 1.18, the HorizontalPodAutoscaler comes with an optional behavior field. Users can now set different scale-up and scale-down rates for different applications based on their functionality and known behavior.
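For example, a sketch of an HPA using the autoscaling/v2beta2 API with the behavior field; the target Deployment name and the thresholds are hypothetical:

```yaml
# Illustrative HPA: scale up quickly, scale down slowly.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      policies:
      - type: Pods
        value: 4             # add at most 4 pods per minute
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 10            # remove at most 10% of replicas per minute
        periodSeconds: 60
```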
New Kubernetes features: Many in the 2021 pipeline
These are just some of the many features that users have been looking forward to. Several others, like TLS 1.3 support, immutable Secrets and ConfigMaps, and the Kubernetes API Server Egress Proxy, became available at varying maturity levels over the last year. Kubernetes has a lot of new features in the pipeline, and several await general availability. With the slowed-down release cadence, Kubernetes is trying to extend support for previous versions, making it easier for organizations to catch up with the rapid pace of development. And these newer features are bound to encourage organizations to make that transition. It'll be interesting to see how the new release strategy pans out for Kubernetes and which features graduate to GA with Kubernetes 1.21.
Featured image: Shutterstock