Kubernetes homes in on multicloud and hybrid cloud for 2020

With KubeCon + CloudNativeCon drawing more than 12,000 attendees in November, it’s no surprise that you can’t have a conversation about enterprise IT without bringing up Kubernetes. In addition to becoming the de facto standard for container orchestration in the cloud, Kubernetes is becoming the foundation for both hybrid IT and edge computing, with an emphasis on infrastructure abstraction. That’s a significant development considering that, until now, Kubernetes was seen as something exclusively for people running Docker and containers. Not so anymore.

A recent report by Sumo Logic suggests that while Docker and generic container adoption seem to have peaked and slowed down, the Kubernetes project continues to build momentum. Around this time two years ago, a blog post from Apprenda counted about 7,000 people with Kubernetes skills listed on LinkedIn. Check today and there are more than 24,000 in the U.S., 7,000 in the UK and 3,000 in India. The fact that Docker’s enterprise business is now owned by a Kubernetes distribution vendor is just further evidence of how sweeping the Kubernetes revolution is.

Cloudy with a chance of enterprise

For as long as most people can remember, “cloud-native” has simply referred to stuff that runs in the cloud. In fact, Liz Rice, chair of the CNCF technical oversight committee and vice president of open-source engineering at Aqua Security, pointed out just that during her keynote on day 1 of KubeCon + CloudNativeCon. It wasn’t until day 2 that Rae Wang, group product manager at Google, made it clear that 2020 was going to be all about multicloud and hybrid IT.

This is a major turning point for Kubernetes and the CNCF, and it reinforces the growing consensus that hybrid environments are the future of cloud-native computing. That brings us back to the emphasis on infrastructure abstraction and the cloud-native experience the enterprise is looking to Kubernetes for. Infrastructure abstraction is what gives you the portability to run anywhere, on any cloud or on-premises, and that portability is the promise Kubernetes holds for enterprise IT.

Extending the ecosystem

(Image: Flickr / Pedro Szekely)

While the second beta of Kubernetes 1.17 landed during the conference, the release was in no way the focus of the event. There was, however, quite a bit of focus on custom resource definitions (CRDs) and the number of enterprise organizations embracing this extension model. CRDs reached general availability in Kubernetes 1.16, the latest stable release, and make it possible for Kubernetes not only to integrate with external resource management tools but to govern and control them as well. In other words, CRDs are the tentacles with which Kubernetes extends its ecosystem, and those tentacles have only just reached maturity.

CRDs extend the Kubernetes API so that users can define and manage new types of resources. It may sound simple, but it’s a powerful concept: you can describe almost anything with a single definition, from something as trivial as a “hello world” to something as complex as a data warehouse. CRDs also underpin “operators,” controllers that encapsulate human operational knowledge and encode it in software. It’s with the help of these CRDs and their operators that Kubernetes makes it possible to define “cloud-like” services on-premises, as well as on any Kubernetes cluster.
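To make that concrete, here is a minimal sketch of a CRD manifest using the v1 API that went GA in Kubernetes 1.16. The `databases.example.com` group and its fields are hypothetical, purely for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>
  name: databases.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        # Structural schemas are required in apiextensions.k8s.io/v1
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                storageGB:
                  type: integer
```

Once this is applied, `kubectl get databases` works like any built-in resource, and an operator watching `Database` objects can reconcile the real infrastructure they describe.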

Edging the Kube

(Image: Flickr / Brandon Baker)

One reason anticipation around the most recent Kubernetes release was so muted is that organizations and developers alike are more or less satisfied with the platform’s stability and are now looking beyond the traditional scope of containers. Edge computing is one of the things that lie beyond that scope, and it is becoming a major part of the Kubernetes ecosystem. As most of us are already aware, the majority of data generated today is both created and consumed at the edge.

Earlier this year, Microsoft released Azure Data Box Edge, a physical appliance that addresses exactly this challenge. It reduces latency and makes it easy to transfer and analyze terabytes or petabytes of edge data before sending it to the Azure Machine Learning service for processing, further expanding Azure’s already wide range of file storage solutions.

After hybrid IT, edge computing garnered the most interest at the conference. Wind River announced Wind River Cloud Platform, which uses Kubernetes to manage “edge-cloud” infrastructure and helps users deploy and manage a physically distributed vRAN infrastructure. K3s from Rancher Labs was another edge highlight, announced as generally available: a lightweight Kubernetes distribution optimized for the edge that has recently been finding applications beyond the edge as well.

MacGyvering the mesh

The star of the show at KubeCon this year was without a doubt the idea that “everything goes with everything,” with Kubernetes as the control plane to manage it all. This is evident from the number of organizations falling over themselves in an attempt to solve the multicloud puzzle and get all the pieces working together. In addition to HPE debuting its container platform for hybrid cloud implementations, Agile Stacks launched its KubeFlex platform to accelerate software delivery across on-premises and hybrid clouds.

Other notable organizations making waves in hybrid IT include New Relic, Rafay, Banzai Cloud, Robin.io and Diamanti. Diamanti announced Spektra, billed as the first hybrid cloud Kubernetes control plane focused on application and data persistence. Pipeline 2.0 from Banzai Cloud lets organizations deploy and manage their own clusters, along with additional tools to help create and maintain services that run across hybrid environments.

Automating Kubernetes

Another important and ever-growing part of the Kubernetes ecosystem is CI/CD and the GitOps workflows that govern it. GitOps is the practice of using source control as the central system of record, with automation reconciling the running system against what is declared in Git, bringing sanity to the chaos that is modern-day DevOps. While GitOps was introduced by Weaveworks, it has been adopted at an almost unprecedented pace, and a number of tools are now emerging, and even merging, to help automate Kubernetes.
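As a sketch of what GitOps looks like in practice, here is a minimal Argo CD-style `Application` manifest (the repository URL, paths and names are placeholders): the manifest points at a Git repo as the desired state, and a controller continuously syncs the cluster to match it.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    # Git is the system of record: manifests live here, not in the cluster
    repoURL: https://github.com/example/app-manifests.git
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift back to the declared state
```

A merged pull request against the repo is all it takes to roll out a change; the controller notices the new commit and converges the cluster, which is the workflow tools like Argo Flux and Jenkins X aim to automate.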

Upbound, the makers of Rook and Crossplane, announced a collaboration with GitLab to manage multicloud services from a single control plane in GitLab, letting users run CI/CD pipelines across different public clouds. Another interesting collaboration, between Intuit, Weaveworks and AWS, produced Argo Flux, an open-source project to drive GitOps application delivery for Kubernetes. Additionally, CloudBees announced a GUI for Jenkins X, which has quickly become a GitOps tool of choice for the enterprise.

Beyond containers

Kubernetes in 2020 is going to be about reaching new markets and finding applications outside the “container.” In addition to hybrid IT, edge computing, IoT, GitOps and CI/CD, Kubernetes is maturing to the point where it is trusted with sensitive workloads and stateful apps like FTP servers and databases. In a presentation at KubeCon + CloudNativeCon, Nozzle Corp. showed how it moved a 20-terabyte MySQL database from Azure to GCP in just under an hour using Vitess.

Much as Docker resigned itself to the fact that Kubernetes was the better orchestrator, the CNCF and the Kubernetes community haven’t really been given a choice in how the cloud-native story is playing out. The writing is on the wall: if you want the enterprise to shift to the cloud, you need to bring the cloud to the enterprise, and that means on-premises and hybrid environments.

Featured image: Shutterstock
