Kubernetes is an open-source container orchestration platform created to automate the deployment, scaling, and management of containerized applications. Google engineers Joe Beda, Brendan Burns, and Craig McLuckie developed the platform before it was open-sourced in mid-2014. It is now maintained by the Cloud Native Computing Foundation (CNCF) and backed by major tech companies such as Google, Microsoft, AWS, Intel, Red Hat, IBM, and Cisco. Many cloud providers also offer managed Kubernetes infrastructure as a service. In this article, we will see how any user, be it a seasoned IT pro or a newbie, can get started with Kubernetes.
How Kubernetes works
Before you can get started with Kubernetes, you should know about the general architecture of the platform. Kubernetes architecture uses several concepts and abstractions, some of which are variations of familiar notions (like services and namespaces), while others are specific to Kubernetes, such as clusters, nodes, pods, ingress, volumes, replica sets, and the dashboard.
Kubernetes runs on top of an operating system and interacts with pods of containers running on nodes. The Kubernetes master receives commands from an administrator and relays those instructions to the worker nodes. Working with many services, it automatically determines which node is best suited for a task, allocates computing resources, and assigns pods on that node to fulfill the requested work. Control over the app containers happens at this higher level, giving the user finer control without having to micromanage each separate container or node.
To get started with Kubernetes, you first need to understand the basic terminology for its components and how they relate to one another.
Cluster: A cluster always needs a master node, where the regulating services (known as the master components) are installed. These services can be grouped on a single machine or spread over multiple machines for redundancy. They control workload, scheduling, and communications.
Kubernetes Master: The master is a combination of three processes that run on a single node in the user's cluster, which is designated as the master node. These processes are kube-apiserver (which validates and configures data for the API objects), kube-scheduler (a daemon that assigns newly created pods to suitable nodes), and kube-controller-manager (a daemon that embeds the core control loops shipped with Kubernetes).
Pods: A pod is a group of one or more app containers that share storage and network. The containers in a pod are deployed and managed as a single unit and are always co-located and co-scheduled on the same machine, much as the components of a conventionally deployed application would run side by side on one host.
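As a sketch, a minimal pod manifest might look like the following (the names, labels, and image here are illustrative, not prescribed):

```yaml
# A hypothetical pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Everything under spec.containers is scheduled together onto one node and shares the pod's network namespace.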
Services: Pods are ephemeral, and Kubernetes does not guarantee that a given physical pod will be kept alive. Instead, a service is a logical set of pods that acts as a gateway, allowing clients to send requests to the service without needing to keep track of which physical pods currently back it.
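A simple service definition illustrates this decoupling; the selector, not a list of pod names, determines which pods receive traffic (names and labels below are illustrative):

```yaml
# A hypothetical ClusterIP service that routes traffic to any
# pod carrying the label app: demo, whichever pods those happen to be.
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 80
```

If a pod dies and is replaced, the new pod matches the same selector and the service keeps working unchanged.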
Kubernetes proxy service: This proxy service runs on each node and helps make services available to external hosts. It forwards requests to the correct containers and can perform simple load balancing. It also ensures that the networking environment is predictable and accessible while remaining isolated.
Kubernetes volumes: A Kubernetes volume is much like a container volume in Docker, but it applies to the whole pod and is mounted on all containers in the pod. Kubernetes guarantees that the data is preserved across container restarts; the volume is removed only when the pod is destroyed. A pod can also have several volumes, possibly of different types, associated with it.
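A short example shows a volume shared between two containers in one pod; this sketch uses an emptyDir volume, which lives exactly as long as the pod (container and volume names are illustrative):

```yaml
# Hypothetical pod where two containers mount the same emptyDir volume.
# The data survives container restarts but is deleted with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
```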
Replica sets: This is a grouping mechanism that lets Kubernetes maintain the declared number of instances for a given pod. A ReplicaSet uses a selector to recognize all the pods linked with it. Because the replicas are created from the same pod template, the set provides high availability and redundancy for pods.
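The selector-plus-template relationship can be seen in a minimal ReplicaSet manifest (the names below are illustrative):

```yaml
# Hypothetical ReplicaSet that keeps three identical pods running.
# The selector must match the labels in the pod template.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

If one of the three pods is deleted, the ReplicaSet notices the shortfall via the selector and creates a replacement from the template.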
Nodes: Nodes are the worker machines in a Kubernetes cluster. A worker machine can be physical or, more commonly, virtual, and it includes all the services needed to host a pod. A node has two main components: the kubelet and kube-proxy. The kubelet is the leading service on a node; it regularly takes in new or modified pod specifications and ensures that pods and their containers are running in the desired state. Kube-proxy is a proxy service that runs on each worker node to handle individual host subnetting and expose services to external hosts, forwarding requests to the correct containers and pods across the various isolated networks in a cluster.
How to set up Kubernetes and use it
There are several ways to set up and get started with Kubernetes. Some of these options are:
Minikube
Minikube is a tool that runs a single-node Kubernetes cluster in a virtual machine on your personal computer, on Linux, macOS, or Windows. The detailed steps for installing and setting up Minikube are available here. Once done, the user can deploy a containerized application to the cluster with a deployment configuration. This local-box approach is useful for introductory, hands-on practice.
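The deployment configuration mentioned above is itself a YAML manifest. As a hedged sketch, a file like the following (all names and the image are illustrative) could be applied to a Minikube cluster with kubectl apply:

```yaml
# Hypothetical Deployment for a local Minikube cluster,
# e.g. saved as deployment.yaml and applied with
# `kubectl apply -f deployment.yaml`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25
        ports:
        - containerPort: 80
```

A Deployment wraps a ReplicaSet and adds rollout management, which is why it is the usual choice for deploying an application rather than creating bare pods.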
Katacoda
Katacoda is another easy option to test-drive a Kubernetes cluster. It provides a temporary environment that is recycled when the user finishes with it, along with several Kubernetes scenarios that can be used right out of the box in an interactive browser-based terminal. A user just needs to sign in to start working on these scenarios. Detailed steps can be viewed here.
Google Kubernetes Engine
Google Kubernetes Engine (GKE) allows a user to get up and running with Kubernetes in no time. It provides managed Kubernetes clusters on Google Cloud Platform (GCP) without the need for any installation. A user can create a free-tier account and start a multi-node cluster in a short time. The details about setting up GKE are available here. Users can also take advantage of advanced cluster-management features provided by GCP, such as automatic load balancing, scaling, and upgrades, as well as logging and monitoring.
Microsoft Azure Kubernetes Service
Azure Kubernetes Service (AKS) offers serverless Kubernetes and an integrated continuous integration and continuous delivery (CI/CD) experience with enterprise-grade security and governance. A user only needs to create a free-tier account to spin up a Kubernetes cluster, while enjoying native integration with various Microsoft services such as the Visual Studio Code Kubernetes tools, Azure DevOps, and Azure Monitor. More details are available here.
Amazon Elastic Kubernetes Service
Amazon Elastic Kubernetes Service (EKS) is a managed service that makes it easy to run Kubernetes on AWS without standing up or maintaining a Kubernetes control plane. A user can create a free-tier account with AWS to create a Kubernetes cluster and work on it. Amazon EKS supports both Windows and Linux containers. Additional details and use cases are available here.
Kubernetes Dashboard is a web-based UI that lets users visualize resource utilization, deploy new containerized applications into clusters, and troubleshoot applications. The Dashboard is not installed by default and must be deployed separately and configured for the Kubernetes environment. Here are the details about getting started with the Kubernetes Dashboard.
Get started with Kubernetes: Why you should use it
As applications grow and more containers keep getting deployed across multiple servers, operating them becomes more complicated and challenging. To make this easier, Kubernetes can be used to manage applications built in a microservices architecture. The platform provides an open-source API that regulates how and where those containers run, and it can orchestrate a cluster of virtual machines, scheduling app containers onto them based on their available compute resources. Kubernetes can help manage service discovery, track resource allocation, incorporate load balancing, check the health of individual resources, scale based on compute utilization, and enable apps to self-heal by automatically restarting containers.