How KubeVirt bridges the gap between containers and VMs

Since containers arrived on the scene, containerization has grabbed the attention of nearly every enterprise. Most are either already running containerized workloads or piloting them. Containers vs. VMs has been a constant debate for what seems like ages. Many organizations recognize the shortcomings of VMs and have been migrating their workloads to containers, while others have been hesitant or simply prefer virtualized workloads.

VMs aren’t dead just yet

Containers were long considered the beginning of the end for virtualized workloads. It was easy for organizations to get swept off their feet by the idea of isolated computing components that require only a fraction of the resources consumed by VMs. However, as many organizations have since realized, VMs haven’t reached the end of the road just yet. To understand exactly why getting rid of VMs isn’t possible yet, we need to look at the features that set VMs and containers apart.

A VM, simply put, is an abstraction of a physical machine. Multiple VMs can run on a single host server, with each VM completely isolated from the host and from the other VMs on the network. VMs can be built to suit workload requirements: they can run entire applications or just parts of them. However, VMs have their shortcomings. Each VM requires its own full OS image, which increases storage overhead. VMs are also slower to boot and run, making them less than ideal for many modern workloads. Containers, by contrast, are smaller abstractions that share the host kernel, so they require less storage. Containers are also faster and more portable, which makes them ideal for CI/CD workloads. On the other hand, containers can be tricky when it comes to networking, and because they all share a kernel, they present a larger attack surface.

The migration challenge

The differences between the two are stark. However, a lot of thought and planning needs to go into making, and executing, the decision to migrate. The migration should be gradual, meaning only a few components of your virtualized workloads are decomposed per iteration. During the migration, you will therefore have both VMs and containers working in tandem. If migrating certain components is impossible, or simply not practical given the cost, you can still decompose the components that don’t pose the same problem. In either situation, VMs and containers end up running side by side in your workloads. So how do you ensure your containers and VMs work seamlessly with each other?

KubeVirt is the answer

Many organizations still run the majority of their workloads on VMs. Organizations that would like to incorporate containers into their workloads to varying degrees sometimes shy away from the migration altogether because of the additional cost and a lack of the necessary skill set. To manage both VMs and containers, an organization would need to invest in two different platforms (e.g., vSphere and Kubernetes) and hire staff who specialize in each. Some organizations might not find the migration worth the trouble. This is where KubeVirt comes into the picture: with KubeVirt, you can run your VMs in Kubernetes pods alongside containers.

KubeVirt, originally created by Red Hat, has been adopted by the CNCF. It is an open-source project that extends Kubernetes through the Custom Resource Definition (CRD) API. In Kubernetes, a resource is a collection of API objects of a given kind; the CRD API lets users define custom resources with their own name and schema that, once created, are managed by Kubernetes like any built-in resource. KubeVirt provides a CRD called VirtualMachine. This resource defines the properties of the VM, such as the machine and CPU type, the amount of RAM, the number of vCPUs, and the NICs the VM requires. You can’t simply lift and shift entire VMs into Kubernetes pods and call it a day: launching the VMs takes more than the VM resources alone. While Kubernetes takes care of networking, scheduling, and storage, we need an agent running on the cluster that provides the virtualization functionality for our VMs. That agent is KubeVirt. Let’s take a look at its main components.
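To make this concrete, here is a minimal sketch of a VirtualMachine manifest, loosely modeled on the KubeVirt quickstart examples (the VM name, label, and container disk image below are illustrative placeholders, not anything prescribed by the project):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                    # illustrative name
spec:
  running: false                   # define the VM without starting it
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio        # paravirtualized disk bus
        resources:
          requests:
            memory: 1Gi            # RAM requested for the guest
      volumes:
        - name: rootdisk
          containerDisk:
            # ephemeral root disk shipped as a container image (demo image)
            image: quay.io/kubevirt/cirros-container-disk-demo

Applying a manifest like this only registers the VM definition with the cluster; the VM itself is started later through KubeVirt, as shown further below.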

virt-controller: This agent runs cluster-wide and is responsible for cluster-level virtualization management. virt-controller watches for new VM objects and creates the pods in which those VMs will run. It also monitors the VM/VMI custom resources and manages the lifecycle of their associated pods.

virt-handler: Like virt-controller, this component continuously watches for updates to the VM object, and its job is to modify the VM so that it meets the specified state. An instance of virt-handler runs on every host.

virt-launcher: This component runs as the primary container of the VM’s pod and is responsible for launching the VM process whenever the VM object associated with that pod is scheduled to run. virt-launcher provides the cgroups and namespaces used to host the VM process. When a VM is set to start, virt-handler passes the VM object to virt-launcher, which uses its local libvirtd instance to start the process. virt-launcher then monitors the VM process and terminates once the process has exited. If Kubernetes tries to shut the pod down before the VM has exited, virt-launcher forwards the termination signals to the VM and holds off the pod’s shutdown until the VM has stopped cleanly.

libvirtd: An instance of libvirtd runs in every VM/VMI pod and is used by virt-launcher to manage the lifecycle of the VM process. When virt-handler signals that a domain (the libvirt term for a running VM) should be created, virt-launcher carries that out through libvirtd.
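Assuming the illustrative demo-vm manifest above has been applied, the way a VM flows through these components can be traced with a few standard commands (virtctl is KubeVirt’s optional command-line client; the VM name is the hypothetical one from the earlier sketch):

kubectl apply -f demo-vm.yaml   # virt-controller notices the new VirtualMachine object
virtctl start demo-vm           # ask KubeVirt to start the VM
kubectl get vmis                # the VirtualMachineInstance backing the running VM
kubectl get pods                # a virt-launcher-demo-vm-... pod hosts the VM process
virtctl console demo-vm         # attach to the guest's serial console

The pod listed by the kubectl get pods command is the virt-launcher pod described above; deleting the VirtualMachine resource tears it down along with the VM process.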

KubeVirt can be installed into, or removed from, an existing Kubernetes cluster without building one from scratch. This allows users to add VMs to workloads that are already fully containerized. With KubeVirt, organizations don’t have to spend days containerizing their applications before starting a migration: they can simply move their VMs to Kubernetes and decompose them into containers gradually. Similarly, KubeVirt saves the hassle of fragmenting complicated monolithic applications that are hard to containerize.
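As a rough sketch of what that installation looks like, the KubeVirt documentation follows this pattern (the release tag shown is only an example; check the project’s install guide for the current version and URLs):

# Deploy the KubeVirt operator into an existing cluster
export VERSION=v1.2.0   # example release tag, not necessarily the latest
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml

# Create the KubeVirt custom resource; the operator then rolls out
# virt-controller, virt-handler, and the other components
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-cr.yaml

# Wait until KubeVirt reports itself as available
kubectl -n kubevirt wait kv kubevirt --for condition=Available

Once the deployment reports itself as available, existing containerized workloads keep running untouched while VirtualMachine resources can start being added alongside them.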

KubeVirt, VMs, and containers: Peaceful coexistence

KubeVirt is the tool for organizations still on the fence about migration. Many have realized that VMs aren’t going away, and in practice a workload runs on both VMs and containers; KubeVirt is the way to bridge the gap between the two. Not everything needs to be decomposed into containers: heavier, less frequently modified components can stay on VMs while smaller, more dynamic components run in containers. With KubeVirt, VMs and containers won’t just coexist, they’ll work seamlessly together. Going forward, many organizations will use this tool to find the right balance between the traditional and the modern ways of computing. As 5G rolls out in the not-so-distant future, KubeVirt will be able to help organizations even more: with increased network speeds, the latency and performance issues that occur in such hybrid workloads should shrink further. KubeVirt is still young and evolving every day, and it will be exciting to see how much more it has to offer.
