VMware’s response to a world gone Docker

Anyone who’s worked with containers knows how popular Docker is. The appeal of containers is the operational flexibility they offer at a relatively low resource cost compared to virtual machines, the core offering of VMware. Does this mean that people are going to stop using virtual machines altogether? Not by a long shot. Most enterprises still feel that the stability and security of VMs outweigh the flexibility and low resource requirements of containers, not least because the weaker isolation containers provide makes them an easier target for attackers. But if history is any guide, new technology always hits snags on its way to popularity. These are just snags, and once developers iron out the remaining kinks, there’s no hiding from the fact that containers are here to stay.

If you can’t beat ’em, join ’em

Knowing that containers are the way of the future and that what they really lack is the security and stability VMs offer, VMware had only two options: get on board or get out of the way. It made the smart choice by getting on board, and is now working toward making containers enterprise-ready. The obvious question is, “What can VMware bring to the table?” The answer is a whole hell of a lot, because this is a company with at least one product in nearly every data center in the world, and deep pockets, too.

Let’s look at how VMware is “pushing” containers to be production-ready so that it’s on the bandwagon when it finally takes off. Before containers, each instance of a program needed its own OS along with its own libraries and resources. Containers virtualize at the OS level, so a single Linux kernel and one set of resources are enough to run a large number of instances in parallel. This is crucial to DevOps, and any enterprise adopting that approach will lean toward containers, especially in development.
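To make the kernel-sharing point concrete, here is a minimal, hypothetical Dockerfile; the base image and application file are illustrative. Note that the image ships no kernel of its own:

```dockerfile
# Hypothetical minimal image: no kernel and no init system, just the
# application and the libraries it needs.
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app/app.py
# Every container started from this image shares the host's Linux kernel.
CMD ["python3", "/app/app.py"]
```

Dozens of containers can be started from this one image with `docker run`, each isolated at the process level while sharing a single kernel, which is where the resource savings over one-OS-per-instance VMs come from.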

Problems with managing containers

Containers aren’t just about virtualizing at the OS level; they’re also about breaking applications down into components and processes. One application that ran on a single virtual machine could potentially be broken down into thousands of containers. That makes for good portability, but with thousands, sometimes hundreds of thousands, of containers to manage, operations become cumbersome. Alongside the obvious security concerns, this is one of the main reasons enterprises are not using containers in production (yet).

VMware’s response

VMware’s decision to support containers has led to the launch of two products aimed at different clienteles: vSphere Integrated Containers (VIC) and Photon. VIC was designed to work with Docker and lets VMware customers use the familiar vSphere console and management tools to manage and control containers. In effect, it treats each container like a mini VM. vSphere is VMware’s flagship virtualization platform, and VMware recognized that the main area where containers were lacking was security. The company has spent close to 15 years building a fortress-like hypervisor that already has the confidence of enterprises everywhere. By integrating Docker into its vSphere virtualization platform, VMware not only gave developers the familiar Docker API to work with but also secured it in the way enterprises expect when running their mission-critical applications.
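A sketch of what that looks like from the developer’s side (the Virtual Container Host address below is a placeholder, not a real endpoint): the stock Docker client is simply pointed at the VCH instead of a local daemon.

```shell
# Point the standard Docker client at a Virtual Container Host endpoint
# (placeholder address); vSphere provisions each container as a small VM.
export DOCKER_HOST=tcp://vch.example.com:2376
docker run -d nginx        # familiar commands, unchanged workflow
docker ps                  # containers show up as usual
```

The point of VIC is that nothing in this workflow changes; the isolation is supplied underneath by the hypervisor.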

Virtual container hosts and clones

With the strong isolation boundary provided by vSphere, using Docker in production becomes a much more appealing option. VMs have always been able to host containers, but in practice each container was given its own VM, and therefore its own OS. That might seem like a massive waste of resources, but it’s what most enterprises were doing until now. With VIC, a Virtual Container Host (VCH) launches a paper-thin copy of the OS with every container. The cool thing about the VCH is that its boundaries are completely dynamic and it consumes only the resources of the containers actually running; when a container is destroyed, its resources go back to the pool. Combined with a clone technology that lets the OS be cloned and forked as required, this allows containers and conventional VM applications to run side by side, bridging the gap between teams that want to start using containers and those using virtual machines exclusively. Where an enterprise once ran a thousand virtual machines for a thousand applications, with a VCH those applications can run as containers spread across far fewer virtual machines.


The whole point of containers is that each instance of an application doesn’t need its own OS and resources. The need for basic infrastructure management, however, doesn’t magically disappear when you use containers. Coming from the vendor of the most popular virtualization platform out there, VMware’s move to adopt containers has made them a whole lot more viable for the average developer.

If there’s one thing the software industry has learned from Microsoft, it’s that giving users the option to choose beats trying to monopolize the market; what Microsoft forgot was that people need to like you, too. This is probably why VMware has gone one step further and also integrated Kubernetes, the open-source container manager that originated at Google, into its virtualization platform. Kubernetes is commonly used with Docker to manage and keep track of containers, and the integration gives VMware users access to one of the best Docker orchestration tools out there.
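As a taste of what that orchestration buys you, here is a hypothetical Kubernetes Deployment (the names and image are illustrative): it declares how many replicas of a containerized service should exist, and Kubernetes keeps that many running, rescheduling containers when a node fails.

```yaml
# Hypothetical Deployment: keep three replicas of a web container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # Kubernetes maintains this count automatically
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # stand-in image
        ports:
        - containerPort: 80
```

At the scale of thousands of containers, this declarative model is what makes the management problem tractable: you state the desired end state, and the orchestrator does the bookkeeping.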

The ‘bare-bones’ kernel

Another step toward promoting containerization is Photon OS. Containers share a single Linux kernel, which lets them be packaged once and run on any platform regardless of inconsistencies in the environment. So VMware built an ultra-lightweight, bare-bones Linux runtime from the ground up with one priority: running containers well. Photon OS wraps itself around each container with the help of a clone technology that lets the OS fork itself into multiple instances; the clones can be created and deleted in an instant and lend containers the much-needed isolation. The Photon Platform consists of the Photon Machine and the Photon Controller. The Machine couples the scaled-down OS with a scaled-down version of the ESXi hypervisor, referred to as a microvisor. So not only does each container get its own OS, it gets a mini security guard, too! Unlike VIC, Photon does not rely on Docker as its underlying technology: where VIC targets users migrating from VMs to containers, Photon is aimed at new developers looking to adopt out-and-out containerization.


The best of both worlds

With both these efforts, VMware is effectively merging the two technologies, VMs and containers, to get the best of both worlds. With an ultra-lightweight OS and a microvisor, it has managed to lend containers the strong isolation they lacked, keeping them from interfering with each other. Is running each container atop a scaled-down OS and hypervisor the ultimate way to take containers to production? Only time will tell, but this is definitely a step forward, for both containers and virtual machines.
