Docker and Containers (Part 1) – Understanding Containers

Introduction

Containers have become a hot topic in the new cloud-connected world of information technology and services, especially since the open-source Docker platform was released a couple of years ago. As more and more organizations jumped on board the Docker train, Microsoft naturally saw an opportunity and has been integrating container technology and the Docker engine into its upcoming Windows Server 2016 operating system.

In this series of articles we’ll be exploring the basic concepts of containers and the Docker platform; how containers are implemented in Windows Server, Hyper-V, and Microsoft Azure; the new Azure Container service; and the various tools available for orchestrating container-based solutions.  

Let’s begin in this first article by examining what containers are, what they can be used for, and why you should care about them.

What are containers?

Containers are basically a type of virtualization technology, but one that has recently become very popular through an open-source project called Docker. The concept of virtualization of course originated back in the days of mainframe computing, where a single physical machine (the mainframe) could run multiple independent and isolated virtual machines. This was accomplished by “virtualizing” physical resources like processor, memory, disk, and network into corresponding virtual resources.

Virtualization first made its appearance on the Microsoft Windows platform in Windows 3.1 with its Virtual Memory Manager (VMM), which managed how information in physical memory (RAM) could be paged to disk. This gave the illusion that the computer had more RAM than it actually did and allowed more applications to run concurrently. But virtualization really came into its own on the Windows platform with the incorporation of hypervisor-based hardware virtualization through the new Hyper-V role introduced in Windows Server 2008. Before this, organizations that wanted to run Windows-based virtual machines had to use a third-party product like VMware ESX. Beginning with Windows Server 2008, however, they could run multiple Windows-based (and later Linux-based) virtual machines on a single physical host machine without the need for a third-party hypervisor.

While hardware virtualization continues to be widely used in enterprise environments of all sizes, there’s another type of virtualization that’s even older but that until recently was available only on the UNIX/Linux platform. This other kind of virtualization is generally referred to as operating system (OS) virtualization, but it also goes by another name: containers. To understand what containers are, it helps to compare them with the far better-known (at least to Windows admins) technology of virtual machines.

How do containers compare with virtual machines?

Figure 1 below shows an example of hardware virtualization where two virtual machines are running on a physical host machine using hypervisor technology as implemented in the Hyper-V role on the Windows Server platform:

Figure 1: Illustration of hypervisor-based hardware virtualization which enables multiple virtual machines to run on a single physical host machine.

As you can see from the diagram, each virtual machine requires its own separate copy of the guest operating system files (kernel and libraries) plus the code for any applications running in the virtual machine. This means that if you want to use an application installed in the first virtual machine, you first have to start that virtual machine and boot its guest OS before the application itself can begin to execute. And because each virtual machine needs to run an in-memory instance of the guest OS alongside the application code, virtual machines consume a lot of the host machine’s resources.

Now let’s compare this with an example of OS virtualization as shown in Figure 2 where we have two containers running on a host machine that has support for OS virtualization:

Figure 2: Illustration of OS virtualization which enables multiple containers to run on a single physical or virtual host machine.

You should easily be able to see the differences between this diagram and the previous one. These differences are as follows:

  • There is no hypervisor layer. In hardware virtualization (as implemented in Hyper-V) the hypervisor is a layer of software that creates multiple isolated virtual machines sharing the hardware resources (CPU, RAM, disk, network) of the underlying physical host machine. OS virtualization, however, doesn’t use hypervisor technology; it uses a different approach that we’ll examine shortly.
  • Each container contains only application code (here only one app per container, though you could easily have more). Containers do not contain installed instances of the guest OS; instead, each container shares the OS files (kernel and libraries) of the underlying host OS (a quick sketch of this kernel sharing follows this list). Because of this, containers use far fewer host resources (for example, memory and disk) than virtual machines; hence the boxes representing containers are drawn thinner than those for virtual machines in the previous diagram.
  • The underlying host machine on which the containers are being hosted can be either a physical machine or a virtual machine. This provides flexibility for how container technology can be implemented both in private datacenters and by cloud service providers.
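To make the kernel sharing concrete, here’s a minimal sketch using the Docker command line on a Linux host. It assumes Docker is installed and the small alpine image is available, and it’s purely illustrative rather than specific to how Windows Server 2016 will expose containers:

    # Kernel version reported by the host:
    uname -r
    # Kernel version reported from inside a container:
    docker run --rm alpine uname -r
    # Both commands print the same kernel version, because the container runs
    # on the host kernel rather than booting a guest OS of its own.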

Recall that when you want to use an application installed in a virtual machine, the virtual machine must first be running. If it isn’t, you have to start it, and since booting the guest OS may take some time, you may wait seconds (or even minutes) before you can use your application. Containers, however, share the OS files of the underlying host, so the OS files an application needs are always present. When you want to run an application installed in a container, the application therefore starts almost immediately, as if it were the only application installed on a fresh installation of the underlying OS. This makes containers an ideal platform for developing and testing applications prior to deploying them in production. They’re also ideal for cloud-based applications and services, since they allow many more applications to run in isolated environments on a given set of physical resources (racks of servers) than you could run using virtual machines.
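If you have a Linux machine with Docker installed, you can get a rough feel for this startup difference yourself. The sketch below assumes the alpine image has already been pulled; exact timings will of course vary:

    # Time how long it takes to start a container, run a command, and exit:
    time docker run --rm alpine echo "container started"
    # This typically finishes in a fraction of a second, because no guest
    # operating system has to boot before the application code can run.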

So how does all this work? Hypervisor technology is well understood; see the article An Introduction to Hyper-V in Windows Server 2008 from TechNet Magazine if you need a quick refresher. Container technology, however, is new to many of us who work with the Windows Server platform, so let’s briefly explain how it works.

How do containers work?

How container technology is actually implemented for an operating system platform depends on the underlying architecture of that platform. In other words, OS virtualization works one way on Linux (it has actually been implemented in a variety of ways on that platform) and another way on the Windows Server platform (where it’s coming in the next version). For now, though, we’ll leave aside the details of how containers will be implemented in the upcoming Windows Server 2016 operating system and instead describe, at a high level, how containers work in general.

To host containers, the underlying operating system of the container host machine must be able to accomplish two things: isolation and resource control.

Isolation

While a container shares the kernel and libraries of the operating system of the machine that hosts it, the operating system must be designed so that it can isolate any application code running in the container from application code running in any other containers hosted on the same machine. The way this generally works is that the operating system presents a virtualized namespace to the container so that the container sees only those resources it is supposed to see: things like the file system, the list of running processes, available network ports, and so on. The container cannot see any other applications running on the host machine, any physical files or folders outside its virtualized file system namespace, or any physical network ports being used by other applications on the host. The container thinks it owns the entire host machine and its operating system, and any applications running in the container think they are the only applications running on that machine.
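Here’s a hedged illustration of this namespace isolation using Docker on Linux (the container name and image here are just examples):

    # Start a long-running container in the background:
    docker run -d --name isolated-app alpine sleep 1000
    # List the processes visible from inside that container:
    docker exec isolated-app ps
    # Only the sleep process (and ps itself) show up; the container cannot see
    # the host's other processes, files outside its own file system namespace,
    # or network ports in use by other applications on the host.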

Resource control

For this to work, however, there must also be a mechanism for resource control, that is, for controlling how much of each of the host machine’s physical resources is allocated to each container the machine hosts. For example, you might limit container #1 to a maximum of 5% of the host machine’s CPU cycles and 10% of its network bandwidth, while allowing container #2 to have 15% of the CPU and 20% of the available bandwidth.
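Docker exposes this kind of resource control through flags on its run command. The sketch below is only illustrative: the values are arbitrary, the exact flags available depend on your Docker version, and network bandwidth limits generally require additional tooling beyond these flags:

    # Give container #1 a smaller relative CPU weight and a 256 MB memory cap:
    docker run -d --name app1 --cpu-shares=512 --memory="256m" alpine sleep 1000
    # Give container #2 a larger CPU weight and a 512 MB memory cap:
    docker run -d --name app2 --cpu-shares=1024 --memory="512m" alpine sleep 1000
    # --cpu-shares sets a relative weight used when containers compete for CPU,
    # and --memory caps the RAM each container may consume.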

Conclusion

This first article in the series has explained what containers are and, in general terms, how they work. Later in the series we’ll examine in detail how container technology will be implemented in the upcoming Windows Server 2016 operating system, but first we need to learn about Docker and why it has led to the huge wave of interest in containers over the last couple of years.
