Containerization Proliferation: The Docker Effect (Part 1)

Introduction

Earlier this month, the IEEE Computer Society announced its picks for the top 9 tech trends in 2016, and it comes as no surprise to anyone who’s been following industry news that one of them is containers. Those readers who follow my WindowsNetworking newsletter probably saw the editorial I wrote a few months ago about containers, and I’m certainly not the only technology writer who has touched on this subject.

The containerization phenomenon isn’t really new; various implementations of the idea have been around since the 1980s in UNIX, and products such as Sandboxie, Virtuozzo and FreeBSD jails were available in the early 2000s. However, with the release of Docker in 2013, the popularity of containers began to catch fire, and the container craze is spreading throughout the IT world. Google has been running services such as Docs and Mail in containers for years, and Amazon, Microsoft and VMware have gotten into the act; when that many of the “big guys” decide a technology is worth investing in, you know it’s hot.

Containerization melds well with another important trend in IT and business that has taken hold recently: the “agile” philosophy, which focuses on rapid delivery and updating of applications, in contrast to older, slower release processes. That’s because containers allow developers to create apps that “just work” without worrying about application conflicts or library and runtime incompatibilities. The container includes the right versions of whatever operating system components, libraries and other dependencies the app needs to run properly.

What containerization is – and isn’t

Some IT pros, and many financial decision-makers who aren’t IT experts but control the purse strings, don’t really understand what containerization is and how it works. If you tell them that it’s virtualization, they think they know what that means, but containers aren’t just a new name for the same old virtual machines, any more than cloud computing is just a new way of referring to the Internet (although there are some who think that, too).

However, just as the Internet is the foundation on which the cloud is built, virtualization is the foundation on which containers are built. At this point, you might be wondering: how, then, do containers differ from traditional VMs? The biggest difference lies in the way these virtualized containers “sit on top of” the underlying host operating system.

You’re probably pretty familiar with the way traditional virtual machines work in Hyper-V or in VMware’s Workstation and Server products. Actually, there are two types of traditional virtual machine technology, built on two different types of hypervisors, often (not very descriptively) referred to as Type 1 and Type 2 hypervisors. Type 1 hypervisors are also called “bare-metal hypervisors” (which is a little more descriptive): they run directly on the hardware without an underlying operating system (in essence, they ARE the operating system). A Type 2 hypervisor is an application that’s installed on top of a host operating system such as Windows or Linux. VMware ESX is considered Type 1, while VMware Server and Workstation are considered Type 2.

Microsoft’s Hyper-V is a bit of a chameleon. Although it is installed as a role in Windows Server and appears to run on top of it like a Type 2 hypervisor, Hyper-V has direct access to the hardware and so is classified as Type 1. The Hyper-V hypervisor actually runs underneath the operating system rather than on top of it; the “host” operating system itself runs in a parent partition on top of the hypervisor, much like a virtual machine.

So what about container technologies such as Docker? Hypervisors, whether they’re Type 1 or Type 2, do their jobs by virtualizing the hardware, and that extra layer of abstraction makes them less efficient than they might otherwise be. Anyone who has worked with VMs knows that if you’re going to run a number of virtual machines on one physical machine, that computer needs to be a powerful one with a high-end processor and plenty of RAM; otherwise, performance is going to suffer.

This is because each VM contains a separate, full copy of an operating system. (For that matter, different VMs can run different operating systems. This is useful in testing when you need one VM running Windows and another running Linux because you have applications written for each, or when you want to run several different versions of an operating system, such as Windows Server 2008 R2, 2012 and 2012 R2, to see how a particular application works on each.)

The problem is that this approach is very resource intensive, and a physical machine that will support running half a dozen VMs is expensive, offsetting some of the cost savings of machine consolidation, although consolidation does reduce the hardware footprint and may save money in the form of reduced square-footage needs. Still, there ought to be a better way, and there is. That’s what containerization is all about.

How containers work

The key to understanding containers is realizing that multiple containers run on a shared operating system. This doesn’t mean an underlying host operating system on which a hypervisor runs; it means that the applications in the different virtualized containers actually use the resources of the same operating system, instead of each having its own copy of the OS installed as with traditional VMs.

There’s just one OS instance instead of several. That, in turn, means the minimum hardware requirements are drastically reduced, lowering cost while at the same time improving performance. Put another way, you can run about twice as many instances on the same hardware as you could with traditional VMs. It’s a win/win situation.

Applications running in containers operate as if they have their own processor, memory, storage and file system. The operating system kernel is abstracted so that each application doesn’t need its own separate OS kernel. Kernel features are shared among the containerized applications, but a separate namespace is created for each container to isolate the environments from one another.
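To make that shared-kernel, separate-namespace idea concrete, here is a minimal sketch in Python using the Docker SDK for Python (docker-py); it assumes a local Docker Engine is running and that the small alpine image can be pulled, and the container names are just examples. Both containers print the same kernel release, because they share the host’s kernel, yet each reports its own hostname thanks to its own namespace.

import docker

client = docker.from_env()

for name in ("demo-a", "demo-b"):
    output = client.containers.run(
        "alpine:3.19",                          # tiny Linux userland image
        ["sh", "-c", "uname -r && hostname"],   # print kernel release, then hostname
        hostname=name,                          # each container gets its own hostname
        remove=True,                            # delete the container after it exits
    )
    print(name, "->", output.decode().split())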

Of course, this means that all of your containers have to run on the same operating system kernel, so you can’t mix Linux instances and Windows instances on one host the way you can with traditional virtual machines. That’s one of the limitations of containerization, and something you have to take into account when deciding whether containers or VMs are the better solution for your particular use case.

A big “plus” for containers is the ability to use them to package applications that can then be run almost anywhere, more easily than with regular VMs, and container images are easy to deploy to the cloud. The application plus its dependencies – libraries, packages and so on – are all contained in the container and shipped together, eliminating the frustrations that occur when software works great in development and testing and then fails to work properly when it gets to the actual production environment.
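As an illustration of that packaging idea, here is a minimal sketch (hypothetical image tag and pinned dependency version, assuming a local Docker Engine and the docker-py SDK). It builds an image that bundles a tiny Python app together with the exact library version it needs, then runs it; the same image can be shipped unchanged from a developer’s laptop to production.

import io
import docker

# Hypothetical image built from an in-memory Dockerfile; nothing is COPYed in,
# so no build context directory is needed.
DOCKERFILE = b"""
FROM python:3.12-slim
RUN pip install --no-cache-dir requests==2.32.3
CMD ["python", "-c", "import requests; print('requests', requests.__version__, 'ships inside the image')"]
"""

client = docker.from_env()
client.images.build(fileobj=io.BytesIO(DOCKERFILE), tag="demo-packaged-app:1.0")
# The app and its pinned dependency travel together, wherever the image runs.
print(client.containers.run("demo-packaged-app:1.0", remove=True).decode())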

You might be wondering about the security angle, given that all of your containers are running on the same operating system kernel. In UNIX/Linux-based systems, the chroot operation is used to create a separate root file system in which processes run in an isolated environment, unable to access data belonging to processes outside their own directory tree; such processes are said to be running in a “chroot jail.” Containers build on that mechanism. Solaris Containers, released by Sun in 2005, built on and enhanced this chroot concept, and a few years later LXC (Linux Containers) arrived, layering on top of Linux kernel features such as control groups and namespaces. Docker was originally based on LXC, and when we get deeper into the discussion of Docker’s features and functionality in Part 2 of this series, we’ll talk more about LXC and Docker’s new execution environment that replaced it, libcontainer.
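Here is a minimal sketch of that chroot-jail idea in Python (Unix only, and it must be run as root; the jail directory path is just an example you’d prepare yourself). The child process is confined to the jail directory and can only see the files placed inside it.

import os
import pathlib

JAIL = "/tmp/demo-jail"                                   # example jail directory
pathlib.Path(JAIL).mkdir(parents=True, exist_ok=True)
pathlib.Path(JAIL, "hello.txt").write_text("only visible inside the jail\n")

pid = os.fork()
if pid == 0:                                              # child process: confine it
    os.chroot(JAIL)                                       # the jail becomes the child's "/"
    os.chdir("/")
    print("inside the jail, '/' contains:", os.listdir("/"))   # just hello.txt
    os._exit(0)
os.waitpid(pid, 0)                                        # parent waits for the jailed child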

There are other containerization technologies similar to LXC for Linux and UNIX environments, including OpenVZ, Linux-VServer, FreeBSD jails and Solaris Containers. Over on the Windows side, in 2015 Microsoft introduced Windows Server Containers for Windows Server 2016 and Nano Server, an operating system built specifically for running cloud apps and containers. Hyper-V containers are Windows Server containers that run inside a Hyper-V partition for greater isolation and security. Windows containers can be managed with PowerShell and integrate with other Windows technologies such as .NET.

The good news is that Microsoft built its container solution to integrate with Docker, so IT professionals and developers can have a cross-platform experience, managing Windows Server containers with the same Docker management tools. Microsoft and Docker are both part of the Open Container Initiative, and the two companies have announced a partnership to make it easy for developers to build and manage both Windows containers and Docker containers with the same set of tools.
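To illustrate what that cross-platform tooling looks like in practice, here is a minimal sketch (docker-py SDK, example image tags, assuming a reachable Docker Engine). The same client code works against a Linux or a Windows Docker host; you just have to run an image that matches the host’s kernel.

import docker

client = docker.from_env()
os_type = client.info()["OSType"]                         # "linux" or "windows"
print("Docker host OS type:", os_type)

if os_type == "windows":
    # Windows Server containers share the host's Windows kernel.
    out = client.containers.run(
        "mcr.microsoft.com/windows/nanoserver:ltsc2022",
        ["cmd", "/c", "echo hello from a Windows container"],
        remove=True,
    )
else:
    # Linux containers share the host's Linux kernel.
    out = client.containers.run(
        "alpine:3.19", ["echo", "hello from a Linux container"], remove=True
    )
print(out.decode().strip())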

Summary

Here in Part 1 of this multi-part article series providing an overview of the containerization phenomenon, we briefly explained the benefits of using containers and some generalities about how they work. In Part 2, we’ll start drilling down into the popular container solutions, logically beginning with the company that put containers on the map: Docker.
