Getting Started with Containers (Part 1)

One of the most anticipated new capabilities being introduced in Windows Server 2016 is containers. In spite of the hype surrounding them, however, containers seem to be causing a fair amount of confusion (based on what I have read on various Internet message boards). That being the case, I wanted to take the opportunity to explain what containers are, what they are used for, and what they should not be used for. As I do, I am also going to show you the basics of creating and working with containers in Windows Server 2016.

So what are containers? I can give you a three-word definition – operating system virtualization.

When you hear the word virtualization, it is tempting to think of server virtualization because that is the type of virtualization that usually gets the most attention. Even so, there are other types of virtualization that have been in use for many years. For example, network virtualization and storage virtualization come to mind. Operating system virtualization is just another type of virtualization.

In a general sense, virtualization can be thought of as a technology that gives administrators the ability to make resources appear as the administrator wishes they existed, not as they really do. Consider server virtualization, for example. I have one particular host server that has about a dozen virtual machines running on it right now. I wish that I had a dozen physical servers that I could use for various purposes, but I don’t. Even so, server virtualization lets me function as if I did have a dozen different servers, even though those machines only exist in cyberspace.

Let’s use storage virtualization as another example. Storage virtualization comes in many different forms, and one especially common type involves the use of thin provisioning, which is frequently used with virtual machines. Just as physical computers typically require physical hard drives, virtual machines usually need virtual hard drives. The problem, however, is that the collective storage requirements of an organization’s virtual machines can quickly overwhelm the available physical storage capacity.

Consider, for example, what I said earlier about having a dozen virtual machines running on one of my host servers. The default virtual hard disk size in Hyper-V is 127 GB, and most, if not all, of the previously mentioned virtual machines are running default configurations. This means that 12 virtual machines, each with 127 GB of storage, should collectively consume 1,524 GB of space. In reality, my virtual machines are using far less space, thanks to thin provisioning.

When Windows creates a thinly provisioned virtual hard disk, it doesn’t really matter how large the administrator makes it: the virtual hard disk initially consumes less than 1 GB of physical disk space, and additional physical space is consumed only as data is added. The goal of thin provisioning is to have virtual hard disks consume as little physical disk space as possible.
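To make this concrete, here is a minimal PowerShell sketch that creates a thinly provisioned (dynamically expanding) virtual hard disk and then compares its nominal size to the space it actually occupies. It assumes the Hyper-V PowerShell module is available, and the path and file name are purely hypothetical.

# Create a dynamically expanding (thinly provisioned) 127 GB VHDX.
# The file starts out tiny on the physical disk, regardless of its nominal size.
New-VHD -Path 'C:\VMs\Demo.vhdx' -SizeBytes 127GB -Dynamic

# Compare the provisioned size (Size) with the space actually consumed (FileSize).
Get-VHD -Path 'C:\VMs\Demo.vhdx' | Select-Object Path, VhdType, Size, FileSize

On a freshly created disk, FileSize should be only a few megabytes, even though Size reports the full 127 GB.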

So earlier, I said that virtualization technologies can give you the illusion of having the resources that you wish you had. Consider how thin provisioning supports that statement. If you have ever wished for unlimited storage space, that wish can virtually come true. Hyper-V makes it possible to attach large numbers of multi-TB drives to a virtual machine, even if the host does not have enough physical disk space to accommodate the storage allocation. This is known as overcommitment. Overcommitment only becomes a problem if you attempt to write more data than the underlying storage hardware can physically accommodate.
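As a quick illustration of overcommitment (a sketch, not a recommendation for production use), the following attaches a multi-TB dynamic disk to an existing virtual machine even if the host has nowhere near that much physical capacity. The virtual machine name and paths are hypothetical.

# Create a 10 TB dynamically expanding disk; almost no physical space
# is consumed until data is actually written to it.
New-VHD -Path 'C:\VMs\BigData.vhdx' -SizeBytes 10TB -Dynamic

# Attach it to an existing virtual machine (hypothetical name 'VM01').
Add-VMHardDiskDrive -VMName 'VM01' -Path 'C:\VMs\BigData.vhdx'

The overcommitment only turns into a real problem once the guest starts filling that disk faster than you can add physical capacity underneath it.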

So what does all of this have to do with containers? As I said earlier, containers are just another form of virtualization – operating system virtualization. If that concept is still a bit hazy, then think of containers as a next-generation server virtualization technique.

Right now I’m sure that there are plenty of people who are cringing because I just referred to containers as a next-generation form of server virtualization. Before you send me an angry email, though, let me elaborate.

As I’m sure you know, there are a number of different advantages to using server virtualization. The main advantage is probably that server virtualization allows server hardware to be more fully utilized than would be possible if the server were running a single workload. Another advantage is that workloads can be isolated from one another. The virtual machine acts as an isolation boundary that keeps applications from conflicting with one another, as they might if they were running within a shared operating system.

As great as server virtualization is, however, it is terribly inefficient. To show you what I mean, consider a server that is running Windows Server 2012 R2 Hyper-V. Obviously, there is one thing that can be done right off the bat to make the server more efficient: Hyper-V could be configured to run within a Server Core deployment. Even so, there are other, much bigger inefficiencies that come into play.

The hypervisor itself can be made relatively efficient. The problem is the virtual machines. In the case of a Hyper-V server running Windows Server 2012 R2 Datacenter Edition, it’s a pretty safe bet that most of the virtual machines are also running the Windows Server 2012 R2 operating system. Sure, Hyper-V can run a wide variety of operating systems, and it’s possible that a server like the one in this example could be running other non-Windows or legacy Windows operating systems within its virtual machines. Even so, the odds are that most of the virtual machines are running Windows Server 2012 R2. The reason I say this is licensing: a Hyper-V server running Windows Server 2012 R2 Datacenter Edition is licensed to run an unlimited number of Windows Server 2012 R2 virtual machines.

So with that said, imagine that this particular server has its parent operating system and ten virtual machines, each of which is running Windows Server 2012 R2. That’s eleven copies of the same operating system running on the server! There will be eleven copies of many of the same files, and eleven instances of some of the system processes running at once. That’s what I mean when I say that server virtualization is inefficient.

But what if you could run a single operating system instead of running a dedicated guest operating system for each virtual machine, without giving up the benefit of isolation? How much memory would that save? How much of an impact would reducing the number of guest operating systems have on CPU cycles and storage IOPS? This is what containers are all about.

Containers are designed to virtualize an operating system in a way that makes it possible to run a single copy of the operating system, while still preserving isolation boundaries between your workloads.
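To give you a very early taste of what that looks like in practice (the details come later in this series), here is a hedged sketch of starting a Windows Server container with Docker on a Windows Server 2016 host that already has the Containers feature and the Docker engine installed. The image name shown is the one Microsoft published around the Windows Server 2016 release; later releases moved the base images to mcr.microsoft.com/windows/servercore.

# Pull the Windows Server Core base image.
docker pull microsoft/windowsservercore

# Start an interactive container. It shares the host's kernel instead of
# booting a separate copy of the operating system the way a VM would.
docker run -it microsoft/windowsservercore cmd

# Back on the host, list the running containers.
docker ps

A container like this typically starts in seconds, precisely because there is no second operating system to boot.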

Conclusion

Hopefully, you have begun to understand why containers are such a big deal. In the next article in this series, I am going to delve into the anatomy of Windows Server containers.
