If you have been working in IT for some time, you know that container technology is nothing new, though lately it has seen most of its use in Linux environments. IT professionals who work on Windows infrastructures are rediscovering the technology through product releases from companies like Docker and Microsoft. With the release of Windows Server 2016, Microsoft provides two implementations of containers, named Windows Server Containers and Hyper-V Containers, and also makes this technology available in Microsoft Azure.
At a high level, a container leverages operating system virtualization to allow execution and isolation of different applications and services on a single host system without having to worry about whether each application is compatible with the others. Every application or service running in a container has its own view of the operating system, processes, registry, file system, and network. This offers a less rigorous isolation boundary than Hyper-V virtual machines (VMs) provide, but yields a virtualization environment that is more efficient, with less overhead, for running trusted applications.
Operating System Virtualization
Operating system virtualization is based on the abstraction of the operating system layer to support multiple, isolated partitions or containers on a single-instance host operating system. The virtualization is accomplished by multiplexing access to the kernel while ensuring that no single container can take down the host system. Figure 1 shows the basic architecture implemented with this approach.
Figure 1: Basic operating system–level virtualization architecture
This technique results in very low virtualization overhead and can yield high partition density. However, there are limitations with this type of solution. The primary limitation is the inability to run a heterogeneous operating system mix on a given server because all partitions share a single operating system kernel. In addition, any operating system kernel update affects all virtual environments. For these reasons, operating system–level virtualization tends to work best for largely homogeneous workload environments. You might remember a product named Virtuozzo Containers from Parallels that was based on operating system–level virtualization. Virtuozzo Containers was extensively adopted and deployed by the Web hosting industry to build high-density infrastructures, offering isolated Web services.
In Windows Server 2016, Microsoft adopts the Windows Containers nomenclature to describe the partitions that are created on top of the operating system virtualization layer. This is a smart move that avoids confusing containers with Hyper-V partitions that are based on machine-level virtualization. It also helps to clarify the operating model for Windows Containers, which is to allow the faster deployment of applications to run side-by-side on the same version of the operating system while providing isolation and security for each application, and minimizing the virtualization overhead.
When you create a Windows Container, you instantiate an isolated sandbox on top of the Windows Server 2016 host operating system. Conceptually, you can think of the Windows Server 2016 host operating system as a read-only base image. The new Windows Container that you create will contain the modifications that you make, such as installing a new application and its dependencies, or modifying settings. The underlying Windows Server 2016 host operating system image is not modified. However, you can save your new Windows Container environment as a new image and save it in an image repository. A big advantage of these images is that they are generally much smaller in size than a VHD-based image because only file-based modifications are stored instead of the entire virtual machine with guest operating system and applications. You can deploy a Windows Container image created on a particular Windows Server 2016 host on any other Windows Server 2016 container, and even within a virtual machine if a specific workload requires a higher level of isolation. However, there is no requirement to save a Windows Container as a new image, allowing you to create temporary Windows Containers that can be easily destroyed without saving any of the contents.
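As a rough sketch, creating a container and capturing its changes as a new image might look like the following. The cmdlet names reflect the Containers PowerShell module from the Windows Server 2016 technical preview, and the container, image, and switch names here are hypothetical; parameters may differ between builds:

```powershell
# Create a new container from the shared base image (read-only from the
# container's point of view); "Virtual Switch" is an assumed switch name.
New-Container -Name "AppDemo" `
    -ContainerImageName "WindowsServerCore" `
    -SwitchName "Virtual Switch"

Start-Container -Name "AppDemo"

# ... install the application and its dependencies inside the container ...

# Optionally capture the file-based modifications as a reusable image.
Stop-Container -Name "AppDemo"
New-ContainerImage -ContainerName "AppDemo" `
    -Name "AppDemoImage" -Publisher "Contoso" -Version 1.0

# Or simply discard the container without saving anything.
Remove-Container -Name "AppDemo" -Force
```

Because only the file-based modifications are captured, the resulting image stays far smaller than a full VHD-based image of a virtual machine.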
A Hyper-V Container is essentially a Windows Container that is running in a Hyper-V partition. With a Hyper-V Container, you are able to further isolate a workload from the physical host operating system. While Windows Containers offer greater partition density and performance, Hyper-V Containers provide a greater degree of isolation, ensuring that the code running in a container cannot impact the host operating system. In a multi-tenant scenario, the ability to have a nested virtualization solution like Hyper-V Containers may be necessary to satisfy more rigorous security, isolation, or regulatory requirements.
With that said, Windows Container images can be deployed in both Windows Containers and Hyper-V Containers without any changes, simply by setting a specific runtime type flag when you create the new container. This enables you to quickly deploy an application and its dependencies in either type of container, and allows you the flexibility of changing the degree of isolation as requirements change for a specific application.
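For example, the isolation level is simply a creation-time option. In the technical preview's PowerShell module the runtime type is a parameter on New-Container, and Docker on Windows exposes the same choice through an isolation flag; the names shown here reflect the preview tooling and may change:

```powershell
# Windows Container (shares the host kernel) vs. Hyper-V Container
# (runs in its own Hyper-V partition) -- same image, different runtime type.
New-Container -Name "AppShared"   -ContainerImageName "WindowsServerCore" -RuntimeType Default
New-Container -Name "AppIsolated" -ContainerImageName "WindowsServerCore" -RuntimeType HyperV

# The equivalent choice when creating containers with the Docker client:
docker run --isolation=process microsoft/windowsservercore cmd
docker run --isolation=hyperv  microsoft/windowsservercore cmd
```

Nothing about the image itself changes; only the runtime boundary around the container does.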
Windows Containers Network Configuration
Windows Containers give you the flexibility to access your applications on the network using two different methods: a container can be configured with an externally accessible IP address using DHCP, or it can obtain an IP address from the container host using Network Address Translation (NAT). If you choose to assign an external IP address to a Windows Container using DHCP, the container communicates on the network using its own MAC address, a configuration that requires MAC spoofing. You would select this IP address assignment option if you require each Windows Container to have a routable address on your network.
Using NAT, the container host assigns a private IP address to a Windows Container, and a specified Windows Container port is mapped to a port on the container host. The application running in the container is accessible to external clients by specifying a combination of the IP address and port on the container host. The container host forwards the network traffic to the destination Windows Container using an internal NAT table that maps the container host port to the Windows Container NAT address and port number pair. You should select this IP address assignment option if you are going to deploy a large number of containers and do not require or want to manage a large number of routable IP addresses.
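The NAT mapping itself can be expressed either as a Docker port-mapping option or with the in-box NetNat cmdlets. A sketch, assuming a hypothetical web application image, a container with the internal NAT address 172.16.0.2, and a web server listening on port 80:

```powershell
# Docker client: map port 8080 on the container host to port 80 in the container.
docker run -d -p 8080:80 contoso/mywebapp

# A comparable static mapping with the NetNat cmdlets; the NAT name and
# addresses are assumptions for this example.
Add-NetNatStaticMapping -NatName "ContainerNat" -Protocol TCP `
    -ExternalIPAddress 0.0.0.0 -ExternalPort 8080 `
    -InternalIPAddress 172.16.0.2 -InternalPort 80
```

External clients would then reach the application at the container host's IP address on port 8080, and the host forwards the traffic to the container behind the NAT.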
Using Windows and Hyper-V Containers on a Physical Host
As you can see in Figure 2, Microsoft has made it possible for you to create and use Windows Containers and Hyper-V Containers on the same physical host, if needed. In this example, the Hyper-V role is enabled on the physical host. In the parent partition, you can deploy Windows Containers, each with its own abstracted view of the host operating system. In addition, you can create one or more VMs, or child partitions, each with its own guest operating system, in which you can deploy Hyper-V Containers. Each VM guest OS must still be some flavor of Windows Server 2016, such as a Server Core or Nano Server configuration. While the Windows Containers share the host operating system as their base image, Hyper-V Containers running in the same VM share the guest operating system as their base image.
Figure 2: Windows and Hyper-V Containers on a Physical Host
Microsoft’s plan is to provide you with a couple of options to manage your container environment. If you are a Microsoft-only shop, you can use PowerShell and WMI to manage your container infrastructure on Windows Server 2016. However, in order to promote rapid adoption of the container technology, Microsoft will also support Docker, an open source system widely used in the Linux community to package, deploy, and manage containers. Docker will allow you to centrally manage containers across both your Windows Server 2016 and Linux infrastructures. Docker will still not allow you to deploy Windows Containers on Linux or Linux containers on Windows Server 2016 since the physical host operating system is shared with the containers. However, you will have access to the Docker Hub, Docker Engine, and Docker Client on Windows Server 2016.
The Docker Hub is an image repository that contains a large collection of container images that you can pull down to deploy on your systems; you can also push container images to it to share with the Docker community. The Docker Engine will allow you to build, run, and orchestrate containers on Windows Server 2016. The Docker Client will allow you to use the same interface used in the Linux environment to manage containers. With these two container management options, Microsoft hopes to satisfy the requirements of IT staff who work in homogeneous and heterogeneous environments.
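Tying those pieces together, a typical Docker workflow on Windows Server 2016 might look like this; the image and repository names below are placeholders:

```powershell
# Pull a base image from the Docker Hub.
docker pull microsoft/windowsservercore

# Run a container interactively, install the application inside it,
# then capture the result as a new image.
docker run -it --name appbuild microsoft/windowsservercore cmd
docker commit appbuild contoso/myapp:1.0

# Share the image back through a registry.
docker push contoso/myapp:1.0

# List the containers running on the host.
docker ps
```

The same client commands work against a Linux Docker host, which is what allows you to manage both infrastructures through one interface, even though the images themselves are not portable between the two operating systems.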
In Windows Server 2016, Windows and Hyper-V Containers provide a new, lighter operating system virtualization option for the quick deployment of applications in your infrastructure. Both options require Windows Server 2016 as the host operating system and base image for the containers. Windows Containers are most suitable for deployment in a trusted multi-tenancy environment where the containerized applications trust each other, and there is little risk of violating the container isolation boundary through the mistaken or malicious misbehavior of applications. Windows Containers support highly automated deployment and greater scalability with less virtualization overhead, leaving more resources for running applications. Hyper-V Containers provide a higher degree of isolation from the host operating system, which is better suited for running applications in a non-trusted multi-tenancy environment, or in environments where workloads have regulatory requirements that demand a more stringent security boundary.