If you would like to read the other parts in this article series please go to:
- Containerization Proliferation: The Docker Effect (Part 1)
- Containerization Proliferation: The Docker Effect (Part 2)
- Containerization Proliferation: The Docker Effect (Part 3)
- Containerization Proliferation: The Docker Effect (Part 4)
- Containerization Proliferation: The Docker Effect (Part 5)
In Part 1 of this multi-part article series, I briefly explained what containers are, some benefits of using containers and the scenarios in which they aren’t the best choice, and some generalities about how they work. In Part 2, we discussed popular container solutions, beginning with Docker. In Part 3, we started talking about container security in general and Docker security in particular. In Part 4, we addressed how you can harden the Linux host using SELinux, AppArmor, and Grsec, and in Part 5, we discussed Docker Content Trust and some third-party security solutions for Docker.
CoreOS

CoreOS is a minimalist implementation of Linux that’s based on ChromeOS. One of its most interesting characteristics is that it uses Docker containers for all applications other than the small number that are installed along with the OS. CoreOS is, in fact, all about containerization and clustering. The containers make it easy to isolate applications and also make them portable and easy to manage. Clusters are groups of nodes, which can be either physical servers or virtual machines.
Note that CoreOS is intended for large, scalable enterprise deployments that are managed as clusters rather than as individual servers, with services distributed across the available nodes. It can run in the cloud (for example, on Amazon EC2 or OpenStack) and on-premises on your own bare-metal hardware. You can even span a single CoreOS cluster across two different clouds.
When you create a CoreOS installation, it can immediately join an existing cluster and connect with the other members. A daemon stores and distributes configuration data to each cluster host to keep configurations consistent. CoreOS can use a tool called fleet to let you manage a cluster as if all of its machines shared a single system, without having to worry about which physical machine each container runs on. This gives you high availability: if a node fails, its containers can be rescheduled on another node.
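To give you a feel for how fleet works, here is a sketch of a fleet unit file. It looks like an ordinary systemd unit with an extra scheduling section; the service name, image name, and port below are hypothetical:

```ini
# myapp.service -- a hypothetical fleet unit that runs a Docker container
[Unit]
Description=MyApp web front end
After=docker.service
Requires=docker.service

[Service]
# Clean up any leftover container before starting (the leading "-" ignores errors)
ExecStartPre=-/usr/bin/docker kill myapp
ExecStartPre=-/usr/bin/docker rm myapp
ExecStart=/usr/bin/docker run --name myapp -p 80:80 myorg/myapp
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Tell fleet never to schedule two copies of this unit on the same machine
Conflicts=myapp@*.service
```

You would submit and start such a unit across the cluster with `fleetctl start myapp.service`; fleet picks a node to run it on, and if that node goes down, fleet can reschedule the unit on another member of the cluster.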
Kubernetes

Google introduced Kubernetes (the name comes from the Greek word for “helmsman”) two years ago and released version 1.0 last year. It was designed to be an open source solution for managing application containers across clusters. It uses the concept of “pods”: groups of one or more containers on the same host machine that share resources. Sets of pods that work together are called Kubernetes services.
Kubernetes was originally created to simplify working with containers on the Google Compute Engine, but doesn’t require GCE. It is often run with CoreOS. Kubernetes builds a software layer over the clustering infrastructure to allow applications that are made up of different services to be managed as a single application. This simplifies things for admins.
A Kubernetes cluster has a master server that acts as the point of centralized management; if the master server is down, the cluster can’t be managed. The master runs an API server, which exposes a RESTful interface. REST (representational state transfer) is the architectural style of the web, and RESTful systems communicate over HTTP. What this means in practice is that many existing tools can communicate with the API server.
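For example, with nothing more than curl you can talk to the API server directly. This sketch assumes an unauthenticated master listening on port 8080, which was the common setup in early Kubernetes releases; the host name is hypothetical:

```shell
# List all pods straight from the API server over plain HTTP
curl http://k8s-master.example.com:8080/api/v1/pods

# The kubectl command-line tool is a friendlier wrapper around the same REST calls
kubectl get pods
```

Any monitoring tool, script, or dashboard that can speak HTTP can integrate with Kubernetes the same way.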
Other Kubernetes components include a controller manager server, which handles replication processes, and a scheduler server, which assigns workloads to nodes within the cluster. The scheduler ensures that the workload is spread correctly so that no single server’s resources become overloaded.
That brings us to the servers that perform the computing tasks; these nodes have been dubbed “minions.” Since the broader definition of minion is a nameless, faceless servant, the term is a fair description of these servers, which “do what they’re told” by the master. Pods run on the minions. Services are configuration units for the proxies that run on the minion nodes, and a service points to one or more pods. Containers in the same pod share the same network namespace. Pods can be created manually or by the replication controller, based on a template.
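Here is a minimal sketch of what that looks like in practice: a replication controller whose template creates two identical pods, and a service that points at them. The names, labels, and image are hypothetical; the manifests use the v1 API:

```yaml
# A replication controller that keeps two copies of the pod template running
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 2
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
# A service that points to every pod carrying the app=web label
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
```

Submitting this file with `kubectl create -f web.yaml` asks the master to create both objects; the scheduler then places the pods on minions for you.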
Kubernetes uses etcd, the same distributed key-value store that CoreOS is built around, for the storage of configuration information that is available to all of the nodes in a cluster.
You can find out more in the Kubernetes User Guide on the Kubernetes web site.
Windows Server and Hyper-V containers
Microsoft announced in October of 2014 that they were going to bring containers to Windows Server and, at the same time, formed a partnership with Docker to extend the Docker API to support Windows Server containers. Windows Server 2016 supports two different container variants: Windows Server containers and Hyper-V containers.
Thanks to the partnership with Docker, you can manage both Linux and Windows Server containers with the same Docker client, and you can run the same Windows container package with Windows Server containers and Hyper-V containers. So the next logical question is: what’s the difference between the two?
The big difference is the level of isolation and thus security. Windows Server containers operate like traditional containers, with multiple containers utilizing the same host operating system. Container isolation relies on namespace and process isolation. This carries with it the same benefits (lower resource usage and increased performance) as well as the same drawbacks (security issues due to the shared OS) that we’ve discussed previously when we talked about Docker containers.
Hyper-V containers attempt to address the security issue by providing more isolation and they do this by giving each container a separate copy of the operating system kernel (kernel level isolation). But wait a minute – didn’t we say those shared resources define the difference between containers and traditional virtual machines? Yes, and in truth Hyper-V containers are really more like VMs than containers – with one big exception, and that’s that they can be managed as containers by Docker.
This difference provides you with a clue as to when and where you could make the choice to use one or the other of Microsoft’s container solutions. Windows Server containers will work fine when you’re running trusted applications in a secure environment, and they’ll be less taxing on server resources.
Deployment is easy, too, although there are a few “tricks of the trade” to getting them up and running. You have to install the available base images that contain the core OS files needed by your containers, and to do that, you first have to install a package provider that can find the images. Also note that you have to use PowerShell or Docker to manage them since there’s no GUI, so if you don’t like typing commands, you might not be a happy camper. After you create and start a container, you can establish a session with it using PowerShell Remoting, configure it, and then install applications and services in it.
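In the technical preview, the sequence looks roughly like the following. These cmdlets come from the preview-era Containers PowerShell module and may change before release; the container name and switch name are hypothetical:

```powershell
# Install the package provider that knows how to find container base images
Install-PackageProvider ContainerImage -Force

# Find the available base images and install one with the core OS files
Find-ContainerImage
Install-ContainerImage -Name WindowsServerCore

# Create a container from the base image and start it
$container = New-Container -Name web01 -ContainerImageName WindowsServerCore -SwitchName "Virtual Switch"
Start-Container $container

# Open a PowerShell Remoting session inside the container to configure it
Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator
```

From inside that session you can add roles, copy files, and install applications just as you would on a very small Server Core installation.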
If you need to run untrusted applications, Hyper-V containers may be a better option, and they will give you more portability and agility than a regular VM, plus you’ll still be able to manage them through Docker along with your Windows Server containers and your Linux containers. In a way, it’s the best of both worlds, albeit with some tradeoffs (such as slower startup times than “real” containers). Another caveat is that implementing Hyper-V containers is a little more complicated than deploying Windows Server containers.
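Because both variants share the same container format, the choice between them can be made per container at run time with the Docker client’s `--isolation` flag on a Windows Server 2016 host. The image name in this sketch is hypothetical:

```shell
# Run as a regular Windows Server container (shares the host kernel)
docker run -d --isolation=process myorg/webapp

# Run the same image as a Hyper-V container (gets its own copy of the kernel)
docker run -d --isolation=hyperv myorg/webapp
```

Nothing about the image itself changes; only the isolation boundary around the running container does.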
Note that with Windows Server 2016 Hyper-V, you have support for nested virtualization. That means you can use containers (a type of virtualization) on a host machine that is itself a virtual machine. How cool is that? Hyper-V containers connect to a virtual network adapter (which connects to a virtual switch), just like other Hyper-V VMs.
You can find a deployment tutorial with screenshots in this article from Redmond Magazine online.
Containers on Windows Azure
Microsoft’s Azure IaaS cloud supports Windows Server containers. You will, of course, have to install a Windows Server 2016 image with the containers feature enabled in your Azure cloud. This is easy to do: a search in the Azure Marketplace for “containers” will take you to a Windows Server 2016 Core with Containers technical preview image and walk you through the steps of creating the deployment, as described in this article on the MSDN web site.
At the time of this writing, Azure does not support Hyper-V containers. In order to deploy them, you will need an on-premises Windows Server 2016 host.
In the Server 2016 tech preview, the container images are based on Server Core, the minimalist non-graphical installation of Windows Server.
Azure also supports container technologies such as Google’s Kubernetes, Docker Compose and Swarm, and Mesosphere. We discussed Kubernetes earlier in this article. Docker Compose lets you link together multiple containers running small apps that work together to comprise distributed applications. Swarm is a clustering tool that can create a large resource pool by turning a group of Docker engines into one virtual engine. Apache Mesos is a distributed systems kernel on which Mesosphere has built a container orchestration solution for Mesos containers, Docker and Kubernetes.
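As an illustration of how Compose describes a distributed application, here is a sketch of a docker-compose.yml file declaring two linked containers. The service and image names are hypothetical, and the original version-1 Compose syntax is shown:

```yaml
# docker-compose.yml: a two-container application
web:
  image: myorg/webapp
  ports:
    - "80:80"
  links:
    - db      # the web container can reach the database by the host name "db"
db:
  image: postgres
```

Running `docker-compose up` in the directory containing this file starts both containers, wiring up the link so the application behaves as a single unit.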
In this six-part series, we’ve provided a broad overview of the containerization trend, its various incarnations, some history, and where it might be going. Containers are likely to play a key role in the future of IT, so it behooves you to learn as much as you can about them, as soon as you can. As with all things related to computing and networking, container technology will continue to evolve and change, and exciting new developments are undoubtedly just over the horizon.