If you would like to read the other parts in this article series please go to:
- Docker and Containers (Part 1) – Understanding Containers
- Docker and Containers (Part 3) – Containers and Windows Server
- Docker and Containers (Part 4) – Implementing Windows Server Containers
- Docker and Containers (Part 5) – Implementing Hyper-V Containers
- Docker and Containers (Part 6)
In the first part of this series we introduced containers, a technology also known as operating system (OS) virtualization. To understand containers, we compared them with the more familiar (to Windows Server admins) technology of hardware virtualization. Just as container technology is built into the kernel of modern Linux distros (and will be a feature of the next release of Windows Server), hardware virtualization is built into Windows Server in the form of the Hyper-V server role. With hardware virtualization, a single Hyper-V host machine can run multiple virtual machines that act as separate, independent computers, each with its own operating system and installed applications. Containers, on the other hand, contain only application code; all containers hosted on a container host share the operating system kernel and libraries of the host. And just as a virtual machine can be moved from one Hyper-V host to another, containerized applications are portable across Linux host machines and even across different Linux distributions such as Ubuntu, SUSE and Red Hat Enterprise Linux.
However, while having a virtualization technology (containerization in the Linux kernel or the hypervisor layer in Windows Server) is a good start, it only provides the foundation. The real power for hardware virtualization on the Windows Server platform comes when you combine Hyper-V hardware virtualization with the management capabilities of System Center applications like Virtual Machine Manager (VMM) and App Controller. Similarly, the real power for OS virtualization on the Linux platform comes when you combine the containerization capabilities of the Linux kernel with the Docker platform.
If you want to learn more about Microsoft’s System Center suite of applications, go to the Free eBooks from Microsoft Press page on the Microsoft Virtual Academy. Several of the ebooks available there broadly cover the whole System Center platform, including:
- Introducing Microsoft System Center 2012 R2
- Microsoft System Center: Integrated Cloud Platform
In addition you will find a number of ebooks there that demonstrate how System Center applications can be used to deploy, manage and maintain hybrid clouds, in particular:
- Microsoft System Center: Cloud Management with App Controller
Yours truly was the Series Editor for most of these System Center ebooks, and the authors are all experts from the System Center team at Microsoft.
What is Docker?
At a fundamental level, Docker is a collection of open source tools, solutions and cloud-based services that provide a common model for packaging (containerizing) application code into images that can be easily distributed to and deployed on Linux host machines regardless of which flavor of Linux those machines are running.
In a very fundamental way, Docker helps make DevOps a reality by allowing developer teams to rapidly build, test, deploy and run distributed applications and services at any level of scale. Because containerizing applications eliminates problems caused by software dependencies and differences between host environments, Docker increases developer productivity and lets you quickly move applications from test to ship-ready. Docker also makes it a snap to move applications from your development environment to production, and you can easily roll applications back if problems are discovered after release.
Docker began as a project of dotCloud, a company that offered Platform as a Service (PaaS) application hosting services. The open source Docker platform was released in early 2013, and the company Docker, Inc. was established shortly after this. The Docker platform has since gone through a number of incremental releases and is currently at version 1.10.
As of February 29, 2016 the dotCloud PaaS has shut down after its parent company became insolvent, and subscribers were advised to migrate their apps to Heroku, another PaaS hosting provider. The fact that dotCloud lasted only a few years in business (after having spawned the highly popular Docker platform) should serve as another warning to businesses that cloud service companies (and even cloud platforms like Docker) may rise and fall quickly in this rapidly changing world we now live in! And for what it’s worth, Heroku was itself acquired by Salesforce in 2010.
How does Docker work?
To understand how Docker works we’ll begin with some basic Docker terminology:
- Image – A stateless collection of root filesystem changes in the form of layered filesystems stacked upon one another.
- Container – A runtime instance of an image consisting of the image, its execution environment, and a standard set of instructions.
- Dockerfile – A text file that contains the commands that need to be executed to build a Docker image.
- Build – The process of building Docker images from a Dockerfile and any other files in the directory where the image is being built.
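To make these terms concrete, here is a minimal sketch of a Dockerfile and the commands that turn it into an image and then a running container. The image name, base image and file names are illustrative assumptions, not part of any real project:

```dockerfile
# Dockerfile – the build instructions for an image
# Base image: every layer below is stacked on top of this one
FROM ubuntu:14.04
# Each instruction adds a new read-only filesystem layer to the image
RUN apt-get update && apt-get install -y nginx
COPY index.html /usr/share/nginx/html/
# Default command executed when a container is started from this image
CMD ["nginx", "-g", "daemon off;"]
```

Building and running it (on a host with the Docker Engine installed) would look something like:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myorg/mywebapp:1.0 .
# Start a container – a runtime instance of that image
docker run -d -p 80:80 myorg/mywebapp:1.0
```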
The Docker platform provides developers with tools and services they can use to:
- Build and share images through a central repository of images
- Collaborate on developing containerized applications using version control
- Manage infrastructure for applications in Linux containers
But Docker isn’t just limited to Linux containers; as it evolves you will be able to use it for managing infrastructure for Hyper-V, Xen, KVM and other virtualization technologies.
The Docker Toolbox is a collection of tools provided by the Docker platform that make building, testing, deploying and running Docker containers possible. These tools include:
- Docker Engine – A lightweight runtime environment used to build and run Docker containers. The Docker Engine includes an in-host daemon (Linux service) that you communicate with using the Docker client for building, deploying and running containers.
- Docker Compose – Lets you define a multi-container application together with any dependencies so you can run it with a single command. Docker Compose lets you specify the images your application will use together with any volumes or networks needed.
- Docker Machine – Lets you provision Docker hosts by installing the Docker Engine on a machine in your datacenter or at a cloud provider. Docker Machine also installs and configures the Docker Client so it can talk with the Docker Engine.
- Docker Client – On Linux this is a command shell that is preconfigured as a Docker command-line environment. Docker Clients are also available for the Windows and Mac platforms.
- Kitematic – A graphical user interface (GUI) you can use to quickly build and run Docker containers and to find and pull images from the Docker Hub.
- Docker Registry – An open source application that forms the basis for the Docker Hub and Docker Trusted Registry.
- Docker Swarm – A native clustering capability that allows you to combine multiple Docker Engines into a single virtual Docker Engine.
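As an illustration of how Docker Compose ties these pieces together, a hypothetical docker-compose.yml for a two-container application might look like the following (the web image name is an assumption; postgres is the official PostgreSQL image on the Docker Hub):

```yaml
# docker-compose.yml – defines a multi-container application
web:
  image: myorg/mywebapp:1.0   # hypothetical application image
  ports:
    - "80:80"
  links:
    - db                      # makes the db container reachable from web
db:
  image: postgres:9.5
  volumes:
    - ./pgdata:/var/lib/postgresql/data   # persist data on the host
```

With this file in place, a single `docker-compose up -d` command starts both containers with their declared ports, links and volumes.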
Docker software can be downloaded and installed on a variety of platforms, including Linux, Windows and Mac.
Once you’ve installed the Docker software on your machine, you can then proceed to build images, tag them, and push them to or pull them from the Docker Hub.
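Under assumed account and image names, that tag/push/pull workflow is just a few commands:

```shell
# Log in to the Docker Hub (prompts for your credentials)
docker login
# Tag a local image with your Docker Hub repository name (names are illustrative)
docker tag mywebapp myaccount/mywebapp:1.0
# Push the tagged image up to the Docker Hub
docker push myaccount/mywebapp:1.0
# Pull the same image down onto any other Docker host
docker pull myaccount/mywebapp:1.0
```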
The software in the Docker Toolbox isn’t all the Docker platform has to offer, however. The following Docker Solutions are also key parts of what makes Docker so powerful for DevOps:
- Docker Hub – A cloud hosted service where you can register your Docker images and share them with others.
- Docker Trusted Registry – A private dedicated image registry that lets you store and manage your images on-premises or in your virtual private cloud.
- Universal Control Plane – A management solution for Dockerized applications that can be used to manage your applications regardless of whether they are running on-premises or within your virtual private cloud.
- Docker Cloud – A cloud hosted service to which you can directly deploy and manage your Dockerized (containerized) applications. If you’re new to Docker, this is a great way to get started fast.
- Docker Datacenter (DDC) – An integrated end-to-end platform for deploying Containers as a Service (CaaS) on-premises or in your virtual private cloud. The self-service capabilities of DDC make it easy for developers to build, test, deploy and manage agile applications. DDC is the latest addition to the stable of Docker Solutions, and its availability was announced on February 23, 2016.
For more information on Docker products and solutions, see https://www.docker.com.
For detailed documentation on any Docker tool, product or solution, see https://docs.docker.com.
Docker and AWS
Amazon Web Services (AWS) can be used to host Docker software for building, running and managing Dockerized applications. For more information on what’s possible, see https://aws.amazon.com/docker/ and also https://www.docker.com/aws.
Docker and Microsoft
Microsoft and Docker have been working together to enable developers to build, deploy, run and manage distributed applications on-premises and in the cloud across both Windows and Linux platforms. For more information on their ongoing collaboration in these areas, see https://www.docker.com/microsoft.
In this article and the previous one we’ve examined the concept of containers and how the Docker platform empowers them for DevOps. The next few articles in this series will focus on containers as they will be implemented in the next version of the Windows Server operating system.