Containerization Proliferation: The Docker Effect (Part 2)


Introduction

In Part 1 of this multi-part article series, whose intent is to give you an overview of the containerization phenomenon, I briefly explained what containers are in the context of virtualization technologies, how they differ from traditional hypervisor-based virtual machines, the benefits of using containers and the scenarios in which they aren’t the best choice, and some generalities about how they work.

Docker brings containers to the masses

As we discussed in Part 1, the container concept has been around for a long time, but it remained a bit of a niche technology until Docker came onto the scene. Much as Apple took a couple of ideas that others had tried before but that never caught on (smartphones and tablets) and turned them into huge consumer successes, Docker “reimagined” the container idea and turned it into the Next Big Thing in IT.

Docker’s stated design goal (according to the architecture doc on their website) is to provide a platform that lets developers test, ship, and deploy applications faster; it is a packaging and delivery mechanism that bypasses the conflicts and compatibility issues that often arise when apps move from the development and testing environments into the production environment. I think it’s important to note that the original purpose of the isolation provided by running these apps in containers was to be able to deploy them more quickly and reliably; the primary focus wasn’t on isolation for security. We’ll talk more about Docker security later in this discussion.

Both software developers and their customers want what Docker can give them. This includes a speedier end-to-end process. We live in a world – especially in the business world – where we want what we want and we want it yesterday, and that applies to software applications, too. Faster delivery makes the customer happy, and to paraphrase that old saying about mama, if the customer isn’t happy, nobody is going to be happy.

Docker containers make applications eminently portable. It’s easy for devs to share their code with one another for collaborative efforts. It’s just as easy to deploy the applications to whatever environment you want, whether it’s a local machine, a server (running on a VM or on bare metal) in the organization’s on-premises data center, or a cloud provider’s data center “out there.” Docker also brings the cost savings from reduced hardware resources that we discussed in Part 1 as well as scalability and flexibility.

The reason Docker has taken off the way it has is that not only does it make it possible to achieve these things (as other containerization solutions before it did), it also makes it easier, by providing tools to enable you to put your apps in Docker and deliver the containers for testing or deployment. For even more functionality, you can extend Docker with plug-ins created by third parties, and Docker provides a plug-in API for those who want to create their own plug-ins that can run either inside or outside of containers. These include volume plug-ins and network driver plug-ins, and other plug-in types are expected to be supported in the future.
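As a rough illustration, once a third-party volume plug-in has been installed on a host, using it looks much like working with Docker’s built-in volumes. In this sketch the driver name some-driver and the image name myapp are purely hypothetical placeholders:

$ docker volume create --driver some-driver appdata   # create a volume backed by the third-party plug-in
$ docker run -d -v appdata:/var/lib/data myapp         # mount the plug-in-backed volume into a container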

Evolution of Docker

Docker’s early versions were based on LXC, which was the default execution environment. Beginning with version 0.9, Docker got its own libcontainer library to provide direct access to Linux container APIs, so it no longer has to depend on LXC. The two can still be used together, but LXC is now optional. We’ll talk more about LXC and its new implementation, LXD, later in this series.

Last summer, Docker introduced runC, which they bill as a universal sandboxing container runtime. This is a standalone tool that includes the code Docker uses for working with the operating system features (in Linux/UNIX and in Windows 10) that enable hardware and OS abstraction (namespaces, control groups, etc.). One of the goals is to make containers even more portable, and toward that end runC has native support for Linux namespaces and security features as well as native support for Windows containers, and Docker plans to support various hardware platforms such as ARM, SPARC and Power, as well as the latest hardware features.
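To give a feel for runC as a standalone tool, here is a minimal sketch based on the workflow in the runC documentation; it assumes a recent runC release on a Linux host and uses the busybox image from Docker Hub only to populate the container’s root filesystem:

$ mkdir -p mycontainer/rootfs && cd mycontainer
$ docker export $(docker create busybox) | tar -C rootfs -xf -   # unpack a busybox filesystem to use as the rootfs
$ runc spec                                                      # generate a default config.json for the bundle
$ sudo runc run mybusybox                                        # run the container directly with runC, no Docker daemon involved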

We have already talked about the ways in which containers differ from traditional VMs, and Docker (unlike some other containerization solutions, including LXC) is designed to run just a single dedicated application per container. It’s not a general-purpose environment like you might be used to with traditional hypervisor-based virtual machines. A container is a package that encapsulates an application along with that app’s dependencies, and that’s all.
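You can see this single-application model with a couple of commands; this sketch uses the official nginx image from Docker Hub simply as an example:

$ docker run -d --name web nginx   # start a container whose only job is to run nginx
$ docker top web                   # the process list shows only the nginx processes, not an init system or a full OS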

The Docker apps that are available in the registry (Docker Hub) are already configured and easy to deploy, and use a standardized format. This makes it really easy to run cloud apps and to port them across different cloud environments and operating systems.

Docker under the hood

Docker was built on the virtualization features that are part of the Linux kernel, which we talked about in Part 1 of this series. Those features include separate namespaces and control groups for resource isolation, and they are the reason Docker containers don’t have to run a whole separate operating system as traditional virtual machines do. Docker’s virtualization occurs at the operating system level, building on the containerization features that are included in OS kernels (now including Windows), so you can still run Docker inside a traditional VM if you want to.
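A quick way to see that containers share the host’s kernel rather than booting their own is to compare kernel versions; this assumes a Linux host with Docker installed and uses the busybox image as an example:

$ uname -r                          # kernel version reported on the host
$ docker run --rm busybox uname -r  # the container reports the same kernel version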

Docker can be installed on a developer’s laptop or desktop computer and used to build containerized apps, and it can be installed on servers to run those apps. Docker Engine can be installed on Linux, Windows, Mac OS X, and cloud platforms such as Google Cloud, Amazon EC2, Microsoft Azure and Rackspace Cloud.
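As one example, on many Linux distributions you can install Docker Engine with the convenience script that Docker publishes, then verify the installation:

$ curl -fsSL https://get.docker.com -o get-docker.sh   # download Docker’s convenience install script
$ sudo sh get-docker.sh                                # run it to install Docker Engine
$ sudo docker version                                  # confirm that the client and the daemon are both responding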

To launch a Docker container, you invoke the docker binary with the run command, specify an image to use as the source of the container, and then specify the command that you want to run inside the container. If Docker doesn’t find the image on your local Docker host, it will go to the Docker Hub, which is the public registry of images, and download it from there. The container will run as long as the command is active, and then it will stop; in an interactive session, you can use the exit command or CTRL+D when you’re finished. You can create interactive containers that run in the foreground or you can create detached containers that run a daemon in the background.
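Here is a short sketch of both styles, using the ubuntu image from Docker Hub as the source image:

$ docker run -it ubuntu /bin/bash   # interactive container in the foreground; exit (or CTRL+D) stops it
$ docker run -d ubuntu /bin/sh -c "while true; do echo hello; sleep 1; done"   # detached container running in the background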

Containers are identified in Docker by a container ID, which is a long hexadecimal string with a shorter variant. You can use the ID to query for information about the container. Docker automatically assigns a name to a container when you start it, but you can specify your own name instead. You use the docker logs command to look into a container and view its output. You can stop a container with the docker stop command, and then check its status with the docker ps command to make sure that it is indeed stopped.
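Continuing the detached example above, a few commands cover this whole lifecycle; the name hello is just a name chosen for illustration:

$ docker run -d --name hello ubuntu /bin/sh -c "while true; do echo hello; sleep 1; done"
$ docker ps              # shows the short container ID and the assigned name
$ docker inspect hello   # one way to query detailed information about the container by name or ID
$ docker logs hello      # view the output the container has produced
$ docker stop hello      # stop the container
$ docker ps -a           # confirm that its status is now Exited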

Anytime you type docker in the Bash terminal, you’re using the Docker client. It’s a command line interface that you can use to build and manage containerized apps. In addition to using images downloaded from the Docker Hub, you can build and share your own images. There are many different images that have been created by others and uploaded to the Hub, so you don’t have to reinvent the wheel every time you want a Dockerized application. When you download images from the Hub, Docker stores them on your local host. You can find images using the docker search command and then download them using the docker pull command.
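For example, to find and download an image (redis is just a well-known image used here for illustration):

$ docker search redis   # search Docker Hub for images matching "redis"
$ docker pull redis     # download the image to the local Docker host
$ docker images         # list the images now stored locally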

If an image is similar to what you need but doesn’t quite work, you can make changes to it by updating a container made from that image and then committing the results. You can also build new images from scratch by creating a Dockerfile and using the docker build command. To share an image with others, you can use the docker push command to upload it to the Docker Hub. Note that you can put it in either a public or a private repository.
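A minimal sketch of both approaches follows; the repository name yourhubuser/webserver is a placeholder for your own Docker Hub account and repository, and the Dockerfile contents are purely illustrative:

$ docker commit <container-id> yourhubuser/webserver:v1   # capture changes made in a container as a new image

# Or build from scratch with a Dockerfile such as:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
CMD ["nginx", "-g", "daemon off;"]

$ docker build -t yourhubuser/webserver:v2 .   # build an image from the Dockerfile in the current directory
$ docker push yourhubuser/webserver:v2         # upload the image to your Docker Hub repository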

Summary

In this, Part 2 of our series on containers and the Docker effect that has caused the technology to boom in popularity, we examined what Docker is and how it differs from some other containerization solutions, traced the evolution of Docker and how it has changed over time (and is still changing), and then took a dive under the hood to look at some of the more technical aspects of how Docker works and some of the basic commands that are the foundation of using the Docker CLI to containerize different types of applications and run them in different ways.

Next time, in Part 3, we will take up the (sometimes touchy) subject of container security in general and Docker security in particular, what has been done and is being done to improve it, and third party security solutions that can make Docker and other containerization methods more secure.

