Containerization Proliferation: The Docker Effect (Part 3)


Introduction

In Part 1 of this multi-part article series, I briefly explained what containers are in the context of virtualization technologies, how they differ from traditional hypervisor-based virtual machines, the benefits of using containers and the scenarios in which they aren't the best choice, and some generalities about how they work. Then in Part 2, we began drilling down into the nitty-gritty details of the most popular container solutions, logically beginning with the company that gets the credit for putting containers on the map: Docker.

Security is the number one concern

It seems that with every new and exciting technology that comes along to make IT more efficient and less costly, the big sticking point is always security. The containerization phenomenon is no different in that regard. Remember when cloud computing was the new kid on the block, and the hue and cry from every corner was "What about security?" Well, then, it should come as no surprise that many experts are tagging security as the number one concern among companies that are considering adopting Docker or other container solutions.

There are some who will tell you that because popular container technologies are built on Linux, you don't have to worry about security. They'll throw around terms such as cgroups (control groups), a resource-management mechanism built into the Linux kernel, and grsecurity, a set of kernel-hardening patches, both of which the latest containerization solutions can take advantage of. That's true, as far as it goes. But a quick glance at the monthly list of security advisories put out for any popular Linux distro should be enough to make you suspect that *NIX just might not be the invulnerable OS that some of its proponents have made it out to be. As with Windows (and Apple products), a whole slew of vulnerabilities are discovered and patched in Linux on a regular basis.

One of the security issues attached to using containers is that most of the container technologies need access to root in order to run. That comes with all the dangers inherent in giving any program root access, which is equivalent to administrative access on Windows. Root privileges include the power to modify the OS configuration, change permissions, run code, etc. If a program is running as root and a security flaw is exploited, the attacker will be able to control the hardware, access (or destroy) all of the data, etc.
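A small illustration of limiting that exposure: even though the container engine itself runs as root, the process inside a container doesn't have to. Below is a minimal sketch using the docker Python SDK (docker-py); the SDK, the alpine image, and the UID/GID values are assumptions for illustration, not anything this series prescribes:

```python
import docker  # docker-py SDK: pip install docker (assumed dependency)

client = docker.from_env()

# Run the container's payload under an unprivileged UID:GID instead of
# root, so compromising the process doesn't directly yield uid 0.
output = client.containers.run(
    "alpine",          # illustrative image
    "id",              # prints the effective uid/gid inside the container
    user="1000:1000",  # arbitrary unprivileged UID:GID for this example
    remove=True,
)
print(output.decode())  # e.g. "uid=1000 gid=1000"
```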

In fact, security is a big differentiating factor between containers and traditional virtual machines. With traditional VMs, each virtual machine runs its own separate instance of the operating system. This creates greater resource overhead and is the reason you need more powerful processors and more memory for VMs than for containers. The upside, though, is that it also provides much stronger isolation. With all of the containers sharing the same OS kernel, there is an obvious security issue: a vulnerability in that shared kernel potentially exposes every container on the host.

Here’s another thing to think about regarding the security of containers. This doesn’t apply if you have in-house devs who create the software that you run in your containers, and if that’s the case, you can skip this paragraph. However, many organizations don’t have that – especially small operations – and so they turn to container repositories to get pre-built containers that fit their purposes. Docker Hub repositories, for instance, allow you to build and share images with others. This is very convenient – but how trustworthy are those images? If you download a container from GitHub and you don’t know and trust the source, you could be asking for trouble.

Basic Linux container security

The most basic of the Linux containerization technologies is LXC (Linux Containers), which is built into modern Linux distributions. This is an operating system virtualization environment that uses the Linux kernel's control groups (cgroups) to isolate each container's use of hardware resources.
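To make that concrete, here is a hypothetical fragment of an LXC container's configuration file showing cgroup-backed resource limits. The container name and the values are purely illustrative, and the key names assume the legacy lxc.cgroup.* syntax of LXC 1.x:

```
# /var/lib/lxc/web01/config (hypothetical container named "web01")
lxc.cgroup.memory.limit_in_bytes = 512M  # cap the container at 512 MB of RAM
lxc.cgroup.cpu.shares = 256              # quarter of the default 1024 CPU weight
lxc.cgroup.blkio.weight = 500            # relative block-I/O priority
```

Limits like these are part of what blunts the resource-exhaustion type of DoS attack discussed below.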

Cgroups and the kernel have been improved over the last few years, and this has improved the security of LXC. Firewalling is now supported, and namespace isolation can be used to prevent grouped processes from being aware of processes in other groups. The cgroups themselves, despite the name, don't let you control access from one container to another, but they do protect against some types of DoS (Denial of Service) attacks, since a runaway or malicious container can't starve its neighbors of CPU, memory, or I/O.

A combination of cgroups and namespace isolation is the primary means of creating the isolated application environment in LXC containers. The security problem inherent in early implementations of LXC was that UID 0 inside the container was equal to UID 0 outside of it. What does that mean? In practice, it meant that it was possible to escape the container and gain root privileges on the host OS. You don't have to know anything about Linux to guess that that's not a good thing.

Since LXC 1.0 and Linux kernel version 3.12, you can use "unprivileged containers," which run under regular user accounts, with the container's internal UIDs mapped onto unprivileged IDs on the host. These containers are unable to gain direct access to the hardware, and using them prevents many security vulnerabilities from being exploitable. Linux tools were also updated to recognize unprivileged containers.
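Here is a sketch of what that looks like in practice, using an illustrative user "alice" and ID range. The host delegates a block of subordinate IDs in /etc/subuid and /etc/subgid, and the container's config maps container IDs onto that range (lxc.id_map is the LXC 1.x spelling; newer releases use lxc.idmap):

```
# /etc/subuid and /etc/subgid (illustrative entries):
#   alice:100000:65536

# Container config: map UIDs/GIDs 0-65535 inside the container onto host
# IDs 100000-165535, so container "root" is an unprivileged host user.
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
```

With that mapping in place, UID 0 inside the container corresponds to UID 100000 on the host, so escaping the container no longer hands an attacker host root.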

Okay, then, what are containerization companies in general, and Docker in particular, doing to increase the security of this promising technology? Cloud vendors such as Amazon and Microsoft listened to their customers' concerns about security and turned their focus to improving that aspect of their services, adding such features as cloud identity and access controls, data encryption both in transit and at rest, multi-factor authentication, key management services, enablement of private dedicated connections, and more. Container vendors have likewise recognized that security is critical to getting their technologies adopted, and they are both shoring up security within their own products and services and partnering with third-party companies that are creating security solutions for containers.

In the next section, we'll home in on Docker's security mechanisms and how they work.

A closer look at Docker Security

You’ll remember from an earlier installment in this series that Docker originally used LXC as its default execution environment but then, in version 0.9, changed that so that now the default is libcontainer. Nonetheless, the process works similarly.

According to Docker’s documentation, each of the containers that you create has a network stack of its own and does not have privileged access to interfaces of other containers that are running on the same host. You can link containers together to allow them to send information to and from each other that describes the sending container. Links work differently depending on whether you’re using user-defined networks or the default bridge network. For more information about links, see the Docker user guide.

The most important thing to be aware of in regard to Docker security is the Docker daemon. The daemon is the component that requires root privileges, and it is because of this that you must be careful about which users are allowed to manage the daemon. From the Docker web site:

Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction.
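To see why that matters, here is a deliberately dangerous sketch (docker Python SDK assumed) of exactly the scenario the documentation describes; don't run this anywhere you care about:

```python
import docker  # docker-py SDK: pip install docker (assumed dependency)

client = docker.from_env()

# Mount the host's entire root filesystem read-write at /host inside the
# container. The container can now read or modify ANY file on the host.
output = client.containers.run(
    "alpine",
    "ls /host/etc",  # proof: listing the host's /etc from inside the container
    volumes={"/": {"bind": "/host", "mode": "rw"}},
    remove=True,
)
print(output.decode())
```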

The security risk here is pretty obvious. Docker has recognized this and has made some changes to help mitigate the risk. Specifically, in version 0.5.2 Docker switched from a TCP socket bound to 127.0.0.1 to a UNIX socket; the TCP socket was prone to cross-site request forgery attacks when Docker was run on a local machine outside of a virtual machine. You can control access to the UNIX control socket with standard UNIX permissions.
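A quick way to audit that, sketched with the Python standard library (the socket path and the "docker" group name are the conventional defaults, not guarantees):

```python
import grp
import os
import stat

# Anyone who can write to the Docker control socket can drive the
# root-privileged daemon, so check the socket's mode and owning group.
st = os.stat("/var/run/docker.sock")
print(stat.filemode(st.st_mode))        # typically srw-rw----
print(grp.getgrgid(st.st_gid).gr_name)  # typically "docker"
```

Treat membership in that group as equivalent to root on the host.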

More security improvements are planned for future versions. Most importantly, the plan is for the Docker daemon to eventually run with restricted privileges, with different parts of the Docker engine running inside of containers.

Because of the inherent security risks associated with the Docker daemon, Docker recommends that when you run it on a server, Docker should be the only service on that server (aside from administrative tools).

There is some good news, though. Unless you have explicitly changed the configuration, Docker containers start up with restricted capabilities, so processes that don't need to run as root are granted only the capabilities they need. Processes inside containers don't have to handle the typical services that require root access, such as SSH, network and hardware management, and configuration tools; those things are handled by the Docker host. Thus, in many cases, containers don't need and don't have full root privileges. You can deny access to many operations, such as filesystem ops, module loading, mount operations, and raw sockets. The default behavior is to deny all capabilities that aren't needed, which substantially improves the security situation.
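If you do need to adjust the defaults yourself, the idea looks roughly like this (docker Python SDK assumed; the image and capability choices are illustrative):

```python
import docker  # docker-py SDK: pip install docker (assumed dependency)

client = docker.from_env()

# Drop every capability, then add back only what the workload needs;
# here, just the ability to bind a privileged (below-1024) port.
output = client.containers.run(
    "alpine",
    "echo running with a minimal capability set",
    cap_drop=["ALL"],
    cap_add=["NET_BIND_SERVICE"],
    remove=True,
)
print(output.decode())
```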

Further, you can harden the Docker host using AppArmor, SELinux, grsecurity, and other similar systems. In Part 4, we'll talk about that and dig into third-party security solutions for Docker.
