Getting started with containers — A brief introduction for IT professionals

Containers are the natural evolution of virtualization: we can now virtualize applications and run them faster, use fewer resources, improve DevOps scenarios, and take advantage of this technology to decouple applications. Containers have been around for a long time in the Linux world, and Microsoft introduced the technology natively in its Windows Server 2016 and Windows 10 releases. Docker is the company behind this revolution, and it contributed its code to create the OCI (Open Container Initiative), the body that defines standards for containers. In this article, we will paint the big picture for those just getting started with containers and introduce some of the key concepts you need to know. Most of the images in this article come from Docker's official documentation, and if you like the idea of containers and want to learn more from the source, I have listed the official Docker documentation at the end of this article to help you.

Virtualization vs. containers

Containers are the natural evolution of virtualization. With virtualization, we share the CPU, memory, and network of a physical server across several VMs (virtual machines), which is great for optimizing hardware utilization and allows enterprises to reduce datacenter and other costs. Keep in mind that every VM carries a full operating system (including licenses), which usually translates into several large disk files.

Using containers, we still have isolation of CPU, memory, network, and processes, but containers share the host's OS kernel, and an image is tiny (a few GBs at most) because we only carry what we really need to run an application. Containers are portable, and they come with high availability, orchestration, and a large ecosystem of partners, which means they are ready for enterprise applications.

There is one key difference between virtualization and containers that IT pros have a hard time accepting early on. IT pros are emotionally attached to their VMs: the VMs have names, sometimes static IPs, and several applications installed on them, and we connect to them to check that everything is fine. When using containers, on the other hand, we shouldn't care about names or IPs, or even connect to the containers (we can, but we should not!). The great advantage of containers is that when we need a change, we simply deploy new containers based on our new image, with no time spent configuring anything on an individual basis.

The shared kernel explains why a Linux container must run on top of a Linux host (physical or virtual), and the same applies to a Windows container: it must run on top of a Windows host.
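
A quick way to confirm which kind of containers a host can run is the standard version command; the Server section of the output shows the OS and architecture of the Docker host:

# The "Server" block lists the host OS (linux or windows),
# which determines the type of containers it can run
docker version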

A container is based on an image, which is a read-only template made up of one or more layers and contains all the instructions to create a container. The container itself is a live instance of an image, with a thin read/write layer on top that allows it to store data. All data stored in a container will persist until the container ceases to exist, similar to a VM. In this architecture, we can have several containers created from the same image, which is similar to parent disks in the virtualization world.
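
As a minimal sketch of that relationship (using the public nginx image purely as an example), we can pull one image and create several independent containers from it:

# Pull the image once; it becomes the read-only template
docker pull nginx
# Create two containers from the same image, each with its own
# thin read/write layer on top
docker run -d --name web1 nginx
docker run -d --name web2 nginx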

A container lives for a single application or service, which is a huge difference from a full-blown VM that runs tons of services from the start.

Getting started with containers

A good example is to run a web server to display some content on a VM and on a container. If we spin up a VM, we have to install the operating system, connect remotely to the VM, configure the service, copy some files for the website, and so forth. Using containers, we just need to have the image configured (in that image we will have a base image and some commands to copy the required content), and after that, it is just a matter of deploying containers. If we need more instances, not a problem! We just create more containers from the same image.
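
As a rough sketch of that flow (the image name, ports, and file path here are just examples, assuming a static website and the public nginx base image):

# Dockerfile: start from a base web server image and add the site content
FROM nginx:alpine
COPY ./site/ /usr/share/nginx/html/

# Build the image once, then deploy as many containers as needed
docker build -t my-website .
docker run -d -p 8080:80 --name web1 my-website
docker run -d -p 8081:80 --name web2 my-website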

Containers make it easy to scale. We can use Docker's own orchestration solution, called Swarm, which consists of a cluster of nodes that can run containers. But when the topic is orchestration and containers, the all-time favorite is Kubernetes, which supports a declarative language to deploy applications across multiple nodes and, most importantly, is able to keep any given application consistent and compliant with its desired state.
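
As a small Swarm example (the service name, image, and replica counts are placeholders), scaling out is simply a matter of asking for more replicas:

# Turn the current host into a Swarm manager
docker swarm init
# Run three replicas of a web service across the cluster
docker service create --name web --replicas 3 -p 80:80 nginx
# Scale the same service out to five replicas
docker service scale web=5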

Docker architecture — high level

I will try my best to simplify the architecture of Docker in this article to start getting you up to speed on this technology. If you need more information, check out the More Information section at the end of this article, where we point you to the official Docker documentation.

When installing Docker on a server or workstation, we get two components: the Docker server (daemon) and the client. All communication between client and server goes through a REST API. The entire package is called Docker Engine; Docker uses a client-server architecture.

The Docker client can be installed on the same server as the Docker daemon, but it can also connect to a remote Docker daemon. To start working with Docker, just type docker at the command line and the output will be the help prompt (the same as docker --help), which lists management commands and commands. Every management command requires an additional command to execute. For example, docker image requires a subcommand; a simple one is to list all existing images, in which case we would use docker image list. Don't worry about this right now, because we are going to have an entire article about deploying and using our first containers, where we will spend a lot of time on the Docker client command-line interface.
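
A few examples of that command structure (all standard Docker CLI commands):

# General help, including the list of management commands
docker --help
# Help for the "image" management command and its subcommands
docker image --help
# List all images stored locally
docker image list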

The Docker daemon is responsible for receiving all communication from the Docker client through the REST API and for managing the Docker objects: images, containers, networks, and volumes.
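
Each of those object types can be listed from the client:

# List each type of Docker object managed by the daemon
docker image ls
docker container ls -a
docker network ls
docker volume ls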

Another component in the Docker world is the registry, where we can store and share images. The images can be public or private. For beginners, the first registry will be Docker Hub (hub.docker.com, which requires registration but is free). In corporate environments, a private registry is usually recommended, and nowadays the best place to host one is the cloud. Using Microsoft Azure, for example, the administrator can take advantage of the managed service called Azure Container Registry, which has great features. (We explored this service in this article on TechGenix.)
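
As a sketch of the typical workflow against a private registry (the registry and repository names below are placeholders; an Azure Container Registry login server usually looks like myregistry.azurecr.io):

# Authenticate against the registry
docker login myregistry.azurecr.io
# Tag a local image with the registry/repository name
docker tag my-website myregistry.azurecr.io/my-website:v1
# Push the image so any authorized host can pull it
docker push myregistry.azurecr.io/my-website:v1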

In the image below we can see the client executing a few commands: build, pull, and run. They will interact with the Docker daemon and registry to pull images and create containers.

[Image: the Docker client commands (build, pull, and run) interacting with the Docker daemon and a registry]
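
In command form, that interaction looks roughly like this (the image names are illustrative):

# Build an image from a Dockerfile in the current directory
docker build -t myimage .
# Download an image from the registry without running it
docker pull ubuntu
# Create and start a container, pulling the image first if needed
docker run -d myimage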

I hope I was able to give you a few ideas about getting started with containers using Docker. We are going to have more articles going over some key areas of this technology, so stay tuned!

Getting started with containers: More information
