From the early MS-DOS days through Windows 3.1, 95, and 7, operating systems have come a long way from being just a command prompt. Though Linux burst onto the scene more than two decades ago, flavors like Mint and Fedora have only recently gained popularity among personal users, probably owing to the number of drivers, features, applications, and Microsoft-compatible software they bundle in.
Unlike the feature-loaded DVD images of Linux that we’re used to, less seems to be more when it comes to choosing the best OS for your containers, and a number of companies are producing stripped-down versions for just this purpose. Stripping an OS down to the bare essentials isn’t new, though getting it to work across an entire datacenter is. Thanks to containers, popularized by Docker, we can now have one operating system working across multiple nodes, which effectively makes it possible to address an entire network as one single computer.
Clustering and the distribution of tasks
Let’s look at a few more benefits these stripped-down, Docker-centric operating systems have over the feature-rich products we’re used to. Clustering is probably one of the main advantages. Clustered computing differs from conventional grid computing in that every node in a cluster runs the same software and can take on the same tasks, so jobs can be split across nodes and finished faster. This kind of distribution of processing power leads to much higher efficiency and a smaller resource footprint. And because every node can do every job, the nodes all act as backups for one another, which for the enterprise means higher availability and less downtime. Clustering not only helps reduce costs, but also adds to the convenience factor: where system administrators were once expected to control an entire datacenter across multiple platforms, it now takes just one system and one platform to control an entire cluster of computers.
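The idea of splitting identical jobs across interchangeable workers can be sketched in a few lines of Python. This is only an illustration of the scheduling pattern, not cluster software; the worker pool stands in for cluster nodes, and the job itself is invented:

```python
from concurrent.futures import ThreadPoolExecutor

def process(job):
    """Stand-in for real work; every 'node' runs the same code."""
    return job * job

jobs = list(range(10))

# Split the jobs across four identical workers. Because each worker
# runs the same code, any of them could pick up another's share --
# which is what gives a cluster its built-in redundancy.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, jobs))

print(results)  # every job completed exactly once, order preserved
```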
Dealing with inconsistent environments
Updating applications is something we just can’t get around to every day, even though many upgrades make our applications faster, lighter, cheaper, and more efficient. In a world where it’s all about who gets their updates to market quicker and who has more releases, it’s hard for system administrators to keep up, and more often than not we end up running older versions of software than we would like. Inconsistent environments, in turn, lead to inconsistencies in testing and development. Atomic updates seem to be the answer to this problem: they get around interdependency issues by updating the OS as a single unit instead of the conventional package-by-package approach. This is a big advantage of Docker-centric operating systems, because it ensures all dependencies are always up to date, eliminating inconsistencies between environments.
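The difference between the two update models can be sketched in a few lines of Python. The package names and versions here are invented for illustration; the point is only where a mid-update failure leaves the system:

```python
# Current system state: package name -> installed version.
system = {"libfoo": "1.0", "app": "2.1"}

# A proposed update; None marks a package that fails to install.
update = {"libfoo": "2.0", "broken-pkg": None, "app": "3.0"}

def update_package_by_package(state, update):
    # Conventional model: apply each package in turn. A failure
    # part-way through leaves the system in a mixed state.
    for name, version in update.items():
        if version is None:
            raise RuntimeError(f"{name} failed to install")
        state[name] = version

def update_atomically(state, update):
    # Atomic model: stage the complete new state first, then swap it
    # in as a single unit -- either everything updates or nothing does.
    staged = dict(state)
    for name, version in update.items():
        if version is None:
            raise RuntimeError(f"{name} failed to install")
        staged[name] = version
    state.clear()
    state.update(staged)

try:
    update_package_by_package(system, update)
except RuntimeError:
    pass
print(system)  # {'libfoo': '2.0', 'app': '2.1'} -- half-updated, inconsistent

system = {"libfoo": "1.0", "app": "2.1"}
try:
    update_atomically(system, update)
except RuntimeError:
    pass
print(system)  # {'libfoo': '1.0', 'app': '2.1'} -- untouched, still consistent
```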
Another advantage of operating systems designed specifically for containers is the ability to roll back the OS if an update is unsuccessful. This clearly distinguishes them from traditional operating systems, where updates are permanent and cannot be reversed automatically. Docker-centric operating systems do this by keeping a copy of the original OS on hand so that a rollback is quick and efficient. Some, like CoreOS, achieve this with multiple partitions, installing the update on only one; this makes for a relatively simple rollback if required. Like CoreOS, a number of stripped-down versions of Linux have been developed specifically with containers in mind.
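The dual-partition (A/B) scheme can be sketched roughly as follows. This is a simplified model, not CoreOS’s actual implementation; the partition names and image labels are invented:

```python
class DualPartitionOS:
    """Toy model of an A/B update scheme: updates always land on the
    passive partition, so the running system is never modified in place."""

    def __init__(self):
        self.partitions = {"A": "os-1.0", "B": None}
        self.active = "A"

    def _passive(self):
        return "B" if self.active == "A" else "A"

    def apply_update(self, image, boot_ok=True):
        # Write the new image to the passive partition, then reboot into it.
        passive = self._passive()
        self.partitions[passive] = image
        self.active = passive
        if not boot_ok:
            # If the new image fails to boot, fall back to the other
            # partition, which still holds the old, known-good image.
            self.rollback()

    def rollback(self):
        self.active = self._passive()

host = DualPartitionOS()
host.apply_update("os-2.0")
print(host.partitions[host.active])  # os-2.0

host.apply_update("os-3.0", boot_ok=False)
print(host.partitions[host.active])  # os-2.0 -- rolled back
```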
Here’s a look at the most popular ones and what they bring to the enterprise:
CoreOS
This little operating system is supported by most major cloud providers and automatically configures the container runtime on each CoreOS machine. Apart from updating the OS automatically, so you never have to worry about running older versions, it comes bundled with a host of tools for running and managing Linux containers. Because it is open source, CoreOS also has a large community of developers constantly contributing to make it faster, lighter, and more efficient. Also available is a commercial product called Tectonic, which combines Kubernetes with the CoreOS stack and features a management console, corporate SSO integration, and CoreOS’s own enterprise-ready container registry.
RancherOS
Where RancherOS distinguishes itself from the crowd is that it is designed solely to run Docker: the Docker daemon is actually the first process the kernel starts on boot. RancherOS also runs two Docker daemons, one for the user and one for the system. This makes for a perfect safety net: if you mess up while managing your user containers, your system containers remain untouched. Another distinguishing factor is that this OS uses Docker to launch all system services and manages them as if they were Docker containers. This lightens the load on the OS considerably and makes life a lot easier for the developer.
Boot2Docker
This distribution is based on Tiny Core Linux; apart from weighing in at less than 25 megabytes, which puts it in the “ultra-lightweight” category, it boots in under five seconds and runs entirely from RAM. It also sidesteps the incompatibility issues on Mac OS X and Windows systems, making it a great choice for users running either OS.
Snappy Ubuntu Core
Ubuntu seems to be the first choice for most folks when it comes to running containers; studies show it is seven times more popular than other operating systems for this purpose. This particular distribution specializes in isolation: a containment mechanism isolates each application from the rest and keeps all related files read-only. This adds a much-needed layer of security, making it a better choice for production environments than most other distributions.
Photon
Unlike most stripped-down, Docker-centric operating systems, which are optimized for Docker, Photon is optimized for use with vSphere, though it does support Docker and other container technologies. It also distinguishes itself from the crowd with life-cycle and centralized identity management. This makes it a good choice for production, especially for VMware users already familiar with vSphere.
Red Hat Atomic Host
From one of the oldest and biggest Linux distributors, this mini-OS doesn’t disappoint: it gives you access to a host of well-developed Red Hat tools and applications that can be used with it. Designed specifically to deploy and manage Docker containers, this distribution is also popular for its extremely simple rollbacks, made possible by replacing traditional yum updates with rpm-ostree, which makes it possible to roll back to a previous tree.
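The tree model behind rpm-ostree can be sketched roughly like this. The tree IDs and the two-deployment limit are illustrative simplifications: each update commits a complete new filesystem tree while keeping the previous one around, so a rollback just re-points the boot entry rather than reversing individual packages:

```python
# Deployed trees, newest first; index 0 is what boots next.
deployments = ["tree-aaa"]

def upgrade(new_tree):
    # An upgrade deploys a whole new tree alongside the current one.
    deployments.insert(0, new_tree)
    del deployments[2:]  # keep only the current tree and one previous

def rollback():
    # Rolling back swaps which deployed tree boots next; the packages
    # inside each tree are never touched individually.
    deployments[0], deployments[1] = deployments[1], deployments[0]

upgrade("tree-bbb")
print(deployments[0])  # tree-bbb will boot
rollback()
print(deployments[0])  # tree-aaa boots again
```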
Microsoft Nano Server and OSv are a couple more noteworthy mentions in the lightweight category. Although OSv is still in beta, it is said to offer unusually short and predictable latencies. Nano Server, on the other hand, targets a different market: mainly Windows developers and those using Microsoft Azure.
There are a number of ways you can deploy and host your containers, and as long as you’re constantly learning and developing, choosing the right OS is largely a matter of preference. Containers are definitely the way of the future, and as long as you’re on the right train, you can switch seats until you find the one that’s most comfortable. Considering that the technology is still in its infancy, don’t get too attached to one particular OS or application, and keep an open mind. This is no place to get comfortable; entire communities of developers are coming up with something new every single day.