Few announcements have been met with as much confusion and criticism as Docker’s recent unveiling of the Moby Project, despite various representatives at Docker doing their best to convince everyone that nothing had changed. What made things more confusing was that the Moby move came on the heels of the EE and CE split, while users were still getting used to the idea of separate Enterprise and Community editions. The motive behind the Moby Project, however, really isn’t that confusing: in Docker founder Solomon Hykes’ own words, it is to provide a framework for assembling specialized container systems without “reinventing the wheel.” The emphasis here is on “reinventing the wheel,” on working smart as opposed to working hard, and on not repeatedly doing things that have already been done.
Docker or Moby Project: What’s in a name?
Before we go into detail about how the Moby Project saves time and effort, let’s get some clarity about what exactly we’re referring to when we say “Docker,” which is probably where a lot of the confusion resides. Docker was originally the name of the open-source project released by dotCloud, which allowed applications to be packaged with their dependencies and run regardless of the underlying operating system or distribution. This savior from dependency hell was quick to garner the attention of the entire enterprise world for the simple reason that prior to Docker, the portability of an application was never a guaranteed thing.
Now, while some people call it an engine, others refer to it as a platform, and still others call Docker a tool. According to the official website, Docker is a platform with two main components: the Docker Engine and Docker Hub. The rest of the platform consists of a huge ecosystem of interchangeable tools and applications. Docker Hub is a public repository of Docker images, much like GitHub is a repository of source code, and there’s a paid enterprise version of it as well. Docker Engine is what we’re going to look at in a bit more detail. This “engine” consists of a daemon that listens for API requests and manages Docker objects, a REST API to control the daemon, and a CLI client that talks to the daemon through that API. Now, while a lot of people refer to the entire engine as Docker, others refer to the client as Docker, and still others refer to the registry as Docker. Yes, it gets confusing.
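To see how these pieces fit together, you can skip the CLI client entirely and query the daemon’s REST API yourself. This is a sketch, assuming a local daemon listening on the default Unix socket path; if no daemon is running, it just says so:

```shell
# Talk to the Docker daemon's REST API directly over its Unix socket --
# this is the same API the `docker` CLI client uses under the hood.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
    # The equivalent of `docker version`, minus the CLI client:
    curl --silent --unix-socket "$SOCK" http://localhost/version
else
    echo "No Docker daemon socket found at $SOCK"
fi
```

On a host with the daemon running, the same socket serves endpoints like `/containers/json` and `/images/json`, which is exactly what `docker ps` and `docker images` call behind the scenes.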
So just to clarify, the Docker client, or CLI, is what most users use to send commands to the daemon through the API. The daemon in turn uses a container runtime (like runC) to communicate with the kernel. Most people who call Docker a “tool” are referring to the client or the CLI, which is in effect a tool, but if you really want to get technical, they’re all tools built around core Linux kernel features, namespaces and cgroups, that together give us what we call containers. In fact, if you have the technical know-how, you can actually do everything that Docker does straight from the Linux command line, and all these CLIs and APIs are nothing but translators for people who can’t speak Linux kernel.
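You can verify that these kernel features exist independently of Docker without installing anything. Every Linux process already lives inside one namespace of each kind, and a “container” is essentially a process placed into fresh ones (tools like `unshare` from util-linux can create them by hand). A minimal look, assuming any Linux host with /proc mounted:

```shell
# Every Linux process already belongs to one namespace of each kind
# (pid, mnt, net, uts, ipc, ...). No Docker required to see them --
# inspect the namespaces your current shell is running in:
ls -l /proc/self/ns
```

Each entry is a link to a kernel namespace object; two processes in the same container point at the same ones.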
DevOps through detonation
Now, let’s get back to when dotCloud released the open-source project called Docker. About 10,000 developers tried it within the first month, and companies like AWS and Red Hat were offering commercial support within the first year. Within 15 months they had 8,741 commits from 460 contributors, 2.75 million downloads, over 14,000 “Dockerized” apps, and feedback from tens of thousands of users. DevOps ideology goes hand in hand with the open-source model, and that ideology is to work on what you’re best at while the mundane stuff gets automated. What better way to automate than to break down your solution into key open-source components and donate them to the community?
This is a lot like a NASA rocket dropping its boosters once they’ve done their job. The list of dropped boosters includes libnetwork, Notary, runC, HyperKit, VPNKit, DataKit, SwarmKit, InfraKit, LinuxKit, and the most recent donation to the CNCF, containerd. One question that comes to mind is: if it was all open source in the first place, what difference does it make to break it down and donate it? The difference is that though it was all open source, each release was still managed by Docker while they probably had a hundred different things they would rather work on. By breaking it down and donating core components, they free themselves from having to manage those bits of code on their own.
This separation from their open-source components gives them the time and manpower to develop solutions for their enterprise customers, with the added advantage of having their base code continuously developed and improved by the open-source community. This is probably around the time they decided to have a Docker Community Edition, which is open source and managed by Docker, and an Enterprise Edition, where they are free to have fun and experiment with solutions without the criticism of the entire open-source community.
About cars and reinvented wheels
However, with the breakdown and donation of each component to the community, Docker itself was finding it difficult to pull all these open-source technologies together to build specialized systems. This was largely because different teams were pulling different components of the project to build different specialized systems at the same time. While one component was being modified for one specialized system, the same component had to be “unmodified” or completely rebuilt for another. It was like having one master file that everyone was editing to their own specifications, and saving changes to, at the same time. This led to a lot of duplicated effort, and it is why Solomon Hykes referred to the situation as reinventing the wheel.
The inspiration for the solution came from the automotive world: Solomon Hykes took a page out of the General Motors book when he decided to have one big, blank “chassis.” This chassis would be completely boring, with no customization whatsoever, and, if cars were container systems, it could be used to build sports cars, saloons, hatchbacks, family estates, or 4x4 SUVs. This big blank container chassis was nicknamed the Moby Project and is made up of all the open-source code that’s used in Docker, with none of the proprietary, enterprise-specific stuff that Docker builds on top of it. This solved the problem of having to build every specialized container system for every customer from scratch.
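LinuxKit, one of the donated “kit” components, gives a feel for what building on this chassis looks like: a specialized container-based system is described declaratively as a stack of components, each of which is itself a container image. The fragment below is purely illustrative; the image names follow LinuxKit conventions but the `<tag>` values are placeholders, not a working configuration:

```
kernel:
  image: linuxkit/kernel:<tag>      # which kernel to boot
init:
  - linuxkit/init:<tag>             # minimal init system
  - linuxkit/runc:<tag>             # container runtime
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>    # one-shot setup containers
services:
  - name: sshd
    image: linuxkit/sshd:<tag>      # long-running service containers
```

Swap out the services and you get a different specialized system, all built on the same boring chassis.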
The whale in the room
With increasing demand from users and the explosion of ecosystem components and participants, Docker decided to make this “container chassis,” which it had already used to solve its own problems, available to everyone by moving all the relevant files to a GitHub repository called Moby Project. This move gives everyone who wants to build a specialized container system a great platform to do it on, while not disturbing other running projects like Docker CE or EE, which are both now based on the Moby Project. So while the GitHub redirection of Docker visitors did cause some confusion, at its core the Moby Project is just everything you need to build a specialized container system without duplicating previous efforts.
Being in the spotlight isn’t necessarily always the best thing for a young software company. When Docker introduced Swarm, the response from the community made it evident that the open-source project called Docker had outgrown its creators. What do you do when something you built can’t be controlled anymore? You rename it after a whale, free it like Willy, and make it somebody else’s problem. This accomplishes two things: they get to keep the name Docker on the platform that they continue to build and improve upon, and they get to step out of the spotlight, away from the criticism of the entire community. However, Captain Ahab (Kubernetes) may have other plans involving rkt-shaped harpoons that fit in CRI sockets.
Photo credit: Wikimedia