It’s hard to think of DevOps today without the containers that power it, but there was a time when DevOps existed without containers. We say “existed” because you really can’t call that living, trying to practice DevOps in the middle of virtual machine sprawl. Though VMs provide both segregation and organization, steady growth means you will eventually have more servers than you can manage.
With VMs, developers mostly build their applications on staging servers designed to match the production hosting environment and then finally migrate code from staging to hosting. The Ops team then hosts it by creating a dedicated VM for each individual service, which includes manually filing tickets for things like monitoring and backups. This makes a service’s lifecycle “labor intensive,” since each one needs a lot of personal attention, a bit like the “pets” in the popular “pets vs. cattle” analogy used to describe the different types of servers in the cloud.
Pets vs. cattle analogy
The pets vs. cattle analogy was popularized by a slide deck titled CERN Data Centre Evolution, which details the scientific organization’s 12,000-odd servers and its plans to manage them more efficiently. Anyone who’s had more than one pet will know how different they are from each other in their likes, dislikes, behaviors, and preferences. Now imagine having to look after a thousand such “pets” -- or, let’s say, a hundred thousand. That probably isn’t humanly possible, but if those pets were cattle, a few people handling a herd of a few thousand isn’t unheard of.
The difference between pets and cattle is that cattle are standardized: you don’t need to name them; you can simply number them. If one dies, your life doesn’t stop; it gets replaced by another. Cattle can be managed as a single herd and can share food and resources with one another. They’re also largely self-sufficient and can survive in the wild on nothing but grass and water. DevOps employs lean methodologies that allow servers to be treated like cattle, and containers are what make that actually possible.
DevOps automation — Better together
Containers offer a standard packaging format and runtime for running any application with all its dependencies, regardless of what language it was written in or how it’s configured. This standardization was a major tipping point for DevOps adoption; along with the rise of microservices, it allows developers to push out new cloud-native applications faster than ever.
While DevOps is all about reducing manual hand-offs between development, operations, and customers, automation is what makes that practical without an army of humans to look after the containers. This is because, while containers make developers’ lives easier, they create complexity for the IT operations teams that have to keep those apps running. Another reason DevOps needs automation is that humans simply can’t keep up with the labor that goes into creating and managing containers on hosts.
Automation is the best way to accomplish monotonous, gigantic tasks that are usually “human-gated.” One example is monitoring how containers are created and destroyed in response to events, especially because individual containers often aren’t around long enough for a manual ticketing process to be worthwhile.
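As a rough illustration of that kind of automated lifecycle monitoring, the sketch below tallies container create/destroy events instead of having a human file tickets. The event stream is a canned, hypothetical list to keep the example self-contained; a real watcher would subscribe to a container engine’s event feed (for example, the JSON stream from `docker events`).

```python
import json
from collections import Counter

def watch(events):
    """Tally create/destroy events and track which containers are still alive."""
    counts = Counter()
    alive = set()
    for raw in events:
        event = json.loads(raw)        # each event arrives as a JSON line
        counts[event["status"]] += 1
        if event["status"] == "create":
            alive.add(event["id"])
        elif event["status"] == "destroy":
            alive.discard(event["id"])
    return counts, alive

# Simulated event stream; container IDs are invented for the example.
stream = [
    '{"status": "create", "id": "web-1"}',
    '{"status": "create", "id": "worker-1"}',
    '{"status": "destroy", "id": "worker-1"}',
]
counts, alive = watch(stream)
print(counts["create"], counts["destroy"], sorted(alive))
```

A monitor like this can feed alerts or autoscaling decisions directly, with no ticket queue in the loop.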
Automation is all about finding tedious tasks that could be automated and freeing the human minds being drained on them. This helps you get the most out of your team: your money is now being spent on the scarce commodity of creativity rather than on time. A good DevOps team should spend its time and energy on the things that only humans can do and leave the rest to the machines.
This contributes to better sync among the teams and, eventually, faster and more accurate deployments and releases. Automation and container adoption parallel one another, and at some point they become inextricably intertwined: management of the infrastructure itself becomes the responsibility of automation, not humans, and humans focus their efforts on the services inside the containers.
Test automation with containers
Continuous integration (CI) is a software development practice in which small adjustments to the underlying code in an application are tested every time a team member makes changes. Continuous delivery (CD) is the process of getting new builds into the hands of users as quickly as possible. Though it sounds simple, to achieve what people call true CI/CD, you need a standard unit of deployment to build an end-to-end automated DevOps process that spans multiple disparate tools. CI/CD is the key to automation and the secret to an effective pipeline. Automated testing lies at the heart of that equation and is facilitated by containers, which allow us to integrate tools from different phases of the DevOps cycle.
Automated tests are without a doubt the most important part of any CI/CD pipeline. They have to be fast, have good coverage, and produce no erroneous results. Automation delivers results faster than a human could manage manually, and it makes it possible to run tests in parallel, which speeds up the process even more. Speed and the avoidance of delays are essential to delivering software continuously.
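The parallelism point is easy to see in miniature. In this hedged sketch (the test names and the fixed sleep are invented stand-ins for real test work), four independent tests run concurrently, so the suite finishes in roughly the time of the slowest test rather than the sum of all four.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(case):
    time.sleep(0.1)        # stand-in for real test work
    return case, "pass"

cases = ["test_login", "test_checkout", "test_search", "test_profile"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    # Run all four tests at once; serially this would take ~0.4s.
    results = dict(pool.map(run_test, cases))
elapsed = time.perf_counter() - start

print(len(results), "tests finished in", round(elapsed, 2), "s")
```

The same idea scales up in real pipelines by sharding test suites across container instances rather than threads.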
Automated testing is also more consistent, which is key to ensuring uniform behavior of software across all stages of the delivery chain. Automated testing does this by mitigating the chance of human error by removing the human element. Automated testing also enables agility, which makes it possible to adjust tools and frameworks quickly without disrupting the rest of the system. These changes could be based on needs or just because better technology is available.
Manual testing leaves a lot to be desired in terms of agility, especially since it’s, well, manual. Every change in your CI requires an equal and opposite manual change in testing, and test suites need to be rewritten or reconfigured every time the CI/CD toolchain is updated. With automated tests, however, most of the configuration is done automatically, and it’s easier to migrate between technologies.
Containers make automated testing practical by providing test environments that can be spun up or torn down instantly with a minimal resource footprint. Containers also provide fidelity, in the sense that your test environment can be virtually identical to your hosting environment. Because container environments can be allocated on demand, the cloud is an ideal place to host them.
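That spin-up/test/tear-down pattern looks like the following sketch. The environment here is only simulated with a dict; in practice the setup and teardown steps would call out to a container runtime (for example, `docker run --rm` for an ephemeral container), and the image name is invented for illustration.

```python
import contextlib
import uuid

@contextlib.contextmanager
def ephemeral_env(image):
    """Yield a fresh, disposable test environment, then always tear it down."""
    env = {"id": f"{image}-{uuid.uuid4().hex[:8]}", "running": True}  # stand-in for container start
    try:
        yield env
    finally:
        env["running"] = False   # stand-in for container teardown

with ephemeral_env("app-under-test") as env:
    # Tests execute here against a clean environment every run.
    assert env["running"]
print(env["running"])  # → False (environment is gone after the block)
```

Guaranteed teardown is the point: every test run starts from a known-clean state, which is exactly the fidelity and repeatability the paragraph above describes.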
DevOps has been the key factor in a very noticeable change in how things are done across the enterprise. That shift, from simply following the way things have always been done to being aware that there is a bigger picture, has been facilitated by containers in more than one way. Containers let applications speak one language and integrate in a way that allows all departments to work together as a team right from the get-go.
As mentioned earlier, you can’t have one without the other: as soon as you decide to go DevOps, DevOps automation will soon follow. It is in our nature to avoid the drudgery of life and keep our minds occupied with interesting things, and while containers give us the tools to do that, automation frees us to innovate even more.