With services like Netflix, Uber, YouTube, and Facebook, most people are used to apps that respond quickly, work efficiently, and are updated regularly. Patience is no longer a virtue, and thanks to apps like the ones mentioned above, when people use applications, they expect blistering speeds and uninterrupted service. If you do not provide that, users aren’t exactly starved for choice; it takes less than a minute to delete an app and download something else as a replacement.
The quick or the dead
With microservice architecture where it is today, the gap between the good, the bad, and the ugly is now enormous in terms of being able to deliver high-quality software with speed and efficiency. In fact, the difference is so huge that Bob Wise, CTO of Samsung’s cloud native computing team, demonstrated in a talk that high-performance organizations have 200 times more frequent deployments, 24 times faster recovery from failure, three times lower change failure rates, and 2,555 times shorter lead times than low-performance organizations. That’s a huge difference, to say the least, and if your organization is on the lower end of that spectrum, the time to panic was yesterday.
What exactly is microservice architecture, and how is it giving early adopters such an edge over everyone else? Microservice architecture is all about breaking it down and spreading it out. Uber reportedly has anywhere from 2,000 to 3,000 microservices running at any given time. A microservice is basically code written so that it can be reused across an organization, or at least abstracted away, to make it easier for developers to build new features or services without having to start over from the ground up. This pretty much means abstracting away all the “plumbing” and getting right down to business in a much cheaper and more efficient way.
Microservice architecture vs. traditional APIs
As an example, let’s say department “A” of an organization has built a service that allows users to video chat, while department “B” lets them play online games with each other. With the traditional API approach, both departments would have their own backend processes for billing, user management, scheduling, and content, among others. Now, though all these processes work well together inside a particular department, they’re customized to be solution specific, so A can’t use B’s billing process, B can’t use A’s content providers, and so on. What this boils down to is duplication of effort, which is a big no-no in today’s landscape of users with high expectations.
In stark contrast, if the above scenario were to exist in a microservice environment, everything apart from the actual code that runs the application would be abstracted into separate microservices that are generic and can be used across all departments and services. In truth, if you take away all the “support staff,” what should remain is just the business function itself. This not only frees developers from worrying about dependencies and backend functions so they can focus on code, but also makes it a lot easier to build testing environments, since the backend components are now identical across the organization.
Containers are just the tip of the iceberg
This is why high-performance organizations are so far ahead of low-performing ones, and the reason we’re using the word “organizations” here is that it’s an organizational change that gets you there, not a technology change. While this approach does make it possible to build, test, and deploy new features and services without impacting the entire product, it’s really not as simple as taking your monolith and shoving it into containers. A lot of people talk about containers and microservices and how the two go together, but containers are just the fundamental unit of deployment. To truly adopt microservice architecture and remain relevant, companies will have to change not only what they’re deploying, but also the people who are doing the deploying.
Bob Wise asked how many people had heard of Conway’s Law and was pleasantly surprised when a few hands went up. Conway’s Law states that system architecture mirrors the communication structure of the organization that builds it, so, in short, a traditional centrally managed, hierarchical organization is going to have a hard time with microservice architecture. What you need is lots of independent teams building independent parts independently.
He also mentioned “weapons for the last war,” a fitting analogy, since the way modern counter-terrorist forces have evolved illustrates the point well.
New weapons for a new war
To counter the guerrilla-style warfare that most terrorist organizations use, modern antiterrorist teams have moved away from orthodox platoon-and-battalion style operations. The modern, post-9/11 approach is all about lots of small, well-equipped strike teams spread across large areas with constant backup, communication, and even information from drones and satellites. Where there was once one warship, there are now hundreds of armored boats with radar, satellite phones, and rocket launchers, and that’s what you need to do with your monolith.
If we look at the above analogy closely, it’s not really the weapons that have changed, but rather the approach. That approach, at its core, comes down to how quickly you’re able to adapt to changes in your environment, or how “rapid” your rapid response team really is.
Embracing change with automation
Bob Wise said high-performance organizations embrace change, but really high-performance orgs “rapidly embrace change with automation.” This is done by having a lot of independent teams running independent, simple parts with continuous integration and continuous delivery (CI/CD), automated QA, and automated security. He also mentioned that, the way things stand right now, DevOps is quickly becoming a word for overwhelmed “generalists” who do a bit of everything. That’s because these companies are still running their business like a centrally managed hierarchy instead of practicing “Cluster-Ops,” which means lots of teams of independent operational specialists. He also touched on the fact that people use the term CI/CD so often that many are beginning to think CI means CD.
CI is not CD
Just because you have CI doesn’t mean you can say you have CI/CD; not until your deployment is automated. In other words, not only does CI with manual deployment not count, it’s actually going about things in the opposite direction. To build this lean, mean fighting machine, you need to start from the ground up, and the ground is Ops, not Dev. You can’t have an agile Dev team without an agile Ops team to follow through, and that starts by containerizing existing applications within the deployment tooling first, so it has no effect on the Dev team.
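That first containerization step can be as unintrusive as wrapping the existing application unchanged. Here is a minimal sketch of a Dockerfile, assuming a Python monolith with an `app.py` entry point and a `requirements.txt` (both hypothetical names):

```dockerfile
# Hypothetical sketch: package the existing monolith as-is, so Ops can
# automate its deployment without changing the Dev team's workflow.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the unmodified application code
COPY . .
CMD ["python", "app.py"]
```

Nothing about the application itself changes; only how it is packaged and shipped does, which is exactly why Ops can adopt this before Dev has to touch anything.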
Kubernetes, the Linux of the cloud
Bob Wise is also part of the Cloud Native Computing Foundation, so it’s not surprising that he recommended Kubernetes as the best way to deploy new containers, and there’s hardly anyone who would argue with him about that right now. He even had an interesting slide showing how Kubernetes is part of the CNCF, which is part of the Linux Foundation, and how Kubernetes is quickly becoming the Linux of the cloud.
An ideal system would obviously be one where deployment artifacts can be updated and deployed consistently without error. That’s when you can finally start making small, incremental Dev updates and really begin the DevOps journey to becoming a high-performance organization.
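In Kubernetes terms, that consistency comes from declarative deployment artifacts. A minimal sketch of a Deployment manifest, where the service name, image, and replica count are all hypothetical:

```yaml
# Hypothetical: three identical replicas of one microservice, rolled out
# and updated declaratively by Kubernetes rather than by hand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: billing
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
        - name: billing
          image: registry.example.com/billing:1.0.0
          ports:
            - containerPort: 8000
```

Applying an updated manifest (for example with `kubectl apply -f`) then rolls the change out replica by replica, which is what makes those small, incremental updates repeatable rather than error-prone.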
Building from the ground up gives you the ability to upgrade a piece at a time without overwhelming yourself. When you know right off the bat that adopting microservice architecture is probably going to be the most critical, risky, and technically demanding project your organization has ever undertaken, it’s a good idea to do it right the first time.
Photo credit: Shutterstock