Network security was simple back in the days of monolithic, legacy apps. Monolithic apps tend to be self-sufficient and don't require much interaction with external apps or services. Even when they do communicate externally, the setup is fairly simple: developers define the various parts of the app, and IT places firewalls around each major component (database, server, and so on). This model has serious drawbacks. For starters, it creates a tussle between Dev and IT. Developers spend time filing a ticket that describes the app they've built, and IT spends days configuring firewalls to match. Whenever a change is needed, developers raise another ticket, and IT takes still more time. Developers want to release new features faster with fewer bottlenecks from IT, and IT wants to be sure that whatever reaches production is secure. The result is a blame game that leaves neither Dev nor IT happy.
A new approach to security
Fast forward to 2017, and this problem has only been exacerbated by the emergence of new technologies like containers, microservices, and the DevOps methodology. Today, application lifecycles are highly dynamic, and the underlying infrastructure that powers them is short-lived. Applications must scale massively in a matter of seconds to cope with traffic spikes. As a result, container lifespans are much shorter than VM lifespans: one survey puts the average lifespan of a container at 9.25 hours, drastically shorter than VMs, which often run untouched for months or years. In fact, it's not rare to see containers churned in a matter of minutes.
For example, if an application supports a TV ad, it may receive a spike of millions of visits in the first five to ten minutes after the ad airs. During this window, its traffic load may be 1,000 times higher than usual. To support this volume, the app could provision new infrastructure for a few hours or a day and retire it once traffic returns to normal. But that approach is expensive and can leave you with thousands of cloud instances sitting idle 95 percent of the time. Contrast this with how a containerized app handles the same load: it automatically provisions thousands of container instances in real time based on demand, then kills those instances as soon as demand drops, wasting no resources. This is how infrastructure should ideally be handled.
However, when working with such dynamic applications and infrastructure, keeping them secure at this scale is a complex challenge. A single perimeter firewall around the entire application won't work: it can't be updated fast enough, and even if it could, it is a single point of failure. If an attacker breaches the perimeter, they have access to the entire system.
What’s needed is a solution that can secure each container instance, and do it without requiring manual intervention at every step.
Kubernetes — The first piece of the puzzle
Dynamic scaling of containers is handled by a container orchestrator like Kubernetes, which automates the provisioning, configuration, and management of containers at scale. It follows that a security solution for containers should take orchestration into account.
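To make the orchestration step concrete, here is a minimal sketch of a Kubernetes HorizontalPodAutoscaler that grows and shrinks a deployment with demand. The deployment name and the numbers are illustrative assumptions, not taken from this article:

```yaml
# Illustrative example: scale a hypothetical "web" Deployment between
# 2 and 1000 replicas based on average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 1000
  targetCPUUtilizationPercentage: 70
```

With a spec like this, Kubernetes adds replicas when average CPU use exceeds the target and removes them when load falls, which is exactly the kind of churn a container security solution has to keep pace with.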
Kubernetes ships with a set of sensible defaults for container management. One default that matters for security is that it assigns an IP address to every pod in a cluster. IP addresses have been central to security since the early days of the Internet: every computer that connects to the web has one, every enterprise server has one, and network security has long been built on them. What containers need is a network solution that can integrate with Kubernetes, provide IP-based security, and do it at scale. That's what Project Calico brings to the table.
Project Calico: Policy-based security
Project Calico scales network security along with container creation. Even if an app scales to thousands of containers in seconds, Calico can configure security for them seamlessly. It does this by applying a set of policies that govern every component of the system. Using these policies, Calico can be configured so that services and instances talk to other services and instances only when needed. This reduces the complexity of the network and leaves fewer opportunities for breaches.
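As a sketch of what such a policy can look like, the following Kubernetes NetworkPolicy, which Calico can enforce, allows only frontend pods to reach backend pods. The labels and port here are hypothetical, chosen for illustration:

```yaml
# Illustrative example: only pods labeled app=frontend may reach
# pods labeled app=backend on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 8080
```

Because the policy selects pods by label rather than by listing addresses, it keeps working no matter how many backend instances the orchestrator spins up or tears down.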
Project Calico uses IP addresses to identify container instances and builds policies around them. Since this mirrors how Kubernetes itself works, Calico is a natural match for securing containerized apps that use Kubernetes as their orchestrator. By integrating with Kubernetes, Calico is aware of infrastructure changes and scales its security policies accordingly, without anyone having to reconfigure anything each time the infrastructure changes. Calico works with any cloud vendor or infrastructure type, but given how well the two match, it is likely to be used most with Kubernetes.
Calico differs from traditional perimeter firewalls in that it secures each individual container instance. Legacy firewalls take time to set up and secure the system only at the edge: they protect the components they contain fairly well, but once breached, they give attackers access to the entire system. Calico, on the other hand, ensures that a compromised service can't affect other services, limiting the blast radius of an attack.
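A minimal way to express this per-instance stance is a default-deny policy. The sketch below is a hypothetical namespace-wide rule that blocks all ingress to every pod unless another policy explicitly allows the traffic:

```yaml
# Illustrative example: deny all ingress to every pod in the namespace.
# A compromised pod cannot reach its neighbors unless a separate policy
# explicitly permits that traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Starting from deny-all and opening only the connections a service genuinely needs is what keeps a breach contained to a single instance.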
Benefits of Calico
The advantages of Calico are many. For starters, it enables microservices apps without compromising on security. Microservices architectures are complex and need a security solution that takes that complexity into account and offers a scalable model. Calico also eases the transition to containerized applications. For organizations used to VMs, security was more straightforward, and VMs have very capable security tooling; when moving to containers, those organizations often find security an obstacle because they aren't sure how to approach it. With a tool like Calico, they can make their apps even more secure than they were on VMs, and this improved security is the key driver for a solution like Calico. Finally, Calico is a necessity in today's world of high-scale applications. Organizations can no longer be content with static, monolithic applications; they need dynamic, easily scalable apps with infrastructure to match, and to support this they need a network security solution that is equally capable. That's what Calico is. Because it is based on policies, those policies can be updated at any time without foundational changes to the security configuration. Policies stay relevant more easily than exact specifications, so Calico reduces the workload on security professionals and makes security simpler.
Calico also integrates with Flannel, in a project called Canal, to provide a more holistic networking solution for Kubernetes. Canal brings together Project Calico's fine-grained, policy-based security controls with Flannel's network connectivity options. Backed by Tigera, Calico enjoys strong support from the Kubernetes community, and it was recently integrated with the Google Container Engine platform.
Containerized apps are the future, but to realize their full potential they need to be adequately secured in production. The traditional security model doesn't apply to containerized apps: they are too dynamic, operate at larger scale, and introduce new components like an orchestrator. To secure them, you need a policy-based network security tool like Calico, which makes security scalable, simple, and more robust than ever before. As you build and ship your apps in containers, don't miss out on Project Calico to secure them.