Kubernetes and serverless: The point where they intersect

Making the software developer’s life easier is now a multibillion-dollar industry, with Kubernetes at the head and serverless options from AWS and the like close behind. In the end, these are different approaches to a common goal: letting developers concentrate on their code rather than on infrastructure.

Kubernetes does this by packaging everything neatly into self-sufficient containers that can run anywhere, so developers don’t have to worry about compatibility between different software environments. It does, however, involve a lot of heavy lifting with regard to infrastructure setup and configuration, so you need to understand containers, Docker, and Kubernetes itself before getting started.
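To make that heavy lifting concrete, here is a minimal Kubernetes Deployment manifest of the kind a developer has to write and apply before a container even runs. The application name, image, and resource sizes are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3                 # you decide the scale up front
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
        resources:
          requests:           # you also size the infrastructure yourself
            cpu: 250m
            memory: 128Mi
```

Even this bare-bones sketch asks you to think about replica counts, labels, ports, and resource requests — exactly the infrastructure awareness serverless promises to remove.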

Kubernetes for control


According to the CNCF, storage, security, and networking issues remain top concerns for those deploying their architectures via Kubernetes. That’s because there are significant hurdles in adopting a fully container-based architecture, even one orchestrated by Kubernetes. Scaling, for example, is not instantaneous (you have to wait for a container to come online), and there is significant management work to do before you’re up and running.
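Even autoscaling in Kubernetes is something you configure explicitly, and newly scheduled pods still take time to come online. A minimal HorizontalPodAutoscaler sketch (the Deployment name and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical Deployment to scale
  minReplicas: 2            # baseline capacity you keep warm yourself
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU crosses 70%
```

Note that you still pick the floor and ceiling yourself, and the scale-out only begins after the metric breaches the threshold.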

So while Kubernetes is a technology developed to provide a “serverless experience” of running containers, at ground level the Kubernetes architecture is deeply infrastructure-aware. At the root of Kubernetes is the assumption that containers live on machines that are visible to Kubernetes, so abstracting away all the infrastructure remains a pipe dream, to say the least.

Serverless for simplicity

Serverless, on the other hand, actually abstracts away the infrastructure and requires no heavy lifting at all: your instances run on demand, as and when required, automatically. Scaling is instantaneous, and you don’t have to configure or provision anything; just focus on your application and deploy at will. Just don’t expect the granular control over resources that Kubernetes gives you, because that isn’t going to happen any time soon.
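In a function-as-a-service model, the entire “deployment unit” a developer touches is the function itself. A minimal AWS Lambda-style handler in Python (the event shape here is a made-up placeholder):

```python
import json

def handler(event, context):
    """Runs only when triggered; provisioning and scaling are the platform's job."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally you can call it like any function; in production the platform invokes it.
print(handler({"name": "Kubernetes"}, None))
```

There are no replica counts, ports, or resource requests to declare — which is precisely the trade-off the rest of this section explores.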

Serverless has its limitations too, though, and comes nowhere near the functionality and control that Kubernetes offers, simply because all of that is handled by your cloud vendor. It also imposes restrictions on run time and file size, which makes it impractical for workloads with large data sets, such as online games, and latency can be an issue as well. So while both platforms aim to abstract the infrastructure layer away from the software supply chain, where exactly do the two intersect?

Kubernetes vs. serverless

Serverless architectures are compared to Kubernetes at the moment simply because both allow for scaling without complexity. That’s where the similarities end, however; containers and serverless are two different games altogether. As things stand today, a straight choice between Kubernetes and serverless options doesn’t really make sense, for a number of reasons.

While both may share the same ultimate goal, they are at very different stages of their life cycles and are still maturing toward production readiness. Kubernetes offers advantages that serverless alternatives do not, and vice versa. The key to successful deployment right now is knowing how to choose between Kubernetes and serverless, and figuring out which makes more sense in your situation.

When it comes to deploying software successfully, the choice between the two depends largely on what needs to be accomplished. To quickly recap the benefits of each: serverless is a great option if, for instance, you’re running a brand-new application and starting small. Since you only pay for the time during which your code is actually executing, you can save a lot of money.
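A back-of-the-envelope comparison makes the cost argument concrete. The prices below are illustrative assumptions, not current AWS rates, and the workload (a small new app) is invented:

```python
# Hypothetical prices -- check your provider's current rate card.
LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667   # assumed pay-per-use rate
SERVER_PRICE_PER_HOUR = 0.0416              # assumed always-on small VM

def monthly_serverless_cost(invocations, avg_seconds, memory_gb):
    """You pay only for the time your code actually executes."""
    return invocations * avg_seconds * memory_gb * LAMBDA_PRICE_PER_GB_SECOND

def monthly_server_cost(hours=24 * 30):
    """An always-on server bills around the clock, spike or no spike."""
    return hours * SERVER_PRICE_PER_HOUR

# A small new app: 100k requests/month, 200 ms each, 128 MB of memory.
serverless = monthly_serverless_cost(100_000, 0.2, 0.125)
server = monthly_server_cost()
print(f"serverless: ${serverless:.2f}/month vs always-on: ${server:.2f}/month")
```

At low, bursty traffic the pay-per-execution model wins by a wide margin; the picture changes once utilization is high enough to keep a server busy around the clock.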

As mentioned before, it scales up and down automatically based on demand, so you don’t have to keep backup servers running all the time in case of a spike in usage. Additionally, since all the hardware and plumbing is essentially hidden, you don’t need any infrastructure experience to code and deploy software, which makes it easier to hire people to work for you.

Containers have their own benefits beyond the obvious portability they offer. They also help you avoid vendor lock-in, which is a major selling point. Additionally, containers give you all the control you need over your environment and infrastructure, so you can allocate resources as you see fit.

Serverless containers


A few platforms look to abstract away the complications that come with managing containers, and AWS Fargate aims to do that by offering what Amazon calls “serverless” containers.

Fargate features resource-based pricing and per-second billing, in addition to a host of other features like container registry support and load balancing. With Fargate, you don’t need to provision, configure, or scale virtual machines in your clusters to run containers. Fargate can be used with Amazon ECS today, with support for Amazon Elastic Container Service for Kubernetes (EKS) planned soon.
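The shape of a Fargate deployment is worth seeing: you still describe the container, but no machines. A pared-down ECS task definition sketch (the family name, image, and sizes are placeholders):

```json
{
  "family": "my-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "example.com/my-app:1.0",
      "portMappings": [{ "containerPort": 8080 }],
      "essential": true
    }
  ]
}
```

Notice that CPU and memory are declared per task rather than per VM; there is no instance type anywhere in the definition.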

It’s not really serverless, however: though your Docker hosts autoscale, you still have to worry about scaling the containers behind your load balancer. You also need to set up subnets in multiple availability zones, launch multiple containers, and configure Fargate to use those subnets. It’s also considerably more expensive right now than running your own ECS cluster or using AWS Lambda.

Additionally, with AWS Fargate your containers keep running whether or not actions are triggered. Containers are active even when they’re not handling data, so if you fire one up and run a node process that listens for requests, that’s a server. Less infrastructure is always a good thing, no doubt, but it’s still a bit premature to call Fargate serverless, especially compared with AWS Lambda.

Fn is another open-source “serverless” functions platform that tries to bring the value and convenience of serverless architecture to containers. The Fn project is a container-native, Apache 2.0-licensed serverless platform that you can run anywhere: on any cloud or on-premises. Again, this isn’t really serverless in the way Lambda is, where your functions don’t exist until they’re triggered, and it also relies on Docker.
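For flavor, an Fn function is still described by a small config file plus a Docker-built image. A representative `func.yaml` sketch — the values are illustrative, and the exact schema may differ between Fn versions, so treat this as an assumption rather than a reference:

```yaml
schema_version: 20180708    # assumed schema version; check your Fn release
name: hello
version: 0.0.1
runtime: python             # Fn builds this into a Docker image
entrypoint: python3 func.py
```

The Docker dependency is visible right in the config: the “function” is ultimately a container image that Fn builds and runs.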


The point of intersection we’re looking for probably lies in the distant future, when Kubernetes reaches a point where all the configuration, complexity, and cluster management is abstracted away, or AWS Fargate offers Kubernetes-level control over the environment. While automation is great, we’re always looking for powerful tools that make us masters of our environment. A fine balance between abstracting away infrastructure and leaving us just enough control to tweak the important stuff is probably what we’re after.

So bringing together the best of both worlds, containers and serverless, may sound like a dream in theory, but the journey has just begun. There will come a day when AWS Fargate handles orchestration completely, and a day when Kubernetes runs like AWS Lambda. When that day comes, we’ll be at the point of intersection where the choice no longer matters, since we’ll all be free to just work on our code and nothing else. Until then, serverless and Kubernetes are two very different animals, so choose wisely.

Featured image: Flickr / Adam Meek
