Continuous integration and continuous deployment (CI/CD) is the name of the game in the software delivery space. Organizations of all sizes have adopted CI/CD pipelines to meet client requirements and keep time to market as short as possible. The CI/CD space is constantly evolving, and as a result, there are many tools and services organizations can choose from to refine their CI/CD workloads. Let’s take a look at some recent trends in the space.
1. Templatized resource creation using Helm
Helm is a package manager for Kubernetes that gives DevOps teams ready-made configuration without them having to write every config file themselves. Every time a service is deployed in a K8s environment, it typically needs objects such as a Deployment or StatefulSet, Secrets, a ConfigMap, and service accounts with the relevant permissions. Writing separate YAML files for all of this information is time-consuming and slows deployments. With Helm, users can bundle these YAML files together and publish them to public or private Helm repositories for other users who need them. These bundles are called Helm charts, and they can be reused easily.
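For reference, running `helm create mychart` scaffolds the standard chart layout, which shows what a chart bundles together:

```
mychart/
  Chart.yaml          # chart metadata (name, version, description)
  values.yaml         # default configuration values
  templates/          # templated Kubernetes manifests
    deployment.yaml
    service.yaml
    _helpers.tpl      # reusable template snippets
```

Everything under `templates/` is rendered against the values file before being sent to the cluster.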
Another important feature of Helm is that it also acts as a template engine. Take the example of multiple microservices that are to be deployed on the same cluster. These microservices are all slightly different from each other, but without Helm, you would have to hand-write a full set of YAML files for each one, which quickly becomes time-consuming. With Helm chart templates, you instead create a single YAML template with placeholders for the values that vary between microservices, plus a separate values file that supplies those values for each service.
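To make the placeholder idea concrete, here is a minimal sketch of a values file and a templated manifest; the service name, image, and replica count are illustrative, not from any real chart:

```yaml
# values.yaml -- per-service values (names here are illustrative)
name: payments
image: registry.example.com/payments:1.4.2
replicas: 3
---
# templates/deployment.yaml -- one template serves every microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: {{ .Values.image }}
```

Rendering the chart with a different values file (for example via `helm template mychart -f values-billing.yaml`) produces a manifest for a different microservice from the same template.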
Another Helm feature that helps speed up the CI/CD pipeline is deploying the same service to different environments without maintaining separate sets of YAML files. Developers can create one Helm chart and pair it with per-environment values files to deploy services across all of their environments in one go.
2. GitOps for collaboration and declarative operations
Setting up infrastructure every time you start working on a workload can be tedious if you choose the manual route. Traditionally, infrastructure creation requires teams to manually provision servers, networks, configurations, and K8s clusters.
Infrastructure as Code (IaC) lets DevOps teams define all of these components in code so the infrastructure can be created, torn down, and redeployed on demand. This allows your infrastructure to be reused just like your modernized workloads. With IaC, you can express your infrastructure specifications as Terraform code, Ansible playbooks, K8s manifests, and other YAML files. DevOps teams create these files locally and push them to a Git repository for version control and collaboration. However, since these files can be created or updated by anyone on the team, it’s hard to keep track of all the changes. And because testing is manual at this point, untested files can end up in production, where errors may go unnoticed for a long time. This is where GitOps comes into the picture.
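As a small illustration of what "infrastructure in code" looks like, here is a minimal Terraform resource definition; the provider, AMI ID, and names are placeholders for the sake of the sketch:

```hcl
# Illustrative Terraform definition -- values below are placeholders
resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"  # placeholder machine image ID
  instance_type = "t3.micro"

  tags = {
    Name = "app-server"
  }
}
```

A file like this lives in the same Git repository as the rest of the team's code, so every infrastructure change is reviewable and versioned.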
With GitOps, you specify a series of steps that streamline the process of publishing changes, starting from a Git repository that holds your code. Every time a user opens a pull request against the main branch, the changes are not merged immediately; instead, a series of automated tests and reviews run before the changes are approved.
Once the changes made by a DevOps team member are approved, they are merged into the main branch. A GitOps agent then notices the changes and deploys them to the production environment, automatically and without manual intervention.
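Using Argo CD, one popular GitOps agent, as an example, the link between a Git repository and a deployment target is itself declared in a manifest; the repository URL, path, and names below are illustrative:

```yaml
# Illustrative Argo CD Application -- repo URL, path, and names are examples
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-configs.git
    targetRevision: main                  # branch to watch
    path: k8s/my-service                  # manifests within the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated: {}                         # apply changes from Git without manual action
```

With `syncPolicy.automated` set, merging to `main` is all it takes to roll a change out.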
The agent, installed in the target environment, continuously compares the actual state of your workloads with the desired state declared in the Git repository. If the two states differ, it pulls the changes from the repository and applies them to the environment. This automated process provides more transparency and better security, since the group that approves changes is much smaller than the group that can propose changes to the infrastructure code.
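The heart of that agent is a reconciliation step: diff the desired state against the actual state and compute what must change. The sketch below is a deliberately simplified Python illustration of that idea; the object names and state shapes are invented for the example:

```python
# Minimal sketch of a GitOps reconciliation step (all names are illustrative).
# The agent compares the desired state from Git with the live cluster state
# and returns the changes it would apply to converge them.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the objects that must be created, updated, or deleted."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return {"create": to_create, "update": to_update, "delete": to_delete}

# Desired state, as declared in the Git repository
desired = {"deployment/web": {"replicas": 3}, "configmap/web": {"LOG": "info"}}
# Actual state, as observed in the cluster
actual = {"deployment/web": {"replicas": 2}, "secret/old": {"stale": True}}

plan = reconcile(desired, actual)
print(plan["update"])  # deployment/web drifted: replicas 2 -> 3
print(plan["delete"])  # secret/old is no longer declared, so it is pruned
```

Real agents run this loop continuously and apply the resulting plan through the Kubernetes API, which is why manual edits to the cluster get reverted back to what Git declares.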
3. Security that spans plugins, containers, environments, and identities
In CI/CD workloads, security cannot be put on the back burner. Security has to be woven into the workload from the very beginning. DevOps teams and security teams should collaborate at every step of the software development life cycle (SDLC) to ensure no vulnerabilities go unnoticed.
The CI/CD delivery model requires teams not only to stay on top of security best practices but to do so quickly, so time to market doesn’t suffer. Given the complexity of CI/CD pipelines, manually testing the security of every component is impossible, and the pipelines themselves present a huge attack surface. Security teams have to configure the plugins and tools used in CI/CD pipelines so they cannot be exploited as backdoors into an organization’s workloads.
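One common pattern is to wire security scanning into the pipeline itself so every build is checked automatically. The snippet below sketches this with the Trivy image scanner in a GitHub Actions job; the image name, tag, and action version are illustrative:

```yaml
# Illustrative GitHub Actions job: fail the build if the image has known CVEs
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master   # pin a release tag in practice
        with:
          image-ref: my-app:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: "1"   # non-zero exit fails the pipeline on findings
```

Gating the pipeline this way means a vulnerable image never reaches the deploy stage, instead of being caught after the fact.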
The build stage of CI/CD workloads is also quite vulnerable because it is often a blind spot. Attackers can alter code there and exploit it later, once it is deployed to production. Build-stage breaches have recently made headlines and led to large amounts of private data being leaked. Attackers don’t always attack organizations directly: if they can find their way into your workloads through the tools you use to implement CI/CD pipelines, they can escalate their access and privileges slowly over time.
Therefore, it is important for organizations to adopt security that puts every CI/CD component on a single plane and provides end-to-end visibility into complex workloads, so risks can be identified and dealt with before they actually cause damage. Tools are available on the market that plug into your existing CI/CD workloads and give security teams better visibility.
4. Internal platform model to create environments at scale
Traditional enterprise models are not built to scale. Today, organizations have multiple teams working on varied products that all use different infrastructure stacks and tools. Managing cost and compliance across all of them can become extremely difficult. This is where the internal platform model can help.
Instead of having your DevOps teams worry about spinning up new resources and infrastructure for different products, separate platform teams can take care of the infrastructure by creating a unified infrastructure stack used by different products across an organization. This allows developers to focus on product development rather than infrastructure management, thereby saving time and effort.
Kubernetes, in particular, has a steep learning curve that organizations sometimes overlook. With an internal platform model, DevOps teams can deliver products without constantly tending to the security and configuration of different resources, because the platform team handles them. Developers can also quickly spin up any resource they need without waiting on Ops, which shortens time to market. Security is enforced more consistently across products as well, since the platform team builds it into the environments and resources.
Finally, the CI/CD space is booming with constant innovation. Organizations looking to venture into CI/CD should explore all their options and implement a pipeline that works for their use cases. These trends are just setting the stage for more to come, making CI/CD less intimidating and allowing more organizations to leverage the power of continuous integration and continuous deployment.
Featured image: Pixabay