A DevOps approach to deployment brings a number of advantages, chief among them agility in addressing customer issues and fixing problems. Sticking with a traditional waterfall approach is simply too costly when it comes to keeping up with the competition and industry trends. DevOps teams strive to cut out unnecessary work, and cloud computing is one of the most effective ways to do that. AWS, the No. 1 public cloud provider, has gone out of its way to accommodate DevOps teams, investing considerable time and money in tools built for that purpose. The whole point of DevOps is to respond quickly to customer requirements and integrate new code into applications frequently.
Automation is something all DevOps teams strive for, as it reduces the chance of human error and frees up human time and effort. One key area for automation is infrastructure. Let's start by looking at how AWS approaches infrastructure automation.
AWS OpsWorks, powered by Chef
AWS works closely with Chef, a tool that automates infrastructure deployment so engineers are freed from the heavy lifting of managing their IT environment. DevOps is, of course, about much more than infrastructure deployment, and OpsWorks is AWS's full-fledged DevOps application management service: it lets you automate infrastructure deployment and configuration from source all the way through to production. OpsWorks uses Chef to automate server configuration, deployment, and management across your EC2 instances as well as your on-premises servers. OpsWorks comes in two versions: OpsWorks for Chef Automate, which includes an entire suite of workflow automation tools, and OpsWorks Stacks, which lets you manage applications and servers.
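To make the Chef model concrete, here is a minimal, illustrative Chef recipe of the kind OpsWorks runs on managed instances. The resource names and the `nginx.conf.erb` template are hypothetical stand-ins, not anything from AWS documentation; a real cookbook would ship its own templates and attributes.

```ruby
# Illustrative Chef recipe: install, configure, and run a web server.
# Chef converges each resource to the declared state (idempotently).

package 'nginx'   # install the nginx package via the platform's package manager

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'            # ERB template shipped in the cookbook (hypothetical)
  notifies :reload, 'service[nginx]' # reload nginx only when the rendered file changes
end

service 'nginx' do
  action [:enable, :start]           # start now and enable on boot
end
```

Because resources are declarative, rerunning the recipe on an already-configured server changes nothing, which is what makes fleet-wide convergence safe.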
If you want to go the extra mile and integrate security and compliance as well, you can integrate Chef Compliance Server into your workflow to make sure your infrastructure and applications meet company standards and regulations. Chef lets you manage your entire application stack in one place and provides the kind of transparency that DevOps teams require for smooth functioning.
CloudFormation is another DevOps offering from AWS; it makes it possible to save resource templates for later use. Such a blueprint of an AWS environment can be instantiated whenever required, greatly reducing the chance of error due to human involvement. Third-party tools such as Troposphere and Terraform also aid in stack deployment.
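A CloudFormation template is just a declarative document. The sketch below, with placeholder names and a placeholder AMI ID, shows the basic shape: parameters that vary per deployment and resources that CloudFormation creates on your behalf.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Illustrative template - a single EC2 instance (placeholder values)

Parameters:
  InstanceTypeParam:
    Type: String
    Default: t2.micro        # overridable at stack-creation time

Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam
      ImageId: ami-0123456789abcdef0   # placeholder; use a real AMI for your region
```

Launching the same template twice yields two identical stacks, which is exactly the repeatability that removes human error from environment creation.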
Git is a popular version control system, and CodeCommit lets you migrate an entire Git repository to a securely hosted, fully managed service on AWS. The advantage is that CodeCommit repositories have no size limits and can be scaled as the customer's needs grow, along with the extra security that AWS provides. One security feature is the automatic encryption of all files and repositories at rest via AWS Key Management Service (KMS). The other key features are integration with other AWS services, Git compatibility, and high availability, since data is replicated across availability zones. The best part is that anyone with an AWS account gets five active CodeCommit users for free, after which each additional user costs $1 per month. This includes unlimited repositories, 50GB of storage, and 10,000 Git requests per month.
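The pricing above is simple enough to express directly. This small sketch (the function name is my own, not an AWS API) estimates a monthly CodeCommit bill under the stated model: the first five active users are free, and each additional user costs $1 per month.

```python
def codecommit_monthly_cost(active_users: int,
                            free_users: int = 5,
                            price_per_extra_user: float = 1.0) -> float:
    """Estimate the monthly CodeCommit bill: the first `free_users` active
    users are free; each additional active user costs `price_per_extra_user`
    dollars per month. (Illustrative helper, not an AWS API.)"""
    extra_users = max(0, active_users - free_users)
    return extra_users * price_per_extra_user

# A five-person team pays nothing; an eight-person team pays for three users.
print(codecommit_monthly_cost(5))  # 0.0
print(codecommit_monthly_cost(8))  # 3.0
```

Note that the quota (unlimited repositories, 50GB of storage, 10,000 Git requests per month) applies per active user rather than per repository.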
Continuous integration means continuous testing, and AWS provides a vast playground for developers to put their applications through their paces. While central repositories are used to regularly merge code changes, AWS's APIs and automated test frameworks make it straightforward for engineers to run multiple tests at the same time. Apart from a number of third-party tools like BlazeMeter, JUnit, and Sauce Labs, AWS has its own offerings to help teams implement DevOps in their pipelines, including tools for quickly (and temporarily) replicating production environments specifically for testing purposes.
With repository management and automated infrastructure creation, AWS has the first steps of the DevOps pipeline covered. Continuous integration (CI) and continuous deployment (CD) being the DevOps mantra, AWS services effectively create a CI/CD pipeline that automates everything from building to deployment, including the testing process. The aggressive pricing makes adopting these tools for your entire DevOps team a straightforward decision.
CodePipeline, first announced in late 2014, is a tool from AWS that allows developers to break the release process into stages that are far easier to manage and orchestrate. In keeping with the CI practice of merging updated code into a central repository, AWS CodePipeline automatically builds, tests, and launches an application each time a code change occurs in the repository. This means developers can focus on the code without worrying about changes in environment or infrastructure, while still being able to test groups of actions or stages separately and in different environments.
CodePipeline breaks deliveries into stages starting from a source repository where developers commit code. Each time a change is detected, the code is automatically put through a series of builds and tests before being deployed into an environment that is itself built by the pipeline. What this effectively does is maintain and manage a complex testing environment that not only runs on its own but can also run stages in parallel, handling separate workflows simultaneously. AWS CodePipeline also integrates well with a number of third-party DevOps tools, which makes it a strong choice for managing your entire CI/CD workflow.
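A pipeline definition is itself a declarative document. The fragment below sketches the stage/action structure CodePipeline uses (as accepted by `aws codepipeline create-pipeline`); every name, ARN, and account number here is a placeholder.

```json
{
  "pipeline": {
    "name": "demo-pipeline",
    "roleArn": "arn:aws:iam::111111111111:role/CodePipelineServiceRole",
    "stages": [
      {
        "name": "Source",
        "actions": [{
          "name": "FetchSource",
          "actionTypeId": { "category": "Source", "owner": "AWS",
                            "provider": "CodeCommit", "version": "1" },
          "configuration": { "RepositoryName": "demo-repo", "BranchName": "master" },
          "outputArtifacts": [{ "name": "SourceOutput" }]
        }]
      },
      {
        "name": "Deploy",
        "actions": [{
          "name": "DeployToFleet",
          "actionTypeId": { "category": "Deploy", "owner": "AWS",
                            "provider": "CodeDeploy", "version": "1" },
          "configuration": { "ApplicationName": "demo-app",
                             "DeploymentGroupName": "demo-fleet" },
          "inputArtifacts": [{ "name": "SourceOutput" }]
        }]
      }
    ]
  }
}
```

Each stage consumes the artifacts the previous stage produced, which is how a commit in the Source stage flows automatically through to deployment.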
Any workflow should end in code being successfully deployed, and CodePipeline can hand artifacts directly to AWS CodeDeploy, among other targets. CodeDeploy is the fruit of Amazon's tried-and-tested internal deployment service, Apollo, in which code is rolled out across a fleet of EC2 instances while they continue to receive traffic. This is done by taking down only a fraction of instances at a time, so users don't experience downtime while upgrades are happening. Since many AWS customers were facing similar problems, Apollo became AWS CodeDeploy, with a number of additional features for agile software delivery.
Apart from keeping track of all updates, changes, and deployments, AWS CodeDeploy orchestrates fleet rollout, monitors the status of all deployments, and provides a clear, understandable dashboard. AWS CodeDeploy also has built-in logic to respond to potential failure cases with automatic rollbacks. There is no charge to use CodeDeploy to deploy to EC2 instances, and it can deploy to on-premises servers for $0.02 per instance update, which is still quite reasonable. This combined support makes it much easier for developers to coordinate updates across a mix of on-premises and cloud instances.
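CodeDeploy drives each rollout from an AppSpec file bundled with the application. This sketch shows the standard EC2/on-premises hook structure; the destination path and script names are hypothetical and would live in your own revision.

```yaml
version: 0.0
os: linux
files:
  - source: /                       # copy the whole revision bundle...
    destination: /var/www/demo-app  # ...to this path on each instance (placeholder)
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh   # drain/stop the old version first
      timeout: 60
  AfterInstall:
    - location: scripts/configure.sh     # render config for this environment
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh  # fail here and CodeDeploy can roll back
      timeout: 120
```

The `ValidateService` hook is what connects health checking to the automatic-rollback behavior described above: if the script exits non-zero, the deployment is marked failed.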
In the past, most teams worked in "silos" for long stretches, until they felt their work was complete or good enough to show anyone else. Only when they were satisfied with their end product would any attempt be made to merge their changes into the source and let the other teams do their checks and give feedback. With DevOps, it's quite the opposite: transparency and shared ownership are what make DevOps teams stand out. This DevOps-friendly approach not only helps AWS expand its market from Infrastructure as a Service (IaaS) into application development, but also makes it evident that the world's leading cloud provider stands behind DevOps.