Deploying builds quickly is an absolute must if you’re going to keep the customer happy, but not if that speed comes at the expense of accuracy or quality. This is where software development becomes a bit of a balancing act – and one that only gets trickier as the project grows.
Cautious project managers may be tempted to delay each release just to make sure everything’s correct, but this would be a mistake.
The longer you wait before each release, the bigger the pressure to get it right. If there is an issue, the dev team will have to trawl through so much more code to find and fix the problem, which is never a fun place to be.
This is where automation and the continuous delivery pipeline come in handy. Let’s dive in.
What is the continuous delivery pipeline?
The continuous delivery pipeline (often called the CI/CD pipeline, because it builds on continuous integration) forms the backbone of modern DevOps. It refers to a process in which key steps in the software delivery process are automated. The ultimate goal is to speed things up and reduce errors.
It takes its bearings from other agile software development best practices, including build automation, version control, and automated deployments. Projects progress in small iterations and feedback (from other developers, teams, and stakeholders) is continual.
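At its simplest, the process above can be sketched as a sequence of automated stages that halts at the first failure. This is a toy illustration in Python – the stage names and always-passing checks are invented, not taken from any particular CI tool:

```python
# Minimal sketch of a delivery pipeline: each stage is a function that
# returns True on success, and the pipeline stops at the first failure.
# The stage bodies are placeholders for real build/test/deploy jobs.

def build():
    return True          # e.g. compile sources, produce an artifact

def unit_tests():
    return True          # e.g. run the component test suite

def deploy_to_staging():
    return True          # e.g. push the artifact to a staging clone

STAGES = [build, unit_tests, deploy_to_staging]

def run_pipeline(stages):
    """Run stages in order; report the first failure, if any."""
    for stage in stages:
        if not stage():
            return f"FAILED at {stage.__name__}"
    return "SUCCESS"
```

The value of the sequence is that a failure is localised to a named stage, which is exactly the feedback loop the rest of this article relies on.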
The pipeline has software that automatically accepts or rejects code. An alert is generated – via email, chat app, or project management software – to let the developers know when there’s been a rejection.
The only other notifications tend to be sent to the whole team after each successful deployment or update, which means fewer emails flying around. Meanwhile, having a pre-defined set of parameters in place takes some of the burden off developers and helps ensure consistency.
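The accept/reject gate and its alerts can be sketched roughly like this. Everything here is hypothetical – `notify` just records messages in a list where a real pipeline would post to email, a chat app, or a project management tool, and the two checks are invented stand-ins for a pre-defined parameter set:

```python
# Hypothetical accept/reject gate: a change that fails any pre-defined
# check is rejected and the author is alerted; the whole team is only
# notified after a successful run.

alerts = []

def notify(recipient, message):
    # Stand-in for email / chat / PM-tool integration.
    alerts.append((recipient, message))

def has_tests(change):       # invented example check
    return change.get("has_tests", False)

def builds_cleanly(change):  # invented example check
    return change.get("builds", False)

CHECKS = [has_tests, builds_cleanly]

def gate(change, author, team):
    """Accept or reject a change against the pre-defined checks."""
    for check in CHECKS:
        if not check(change):
            notify(author, f"rejected: failed {check.__name__}")
            return False
    notify(team, "deployed successfully")
    return True
```

Note that only two kinds of message ever go out – a targeted rejection or a team-wide success – which is what keeps the email volume down.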
What are the benefits of the continuous delivery pipeline?
- Automated releases remove the need for time-consuming and error-prone tasks.
- Before code is merged, it must pass a CI test that checks new features against the specifications. This prevents errors and regressions. It also helps new team members get up to speed faster, because the project’s quality checks are codified in the pipeline rather than learned by word of mouth.
- It provides greater insight into delivery metrics, including engagement rates, time spent in each design phase, bug encounter rates, and new-feature release frequency.
- Team members are more confident because they know their code has been verified to integrate cleanly with the rest of the build.
- Changes can be made and measured quickly, while bugs can be fixed quickly and cheaply because the amount of code to go through is that much smaller.
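The pre-merge check mentioned above boils down to comparing a new feature’s behaviour against its specification. In this toy sketch, the `add` function and its spec cases are invented purely for illustration:

```python
# Toy pre-merge CI check: the new feature must match its specification
# before the branch can be merged. SPEC maps a feature name to
# (arguments, expected result) pairs; both are made up for this example.

SPEC = {"add": [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]}

def add(a, b):
    """The 'new feature' under review."""
    return a + b

def matches_spec(fn, cases):
    """True only if the function reproduces every specified case."""
    return all(fn(*args) == expected for args, expected in cases)

def can_merge():
    return matches_spec(add, SPEC["add"])
```

A branch that fails `matches_spec` never reaches the main line, which is what prevents the regressions the list above describes.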
The phases of the continuous delivery pipeline explained
Every pipeline is a little different, so there are no hard and fast rules here. But as a general guide, the following stages apply to most projects.
Build and component testing
This is where the software is built. Code is reviewed before going into a source-code repository. The components – the smallest testable unit – are continually tested. The software is then built and archived, while the artifact is stored in an artifact repository.
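A component test in this sense exercises one small unit in isolation. The `slugify` helper below is a made-up example component, with a test that pins down its expected behaviour:

```python
# Component-level testing in miniature: a component is the smallest
# testable unit, so each test exercises one piece on its own.
# slugify is an invented example component.

import re

def slugify(title):
    """Turn an article title into a URL slug (the 'component')."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  CI/CD Pipelines  ") == "ci-cd-pipelines"
```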
Subsystem testing
Subsystems are small, deployable units, such as a server or container. They can be deployed into a staging environment that is a clone of the production environment, then validated (or rejected) by a series of tests. Unlike components, subsystems can run on their own and be validated against the customer’s requirements. Below are some of the different types of tests run:
- Automated tests confirm the new version works.
- Functional tests (or integration tests) verify the new capabilities or features.
- Regression tests ensure the new version does not break any features.
- Final performance tests ensure quality and security.
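Promoting a subsystem out of staging then amounts to checking that every suite in the list above passed. In this sketch the suite results are supplied as plain booleans rather than real test runs:

```python
# Hypothetical staging gate: a subsystem is promoted only if every
# required test suite passed. `results` maps suite name -> bool.

REQUIRED_SUITES = ["automated", "functional", "regression", "performance"]

def validate_subsystem(results):
    """Return ('promote', []) on success, or ('reject', failed_suites)."""
    failed = [name for name in REQUIRED_SUITES if not results.get(name, False)]
    return ("promote", []) if not failed else ("reject", failed)
```

Returning the list of failed suites, not just a pass/fail flag, is what makes the rejection alert specific enough to act on.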
As a general rule, it’s better to release independently deployable artifacts than to bundle the whole system into a single release: system-wide tests tie everything together, so when one fails, the results are less specific and the cause is harder to pin down.
That said, some systems do need to be validated as a whole. If this is the case, run the same tests (functional, regression, and performance) to certify the code – and remember not to use mocks or stubs here, since the point is to exercise the real interactions between the parts.
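The difference is that a whole-system test wires real components together rather than stubbing them out. Both classes below are invented for illustration – the point is that `OrderService` talks to a real `Inventory`, not a mock:

```python
# Whole-system validation in miniature: the order service is tested
# against a real, in-process inventory store instead of a stub.
# Both classes are made-up examples.

class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item):
        if self.stock.get(item, 0) <= 0:
            return False
        self.stock[item] -= 1
        return True

class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory   # the real dependency, not a mock

    def place(self, item):
        return "confirmed" if self.inventory.reserve(item) else "out of stock"
```

Because the inventory really decrements, the test catches interaction bugs (double-selling stock, for instance) that a stub programmed to always say “yes” would hide.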
Deployment and release
Next, the approved software is deployed into the production environment and released. A final round of tests is run to make sure the new version works in production.
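Those post-release checks are often simple smoke tests. In this hypothetical sketch, `health_check` inspects a plain dictionary standing in for a real HTTP health endpoint:

```python
# Post-deploy smoke test: confirm the service is up and that the
# version we just released is the one actually running.
# `service` is a dict stand-in for a real health-endpoint response.

def health_check(service):
    return service.get("status") == "ok" and bool(service.get("version"))

def smoke_test(service, expected_version):
    """True only if the service is healthy and on the new version."""
    return health_check(service) and service["version"] == expected_version
```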
If your pipeline is a simple one, stages can run sequentially. If your product architecture is more complex, multiple stages can run in parallel: several pipelines intertwine before eventually converging on production. In this case, many dev teams add a manual approval step so they control exactly when changes reach production.
This is fine in some scenarios – say, a business wanting to release to a specific demographic before the wider population – but be wary: adding a manual stage can cause confusion, so make sure roles are clearly defined before any human-based check is introduced.
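A fan-in pipeline with a manual sign-off might be sketched like this: two hypothetical component pipelines run in parallel, and nothing is released until every pipeline passes and a named person approves:

```python
# Sketch of a fan-in pipeline: parallel component pipelines plus a
# manual approval gate before production. The pipeline bodies are
# stand-ins for real build/test jobs.

from concurrent.futures import ThreadPoolExecutor

def frontend_pipeline():
    return True    # build + test the frontend

def backend_pipeline():
    return True    # build + test the backend

def release(approved_by=None):
    """All parallel pipelines must pass, and someone must sign off."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda p: p(), [frontend_pipeline, backend_pipeline]))
    if not all(results):
        return "blocked: pipeline failure"
    if approved_by is None:
        return "waiting for manual approval"
    return f"released (approved by {approved_by})"
```

Making the approver an explicit, named parameter is one way to keep roles unambiguous, which is exactly the pitfall the paragraph above warns about.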
Bringing it all together
One last (but vital) part of building a delivery pipeline is choosing a project management framework that brings it all together. Ideally, you’ll want to choose something that integrates with source code management tools (GitHub, Subversion) and has a wide choice of plug-ins and extensions.
You’ll also want it to be able to send out automatic notifications so developers can log in and pull tasks through, hear instant feedback, and access progress charts to see where their work fits into the bigger picture. It just makes everything that little bit more transparent and collaborative.
A continuous delivery pipeline is essentially a series of tests. In the same way that a minimum viable product reduces risk and helps teams make something that better fits the customers’ needs, the CD pipeline is an agile and sustainable way to create software.
In addition to speeding up delivery time and reducing the chance of error, automated pipelines help your developers solve issues faster, reducing delivery downtime. So customers get the finished product sooner, and dev teams can focus more of their attention on what they do best: building software.