
INSIGHTS / Articles

Where is the Q in Pipeline Automation?

 18 Aug 2020 
In the first of two articles, I discussed some of the automation and test methods that support software quality and help accelerate delivery. Next I will look from the other direction to show how we can test our pipeline automation initiatives.

The environments we talked about setting up via Docker or AWS are an example of Infrastructure-as-Code, following a script to set up an environment or a service.

How do we verify that the infrastructure we set up using this code is operating correctly and is the one we intended? Maybe the script listed the wrong version of Java, so the script functioned correctly but not as intended. Or what confidence do we have that our Continuous Integration pipeline is triggering when it should, using the right application versions?

This side of the story feels much less mature than the first half. There is more work to be done here and tools to be created, but there are some things you can already do to help test the DevOps initiatives you have set up.

Here are some ideas you could try:

Build pipeline

If you're building an environment in the pipeline, include version validation steps to make sure the environment was set up correctly with the right version of the tools and software packages.

Most tools provide a simple command-line flag to report their version. If you want to check the version of your development code, you could write the version number to a log file, then read it back in your test as verification.
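As a minimal sketch of the version-validation idea, the helper below extracts a version number from a tool's captured output and compares it to the expected one. The function name and the log format are assumptions for illustration, not a real pipeline API:

```python
import re

def check_tool_version(version_output: str, expected: str) -> bool:
    """Extract a semantic version (e.g. '11.0.2') from a tool's
    version output and compare it against the expected version."""
    match = re.search(r"(\d+\.\d+(?:\.\d+)?)", version_output)
    return match is not None and match.group(1) == expected

# Example: output captured from a `java -version` step in the pipeline
captured = 'openjdk version "11.0.2" 2019-01-15'
assert check_tool_version(captured, "11.0.2")
assert not check_tool_version(captured, "17.0.1")  # wrong Java would fail the build
```

A step like this, run right after environment setup, turns "the script ran without errors" into "the script produced the environment we intended".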

You could test whether a trigger is set up correctly by including a build step to verify the trigger worked. For example, check that a commit was pushed recently, that the timestamp is correct, or that the dependency passed on from a previous build is the right one. We aren't trying to test that our build pipeline tool does its job correctly; rather, we're testing that we have configured it correctly.
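The "was this triggered by a recent commit" check above could be sketched like this; the function name and the 30-minute window are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def commit_is_recent(commit_time: datetime, now: datetime,
                     max_age: timedelta = timedelta(minutes=30)) -> bool:
    """Verify the build was triggered by a recent commit. If the latest
    commit is older than max_age (or in the future), the trigger
    configuration or the checkout step is probably wrong."""
    age = now - commit_time
    return timedelta(0) <= age <= max_age

now = datetime(2020, 8, 18, 12, 0, tzinfo=timezone.utc)
assert commit_is_recent(now - timedelta(minutes=5), now)       # freshly triggered
assert not commit_is_recent(now - timedelta(hours=2), now)     # stale: misconfigured trigger?
```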

If using a Jenkinsfile to generate a build, we could consider static code analysis for it: looking for missing parameters, general linting, catching out-of-range numbers, ensuring logging parameters are used, verifying secrets/passwords aren’t stored in plain text, etc. Again, this isn’t to say that Jenkins didn’t work correctly - this is to question whether we gave it the right instructions, whether we did it securely, and to catch any simple errors early.
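To make the linting idea concrete, here is a toy checker with two rules: flag plain-text secrets and a missing agent declaration. It is a deliberately naive sketch, not a substitute for a real pipeline linter, and the rule set is an assumption for illustration:

```python
import re

# Naive rule: an assignment of a quoted literal to a secret-looking name
SECRET_PATTERN = re.compile(r"(password|secret|token)\s*=\s*['\"]", re.IGNORECASE)

def lint_pipeline_file(text: str) -> list:
    """Return a list of (line_number, issue) tuples; 0 = whole-file issue."""
    issues = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            issues.append((line_no, "possible plain-text secret"))
    if "agent" not in text:
        issues.append((0, "no agent declared"))
    return issues

sample = 'pipeline {\n  agent any\n  environment {\n    password = "hunter2"\n  }\n}'
print(lint_pipeline_file(sample))  # [(4, 'possible plain-text secret')]
```

Running a check like this as an early pipeline stage catches the "we gave Jenkins the wrong instructions" class of mistake before anything is deployed.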

This all links into how and why we test development code. It’s not necessarily that the code or tool itself isn’t working correctly - it’s just that we made a mistake or missed something in how we programmed it.

Environment scripts

Include a sanity-validation test that runs immediately after your environment is created.

Test that any integrations with other systems are up and accessible, that you have database access, that the database content was populated, that you have external network access, etc. This should run as soon as the environment build finishes and, ideally, if the test fails, automatically roll back the environment to the last successful one.
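A post-build sanity check like the one described can be as simple as a set of named probes; anything that fails signals a rollback. The probe names and the stubbed checks below are illustrative assumptions standing in for real HTTP pings and SQL queries:

```python
def run_smoke_checks(checks: dict) -> list:
    """Run named post-build checks and return the names that failed.
    An empty result means the environment looks healthy; otherwise
    the caller should roll back to the last good environment."""
    return [name for name, check in checks.items() if not check()]

# Stubbed probes standing in for real ones (HTTP ping, SQL query, ...)
checks = {
    "database reachable": lambda: True,
    "database populated": lambda: True,
    "external network":   lambda: False,  # simulate a failing probe
}
failed = run_smoke_checks(checks)
print(failed)  # ['external network'] -> trigger rollback
```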

Put the scripts in source control, allowing for easy rollback, and put them through the usual code review process. You could even write a unit test against the script, validating outputs for different inputs, with stubs in place so the test makes the appropriate calls without actually creating the environment.
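The "unit test with stubs" idea might look like this. The `provision` function is a hypothetical stand-in for an environment script; the point is that the creation call is injected, so a test can stub it out instead of spinning up real infrastructure:

```python
from unittest import mock

# Hypothetical provisioning logic under test: it should request one
# server per replica. The create_server callable is injected so tests
# can replace it with a stub.
def provision(create_server, replicas: int) -> list:
    if replicas < 0:
        raise ValueError("replicas must be non-negative")
    return [create_server(f"node-{i}") for i in range(replicas)]

# Stub in place of the real environment-creation call
create = mock.Mock(side_effect=lambda name: f"created {name}")
assert provision(create, 2) == ["created node-0", "created node-1"]
assert create.call_count == 2

create.reset_mock()
assert provision(create, 0) == []
create.assert_not_called()  # zero replicas must create nothing
```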

Use static code analysis to look for missing parameters, syntax errors, out-of-range numbers, etc. DeepSource is a commercial tool you could use, but there are plenty of options.

Check for potential bugs and security risks. It doesn’t take much to forget to turn on a security setting that allows external access to your systems, or to get the size of your system requirements wrong.

Use a Blue/Green deployment method. Because the updated environment doesn’t run on the same hardware as the existing one, you can leave the existing environment up and running, get the new one ready, run some validation tests on it, and only swap over when you are happy it’s functioning correctly.
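A toy version of that swap logic, with the router and validation callable as illustrative assumptions: the new (green) environment only receives traffic once its checks pass; otherwise blue stays live.

```python
def deploy_green(router: dict, validate) -> str:
    """Point the router at 'green' only if validation succeeds;
    on failure, traffic stays on the existing 'blue' environment."""
    if validate("green"):
        router["live"] = "green"
    return router["live"]

router = {"live": "blue"}
assert deploy_green(router, lambda env: False) == "blue"   # checks failed: no swap
assert deploy_green(router, lambda env: True) == "green"   # checks passed: swap
```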

Tools like AWS also have a lot of checks and rules you can configure to help you make sure you do not accidentally set up the wrong size environment or miss something crucial. You should use these where you can, but not all approaches will have this option, so be smart and creative.

Testing as an enabler

In my first article, I explored the triangle of concerns between development, operations and quality, or as I called it: QualityDevOps. By thinking about quality up front, we will get more out of our DevOps initiatives, which means we get a higher quality output while minimising time and cost.

Make use of containers, build pipelines, Service Virtualisation, and other DevOps tools to test your systems reliably and efficiently. Run your tests early and often, so you always get quick feedback during development.

Also understand that your DevOps processes can be faulty, and mistakes in the architecture, triggers, and configuration can slip through, so testing is a key enabler of this type of development.

Quality at Speed

Whether you need assistance maturing how you use test automation or require skilled developers in test to build robust automation scripts for your applications, we can help. As world leaders in testing, we can help you engineer the right results through automation, improving quality, accelerating speed, and decreasing cost in delivery.
Find out how we can help you fully leverage the power of automation and benefit from reducing manual effort, improving reliability, increasing repeatability, and identifying issues as they are introduced.
