If you aren’t measuring the coverage your regression tests provide, you may be spending too much time for little benefit. Consider the value of your regression tests as you create and manage them. You need to be smart about the regression tests you maintain in order to gain the maximum value from the work put into creating, running, and analysing their results.
Do you know how many regression tests you have? Do you know what the coverage of your regression tests is? Do you know the value of your regression tests?
These may all be pretty simple questions, but your team needs to be asking them; knowing the answers is very important to your test plan. You can't manage what you don't measure!
As I conduct more testing and Agile reviews with organisations, irrelevant regression tests seem to be an increasingly common problem. In many instances, a good portion of the tests are redundant, repetitive, or not exercising any different code; in other words, they are not expanding code coverage. The problem is getting worse with the increase in automation. Because automated tests are perceived to cost almost no time to run, since they run out of hours or in parallel while the operator does other tasks, people feel less need to manage the volume of regression tests, and the associated management tasks disappear.
You have to be smart when designing your regression tests. That’s why I use the SMART acronym to remember what’s important to consider when creating and managing them:
- Specific: Tests have to be clear about what requirement is being tested
- Measurable: Tests must define the coverage of the requirement in order to measure the actual value
- Achievable: Every regression test has to have a standard value defined for it. This could be measured in the time required to create it, run it, or maintain it, or by when it last found a defect
- Relevant: The regression test must give more insight into the continuing function of the software, proving not only that it works correctly but that it remains fit for purpose
- Time: It is important to express the value of the regression test in terms of time. A test only has meaning if you know its time dimension: both the time it actually takes to run and how long it continues to add value
Let’s dig deeper into what you should consider when creating, managing, and evaluating regression tests.
Know the Value of Your Regression Tests
The first thing we need to consider for each individual regression test is its return on investment. If the effort to create, run, and maintain a test is not either giving us confidence in the system under test or finding regression defects, then we have to question why we are spending this time.
I want to highlight in particular the “run” part of the equation. I have heard people say so many times that they think there is no cost to run regression tests after they’re automated. While a good automation framework may allow regression runs to be kicked off by the click of a button and run unaided, there is still a requirement to analyse the results, and that does incur a cost—even if those results are summarised in a dashboard.
This also assumes the scripts have been built within the framework to deal with unexpected events, such as pop-up messages, that could otherwise stop a regression run. If the tests still require a human to analyse the results, it is worth questioning whether they are truly automated.
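As a rough illustration of this return-on-investment thinking, here is a minimal sketch in Python; the parameter names and the 480-minute stand-in for the cost of a missed defect are my own assumptions for illustration, not figures from any standard.

```python
def test_still_pays_off(minutes_to_create: float,
                        minutes_per_run: float,
                        runs_to_date: int,
                        minutes_maintenance_to_date: float,
                        defects_found: int,
                        minutes_saved_per_defect: float = 480.0) -> bool:
    """Crude heuristic: has the time invested in this regression test been
    repaid by the regressions it has caught?

    minutes_per_run should include analysing the results, even for automated
    runs; minutes_saved_per_defect is an arbitrary stand-in for the cost of a
    defect escaping to a later stage.
    """
    total_cost = (minutes_to_create
                  + minutes_per_run * runs_to_date
                  + minutes_maintenance_to_date)
    return defects_found * minutes_saved_per_defect >= total_cost
```

A test that keeps failing this kind of check is exactly the one to raise at your next regression review.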
Selecting Tests for Regression
When it comes to creating tests, I have used the term selection deliberately. I do not believe you should be writing a regression test as a separate test; it should be a reuse of a test you have already written for a feature set. When writing progression tests, we should simply be marking those that we think will be of use as regression tests in the future.
Let’s take this thought to the next step. When we do the analysis and design of our tests, why not mark tests that should be used for regression before the test case or session sheet is even written? The regression test simply gets scripted as an automated test immediately. This saves both the preparation time to write the test and the manual execution that may otherwise happen until it gets automated.
This is where Agile teams need to focus; they cannot afford to test the same thing once manually and then again as automated. This would not be a very efficient way to test. Let automation take care of what it is good at—checking—and let humans focus on what they do best.
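One lightweight way to implement this selection is to tag the original feature tests rather than author a separate suite. The sketch below assumes a pytest-style framework; the `regression` marker name and the invoice test itself are purely illustrative.

```python
import pytest

# A hypothetical feature test, tagged at design time for later regression reuse.
# The marker would need registering (e.g. in pytest.ini) to avoid warnings:
#   [pytest]
#   markers =
#       regression: tests selected for the regression pack
@pytest.mark.regression
def test_invoice_total_includes_tax():
    # Illustrative assertion only; the invoice structure is a stand-in.
    invoice = {"net": 100.0, "tax_rate": 0.2}
    assert invoice["net"] * (1 + invoice["tax_rate"]) == pytest.approx(120.0)
```

The regression pack then becomes a filtered run of existing tests, for example `pytest -m regression`, rather than something written and maintained separately.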
In addition to being selective about what we create as regression tests, we also need to be careful about which regression tests we execute. Here, I am particularly focused on Agile teams. With the increase in communication and collaboration between business analysts, developers, and testers, I see no reason a whole regression pack needs to be run for each new release or change of configuration. The Three Amigos strategy is a great practice that can be used to have conversations about regression tests. With the combined expertise of the BA from the business value perspective and the developer from the technical perspective working with the tester, we should be able to reduce the number of regression tests to a handful per user story.
We need to become more focused on our regression testing, targeting it to what we have just changed and the potential impacts on tests that we have already executed and passed for that feature. We need to have confidence that we have identified the correct risks and put in place the tests necessary to mitigate them, instead of running multiple regression tests “just in case.” We need to be smart about how we spend our time.
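If each test also carries a marker for the feature area it exercises, that targeted run becomes a simple filter instead of a full pack. The sketch below again assumes pytest-style markers; the `payments` area tag is an example, not a prescription.

```python
import pytest

def run_targeted_regression(changed_area: str) -> int:
    """Run only the regression tests tagged for the feature area that changed.

    Assumes each test carries both a 'regression' marker and an area marker
    such as 'payments'; both names are illustrative.
    """
    # "-m" accepts a marker expression, so this selects the intersection of the two tags.
    return pytest.main(["-m", f"regression and {changed_area}"])

if __name__ == "__main__":
    raise SystemExit(run_targeted_regression("payments"))
```

The marker expression is the output of the Three Amigos conversation: the team agrees which areas the change could impact and runs only those tags.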
Managing Regression Tests
There should be tasks on your task board detailing continuous review of your regression tests to ensure that they are still providing the benefit to your project that you expect. As situations change over time, here are some aspects you should consider:
- Test level: This relates to the period within the software development lifecycle itself. When progressing through unit, integration, system, and user acceptance tests, the regression tests are going to differ. As the team adds more stories and features to the working system, consider which regression tests you want to retire at which level.
- Time since first run: The system, application, or feature being promoted to the live production environment is another trigger point for active management. As the maturity of the system grows, certain tests may deliver less value, particularly if no development has been done in that area for a period of time.
- Changed functionality: Changes in functionality or features also should trigger a review of the related regression tests. If changes are actioned as they occur, regression packs will remain current.
- Archived or retired applications: As applications or even systems are archived or retired, the related regression tests also need to be marked as such.
All regression tests will need to be retired at some point, even if it is at the stage that the system, application, or feature becomes decommissioned. But it is likely that most regression tests should be archived long before this. Proper maintenance of regression tests can have a significant impact on your delivery. Keeping regression tests that are no longer required or of value just wastes time and effort.
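As a sketch of what such a review task could look like in practice, the snippet below flags regression tests whose feature area has been stable and that have not caught anything for a long time; the metadata fields and the one-year quiet period are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RegressionTestStatus:
    """Illustrative review metadata for one regression test."""
    name: str
    feature_area: str
    last_defect_found: date | None      # None if it has never caught a regression
    feature_last_changed: date
    retired: bool = False

def retirement_candidates(tests: list[RegressionTestStatus],
                          today: date,
                          quiet_period: timedelta = timedelta(days=365)) -> list[str]:
    """Flag tests in stable areas that have found nothing within the quiet period."""
    candidates = []
    for test in tests:
        area_stable = today - test.feature_last_changed > quiet_period
        found_nothing = (test.last_defect_found is None
                         or today - test.last_defect_found > quiet_period)
        if not test.retired and area_stable and found_nothing:
            candidates.append(test.name)
    return candidates
```

The point is not the specific thresholds but that the review is driven by recorded data rather than by gut feel.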
If you aren’t measuring the coverage your regression tests provide, you may be spending too much time for little benefit. Consider the value of your regression tests as you create and manage them. We need to be smart about the regression tests we maintain in order to gain the maximum value from the work we put into creating, running, and analysing their results. If they do not provide confidence in the system or find defects, then why are they in your regression pack?
This topic was discussed in an Ask the Expert virtual webinar, where a poll was taken. Most respondents (41 percent) currently run their regression tests in a separate iteration just before go-live, with running them at the end of each iteration a close second (32 percent). I would be interested to know when you are running your regression tests and what you have done to move them to run continuously throughout the iteration. Join the discussion on our LinkedIn group and share your thoughts.