
INSIGHTS / Media Coverage

Can Test Managers Harm Product Quality by Being Too Helpful?

 26 Feb 2014 

I have a tendency to be quite helpful. That may sound like I’m boasting, because surely helping others is a good thing, right? I do find that approaching software projects with a positive and helpful attitude is rewarding, but I’ve recently learnt that a test manager taking on too much responsibility within their team is detrimental to the quality of the release.


Upon joining a project, I take on a strong personal sense of responsibility for releasing a quality product to customers. To investigate the different risks to quality, I need to learn about the processes that go into developing and releasing software at that company. And as I speak with other managers about their teams' roles in a software release, we invariably identify several major project issues.

There are two issues in particular that I’ve encountered across multiple companies:

  • Automated unit test results have not been checked for weeks or months
  • Releases for testing and production are created by different teams

These issues both present serious risks to product quality and should not be ignored by the test manager. I have worked with some outstanding colleagues in the software industry; they were concerned about the risks inherent in each situation, but had limited time, resources and budget to spare for implementing solutions. Each time, I agreed (and often volunteered) to take on these responsibilities within the test team rather than accept the status quo.

Test team checking unit test results

I take my hat off to development teams who have an automated build process, with unit tests that are run as part of creating a build. Unless they never check the results of those unit tests, in which case, why even bother to have them? Too often the reason that development teams stop checking the results is that many of the unit tests have started failing! The failed tests can be time consuming to investigate, and sometimes the problem is an out-of-date test rather than a bug in the program. So development managers lower the priority of unit tests and focus on writing new features and addressing raised defects in order to meet product deadlines.

My response to this situation was to have a tester check the unit test log each morning, and raise a defect in the defect tracking system for each failed unit test. These defects were then assigned out by the development manager and prioritised above other tasks. The problem was no longer so easy to ignore, and the unit tests were getting fixed. Yet in hindsight, I believe this approach was actually harmful to product quality.
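For illustration, here is a minimal sketch of the kind of morning check involved, assuming the build publishes its unit test results as a JUnit-style XML report. The report path, and printing one "raise defect" line per failure, are assumptions for the example rather than a description of the actual tooling we used.

    # Sketch of the daily unit test check described above.
    # Assumes the nightly build writes a JUnit-style XML report; the path
    # and report format are assumptions for illustration only.
    import xml.etree.ElementTree as ET

    REPORT_PATH = "build/reports/unit-tests.xml"  # hypothetical location

    def failed_tests(report_path):
        """Return (test name, message) for every failed or errored test case."""
        failures = []
        root = ET.parse(report_path).getroot()
        for case in root.iter("testcase"):
            problem = case.find("failure")
            if problem is None:
                problem = case.find("error")
            if problem is not None:
                name = f"{case.get('classname')}.{case.get('name')}"
                failures.append((name, (problem.get("message") or "").strip()))
        return failures

    if __name__ == "__main__":
        for name, message in failed_tests(REPORT_PATH):
            # In our process, each of these became a defect in the tracking
            # system, assigned and prioritised by the development manager.
            print(f"Raise defect: unit test '{name}' failed: {message}")

Mechanically the check is trivial; as the rest of this section explains, the problem was not the check itself but who owned it.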

The overhead of handling these defects through the tracking system was substantial. They did not go through triage, but they still took time to raise, assign, re-assign and update. All of that time spent on defect management could have been used more effectively to improve product quality.

Many developers took on less responsibility and accountability for updating unit tests while they were making code changes. They knew that if a unit test failed as a result of their change, the test team would raise a defect, so they could “save time” by only updating tests once defects were assigned to them. This created a situation where passing unit tests could no longer be trusted, because only the failing unit tests were ever reviewed. It took far more time for the test team to manually detect, raise and retest bugs which should have been caught by unit tests as part of the build process.

There were some proactive developers who had already fixed their unit tests before the defect appeared in their queue. They saw these superfluous defects as a nuisance, a distraction and a waste of everybody’s time. Overall I believe that their opinion of the test team was lowered, as they saw us performing what was basically a redundant data entry function. Development and test teams function together much more productively when they have mutual respect.

I tried a much less helpful approach on another project. There, I was strict about demanding that all unit tests were run and passed on a particular build before I would accept that build for testing. This approach forced the development manager to explain every anomaly, and it damaged the working relationship by putting the development manager on the defensive. In practice, tests can fail or be skipped for many different reasons, and this should be left to the discretion of the development manager.

Next time I will encourage the development manager to resolve this issue by checking unit test results within their team. That will allow developers to retain responsibility for the integrity of the unit tests. Also, developers can fix the tests and verify them without raising defects, saving on defect management overhead. For my part, I will regularly confirm that the unit test results are being followed up by the development manager, and keep the lines of communication open between the teams.

An overly helpful approach can have the unintended side effect of harming product quality in various ways. When offering to assist other teams, ensure that you discuss who will ultimately take responsibility for each aspect of the task. Also consider whether the time required for the task could be better spent on testing activities, in the interests of product quality.

Test team creating release packages

In another role, I was responsible for multiple projects being developed concurrently, and the sole release manager resigned shortly before one of those projects was due to be released. That person had been in the role for a long time and there was no documented release process. Without enough time to hire a replacement, I volunteered to create the release package. My theory was that I already knew the version of software to be released, and the location of the files, so how hard could it be to publish the release?

Getting access to the release management tool and learning to use it took me more than two days. It was clunky software at the time and had its own set of bugs to contend with. I found it interesting for the first hour or so, and annoying for the rest of the time. Every hour spent on this task was time taken away from test management for the concurrent software projects.

I published the release to staging, and had the test team verify it. Then I made the release publicly available, and again asked the test team to verify it before sending out release notifications. Once that was complete, the support team installed the public version in their own environment. They immediately reported that their diagnostic tools were missing from the release. Investigation of customer issues would be severely hampered without these tools installed and running on the customer’s computers. A little too late, I learnt that the automated build process produced more than one set of installers, in different shared folders at different network locations. The test team had been installing and testing a subset of the final product.

Looking back, I realise that a short meeting with the managers of various teams would have allowed me to create a checklist for releasing the product. As a management group, we could have then assigned out tasks across the various teams more appropriately. For example, there were developers in the same office who had experience with the release management tool. They could have made the product live in one hour rather than two days. My goal was to save the other managers some time, but the product took longer to release. Being overly helpful had a definite negative effect on the product quality of that particular release.

That knowledge was very useful for future releases, but having the test team publish a release and also verify the release process was another case of being too helpful and negatively impacting product quality. If the test team releases the product, who tests the test team? In the absence of a release manager, it would have made more sense for the support manager to package and publish the release.
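To give a rough sense of what one item on such a release checklist could look like in practice, here is a small sketch that confirms every installer set the build produces is present before anything is published. The folder names and locations are invented for the example; the real build used different shared folders on the network.

    # Illustrative sketch only: confirm every installer set the build produces
    # is present before the release is published. Names and paths are invented.
    from pathlib import Path

    # One entry per installer set the automated build is expected to produce.
    EXPECTED_INSTALLER_SETS = {
        "customer product": Path(r"\\buildshare\release\product"),
        "support diagnostic tools": Path(r"\\buildshare\release\support-tools"),
    }

    def missing_installer_sets(expected=EXPECTED_INSTALLER_SETS):
        """Return the names of installer sets whose folders are absent or empty."""
        missing = []
        for name, folder in expected.items():
            if not folder.is_dir() or not any(folder.iterdir()):
                missing.append(name)
        return missing

    if __name__ == "__main__":
        missing = missing_installer_sets()
        if missing:
            raise SystemExit("Do not publish: missing installer sets: " + ", ".join(missing))
        print("All expected installer sets are present; continue the release checklist.")

A check like this would have caught the missing diagnostic tools before the release went public, regardless of which team ended up owning the release process.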

How helpful is too helpful?

There’s a line to be drawn between helping other teams, and taking on responsibility for aspects of that team’s role. Many times developers have helped my test teams, for example, by writing custom tools for automating specific aspects of testing. That help was much appreciated, and saved the teams an enormous amount of time and budget! Imagine if those developers had then been even more “helpful”, and had taken on the role of setting up the test scenarios, checking the results and raising defects without involvement and oversight from the test team. Certain things would either be tested by both teams or not at all, and either way there would be a negative impact on product quality.

Test managers are in a position to notice issues in the software development process, but that doesn't automatically mean we are best placed to resolve them. When offering to help, solicit input and feedback from representatives of the relevant teams. Make sure that your reporting of these additional activities is just as transparent as your reporting of testing activities, so that you receive effective feedback. Above all, consider whether the risk the issue poses to the project outweighs the risk of having fewer resources focused on testing.

Deliver Quality Quicker

At Planit, we give our clients a competitive edge by providing them with the right advice, expert skills, and technical solutions they need to assure success for their key projects. As your independent quality partner, you gain a fresh set of eyes, an honest account of your systems and processes, and expert solutions and recommendations for your challenges.
 
Find out how we can help you get the most out of your digital platforms and core business systems to deliver quality quicker.

 
