Optimism has returned to the software development industry across Australia and New Zealand, with over 60% of organisations reporting at least as much project activity now as pre-GFC.
In the recent 2010 Planit Testing Index, which surveyed over 200 organisations, 26% of respondents indicated that they have now moved beyond pre-GFC activity levels.
According to Planit Managing Director Chris Carter, this positive outlook also carries through to planned spending on software testing in 2011. “49% are going to be spending an increased amount of money on structured testing methodologies, which is encouraging to see,” said Carter. “We’re also seeing an increase in spend on software testing tools and on training permanent staff.
“On the downside of things, the number one area where spending will reduce will be offshoring. Interestingly, offshoring has grown over the past few years, but more organisations are looking to decrease offshoring in 2011 compared with last year.”
But in spite of the general optimism and intentions to continue spending more on testing, project outcomes worsened over the past year, with only 42% of projects being delivered on time, on budget and in line with their original scope, compared with 49% in 2009.
The predominant reason for this outcome is the continued burden of poor and changing project requirements, a consistent finding of the Planit Testing Index over the past four years. In fact, 96% of respondents said that better requirements engineering would have a positive impact on project outcomes, with 63% indicating that this impact would be significant.
Another interesting outcome of the 2010 Planit Testing Index was the marked swing towards Agile/RAD methodologies, with 48% of respondents indicating that they commonly use them in software development projects.
Despite the rapid rise in acceptance of these methodologies, Agile/RAD users were only slightly more likely to report a favourable outcome than those who used no defined corporate methodology at all. This result could be at least partially attributed to the relative immaturity with which these methodologies are being implemented, rather than to any fault of the methodologies themselves.
In summing up the year’s findings, Carter emphasised the large amount of room for improvement, stating that outcomes “could be so much better if only we got requirements definition right”. Time will tell whether the industry’s good intentions to improve testing processes will translate into action and positive project outcomes.