Is software testing a necessary evil in the development and deployment of software projects for Australian and New Zealand IT departments, or an essential tool in reducing budget overruns and quality nightmares?
The creators of the Planit Software Testing Index are calling on software developers and testers to help answer these and other questions with the launch of the 2010 edition of the Index today.
Designed for software testers, developers and related professionals, this year’s survey seeks to shed light on the role that testing plays in software development and its contribution to successful project outcomes.
Planit managing director Chris Carter said the Index matched Planit’s mission to improve the delivery of IT programs and projects in Australia and New Zealand.
“And we want to do that by focusing on how we can improve software testing within projects and programs,” Carter said.
He hoped that this year’s responses would show a continuation of the trend towards software testing being seen as an important component of the development process. In the 2007 Index, 26 percent of respondents described software testing as a necessary evil; by 2009 that figure had dropped to 12 percent. Over the same period, the proportion of respondents who regarded testing as strategically important to their organisation’s success grew from 13 percent to 23 percent.
“People were starting to use project testing as a means of controlling their project costs and controlling their output, rather than just something that was tacked on at the end,” Carter said. “So I think that will be a trend that will continue with people getting more and more serious about it.”
The Index gives participants an opportunity to benchmark themselves against their peers in the region and across vertical industry segments, in terms of when they apply testing, the resources allocated and the methodologies used.
This year’s Index also seeks to shed light on whether the resources allocated to software testing, and the point at which it commences, have a significant impact on project quality, and asks what the usual impediments are to earlier or more extensive testing. Carter said the goal of these questions was to provide those responsible for software testing with a stronger set of quantified data to benchmark themselves against, and to help them improve the efficiency and effectiveness of their testing efforts.
Carter said the survey also hoped to uncover whether software development activity had bounced back after the slowdown that occurred during the global financial crisis. In last year’s Index, the largest group of respondents (31.4 percent) stated that the financial crisis had had no impact on the way projects were executed, while 26.3 percent stated that projects underwent more rigorous evaluation and another 26.3 percent said that resources were reduced.
“Projects have ramped up again and I think it will be interesting to see if the lessons learnt during the GFC with regards to IT spending will continue to be applied,” Carter said.