A "way station on the journey to achieving software quality" is how the Burton Group characterizes software testing, yet it's a destination that many organizations still too frequently avoid, according to the market research company's report "To Err Is Human, So Test That Software."
There are, however, several drivers making better testing an imperative, according to the Burton Group: Software that businesses depend on is pervasive; regulatory compliance is an issue for many organizations, while litigation is a concern for all types of businesses; and the emergence of service-oriented architecture (SOA) has shined a light on the potential of reusability. Burton Group recommends organizations assess their testing maturity and work toward improvements.
"Just the sheer complexity of the environment is stressing the situation," said Kirk Knoernschild, an analyst for application platform strategies at the Burton Group. "Things like large-scale SOA and regulatory compliance drain resources. Testing is a resource pool that's often drained. The difficult nature of SOA development, not just the technical aspects but the human/social and cultural aspects, and the impact on the organization in general has an impact on the traditional views of testing."
At a minimum, companies have to come up with new processes surrounding testing. "Testing is much more a process issue than a technology issue or a regulatory issue," he said.
The long-standing problem has been that testing was performed at the eleventh hour, Knoernschild explained.
"Traditionally, testing has been one of those tasks performed late in the lifecycle, whether you were outsourcing software or developing it internally," he said. Typically, testing was allotted about 20% to 30% of a project's timeline, "but unfortunately it was always done at the end. As project scope increased, teams began to breach dates, and the timeline for testing seemed to shrink. Because of that, you have software quality issues."
Knoernschild continued, "To a great extent it goes back to the process; software needs to be tested throughout the lifecycle, which is an advantage of iterative and agile [development]. Agile/iterative, if done correctly, addresses the problem because testing is integral to each iteration. If you can't perform adequate testing, you recognize it's because quality has been compromised, but you recognize it much earlier in the lifecycle."
The evolution of software testing
Testing has evolved on several fronts, Knoernschild said. For one, testing is gaining traction in the industry, more so than even five years ago.
"The notion of developer testing -- unit and integration testing -- gained a lot of ground in the last decade," he said. "There are some real good developer-based testing frameworks that allow developers to test software. This is not as controversial as it once was."
In addition, the notion of automated testing is growing more prevalent. "When you talk about automated testing, you're talking about using an automated test tool to script the test case and using it for a suite of regression tests that can be incorporated into the build process. Ideally those tests run more frequently than in a traditional waterfall lifecycle," Knoernschild said.
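The report doesn't name a specific framework, but the kind of developer testing Knoernschild describes can be sketched with Python's standard `unittest` module. The `parse_price` function and its test cases here are hypothetical illustrations: once scripted, the tests can run automatically as a regression suite in every build rather than waiting for end-of-project manual testing.

```python
import unittest

def parse_price(text):
    """Convert a price string like "$1,234.50" to a float (code under test)."""
    return float(text.replace("$", "").replace(",", ""))

class ParsePriceRegressionTests(unittest.TestCase):
    """Scripted test cases that a build process can execute on every check-in."""

    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    def test_rejects_garbage(self):
        # A non-numeric string should raise rather than return a bogus value.
        with self.assertRaises(ValueError):
            parse_price("not a price")

if __name__ == "__main__":
    unittest.main()
```

A build server would typically run such suites with a command like `python -m unittest discover`, so a regression surfaces within one build cycle instead of at the eleventh hour.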
However, according to the report, many organizations "still maintain a situational and episodic approach to testing and have no organization-wide software quality initiative in place. … Excellent testing solutions that help automate much of the unit, functional, security, and performance testing meet resistance (despite their availability) and frequently end up as shelfware."
Knoernschild said that's often because organizations approach the selection of a testing tool from the wrong angle.
"To a great extent a lot of organizations purchase tools that they hope will solve some problems, and testing tools are a perfect example. They hope they can develop processes around the tool, and really, the way I see tools used most effectively is to find a process that's most effective for the organization and the development team, and then find a tool that supports that process," he said. "To a great extent that's what it boils down to -- most problems surrounding software testing and the SDLC stem from ineffective processes."
There are models for assessing testing maturity, similar to the CMMI-style frameworks used to assess process maturity, but Knoernschild said organizations typically fall to one end of the spectrum or the other: some look for maturity models, while others focus on agility.
"They don't necessarily value a maturity model so much as the human aspect of software development, and to empower individuals to make choices, even surrounding software testing," he said.
The Burton Group report does recommend that organizations set up governance processes for testing: "Software testing is a first-class part of an organization's overall governance effort and merits the same funding and attention that an organization gives to governing the quality of the products and services that it offers for sale."
However, Knoernschild said, "I'm not convinced testing is really gaining traction as something heavily governed at this point. In fact I've seen quite a few organizations that don't have dedicated testing teams and some whose testers play many roles in the SDLC, and that's not necessarily bad. The traditional pitfall of the SDLC is that testers are brought in late."
The advice Knoernschild has for organizations is twofold: "One critical thing any organization can do to improve software quality and software testing methodology is to bring the tester on the project earlier in the lifecycle. Second is to remove the burden from the tester of performing manual tests through the lifecycle and develop a strategy for automation, where testers are monitoring the automated tests, working in the context of the tools. Once the test case is scripted, it becomes an artifact that lives alongside the software being developed."
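Knoernschild's point that "once the test case is scripted, it becomes an artifact that lives alongside the software" can be illustrated with a table-driven regression suite. The `normalize_username` function and the test table below are invented for illustration; the idea is that the table of cases is checked into version control next to the code it exercises, and the tester's role shifts to maintaining the table and monitoring the automated runs.

```python
def normalize_username(name):
    """Lowercase and strip surrounding whitespace -- the code under test."""
    return name.strip().lower()

# The test table is the long-lived artifact: each row is a scripted test
# case (input, expected output), versioned alongside the code. Adding a
# new regression case means adding a row, not writing new test code.
CASES = [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
]

def run_regression_suite():
    """Run every scripted case; return the list of failures (empty = pass)."""
    failures = []
    for raw, expected in CASES:
        actual = normalize_username(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    failures = run_regression_suite()
    print("PASSED" if not failures else f"FAILED: {failures}")
```

Because the suite runs unattended, it fits the automation strategy Knoernschild recommends: testers curate the cases early in the lifecycle while the build executes them continuously.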