There are three questions that every test manager should be able to answer; in fact, the people employing those test managers should also know the answers to these questions. The questions are:
- How effective is your testing?
- How efficient is your testing?
- What are you doing about it?
If you can answer these questions well, you've got a potential framework for measuring and reporting performance levels within the testing function. But too often they cannot be answered, and that points to a lack of proper governance over the testing function. In this tip you'll learn the metrics needed to ensure an effective and efficient test organization.
What is meant by the first question, "How effective is your testing?"
This is about the fundamental purpose of testing: to find defects before they reach production. An effective testing function catches defects during testing rather than letting them escape into live use.
"Defects into production" sounds like an obvious metric to monitor. Is that true?
The "defects into production" metric really is the ultimate measure of testing success and should be the primary focus for every test manager, but it's surprising how few test managers actually track this. Let's take a project with multiple releases: an effective test manager would work to see the percentage of post-deployment defects found in each release dropping during the life of the application. If a test manager can do that, he or she will be adding real value to the business.
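The release-over-release trend described above can be sketched in a few lines. The release names, defect counts and the "escape rate" calculation below are illustrative assumptions, not figures from the article; the point is simply that the share of defects found after deployment should fall across releases.

```python
# Hypothetical release data: defects found in testing vs. found post-deployment.
# All names and numbers are invented for illustration.
releases = {
    "1.0": {"found_in_test": 120, "found_in_production": 30},
    "1.1": {"found_in_test": 95,  "found_in_production": 18},
    "1.2": {"found_in_test": 88,  "found_in_production": 9},
}

def escape_rate(release):
    """Share of all known defects that escaped into production."""
    total = release["found_in_test"] + release["found_in_production"]
    return release["found_in_production"] / total

rates = [escape_rate(r) for r in releases.values()]

# An effective test organization should see this percentage drop
# during the life of the application.
improving = all(later < earlier for earlier, later in zip(rates, rates[1:]))
print([round(r, 2) for r in rates], "improving:", improving)
```

A test manager could run a calculation like this per release and report the trend line rather than any single number.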
How does efficiency factor in the equation?
Once a level of effectiveness has been established, the focus should switch to efficiency. It would be a mistake to assume that just because a testing function is delivering good quality it is a good testing function. Finishing the 100-meter sprint is an achievement, but not if it takes you an hour.
So, how do you measure efficiency?
Efficiency is about metrics and goals. When selecting metrics, the first step is to identify the goals; ask yourself, what's going to bring you the most benefit? With the goals in mind, you can select the relevant metrics; there are many possible metrics, and you don't want to be collecting metrics for the metrics' sake. It's important to remember that when monitoring metrics you get the most value by analyzing the trends. Test managers (and their sponsors) can even attach targets to the trends, with the targets based on quantitative goals, market averages or even instinctive measures.
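To make the "trends plus targets" idea concrete, here is a minimal sketch. The metric (test cases executed per tester-day), the quarterly figures and the 10% improvement target are all assumptions for illustration, not recommendations from the article.

```python
# Illustrative efficiency metric: test cases executed per tester-day, by quarter.
# The figures and the 10% target are invented for this sketch.
throughput = [42.0, 44.5, 47.1, 51.0]  # oldest first, most recent last

baseline = throughput[0]
target = baseline * 1.10  # a quantitative goal: +10% over the baseline

# The trend matters more than any single data point, so compute
# the quarter-over-quarter change alongside the target check.
quarter_over_quarter = [(b - a) / a for a, b in zip(throughput, throughput[1:])]
met_target = throughput[-1] >= target

print("QoQ change:", [f"{c:+.1%}" for c in quarter_over_quarter])
print("target met:", met_target)
```

Whatever metric is chosen, the same pattern applies: pick a goal, derive a target, and report the direction of travel rather than a one-off snapshot.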
Ok, so you've got the effectiveness and efficiency metrics. How can they be used?
Test managers need to act on the metrics. They should look for ways to deliver better services by testing faster, cheaper and better. But objections often get raised: "What we are doing at the moment is OK, and my sponsor is happy, so I don't need to change," or, "Improving test processes takes time and money." Testing is often seen as a static activity, but we seldom operate in a stable and routine environment, and we need testing to be dynamically driven, continually improving the service it delivers. Faster testing, through automation and offshore "around the clock" solutions, makes it possible to reduce testing timelines and increase throughput, potentially testing more in a shorter time. This in turn enables cheaper testing. You can help manage the cost of testing down by increasing the intelligence behind the test effort; better automation, for example, generally means more accurate testing, which helps you find defects earlier and increases confidence in test and project deadlines.
Do you have any closing thoughts on how test managers should think about these issues?
Test process improvement at its best is an organic activity; all members of the team should be continually looking for ways to improve efficiency. The test manager's role is pivotal: removing blockers, encouraging innovation and supporting improvements whether they originate from within the team or from outside. Of course some elements of test process improvement will take time and cost money, but there's the potential for a virtuous circle here: improving testing efficiency is likely to have an impact on testing effectiveness, which in turn can have an effect on efficiency. Finally, adopting test automation or providing a new test environment requires structured investment. Quantifying the return on such an investment can be difficult, but with concrete metrics in hand, the job is a lot easier.
Ivan Ericsson has been a Test Manager for over 15 years in a range of sectors and working for organisations throughout Europe. He is a Director with SQS Software Quality Systems and, as well as currently leading a strategic project in Sweden, he is responsible for delivery assurance within SQS. A regular presenter at test conferences, he is particularly interested in test management and test process improvement.
SQS is the largest independent provider of software quality management, quality assurance and testing services in Europe. Founded in Cologne in 1982, SQS employs 1,700 staff. Along with a strong presence in Germany and the UK, SQS has further subsidiaries in Egypt, Finland, India, Ireland, the Netherlands, Norway, Austria, Sweden, Switzerland, South Africa and the US. In addition, SQS maintains a minority stake in a company in Portugal and a cooperative venture in Spain. In 2009, SQS generated sales of 134.3 million Euros. The core business of SQS is providing managed services for software testing.
SQS is the first German company to have a primary listing on the AIM (Alternative Investment Market) in London. In addition, SQS has a dual listing on the open market of the German Stock Exchange in Frankfurt am Main.
With over 5,000 completed projects under its belt, SQS has a strong client base, including half of the DAX 30, nearly a third of the STOXX 50 and 20 of the FTSE 100 companies. These include, among others, Allianz, Beazley, BP, Centrica, Commerzbank, Daimler, Deutsche Post, Generali, JP Morgan, Meteor, Reuters and Volkswagen as well as companies from every other conceivable sector.
This was first published in November 2010