I have a love-hate relationship with quality metrics. On one hand, I find them extremely valuable in helping us analyze, reflect and improve. On the other hand, quality metrics can be very misleading, or can drive behaviors that aren't healthy.
Defect count quotas
When it comes to measuring quality, "defect counts" are a common metric. This has been problematic for me since my early days as a software developer at IBM in the '80s. At that time, we were using the classic Waterfall approach to software development, and under the process we followed, code could not move along to the next phase until a certain number of "defects" had been found. The quota of defects was determined by the number of lines of code in the system under development. So, for example, developers were required to find so many defects during unit testing, and then testers, who performed functional tests, were also required to find a certain number of bugs before the code could be deployed.
One problem with this approach was that there was no "weighting" for defects. The count of defects was not based on severity, so a typo counted the same as something that could cause a mission-critical outage. As a result, a lot of pretty trivial bugs were opened in order to reach the required quota.
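To make the weighting problem concrete, here is a minimal sketch of a severity-weighted defect score. The severity levels, weights and field names are purely illustrative assumptions of mine, not part of IBM's actual process:

```python
# Illustrative severity weights -- a real team would calibrate these.
SEVERITY_WEIGHTS = {
    "critical": 10,  # e.g., could cause a mission-critical outage
    "major": 5,
    "minor": 2,
    "trivial": 1,    # e.g., a typo
}

def weighted_defect_score(defects):
    """Sum severity weights so a typo no longer counts the same as an outage."""
    return sum(SEVERITY_WEIGHTS[d["severity"]] for d in defects)

defects = [
    {"id": 1, "severity": "critical"},
    {"id": 2, "severity": "trivial"},
    {"id": 3, "severity": "trivial"},
]
print(weighted_defect_score(defects))  # weighted score of 12 vs. a raw count of 3
```

Under a raw count, one critical bug and two typos look identical to three typos; a weighted score at least distinguishes them, though it still rewards volume over insight.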
Another downside was that the test team would report bugs on "ridiculous" -- at least in the minds of us developers -- scenarios that any "sane" customer would never do. This resulted in a lot of bugs closed as "user errors." There was also the classic and somewhat condescending, "It works on my machine," response from developers to bug reports. Needless to say, the relationship between developers and testers was not so good.
What happens when there is a lack of unit testing
Fast forward 15 years or so to the early 2000s, when I was a QA manager at Sun Microsystems. Having come from the world of development, I was focused on a collaborative relationship between dev and QA, and, for the most part, dev and QA were friends. As a QA manager, I still had to do a lot of reporting on defects, but luckily, QA was not required to find a certain number of bugs based on the number of lines of code in the system under test. However, not only were there no strict rules for QA, but developers weren't even required to do unit testing.
Even though dev and QA were "friends," the teams were still separated organizationally, with the developers coding and then passing code over to the QA team to test. There was an application that my QA team was testing that was clearly of poor quality. My primary clue was that the team found obvious bugs easily. Despite my recommendation against it, the application was deployed to production "on schedule." This was a low-risk internal application, so even though I had my qualms, I understood the decision to move forward.
Zero defects may not mean high quality
Imagine my surprise when, a month later, that application team won a prize for quality because there were zero reported defects. I love prizes, so I was happy and proud of the team, but couldn't help but think, "No way!" I did a little behind-the-scenes investigating, and guess how many people had used that application? Zero. That's right. Not a single person had even logged into the application. I can't remember now why no one had used it. It could have been that they really didn't need it, or maybe its availability hadn't been well communicated. But it taught me one important lesson: Quality cannot be measured purely by the number of defects found. Customer usage and feedback are essential factors.
Fast forward another 15 years to today's world of Agile development. Developers and testers work together to provide quality. We have continuous integration and test-driven development and other techniques to help bake quality in early. We have demos and check in with stakeholders after every short sprint, so that we get early feedback.
There are no easy answers about how best to measure quality. Both the disciplined quality processes used at IBM and the more informal processes I experienced at Sun were appropriate at the time and for the applications we were testing. Agile processes don't guarantee quality, either. The important thing is to learn and refine, whether we're talking about software applications, processes or quality metrics. Whether you use defect counts or some other metric, Agile practices tell us to get feedback and to act on that feedback.