Quality is vital to your software. The cost of fixing a defect grows exponentially over time: from virtually nothing when the defect is caught during coding, to something that could kill your company if it reaches the general marketplace with serious consequences. Finding bugs by QA inspection is costly, maddening, and inefficient. Appropriate automated testing, both triggered and scheduled, is the way to efficiently bake quality into your code from the start, keeping costs low and quality high, and freeing your QA team to act in a quality assurance role rather than merely a quality control role.
One surefire way to lose customers is to have them find defects in your software for you. You probably have your hands full just producing, packaging, and selling your software. Schedule pressure is felt hardest in more traditional shops that use their QA staff to find the latent defects in the code, because everyone is anxious to take the code and "Ship It!" That pressure can result in undiscovered defects, regressions in functionality that used to work, and late projects. The underlying problem is that defects are hard to find and costly to fix.
And the cost to fix a defect only grows the later you find it. Why is that? The causes are:
- The closer a defect is observed relative to the time that a code change created it, the easier it is for a developer to isolate what change caused the defect. If you know that everything worked until the last change was made, you know where the defect was created and where to start looking for the lines of code that are the culprit.
- When a defect is caught right after the code change that created it, the details of what the code is and how it all hangs together are still fresh in the developer's mind. This makes fixing the defect quicker and easier.
- When a defect is not found until lots of people are affected, the cost of multiple levels of testing and retesting, defect tracking, release notes, customer notifications, and so on is much higher than if the defect is found while it affects just a few people, such as the developers and testers.
- Once a defect affects end users, it may cost far more than the fix itself. There may be collateral damage to reputation, or corporate liability, that can affect the bottom line! See "Why Software Fails," IEEE Spectrum, September 2005, for some humorous horror stories of how software can really affect the bottom line.
Finding defects by testing is really inefficient
You have probably heard of the infinite monkey theorem, which Arthur Eddington put this way: "If an army of monkeys were strumming on typewriters, they might write all the books in the British Museum." I'll restate that as, "If an army of QA testers were testing your software, they might find all of the defects in your software." For any non-trivial piece of software, the number of paths through your code is enormous. Having a QA team simply test "whatever" until they find a defect, and continue until each and every defect is found and fixed (remembering to retest everything after each fix to make sure that nothing regressed), is a horribly inefficient way to make sure that your code quality is high.
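To see why exhaustive manual testing is hopeless, a back-of-the-envelope calculation helps. The branch count here is an illustrative assumption, not a measurement of any real codebase:

```python
# Even a modest module with 30 independent if/else branches has
# 2**30 distinct execution paths -- over a billion combinations
# that a "test whatever until it breaks" approach would have to
# stumble through to cover the code exhaustively.
branches = 30
paths = 2 ** branches
print(paths)  # 1073741824
```

Real programs have loops and interacting state on top of simple branching, so the true path count is even worse.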
There must be a better way to create quality defect-free software than by simply testing your way into quality.
So, what can be done to create more defect-free software?
The answer is that instead of having QA act as a sort of "quality control" team where defective product is "pushed aside for remanufacture," we need to bring them in to start testing early and often, in a manner that helps identify how a defect was created and keeps it from occurring again. We need to engage our QA team in the role they were hired for, which is to ensure quality, and use their creative talents in areas they would otherwise be unable to reach because of the labor involved in completely regression testing every time a line of code changes. That's where continuous integration comes into play.
Once we find a defect, either by some test (QA or otherwise) or by a helpful end-user (such as “beta testers”), we need to write an automated regression test, get the test to pass by fixing the code, and then add the test to the automated regression suite to make sure that no functionality regresses in the future due to further code changes.
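As a sketch of that workflow, suppose a beta tester reports that a price-parsing helper mishandles thousands separators. The function name and the bug are hypothetical, invented for illustration: capture the report as a test, fix the code so the test passes, then leave the test in the suite.

```python
def parse_price(text: str) -> float:
    """Parse a price string such as "1,234.50" into a float.

    The original (buggy) version passed the raw string straight to
    float() and crashed on the comma; stripping the separators is
    the fix that makes the regression test below pass.
    """
    return float(text.replace(",", ""))


def test_parse_price_handles_thousands_separator():
    # Written straight from the defect report: this test failed
    # against the old code and now guards against a regression.
    assert parse_price("1,234.50") == 1234.50


if __name__ == "__main__":
    test_parse_price_handles_thousands_separator()
    print("regression test passed")
```

The key step is the last one: the test stays in the automated suite forever, so any future change that reintroduces the defect fails the build immediately.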
Having all of these tests run as close as possible to the time the code is changed, such as on every commit to the source control repository, is ideal. But that's not always practical, especially when running the entire regression suite takes an appreciable amount of time.
Instead, we can run a smaller subset of fast running tests on our Continuous Integration server after the CI server is triggered by a check-in to begin a build cycle. That will build confidence, but it’s not enough. We should augment that with a scheduled CI server run of the complete regression suite, perhaps every night. That’s better. The more often we run the tests, on as many configurations as possible (think different OS environments, different hardware variants, different browsers on different platforms, etc.), the higher we can raise our comfort level and confidence that the software we have created is of high quality. After all, we don’t have an infinite army of Eddington monkeys to do our testing for us. Nor can we afford to skimp on testing.
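The two-tier strategy above can be sketched as a simple trigger-to-suite mapping. The suite names and the fast/slow split are illustrative assumptions; a real project would drive this from test markers or tags in its CI configuration:

```python
# Fast "smoke" tests run on every triggered (per-commit) build;
# the scheduled nightly build runs the full regression suite.
FAST_TESTS = ["test_login", "test_checkout_happy_path"]
FULL_SUITE = FAST_TESTS + [
    "test_checkout_all_locales",  # slow cross-configuration runs
    "test_load_profile",          # long-running performance test
]


def select_tests(trigger: str) -> list[str]:
    """Pick the test set for a given CI trigger."""
    if trigger == "commit":    # triggered build: keep it quick
        return FAST_TESTS
    if trigger == "nightly":   # scheduled build: run everything
        return FULL_SUITE
    raise ValueError(f"unknown CI trigger: {trigger!r}")
```

The design point is that both tiers run the same tests through the same automation; the schedule only decides how much confidence you buy per run.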
Ultimately, we want to use our QA staff for what we hired them for: to creatively design tests that probe deep into the difficult crevices of functionality where defects hide. Using these QA people in a QC role, simply having them manually run the tests and review the results, is boring, costly, and inefficient. Let your CI server do the running and reporting. Give your QA staff a power tool labeled "Continuous Integration server" and let them follow the development team, adding tests as the developers add functionality. That's how quality code is produced.
Kent Beck once wrote that you should “write tests until fear is transformed into boredom.” And I’ll add that “You can’t manually test your way into quality code. You have to bake the testing into the code while the code is still cooking to make it work.”
For a comprehensive resource on continuous integration, see Continuous integration: Achieving speed and quality in release management.