What kind of testing is required during the build and release phase of a project?
I am going to make some presumptions because the shops I work in, and have worked in, do not use that specific term, and I want to be clear on the context in which I answer this question. I presume that you are approaching this as related to, or part of, Continuous Integration. (In this case, we call it generically “build & testing.”)
First, “What kind of testing is required?” Simply put: None. You don’t need to test. There may be business needs or legal requirements you must meet, but that will vary by context. Likewise, there may be business and legal ramifications if you do not test. Again, that will vary by context.
Having said that, most of us find we should do some checking – that is, machine-driven checks against specific expected results for specific conditions, as opposed to testing, which involves a human being, even if only to review the results of each run. (For convenience, I'll call these checks "tests," even though I really don't think they are.)
There are some types of tests that I suspect are generally a good idea. Unit tests designed around each of the components are a good start; that seems pretty obvious to most, no? You might then add simple tests around basic functions, exercising the integration of the individual units and building toward logical function and function-integration tests. Building levels of complexity into the tests can help establish some level of confidence in the build being generated.
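To make the layering concrete, here is a minimal sketch in Python's standard `unittest` framework. The functions `parse_amount` and `apply_discount` are invented for illustration; they stand in for any two units you would check individually before checking them together.

```python
import unittest

def parse_amount(text):
    """First unit: convert a price string like '$10.50' to cents."""
    return int(round(float(text.lstrip("$")) * 100))

def apply_discount(cents, percent):
    """Second unit: apply a percentage discount, rounding down."""
    return cents - (cents * percent) // 100

class UnitTests(unittest.TestCase):
    """Lowest level: one check per component, a specific input
    against a specific expected result."""
    def test_parse_amount(self):
        self.assertEqual(parse_amount("$10.50"), 1050)

    def test_apply_discount(self):
        self.assertEqual(apply_discount(1000, 10), 900)

class IntegrationTests(unittest.TestCase):
    """Next level up: exercise the two units working together."""
    def test_discounted_price_from_text(self):
        self.assertEqual(apply_discount(parse_amount("$20.00"), 25), 1500)
```

A CI job would run a file like this with `python -m unittest`, running the unit-level suite first so that a failure there explains any failure at the integration level.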
Some warnings around these ideas. First, the higher-level the tests, in my experience, the greater the likelihood of false alarms in the results. This may lead to the temptation to allow "variances" that bypass some of these tests as "getting in the way." Some of these variances may be legitimate; some will need closer scrutiny. The Build Master will need to consider how to balance these situations.
One great challenge in creating tests to support CI efforts is making tests that remain relevant and meaningful. The more complex or high-level your tests, the greater the likelihood that they will encounter difficulties and trip false errors. The more often your tests "cry wolf" with false errors, the more likely it is that legitimate problems will be overlooked, if not lost entirely in the chatter of false failure reports.
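One way to grant a "variance" without silently deleting the check is to quarantine it so every run's report still shows it as skipped, with a stated reason. A minimal sketch, again using the standard `unittest` module; the test names and the reason text are hypothetical:

```python
import unittest

class HighLevelTests(unittest.TestCase):
    # Quarantined, not deleted: the skip reason appears in every CI
    # report, so the variance stays visible until it gets scrutiny.
    @unittest.skip("quarantined: intermittent timeout under CI load; "
                   "needs closer scrutiny before re-enabling")
    def test_checkout_end_to_end(self):
        self.fail("flaky high-level scenario lives here")

    def test_checkout_totals(self):
        # A narrower, stable check kept running so the area
        # is not left entirely untested while the variance stands.
        self.assertEqual(2 * 150, 300)
```

The design point is that a skip is recorded in the results rather than removed from them, which keeps the Build Master's balancing act honest: a legitimate variance is documented, and an illegitimate one keeps nagging.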
Finally, a general cautionary note. Too often, people rely on these automated checks to do all their testing. That is a potentially dangerous idea. These checks cannot replace the work done by skilled testers in critically measuring and critiquing a system or application. I have encountered programmers and developers who argue that the test group must have done something wrong because the build tests all "worked." That discussion may need to wait for another time, except to say: do not fall into that trap.