How can project management be aided by testing and automated testing in particular?
That is an interesting question. Let me explain what I mean by these ideas before answering it. Project management, to me, is the planning, organizing, securing, leading, and controlling of resources to achieve a specific goal: completing a software project.
Testing is the examination and evaluation of a software product to discover how it truly behaves.
Automated testing is the common name for what I think of as machine-assisted testing: a tool or set of tools takes over the repetitive steps so that, ideally, a thinking tester is free to examine results in depth.
The pitfall I see is that many organizations look for simple ways to measure progress. A common approach is to count test cases executed, to be executed, in progress, failed, or blocked. This tells us something about what we believed, at some point, needed to be done to test the application. It does not seem to matter whether the testing is manual or machine-assisted/automated; this is a fairly common measure used to judge progress in the test effort.
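As a minimal sketch of the kind of dashboard count described above (the test-case IDs and statuses here are hypothetical, not from any real project), the tally might look like:

```python
from collections import Counter

# Hypothetical test-case records, as a progress dashboard might store them.
test_cases = [
    {"id": "TC-1", "status": "passed"},
    {"id": "TC-2", "status": "passed"},
    {"id": "TC-3", "status": "failed"},
    {"id": "TC-4", "status": "blocked"},
    {"id": "TC-5", "status": "not run"},
]

# Count cases by status -- the "simple measure" many organizations report.
counts = Counter(tc["status"] for tc in test_cases)
print(dict(counts))
```

Note that such a tally says nothing about which areas of the system those cases cover, or how deeply; that gap is the subject of the rest of this piece.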
I normally urge caution against the temptation to judge progress by the number of test cases executed, the number of high-priority defects documented and fixed, and other common measurements used to inform people on the state of the project. Often, I have found these to be unreliable as measures of the actual state of the application.
This ties to how tests are envisioned, developed, and executed. The information gained from them can point to other tests that were not considered or envisioned when the tests were originally developed and mapped out. Like a map, a test plan is a representation based on our understanding of the environment (terrain or computer system). As we get into the actual environment, we may learn things that were not known or considered possible at the time of the original planning.
What I find more informative, and more useful for project managers and others interested in the progress of testing, is an understanding of which areas of the system have been tested and to what depth. Sheer numbers of tests run, passed, failed, and waiting to be run tell less of the story of the system than we might wish to believe.
It is up to the testers to learn to relate what they have found about the system, its behaviors, the things that seem good and the things that are potential problems, even if all the test cases “pass.” It is up to the organization’s management and leadership to have the courage to step away from easy measurements and look instead for a deeper understanding of the system’s true nature, state, and suitability for purpose.
This was first published in July 2012