Despite the growing awareness that testing continuously throughout the development lifecycle both reduces the number of defects and improves the quality of a system's design, it is still common to hear developers, managers, and customers talk of "the testing phase" and "the testing department." This usage hints that a strict separation between testing and other activities is still lodged not just in the vocabulary of many people, but deeply in their perception and thinking as well. There is also an assumption that the principal motivation for testing is to discover defects.
Defects vs. value
The language of agile development often strongly focuses on the delivery of value. From the "Manifesto for Agile Software Development": "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software."
Adding a new feature — at least one that the development sponsor wants — is an obvious addition of value. What then is fixing a bug? Bug is just a cuddly name for defect. So how do we square the presence and removal of defects with the language of value? The typical approach to doing this turns out to be a little simplistic at times. We might consider a defect to be a feature of negative value, which makes removing it a value-adding activity. Although a defect does indeed represent negative value, the problem is that treating it like a feature of negative value has the wrong emphasis and ultimately rewards the wrong behaviors.
If defects are viewed as features with negative value, they become managed as if they were features. Development groups store up prioritized repositories of bugs, treat bugs as user stories, outsource the fixing of bugs, and so on. While each of those can be a useful technique or perspective in dealing with a project in transition or crisis, it is not a long-term view that should be encouraged. After all, as the "Manifesto for Agile Software Development" says, "Working software is the primary measure of progress." It is a little disingenuous to pass off a feature with known defects as complete and working — "Yes, it's done... but there are a few bugs."
Reducing waste vs. reducing overhead
A defect represents a shortfall in the promised functionality. This means that in the first instance those responsible for fixing the defect, i.e., making up the shortfall, should be those who were responsible for undertaking the development of the feature. They are in the best position to know what needs to be done and, more important, to determine how the defect arose and what they can change about their development practices to prevent that kind of defect from arising again. Farming this out to another group removes a critical piece of feedback in the development process. It is a missed opportunity for learning and improving the process and practice of the development team.
In the language of Lean development, the notion of continuous improvement is termed kaizen. Lean thinking sets great store by improving the generation of value. Any activity that does not add value is considered to be muda (waste). This brush stroke might seem a little broad at first sight, but there is a recognized distinction between necessary waste (type-1 muda) and unnecessary waste (type-2 muda).
Type-2 muda is genuinely waste by any definition of the word. Defects fall into this category, as do reports and documents that take time to produce but that no one spends any time reading. Type-1 activities are perhaps better considered overhead than waste. Given that some of what is considered type-1 is constrained by the laws of a country, the laws of physics, or other aspects of reality beyond our control, it seems a little presumptuous to brand them waste, as there is little we can do to clean them up!
For as long as there are defects, their removal is necessary overhead. By properly distinguishing between waste and overhead, rather than simply grouping them together under the one heading of waste, it becomes easier to see the relationship between them. Removing defects does not necessarily reduce overhead, although it does reduce waste. To reduce the necessity for this overhead, the root cause of the waste must be tackled.
Where does testing fit into this picture? Viewed even through the very narrow lens of testing as a defect discovery activity, testing qualifies as necessary overhead. Unless the product is developed by perfect people, in a perfect environment, with a perfect process, ensuring that what is delivered is of value — as opposed to just throwing the product as is over the wall to the consumer and hoping for the best — is itself a necessary activity. In this context, testing does not add value, but it does reduce waste. The net effect is that of an increase in value, but it is important to differentiate between these two directions on the value axis.
If defects are a source of waste, and testing is one way of uncovering them, then having a phase at the end of the development lifecycle labeled testing is a sure way of ensuring poor quality. (Having no testing at all is, of course, an even surer way of achieving that end.) It is a very late stage to be discovering defects in implementation but, more important, it is also a very late stage to be discovering that the expression and understanding of requirements is at fault or that the design of the code is poor. Testing offers much richer feedback than simply defect discovery. Testing tests assumptions about a product as well as the product itself and offers opportunities for learning and reflection. Indeed, viewing testing as simply concerned with defects squarely misses the point of much testing activity and much of what testing can offer.
Tests at the system level represent a concrete expression of a product's requirements. They should not simply be viewed as testing of the product but also as definition of the product. Displacing this activity to the end of the lifecycle places the cart a long way down the road from the horse.
Raw requirements can be quite rough, more of a sketch and an idea than anything concrete and identifiable. It is important to end up with requirements that are refined and framed in terms of being testable rather than just vague and general. Treating tests as requirements (and vice versa) supports a reality-based definition of done — as in "Yes, it's done, here it is" rather than "Yes, it's done, except for a couple of features."
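As a sketch of what treating a test as a requirement can look like, the following hypothetical Python example expresses the invented requirement "a withdrawal may not overdraw an account" directly as an executable test. The `Account` class, its methods, and the rule itself are assumptions made up for illustration, not drawn from any particular product:

```python
# Hypothetical requirement, stated as an executable test:
# "A withdrawal that exceeds the balance must be refused,
#  and the balance must remain unchanged."

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        # Refuse any withdrawal that would overdraw the account.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_cannot_overdraw_account():
    account = Account(balance=100)
    try:
        account.withdraw(150)
        assert False, "expected the withdrawal to be refused"
    except ValueError:
        pass
    # The refused withdrawal must leave the balance untouched.
    assert account.balance == 100
```

A test in this form is unambiguous in a way the prose requirement is not: either the behavior it describes is present, in which case the test passes and the feature is done, or it is not.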
Testing the code on its own terms, such as unit testing, supports similar and additional aims within the code. It is not redundant to test both the code and the whole system because these two different perspectives are just that: two different perspectives. One is concerned with the quality and definition of the whole, while the other focuses on the quality and definition of internal design and its expression. If code is difficult to unit test because of dependencies, that is feedback about the design quality. If code is buggy, that is feedback about the quality of expression.
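To illustrate the point about dependencies and testability, here is a minimal, hypothetical Python sketch. The first function is hard to unit test because its dependency on the system clock is hard-wired; the second takes the time as a parameter, so a test can supply a fixed value. The function names and the greeting rule are invented for this example:

```python
import datetime

def greeting_hardwired():
    # Hard to unit test: the result depends on when the test happens to run,
    # because the dependency on the system clock is baked in.
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

def greeting(now):
    # Easier to unit test: the clock is passed in, so a test can pin the time.
    return "Good morning" if now.hour < 12 else "Good afternoon"

def test_greeting():
    assert greeting(datetime.datetime(2024, 1, 1, 9)) == "Good morning"
    assert greeting(datetime.datetime(2024, 1, 1, 15)) == "Good afternoon"
```

The difficulty of testing the first version is itself the design feedback: the hidden dependency that frustrates the test is the same one that would frustrate reuse.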
Although testing is not simply about finding defects, leaving it until late in the game of development will lead to the accumulation of defects, ranging from misunderstandings about system requirements to simple code oversights. The result is defect drag, which reduces velocity, throws a wrench into estimates and reduces the sustainability of future development. Testing is a part of development and not a supplementary follow-on.
If testing is a part of development, what does that imply for organizational structure? Making responsibility for testing a separate department is one of those ideas that sounds good in theory and looks good on paper. The idea is that it emphasizes the importance of testing and offers an opportunity for specialization — a center for software quality control excellence.
The reality is often quite different. Separating the responsibility for quality of a product from its creation removes essential feedback loops, discourages continuous and integrated testing throughout the development lifecycle, and encourages an abrogation of responsibility by developers — "I don't test my code; that's Testing's job." The result is that the process of software development can degenerate into an adversarial contest rather than a cooperative game. It can foster an "us vs. them" culture of politicking and schedule gaming. Somewhere in all this a software product has to be developed.
The idea that it is healthy competition is often promoted in support of the separation of departments. Although it has a superficial appeal, such thinking is misguided. A normal competitive sport is one in which two or more sides play the same game according to the same rules and out of which there will be winners and losers. If departments are organized with respect to separate activities and responsibilities, they are playing different games, not the same game. If someone has to lose in order for someone else to win, what kind of quality assurance does that offer our product? This seems like a recipe for waste.
If testing is a part of development, that should be reflected in both the development process and the organizational structure.
About the author: Kevlin Henney is an independent consultant and trainer based in the UK. His work focuses on software architecture, patterns, development process and programming languages. He is a coauthor of A Pattern Language for Distributed Computing and On Patterns and Pattern Languages, two recent volumes in the Pattern-Oriented Software Architecture series. You may contact him at firstname.lastname@example.org.