Is unit testing a waste of time? Some software pros think so, preferring regression testing or just doing testing after they've checked everything else off their to-do lists. In this tip, learn how unit testing can actually increase productivity.
According to a recent post on Software Quality Insights -- "Is unit testing beneficial?" -- developers who do not consider unit testing of their own code a priority cite a number of reasons:
- They don't know about unit testing.
- Good unit tests are hard to write.
- Unit testing is a waste of time and productivity.
- Writing the unit tests would take too long (especially for frequent iterations).
- Regression testing is more effective.
My previous article, "Making unit testing a priority," explored the first two issues. This article goes on to examine the remaining three.
Productivity versus the right stuff
Does unit testing reduce productivity and waste time? It depends on what we mean by productivity and whose time we are talking about. In a straight coding session, writing just new code, a programmer who is writing unit tests as well will likely write less code than one who is not. If this is how we define productivity, then yes, unit testing makes programmers less productive.
It should, however, be fairly easy to spot the problems with this take on productivity. Lines of code written is not a measure of productivity: it is a measure of the number of lines of code written. There are many examples, from individual classes to whole systems, of code that is an order of magnitude or more longer than it needs to be to perform its function.
What is needed is not more code, but the right code. Unit tests offer a convenient reality check -- and sometimes a brake -- at the level of the code. They offer feedback that the right thing is being developed in the right way. So more tests do indeed lead to less code, but not in a bad way. The difference between development velocity and development speed is direction: taking a few steps in the right direction is better than heading off in the wrong direction at high speed.
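As a minimal sketch of this reality check (the function and test names here are hypothetical, not from the article), a unit test pins down the behavior the code is supposed to have, so code beyond or beside that behavior has no requirement backing it:

```python
# A minimal sketch of unit tests acting as a reality check at the
# level of the code. Function and test names are hypothetical.

def word_count(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

# The tests state what the right code must do; any behavior they do
# not describe is not backed by a requirement.
def test_counts_words():
    assert word_count("the right code") == 3

def test_empty_string_has_no_words():
    assert word_count("") == 0
```

Run these with a test runner such as pytest, or simply call the test functions directly; the point is that the tests, not the line count, indicate whether the right thing was built.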
Another fallacy that needs to be addressed is that writing new code is what developers spend most of their time doing. Although it is what most developers would like to be doing, and what they recall with pride at the end of a good day, the reality can be quite different. Meetings (with the team, managers, customers, venture capitalists, etc.), email, debugging, conversation, documentation, installation, exploration and evaluation, helping out, following up on support requests, merging versions, dealing with the configuration management system and so on all need to be taken into account.
The ratio varies from project to project and company to company, but these are all non-coding activities that, when totaled, add up to more time than is spent coding. The question that has to be asked is how much of that time could be won back as coding time if a little coding time were invested up front in unit testing.
Local optimization, global pessimization
It is also important to see the whole picture. The following tale from Alfred Aho about the initial development of the AWK language has a couple of interesting take-home points:
One of the things that I would have done differently is instituting rigorous testing as we started to develop the language. We initially created AWK as a "throw-away" language, so we didn't do rigorous quality control as part of our initial implementation. …
[T]here was a person who wrote a CAD system in AWK. The reason he initially came to see me was to report a bug in the AWK compiler. He was very testy with me saying I had wasted three weeks of his life, as he had been looking for a bug in his own code only to discover that it was a bug in the AWK compiler! I huddled with Brian Kernighan after this, and we agreed we really need to do something differently in terms of quality control. So we instituted a rigorous regression test for all of the features of AWK. Any of the three of us who put in a new feature into the language from then on, first had to write a test for the new feature.
Of general historical interest here is early advocacy of what has now become the more explicitly articulated test-first approach to programming, where tests for a feature are written -- not merely planned or talked about -- ahead of the implementation of that feature. Also of note is the oft-related tale of how short-term, throw-away code became a long-term, durable solution.
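The test-first discipline can be sketched as follows (the names here are hypothetical, not from the AWK story): the test for a feature is written, and fails, before the feature exists; the implementation is then written to make it pass.

```python
# Test-first sketch (hypothetical names): the test below is written
# before parse_record exists, as an executable specification of the
# feature. It fails until the implementation makes it pass.

def test_splits_field_and_value():
    assert parse_record("name=awk") == ("name", "awk")

# The implementation, written second, to satisfy the test above.
def parse_record(line):
    field, _, value = line.partition("=")
    return (field, value)
```

The test function can refer to `parse_record` before it is defined because the name is only looked up when the test runs, which is what allows the test to exist first and fail first.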
However, of most interest here is the identification of lost productivity. It was not with the authors of the code -- it was with its users. More generally, it was the people who came next in the chain of delivery or development.
The example concerns rigorous testing before delivery rather than unit testing before check-in. It illustrates, however, the consequences of failing to ensure quality before handing off to the next stage in the process, whether that stage is delivering to a customer or checking in code that others will use.
The effect of avoiding a "waste of time and productivity" by not testing is at best a local optimization for the programmer. Any problems there might be are now queued up, and the waste of time and productivity is amplified further down the line. Presumably the programmer will eventually fix those problems too, but that fixing will probably be counted as part of normal work. People who do not consider writing unit tests part of their normal work are likely to treat testing as a noticeable intrusion on that work rather than reconsider the balance of the work itself.
Solving the right problem
Many local optimizations that are in fact global pessimizations come about because a deeper problem goes unrecognized, so fixes are applied to the surface effects rather than the root causes. If there is insufficient time to unit test because of frequent iterations, that suggests the iteration frequency is mismatched with the capabilities of the team or the organization -- which may include the team's ability to unit test.
The goal of short iterations is not to have short iterations; the goal of iterations is to reduce risk, increase value and improve flow by producing functionally complete, release-quality increments in a sustainable manner. A mismatch is an opportunity to discover the actual constraints in the system that limit frequency. If it does not prove possible to remove the constraints, it is better to have iterations that are longer but well-matched to the context than to indulge in agile machismo and try to force a fit.
What of the claim that regression testing is more effective than unit testing? It depends a little on what is meant by regression testing, but it also hints at a misunderstanding over the role of regression testing.
By running the same tests again, regression testing aims to ensure that previously tested behavior is unaffected by any changes to code. But what about new functionality? By definition, you cannot use regression tests to specify new functionality or to demonstrate that new functionality behaves as it should. The clue is in the word regression.
A regression test is not a different granularity of testing from a unit test: it expresses a desired outcome for a test. It is possible to regression test at the unit level, the system level or any level in between. The sensibility of regression testing is to ensure that what should not have changed has not changed. The sensibility of new tests, at whatever level, is to ensure that what should have changed has done so, according to expectation.
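The two sensibilities can be put side by side in a short sketch (all names and the whitespace-collapsing rule are hypothetical): a regression test re-checks behavior that must not change, while a new test specifies behavior that should change -- shown here after the change is implemented, so both pass.

```python
# Hypothetical example contrasting regression tests and new tests.

def normalize(name):
    """Lowercase a name, trimming and collapsing whitespace."""
    return " ".join(name.split()).lower()

# Regression test: re-run after every change to confirm that what
# should not have changed has not changed.
def test_lowercasing_still_holds():
    assert normalize("  AWK  ") == "awk"

# New test: written when the (made-up) whitespace-collapsing rule was
# added, to confirm that what should have changed has changed.
def test_internal_whitespace_is_collapsed():
    assert normalize("A  B") == "a b"
```

Once the new test passes, it joins the regression suite: today's specification of change becomes tomorrow's guard against it.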
About the author: Kevlin Henney is an independent consultant and trainer based in the U.K. His work focuses on software architecture, patterns, development process and programming languages. He is a co-author of A Pattern Language for Distributed Computing and On Patterns and Pattern Languages, two recent volumes in the Pattern-Oriented Software Architecture series. You may contact him at firstname.lastname@example.org.