Everyone is focused on how to write codebases that are maintainable and refactorable, and on avoiding "tech debt." How do you create and organize maintainable test suites, and how do you refactor and reorganize pre-existing test suites for future maintainability?
The premise of "technical debt" is that writing a quick and dirty program incurs debt in the form of effort and cost, because it makes subsequent maintenance more difficult. Sooner or later, the debt must be paid off by taking extra time to refactor the code so it is cleaner and easier to maintain. Agile's emphasis on short-term gains can cause a rush to code, which then necessitates otherwise avoidable refactoring rework.
Contrary to conventional Agile wisdom, I'd contend that the technical debt people are conscious of is only the tip of the iceberg. The bigger cause of most avoidable added effort is actually all the things the team isn't aware of. These oversights eventually turn into backlog priorities that could have been handled far more easily had they been recognized at the right time. Agile's narrow focus on current work tends to prevent awareness, let alone appreciation, of such wasteful behavior.
To make a more maintainable set of tests, development teams need to think about testing earlier in order to see the potential for later waste. Testing takes up a significant portion of total development time, and when code changes, it often takes disproportionately more time to change an existing test than to create a new one. By breaking free from the constraints of conventional testing wisdom, teams can create tests that are not only more easily maintained but also more effective.
I recommend test-first development, which is similar to test-driven development. This is a valuable XP technique in which the developer writes tests before coding and then writes and revises the code until it passes those tests. Only when the tests pass is the code considered done.
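To make the sequence concrete, here is a minimal sketch of the test-first flow described above. The `discount()` function and its requirements are hypothetical, invented purely for illustration; the point is that the tests are written from the requirements before any implementation exists.

```python
# Test-first: the tests for a hypothetical discount() function are written
# first, derived from the requirements rather than from any code.

def test_basic_discount():
    assert discount(100.0, 25) == 75.0

def test_zero_percent_changes_nothing():
    assert discount(80.0, 0) == 80.0

def test_rejects_negative_price():
    try:
        discount(-1.0, 10)
    except ValueError:
        return
    raise AssertionError("expected ValueError for a negative price")

# The implementation is written after -- and revised until it satisfies --
# the tests above.
def discount(price, percent):
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or percent")
    return round(price * (1 - percent / 100), 2)

# Only when all of the tests pass is the code considered done.
for test in (test_basic_discount, test_zero_percent_changes_nothing,
             test_rejects_negative_price):
    test()
```

In a real project these tests would live in a test framework such as unittest or pytest; the plain functions here just keep the example self-contained.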
The presumption is that tests written before the code do not react to how the code is written. In practice, however, developers commonly write the program in their heads and then unconsciously write tests that demonstrate this expected code works. That is a form of white box testing, and it is especially subject to the problems that arise when tests must change as code changes. Moreover, because developers tend not to be knowledgeable about testing, their test-first tests can be fairly weak.
These hidden shortcomings can be overcome by more consciously designing test-first tests. Take a black box perspective and apply test-design techniques that spot more of the overlooked risk conditions to test. Black box tests need to be passed regardless of how the program is written and, when the code changes, they generally change much less than white box tests.
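One widely used black box test-design technique is boundary value analysis, which derives cases from the specification's edges rather than from the code. The sketch below applies it to a hypothetical age-eligibility rule (valid ages 18 through 65 inclusive); the function name and range are illustrative, not from the article. Because every case comes from the spec, these tests remain valid no matter how the check is implemented.

```python
# A hypothetical validator whose spec says eligible ages are 18-65 inclusive.
def is_eligible_age(age):
    return 18 <= age <= 65

# Boundary value analysis: test just below, on, and just above each edge
# of the specified range. These cases are derived from the spec alone.
boundary_cases = [
    (17, False),  # just below the lower bound
    (18, True),   # on the lower bound
    (19, True),   # just above the lower bound
    (64, True),   # just below the upper bound
    (65, True),   # on the upper bound
    (66, False),  # just above the upper bound
]

for age, expected in boundary_cases:
    assert is_eligible_age(age) == expected, f"failed at age {age}"
```

If the implementation later changes (say, to a lookup table or a database check), these black box cases need not change at all, which is exactly the maintainability advantage described above.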
Black box tests also provide a way to catch parts of the code that are not being executed or should not be present. Such parts should be fixed as they surface instead of having to be refactored later. Surfacing these previously overlooked risks also reduces the likelihood that they will occur, because developers are aware of them as they begin the initial coding. Some issues will never make it to testing if the developers know they'll be tested.
In addition, the impact of changes can be reduced by developing and reusing test design specifications to describe related sets of test cases. A good test design specification can enable the team to identify most of the test cases they'll need in essentially no time. This may sound similar to test suites but, in fact, test design specifications are much different. Test suites are typically groups of prewritten test cases designed to run together. Test suites generally provide little to no guidance in identifying what test cases are needed. The test design specification helps pick the right tests for the project at hand.
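To illustrate the distinction, here is a toy sketch of a test design specification expressed as data. Unlike a test suite's prewritten cases, the spec describes the input conditions to cover, and concrete cases are derived from it; the condition names and partitions below are hypothetical examples, not from the article.

```python
from itertools import product

# A miniature test design specification: it names the input conditions and
# their partitions rather than listing prewritten test cases. (The domain
# here -- a withdrawal feature -- is purely illustrative.)
spec = {
    "account_status": ["active", "frozen", "closed"],
    "amount": ["zero", "typical", "over_limit"],
}

# Derive concrete test cases from the spec. This uses all-combinations
# coverage for simplicity; a real spec might call for pairwise or
# risk-based selection instead.
test_cases = list(product(*spec.values()))

assert len(test_cases) == 9  # 3 statuses x 3 amount partitions
```

When a requirement changes, the team edits one line of the spec and re-derives the cases, rather than hunting through a suite of hand-written tests, which is how a good specification lets the team identify most of the needed cases in essentially no time.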
Robin F. Goldsmith asks:
Do you think test-first development will help mitigate technical debt?