SSQ: You describe test-driven development as a form of "unit test." Do you recommend that these tests be written by the same developers that write the code?
Christian Johansen: Definitely, anything else would not work. Test-driven development uses unit tests as its main tool, but the primary goal of TDD is not testing in itself, but rather code design. TDD is an iterative process where iterations are typically short and always start by writing a unit test which describes some specific behavior. When the test is in place and confirmed to fail in the expected way, just enough code is added to make it pass.
By starting with the test (or "spec"), the programmer is encouraged to focus on the behavior of the system rather than the implementation. The implementation comes second, and is typically changed frequently as the system grows through refactoring. Because the system is always approached through unit tests, the resulting code is inherently testable and usually loosely coupled, which can be a hard goal to reach when developing software the traditional way.
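The cycle Johansen describes can be sketched in a few lines. This is a minimal illustration, not from the interview: `Stack` and its test are hypothetical, using Python's `unittest`. In a real session the test class would be written first, confirmed to fail, and then just enough of `Stack` added to make it pass.

```python
import unittest

class Stack:
    """Just enough implementation to make the spec below pass."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackTest(unittest.TestCase):
    # Written first: describes one specific behavior, not the implementation.
    def test_pop_returns_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("first")
        stack.push("second")
        self.assertEqual("second", stack.pop())
```

Running this with `python -m unittest` before the `Stack` methods exist would fail in the expected way; the passing version above is the end of one iteration, after which the code is free to be refactored under the test's protection.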
SSQ: Do you think that people who are traditional "testers" or have a QA background should learn to do TDD, or is this strictly a development activity?
Johansen: Well, TDD really is a programming strategy, and as such it does not make a whole lot of sense for testers to be doing it unless they are actually writing code. What can be beneficial for testers, however, is to look into automated testing, of which the unit test is one example. Depending on her level of technical expertise, a tester might want to familiarize herself with tools such as Selenium, Cucumber or even a unit test framework, which can be used to write high-level functional tests that directly support project requirements.
SSQ: When I was a software developer, we learned to unit test by creating tests that would hit all our code paths. We called this "white-box testing." With TDD, is there a risk that low-level code, for example code that handles things like memory management or cleanup or other activities that do not result in visible outputs, would not get handled?
Johansen: These white-box tests are basically the kind you write most when practicing TDD. Because of the way TDD works, there really isn't any risk of leaving functionality out of your tests unless you're "cheating." It all depends on how you are applying TDD to your project. Bottom-up development typically starts with the lowest-level unit and builds towards end-user functionality, one test at a time. In this situation there isn't much of a risk that low-level code will be left untested, but there is a risk of leaving out important functional or integration tests.
The top-down approach to TDD typically starts with a high-level test mimicking end-user functionality (or something close to it), and then the programmer works her way down to the lowest level. In this approach there are typically more functional tests and fewer unit tests, and there is some risk of leaving out important low-level tests.
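The two starting points can be sketched with a pair of hypothetical Python tests (the names `format_price` and `receipt_text` are invented for illustration). A bottom-up session would typically begin with the first test, the low-level unit; a top-down session would begin with the second, the one closest to end-user-visible output.

```python
import unittest

def format_price(cents):
    """Low-level unit: render an integer number of cents as a price string."""
    return "$%d.%02d" % divmod(cents, 100)

def receipt_text(items):
    """Higher-level behavior built on the unit: one line per (name, cents) pair."""
    return "\n".join("%s %s" % (name, format_price(cents))
                     for name, cents in items)

class BottomUpStartsHere(unittest.TestCase):
    # Bottom-up: drive out the smallest unit first.
    def test_format_price_pads_cents_to_two_digits(self):
        self.assertEqual("$3.05", format_price(305))

class TopDownStartsHere(unittest.TestCase):
    # Top-down: start from end-user-visible output and work downwards.
    def test_receipt_lists_each_item_with_its_price(self):
        self.assertEqual("coffee $2.50\nbun $1.05",
                         receipt_text([("coffee", 250), ("bun", 105)]))
```

Either way both tests exist eventually; the ordering decides whether the functional test or the unit test is the one most likely to be skipped.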
There is definitely no higher risk of leaving out tests for non-visible behavior with TDD than with traditional development. If anything, the risk would be lower, but again, it depends on the programmer.
SSQ: I've heard of "Automated Test Design" where unit test cases are generated from a modeling language. Do you recommend this? What are the pros and cons?
Johansen: I don't think I have a general recommendation for generated tests. Like most things, generated tests have their place, but they are not the right tool for every situation. For instance, I don't think generated tests can really "relieve" you of writing TDD-style tests, because those tests are important programming tools. They are more like small specifications for your system, and writing them is an integral part of the development. However, generated tests could definitely be useful for exercising corner cases and producing a more rigorous regression suite than what results from a typical TDD session.
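The corner-case role Johansen assigns to generated tests can be approximated even without a dedicated generation tool. Below is a sketch using Python's `unittest` with `subTest`: the hand-written TDD specs would cover a few representative examples, while a generated loop sweeps the boundary values. `clamp` is a hypothetical unit under test, not something from the interview.

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: restrict value to the range [low, high]."""
    return max(low, min(high, value))

class GeneratedClampTests(unittest.TestCase):
    def test_generated_boundary_cases(self):
        # A TDD session would hand-pick one or two of these; generating the
        # whole set gives a denser regression net around the boundaries.
        low, high = 0, 10
        corner_values = [low - 1, low, low + 1, high - 1, high, high + 1]
        for value in corner_values:
            with self.subTest(value=value):
                result = clamp(value, low, high)
                self.assertGreaterEqual(result, low)
                self.assertLessEqual(result, high)
```

The generated cases assert a property (the result stays in range) rather than a specific behavior, which is exactly why they complement, rather than replace, the example-driven specs of a TDD session.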