Published: 25 Nov 2013
While the average developer might think acceptance tests only measure quality, better acceptance tests actually result in better finished code. That is according to Gil Zilberfeld, a product manager for software test tools vendor Typemock. Zilberfeld holds that well-defined acceptance tests improve the quality of the code that passes them. His test-driven spaceship workshop is designed to teach developers how to build better acceptance tests, but studying the program can teach project managers how to improve acceptance tests and make developers more efficient.
Zilberfeld actively speaks and writes about topics such as Agile and test-driven development (TDD). One of his most popular presentations is his TDD workshop. During the workshop, participants embark on a far-reaching space-exploration mission that requires many small teams to work on several complex components to survive and thrive. Well, they won't actually leave the classroom. But still, planning a hypothetical project that involves propulsion, navigation, life-support and weapons systems is just inherently cooler than more realistic scenarios. It also helps developers suspend their disbelief and go along with the program, which is important because the workshop packs a lot of parts into a two-hour run.
Participants are split into pairs, and each pair is given a particular component to work on. Zilberfeld used to split folks up into larger groups, but found that group dynamics take up too much time and are just a tiny part of the bigger picture. "Splitting them into pairs keeps everyone working and on task throughout the whole workshop."
Each developer writes tests for their partner to code against. For each step, they're given roughly ten minutes to crank out a design and another ten minutes to talk over what they did with the whole group. Participants are likely to suffer good-natured jabs from Zilberfeld himself for harebrained designs. "It looks like you just slapped this together in ten minutes," he might admonish.
But Zilberfeld is really drawing out the participants' thoughts and feelings about the process of building their models. Interspersed with the jokes and jabs are honest and open questions about what's going through the participants' minds during the process. For example, he might ask a developer working on the environment systems component (which requires a strict policy against bringing pets on board) "How did you decide that vaporizing dogs was the best way to ensure the 'no dogs allowed' requirement?" As Zilberfeld pointed out himself, "Funny things happen in space."
During the sessions, developers figure out on the fly how to test their code, when to refactor it, how to balance workloads and more. To simulate the real world -- where developers rarely sit together and work as a unit -- Zilberfeld asks participants to avoid talking to each other. It gets more difficult from there.
At first, Zilberfeld just wants the developers to pass code back and forth. That means the code has to be self-explanatory. They can't just tell their partner that the variable "leader" means the commanding officer. They have to either name the variable so it's obvious to their partner or add a comment in the code to let them know. And that's just one place where miscommunications pop up.
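A minimal sketch of that naming point, in Python. The crew-roster function and its parameters are invented for illustration; only the "leader" example comes from the article.

```python
def promote(l, c):
    # Unclear: the partner has to guess that "l" is the commanding
    # officer and "c" is the rest of the crew.
    return [l] + c


def assign_commanding_officer(commanding_officer, crew):
    # Self-explanatory: the names carry the meaning, so the code
    # needs no out-of-band explanation when it's passed across.
    return [commanding_officer] + crew
```

Both functions do the same thing; only the second one survives being handed to a silent partner.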
For the next level, Zilberfeld takes away the developers' ability to see the content of the tests. They get to know the names of the tests, but not the specifics. It's just an empty body. At this point Zilberfeld is demonstrating the importance of giving tests appropriately descriptive names. "Just writing good test names is a skill in itself," he said. "And it's a skill [most developers] can get better at."
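A hypothetical sketch of what the implementing partner sees at this stage: test names only, each body left empty. The navigation scenarios are invented; the point is that the names alone have to communicate the required behavior.

```python
import unittest


class NavigationSystemTests(unittest.TestCase):
    def test_course_is_recalculated_when_an_asteroid_blocks_the_route(self):
        pass  # body hidden from the implementing partner

    def test_jump_is_refused_when_fuel_is_below_the_safety_threshold(self):
        pass  # body hidden from the implementing partner
```

With bodies stripped, a vague name like `test_nav_2` would leave the partner stranded, while these names still read as a specification.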
At the worst end are test names like "Test1," "Test2," etc. On the other end, Zilberfeld sees some programmers (notably Scala programmers) using full sentences to name their tests. At first, Zilberfeld laughed at seeing names that verbose. It seemed like overkill. But he quickly saw the potential in longer names. "The more expressive the code and the file names are, the easier it is for new people to come in and work on the code. That means it's more maintainable."
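The two extremes can be sketched side by side in plain Python test functions. The docking rule and its threshold are invented for illustration.

```python
def dock(approach_speed):
    # Hypothetical docking rule: slow approaches dock, fast ones abort.
    return "docked" if approach_speed < 2 else "aborted"


def test1():
    # Opaque name: a failure report tells a newcomer nothing.
    assert dock(1) == "docked"


def test_docking_aborts_when_approach_speed_is_two_meters_per_second_or_more():
    # Sentence-like name: the failure report reads as a requirement,
    # so new people can work on the code without asking around.
    assert dock(3) == "aborted"
```

When `test1` fails, the maintainer has to open the file; when the second test fails, its name already states which requirement broke.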
For the third and final round, Zilberfeld makes everyone switch partners. At this point, one of the partners knows at least half the code really well (because he or she just wrote it) and the other one is completely new. The new partner may have been working on a different component before and would therefore be less aware of the current requirements.
Now the developers get a brief chance to introduce themselves and the current status of the project they're working on, and then it's back to zero communication, aside from sharing code. This is the point where power struggles are likely to emerge. The new partner is likely to feel lost on the new project and probably won't be as productive as the other. The other is likely to feel that bringing the new partner up to speed holds the project back.
How do they deal with these new roles? The answers are different for each partnership. "The takeaway," Zilberfeld said, "is the introspection into the process."
The exercise lets participants share an understanding of the development process from a new point of view. "When you see people explain what happened to them and what they were thinking, it highlights that need for understanding." Building that understanding into the acceptance tests up front doesn't just measure how good the software is; it actually improves the quality of the code from the very beginning.
Have a question about test-driven development? Let us know and we'll pass your question on to one of our experts.