How is Agile testing different from traditional testing? In part one of this two-part interview with Agile consultant George Dinwiddie, we explore some of the differences between testing in traditional and Agile environments, both in terms of skill sets and metrics.
SSQ: George, can you tell us a little bit about your background and how you became involved in Agile testing?
George Dinwiddie: I started my engineering career in hardware development, which naturally led to writing embedded systems code. There being relatively few embedded systems jobs on the East Coast at the time, I moved into government and business systems, bringing high standards for system correctness that I learned in embedded systems where things often have to run for long periods, unattended.
I never actually had a job title of tester, though one employer tried to push me in that direction because I wrote tests for my own code. After picking up the practice of test-driven development (TDD), it was a natural progression to writing tests that embodied the user story I was implementing. After making the transition from developer to coach, I saw the value in getting testers involved in this process, rather than making them wait until the end when everyone was anxious to ship the system.
SSQ: If someone has been a skilled and high-performing tester in a traditional environment, will they succeed in an Agile environment? What are some of the differences?
Dinwiddie: I certainly think so, if they're really a skilled and high-performing tester. I've sometimes seen people who were thought to be that, but their skills lay more in the ability to write beefy documents. It's the testing, and the critical thinking, that's important -- not the documents.
The differences lie in the need to be more collaborative, rather than merely pointing fingers at mistakes. Too often traditional environments set the programmers and testers in opposition, rather than on the same team trying to produce needed software. In an Agile environment, testing moves forward in time, requiring the testers to work with unfinished systems. This often takes some getting used to.
SSQ: Often Agile development emphasizes the importance of communication between business stakeholders, testers, and developers, but recently there’s been more talk of bringing “DevOps” into the mix. Do you see this as a growing trend?
Dinwiddie: I think it's too soon to call it a trend, but I hope so. I've seen too many times where the operational and maintenance needs were given little thought during development. The customer is not the only user of the system. What about customer service? What about ops, when they need to upgrade the system? What about the people who need to track down intermittent or infrequent bugs?
I call the conversation between business stakeholders, testers, and programmers the Three Amigos. That doesn't mean that there are only three views involved, however. Three is just the minimum, and there are often more than three people in those discussions. Include whatever specialties have a stake in the outcome. I've seen user experience experts, user manual writers, and representatives from other systems included in these discussions.
SSQ: Bugs are often tracked differently in Agile environments. In the past, statistics and metrics around defects were used to measure quality. What are the key performance indicators for measuring quality in Agile environments?
Dinwiddie: The best measure is the satisfaction of all the stakeholders. That's a hard measure to put in numbers, of course, but it's pretty easy to observe. I've seen companies track the number of bugs reported in the first 30 days after a new release. That seems very reasonable for comparing one release to another. As with any metric, the absolute numbers are much less important than the trend. There's noise in any measurement you take, and any measurement that gets mistaken for the real goal will cause unforeseen consequences. Looking at trends over time is generally illuminating, though.
Many times a company already has a tradition of measuring certain aspects that they can continue after they transition to Agile. Other measurements just don't work very well. There's a long-standing technique of estimating the remaining number of bugs in a system by the rate at which they're being found by testers. This technique depends on the defect-injection rate that you get with open-loop development. When you close the feedback loop by writing tests first, you violate the expected conditions of this technique.
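To make the estimation technique Dinwiddie mentions concrete: one common variant models the defect find rate as exponential decay and projects the area under the remaining tail of the curve. The sketch below is illustrative only, with made-up weekly counts; it is not a method Dinwiddie endorses, and as he notes, its assumptions break down when test-first development changes how defects enter the system.

```python
import math

# Hypothetical weekly counts of new defects found during test (illustrative data).
weekly_defects = [34, 25, 19, 14, 11, 8, 6]

# Model the find rate as exponential decay, rate(t) = a * exp(-b * t),
# and fit log(count) = log(a) - b * t by ordinary least squares.
ts = list(range(len(weekly_defects)))
ys = [math.log(c) for c in weekly_defects]
n = len(ts)
mean_t = sum(ts) / n
mean_y = sum(ys) / n
b = -sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys)) / \
    sum((t - mean_t) ** 2 for t in ts)
a = math.exp(mean_y + b * mean_t)

# Defects still to be found ~ integral of a*exp(-b*t) from now (t = T) to infinity,
# which is (a / b) * exp(-b * T).
T = len(weekly_defects)
remaining = (a / b) * math.exp(-b * T)
print(f"decay rate b = {b:.3f}, estimated defects remaining = {remaining:.0f}")
```

With these numbers the fit yields a decay rate of roughly 0.29 per week and an estimate of about 16 defects still undiscovered. The model's hidden assumption is a fixed pool of injected defects being depleted at a steady rate, which is exactly the "open-loop" condition the interview says test-first development violates.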
Read more about George Dinwiddie’s thoughts on Agile testing, automation, certifications and professional development in “What skills are needed for Agile testing? Interview with George Dinwiddie – Part 2.”
George Dinwiddie is an Agile consultant specializing in team- and skill-building with broad technical experience ranging from embedded firmware to business information technology.