
Testers debate differences between waterfall, Agile test automation

Two professional testers continue the timeless debate, agile vs. waterfall: which is the best methodology for test-driven software development?

How does test automation differ on an agile project from what happens on a more traditional testing project using waterfall or other methodologies? We invited two seasoned software testers, Matt Heusser and Lanette Creamer, to answer that question. In this he-said, she-said discussion, they opine on the role of test automation in waterfall and agile testing, how automation works in each model, and the two practices' differences and commonalities.

Heusser and Creamer are career software developers and testers. Creamer is lead tester at Adobe Systems and has worked primarily on traditional software teams. She joined her first agile team about a month ago. Heusser is a software tester who specializes in testing in fluid, high-personal-responsibility environments undergoing rapid change. His tips series on ways to speed up software testing in agile development spurred this debate.

We join the conversation as Creamer brings up the topic of test automation.

Creamer: It seems that increased test automation is a part of all modern test strategies. I'd much prefer to see a focus on better test automation rather than just more test automation. What is special about agile in terms of test automation? Does agile do anything to address the quality of the test automation created?

Heusser: I would propose that if you are releasing software to production every two weeks, you can see a reasonable payoff from browser-driving tests at some point. Because of the high maintenance cost, that might take a dozen iterations or more. If your project is small and won't be under heavy maintenance after release, then maybe automation doesn't make sense.
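Heusser's break-even intuition can be put into rough numbers. The figures below are entirely hypothetical, chosen only to illustrate how up-front automation cost, per-iteration upkeep and per-iteration manual effort trade off:

```python
# Hypothetical figures to illustrate the payoff point Heusser describes;
# none of these numbers come from the article.
write_cost = 8.0      # hours to automate a check up front
maintain_cost = 0.5   # hours of upkeep per iteration (the brittleness tax)
manual_cost = 1.5     # hours to run the same check by hand each iteration

# Automation pays off once cumulative savings exceed the up-front cost:
#   write_cost < (manual_cost - maintain_cost) * iterations
break_even = write_cost / (manual_cost - maintain_cost)
print(break_even)  # 8.0 -> pays off after eight iterations, under these assumptions
```

With a higher maintenance tax or a cheaper manual check, the break-even point slides out quickly, which is the "dozen iterations or more" scenario.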

To get rid of the cost of changing the graphical user interface (GUI), one approach is to get behind the GUI and test only the business logic with a tool like FitNesse. This can work on certain applications, especially create-read-update-delete (CRUD) database apps that just don't have a lot of front-end logic. Another way to lower the cost is to pull the common operations (login, search, etc.) into a common business language. Then, when the GUI changes for login, you only have to change one function.
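The "common business language" idea resembles what testers now call the page-object pattern. A minimal Python sketch follows; the `FakeDriver` class and all names in it are illustrative stand-ins for a real browser driver such as Selenium, not anything from the article:

```python
# Hedged sketch: pull a common operation (login) into one function,
# so a GUI change means editing one place rather than every test.

class FakeDriver:
    """Illustrative stand-in for a real browser driver; records actions."""
    def __init__(self):
        self.actions = []

    def fill(self, field, value):
        self.actions.append(("fill", field, value))

    def click(self, button):
        self.actions.append(("click", button))

def login(driver, user, password):
    """The one place that knows the login screen's field names.
    If the login GUI changes, only this function changes."""
    driver.fill("username", user)
    driver.fill("password", password)
    driver.click("sign-in")

# Every test reuses the business-level operation instead of raw GUI steps.
driver = FakeDriver()
login(driver, "alice", "s3cret")
assert driver.actions[-1] == ("click", "sign-in")
```

A test that calls `login(...)` survives a redesign of the login page untouched; only the body of `login` needs updating.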

Another issue with test automation, which becomes more noticeable in short iterations, is the time to run. I've seen a number of companies build so-called test automation suites that are brittle and take so long to run that the developers have made a bunch of commits in the meantime; or, even if the commits are on a different branch, the test run simply fails out. Either way, we don't have complete or consistent results, or we have them too late.

Related content
You are currently reading part two of a two-part tip; below is a link to part one.

Test-driven testing face-off: Waterfall vs. Agile (part 1)
Most software test pros pick a preferred methodology and stand by it. In this tip, two testers square off, one advocating for agile development, the other in the waterfall corner.

So I like to focus on 'just enough' automation, especially for the slow stuff, instead of trying to be comprehensive. At SocialText, we have different 'test sets' for each browser, including one -- 'sunshine' -- that combines our fastest tests with core functionality verification.
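The 'test sets' idea can be sketched in plain Python: tag each test with its traits, then build a named suite such as a hypothetical 'sunshine' set by filtering on tags. The registry, decorator and test names below are all illustrative, not SocialText's actual tooling (which test frameworks like pytest support natively via markers):

```python
# Hedged sketch of grouping tests into named sets, e.g. a quick
# 'sunshine' set combining fast tests with core-functionality checks.

REGISTRY = []

def test(*tags):
    """Decorator that registers a test function along with its tags."""
    def wrap(fn):
        REGISTRY.append((fn, set(tags)))
        return fn
    return wrap

@test("fast", "core")
def test_login_succeeds():
    assert True  # placeholder for a real check

@test("slow")
def test_full_report_export():
    assert True  # placeholder for a real check

def select(*wanted):
    """Pick every registered test carrying any of the wanted tags."""
    return [fn for fn, tags in REGISTRY if tags & set(wanted)]

# 'sunshine' = fastest tests plus core functionality verification
sunshine = select("fast", "core")
for t in sunshine:
    t()
```

The payoff is that a short-iteration team can run the cheap, high-signal set on every commit and reserve the slow set for less frequent runs.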

Creamer: It isn't clear which testing practices are as 'agile' as intended, and what agile ideas are being applied bit by bit without the fundamental shift to being an agile team.

My understanding is that the core differentiating elements of "agile" are the short iterations, the backlog, the burn-down charts, and the team making its own decisions on how to develop the software with constant input from the owner and other stakeholders.

Is it fair to say that everything else is optional and can be applied to other development methodologies? I think my confusion stems from the wide variety of testing practices I'm seeing labeled as agile.

Heusser: I suppose I'd agree that your definition is a good 'what' of agile, but it's not really the what that I am interested in. Personally, I care whether a team can reliably ship working software to its customers, in short increments, responding to change by adapting the plan. This focuses us on the outcome of Agile instead of the process. So along with that 'why' of Agile, you have all the project management practices you mentioned, the self-organizing team aspects you mentioned, and the technical practices. I look at these as four different 'dimensions' of agile, and it's fair to say that teams with three out of four could all look very different and still refer to themselves as agile. [I'd like to go on here, but I'm afraid it's beyond the scope of this article. If you'd like to see an article on the different attributes of agile, send me an email: [email protected]]

When I talk about agile testing, I mean a set of practices designed to make those short releases, and plan adjustments, possible. So you have this series of practices that might be used by any team, but might specifically be used by a traditional team looking to switch to a more agile way of working.

Creamer: I'm concerned that I'm not hearing enough ideas from agile testers on how to get deep test coverage on feature integration. The focus is on covering the functionality, covering the code, doing everything to completion and then moving on. Is there enough integration testing happening? Are agile testers doing a good job of finding bugs that span feature areas? How is the "big picture quality" being covered?

Heusser: I grant that your concern is well founded. Aside from Michael Feathers' book on dealing with legacy code, which is very code-centric, there isn't a lot of information available in this area.

Jeff Sutherland, one of the earliest popularizers of Scrum, has recently said that 75% of the teams that adopt Scrum fail to see the improvements they would expect. I suspect it is because of exactly the issues you mention, namely the existence of legacy systems with significant integration and test cycles.

To be terribly forward, one of my criticisms of the Kanban community is that they tend to imply that continuous deployment to production will work for everyone, without talking about the risks it creates for large systems or how they will mitigate those risks.

So yes, we've got work to do. I think more examples of test strategies -- with details all the way down to how it was tested in each instance -- would be helpful, and I've done a little of that on my own.

Still, I am reluctant to generalize to what all agile teams are doing. Let's take a specific problem and deal with it again sometime soon, maybe in public?

Creamer: One of the most encouraging things I've seen as agile is adopted on teams at Adobe is that this conversation is starting to happen inside teams, between teams, and with the community of those already using these practices. Being able to talk about the risks and pain points, and to find out what is working or, more frankly, what is failing for other people, can enable us to collaborate and come up with solutions to try. After all, if there are no problems, there is no need to look for a solution.

Heusser: And that emergent, self-organized nature of work is a big part of the essence of Agile development. Arguably, as a community, we don't talk about that enough.

Are you interested in seeing other waterfall or traditional testing pros face off with agile testing pros on other topics? If so, send those topics and questions to [email protected], and we'll ask our experts to start talking!
