Accelerate your agile software testing

This expert tip explains how adopting agile development, along with risk-driven and test-driven practices, can accelerate testing. Test pro Matt Heusser also covers ways to handle tight software testing cycles and how to use slide-show tests.


Imagine the tasks in a software development project. Listing just a few, we've got: figure out what we are going to build; figure out the design, or how; write the code; test it; and deploy.

If we batch up the work and try to do the entire project one step at a time, we run a number of risks, including:

  • Changing our minds about what to build later becomes much more expensive.
  • If we bite off more than we can chew and the project is going to be late, we're not likely to know about it until the middle of coding -- and by then there's nothing to do about it but ship late, ship buggy or both.
  • We are forced to lock down the project early, when we know the least about what we will build.
  • Finally, even if we are successful, we don't see any actual value delivered until the work is deployed in one big "chunk."

Enter agile software development. With agile development, we turn the project sideways, splitting it up into slices that can be implemented in a few weeks to a few months. We implement a slice, perhaps deploy it, get some feedback and implement the next one. Developing this way, we get value early, and we are guaranteed to ship something by the original deadline. Because we implement features in priority order, we're also going to get the most important features first, and the most bang for the buck.

At least, that's the good news.

The bad news about agile development

Imagine a software team shipping code every two weeks. The team might use the last few days for regression testing before they deploy. Now imagine that same team in iteration 30, 40 or 50. How much code have they developed? How long should it take to test?

In general, the regression testing burden increases over time!

With iterations of three months, a team might be able to simply "compress the waterfall" into a few months; but the current iteration standard is a week or two. At two weeks, traditional techniques for test cycle management simply fail to work: either the iteration will be late, or QA will be stuck working on an old branch of the code while the developers move ahead into the brave new future.

Agile development puts schedule and effort pressure on the testing function. So, let's look at a few things we can do to cope.

Handling short testing cycles

To compress testing while the volume of software under test keeps growing, I recommend a number of activities, including:

  • Moving risk management out of the test process;
  • Automating some portion of the test activities;
  • Refocusing our time on risk management;
  • And, yes, skipping some tests.

No, don't give me that look. The reality is that the number of input combinations in any real application is effectively infinite -- a screen with just ten fields of ten possible values each already allows 10^10 combinations -- so we are always skipping some tests. We do this based on our understanding of the risks in the release, our experience and our knowledge of failure modes and effects analysis.

Move risk management out of the test process

Some of us remember the "bad old days" when the initial build delivered to quality assurance (QA) just didn't work. Perhaps someone wanted to tell a project manager the "code is complete," maybe even actually believing it might work. Either way, the software wouldn't work on the first build, or the second, or likely the third or fourth. Under agile development, teams just don't have time for that! If your bug fixes take a half day and the iteration is 10 days long, you don't have time either.


Imagine if things were different -- if the software turned over to the testing group generally worked the first time out. Why, the back-and-forth handoffs between development and QA would be shorter. The testers would spend less time "reproducing bugs," writing bug reports, waiting for a fix and retesting with a new build.

Two common approaches developers use to improve code quality are test-driven development, wherein developers write automated, extremely low-level tests and then the code to make them pass, and pair programming, a process in which programmers implement features in pairs. You say: "But Matt, that is a developer thing! We don't have any control over that!" To which I reply: When management keeps asking, "Why is QA always the bottleneck?", tell them the software has got to be better quality on the way in.
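To make the test-driven rhythm concrete, here is a minimal sketch in Python. The tip doesn't prescribe a language or framework, so unittest and the parse_price function are illustrative assumptions: the low-level tests are written first, then just enough code to make them pass.

```python
import unittest

# Step one: write the low-level tests first. They fail until the
# production code below exists and behaves as specified.
class TestParsePrice(unittest.TestCase):
    def test_parses_dollars_and_cents(self):
        self.assertEqual(parse_price("$19.99"), 1999)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_price("free!")

# Step two: write just enough code to make the tests pass.
def parse_price(text: str) -> int:
    """Convert a price string like '$19.99' to integer cents."""
    if not text.startswith("$"):
        raise ValueError(f"not a price: {text!r}")
    dollars, _, cents = text[1:].partition(".")
    if not dollars.isdigit() or not (cents.isdigit() and len(cents) == 2):
        raise ValueError(f"not a price: {text!r}")
    return int(dollars) * 100 + int(cents)

if __name__ == "__main__":
    unittest.main()
```

The point is the rhythm, not the example: each tiny test pins down behavior before the code exists, so regressions surface the moment they are introduced.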

A second way to move risk management out of the test process is to integrate your company's own activities with what you are building. For example, if you build email tools, use the newest version of the tool internally for a week and then release. This "many-eyeballs" approach provides a final opportunity to find bugs, especially bugs involving interactions that one tester alone might be hard-pressed to reproduce.

Finally, I'd like to suggest making the testing for each story -- or new business feature -- happen as soon as the story is complete, instead of waiting for the entire iteration to move to QA. After a predefined point in time -- such as Thursday of the second week -- the code is deemed "complete," and regression testing begins. This prevents bleeding over into the next iteration and allows some testing to happen earlier in the process.

Automate some portion of the test activities

Earlier I mentioned test-driven development, which occurs at the developer level. Here I will discuss test automation at the customer level. Two places to start are with a slide-show test and a "smoke" test using some sort of tool. With a slide-show test, an automation tool drives the user interface, either with a human watching or with periodic screen captures being taken. In this way, the software runs faster than normal but still gets a human's eye for defects. A human can catch things that are problems but that the computer might not be programmed to catch. Likewise, the human can visually compare the day's results with yesterday's, or perhaps look only at those that come up different after an automated file compare.
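Here is a minimal slide-show test sketch, assuming Python and Selenium WebDriver; the tip doesn't name a tool, and the URLs are hypothetical stand-ins for your application's key screens. It steps through each page slowly enough for a human to watch and saves a screenshot for later comparison.

```python
import time
from selenium import webdriver

# Hypothetical URLs; substitute your application's key screens.
PAGES = [
    "http://localhost:8080/login",
    "http://localhost:8080/inbox",
    "http://localhost:8080/settings",
]

driver = webdriver.Firefox()  # requires geckodriver on the PATH
try:
    for i, url in enumerate(PAGES):
        driver.get(url)
        time.sleep(1)  # slow the "slide show" enough for a human eye
        driver.save_screenshot(f"slide_{i:02d}.png")  # keep for diffing
finally:
    driver.quit()
```

A plain file compare of today's captures against yesterday's then narrows the human review to only the screens that changed.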

Smoke tests exercise the essential functions of the system and run very quickly. They may require some sort of automation tool. In many cases, the developers can help build a homebrew automation framework. The test framework then enables the team to release software in tight cycles, and it can be "funded" by the product manager as a business-valuable feature. Once the framework is in place, the team can add coverage in each iteration for the new software as it is developed. In many cases, some aspects of the acceptance tests can be automated.
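As a sketch of what a first smoke suite might look like -- assuming the product is a web service and that pytest and the requests library are available, none of which the tip prescribes; the endpoints and response shapes are invented for illustration:

```python
import requests

BASE_URL = "http://localhost:8080"  # hypothetical service under test

def test_service_is_up():
    # The most basic smoke check: does the service answer at all?
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

def test_login_page_renders():
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
    assert "Sign in" in response.text  # assumed page copy

def test_search_returns_results():
    response = requests.get(f"{BASE_URL}/search", params={"q": "smoke"}, timeout=5)
    assert response.status_code == 200
    assert "results" in response.json()  # assumed response shape
```

Because each check takes seconds, the whole suite can run on every build without slowing the iteration.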

Refocus test teams' efforts on risk management

The typical test team spends a lot of time working on things that are good, and often required, but are not really testing.

When I interview teams and ask what percentage of the time they spend in meetings, doing support or other projects, preparing documentation, writing status reports, checking email, performing compliance activities and in other "required" activities, the number can reach 80-90% of their time. That is time that is not spent testing, and generally, the testers feel that most of those activities do not add value to the project or company at all.

Under agile development or lean software development, the team itself judges the worth of activities and views those that do not aid in delivering the product as waste. By eliminating or minimizing wasteful processes -- the activities the team itself has declared non-value-added -- the team can suddenly find more time to test. Managers love options, and proposing a plan to hit Friday's deadline by dropping five or six unrelated things might go over quite well.

If your management is uncomfortable with agile software development, you can speak their language by using terms like "lean" and "waste." If your management prefers business terms, create a "stop-doing list" that shows the wasteful things your team can stop doing under agile development. For more information on this concept, check out Jim Collins' article on the stop-doing list, which he originated.

Change from "test everything we can think of" to risk-driven testing

Most of the teams I work with have a bug-tracking system, but they've never taken the time to break the bugs in that system down by impact and category; that is to say, what category of tests should be finding which bugs. In many cases, the teams are doing repetitive testing over pieces of functionality that never fail and ignoring whole feature-sets where bugs repeatedly appear. By adjusting our test effort to match where the worst bugs have been historically, we can move from "test everything" to "risk-driven" testing.
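As a sketch of that breakdown -- assuming bugs can be exported from the tracker as a CSV with component and severity columns (both names, and the severity weights, are illustrative) -- the following tallies a weighted bug count per component so the historically worst-hit areas get the most test effort:

```python
import csv
from collections import Counter

# Assumed severity scale; adjust to your tracker's vocabulary.
WEIGHTS = {"critical": 5, "major": 3, "minor": 1}

risk = Counter()
with open("bugs.csv", newline="") as f:  # assumed tracker export
    for row in csv.DictReader(f):
        risk[row["component"]] += WEIGHTS.get(row["severity"].lower(), 1)

# Highest-risk components first: spend the most test time at the top.
for component, score in risk.most_common():
    print(f"{score:5d}  {component}")
```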

Another approach is to examine the logs of user behavior to see what the users are actually doing -- or what business processes are most important to them -- and test those first.
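A sketch of that log analysis, assuming a common-log-format web access log (the file name and the parsing are assumptions; adapt them to your own logs):

```python
import re
from collections import Counter

usage = Counter()
# Matches the request path inside entries like '"GET /inbox HTTP/1.1"'.
pattern = re.compile(r'"[A-Z]+ (/[^ ?"]*)')

with open("access.log") as f:  # assumed log location
    for line in f:
        match = pattern.search(line)
        if match:
            usage[match.group(1)] += 1

# The most-used paths are the ones to test first.
for path, hits in usage.most_common(10):
    print(f"{hits:7d}  {path}")
```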

First steps to accelerating testing in agile

Your team probably knows what it does well today. You may have an envisioned future of where you want to be, but that future might require a much different test process. After examining the gap, the next question is: "What will it take to get there?" You might answer that with a list: the new risks that will emerge and how your testing might change to address those risks. Then, hopefully, taking a few of the ideas above, the whole team -- including developers and management -- works on an implementation plan, picking off the list item by item.

Just remember as you go into the undiscovered country, that you are surrounded by a great cloud of witnesses; and we are rooting for you.

This tip was peer-reviewed by Lanette Creamer, veteran software tester and Quality Lead for Creative Suites at Adobe Systems.


About the author: Matt Heusser is a technical staff member of SocialText, which he joined in 2008 after 11 years of developing, testing and/or managing software projects. He teaches information systems courses at Calvin College and is the original lead organizer of the Great Lakes Software Excellence Conference, now in its fourth year. He writes about the dynamics of testing and development on his blog, Creative Chaos.
This was first published in October 2009
