For those of you "up" on your agile practices, it should come as no surprise that testing is an oft-discussed subject. However, "testing" is an overloaded term, so you need to dig into the details to determine what sort of testing is actually being discussed.
There are many different ways that testing can play a role in agile development. Indeed, there are even different types of tests that can play different roles in agile development. To address these roles, it is important that you have a good grounding in some of the basic philosophies behind agile development.
If you need more information on agile development, you can check out the Agile Manifesto for Software Development (agilemanifesto.org).
For this short discussion on the role of testing, the aspect of "Agile Development" that comes to the fore is simply:
"Reduce the gap in time between doing some activity and gaining feedback."
Though typically couched in terms of discovering errors sooner, there is a positive side to this concept: you also want to discover you are finished (with a feature) as soon as possible! A big part of being in an agile state of mind is always trying to do the best you can with the time and resources you have. Not discovering a bug until late in the development cycle is obviously expensive. However, going overboard and building more than the feature demands is potentially no less wasteful. After all, if you add "polish" and "gleam" above and beyond what was needed, you have wasted time and money - neither of which can be recovered - and added cost to downstream maintenance.
In agile development we want to streamline the process to be as economical as possible and to get feedback as early and as often as possible.

Why test?
Tests are a great way to get feedback. Tests are useful to ensure the code is still doing what it is supposed to, that no unintended consequences of a code change adversely affect the functionality. Tests are also very important in that they help add "crisp" definition to the feature description such that a developer will know when the feature is complete. To this end, I have two simple rules for knowing when a feature has been described in enough detail:
- The developer can supply a pretty accurate estimate
- The tester can write an acceptance test
Not only are there different types of tests, but there are also different ways to conduct tests, as discussed next.

How do we test?
Typically testing is done manually and through automation. Okay, that is kind of a "duh," but the real issue is to not confuse the role that each technique can play during development. Though you should automate as much of the testing as possible, this does not mean that all manual testing is no longer needed. Indeed, there are many cases when a system may have passed all the automated tests, only to immediately fail when a user starts to exercise some part of the functionality.
In keeping with the agile principle of doing smart things, be sure you are not using manual tests for what could be automated. Likewise, be sure you are not attempting expensive automation of tests that are better left to human testers. And be sure you are not avoiding certain tests just because you cannot automate them!

What kinds of tests should you use?
There is no "one-size-fits-all" strategy. Like everything in agile development, testing also requires you to use your brain! I know, shocking, eh? Here is an assortment of test types and the purpose/role each can play during development.
- Unit tests are good for the developers to build up as they are coding their assigned features. The purpose is to ensure that the basic functionality of the underlying code continues to work as changes are made. Plus unit tests are a useful way to document how the "API" is to be used.
- Acceptance tests are used to ensure specific functionality works. Essentially, the acceptance test is a client-specified scenario that indicates the desired functionality is properly implemented. By having an acceptance test, the developer can know when "good enough" is met, and the feature is complete.
- UI tests - these often involve some means to step through the page flow, provide known inputs and check actual results against expected results. Some UI tests can get fancy enough to use bitmap comparisons; for example, for graphics or CAD packages.
- Usability testing - this is a whole other area of testing that often involves humans! I won't go into details here, but usability can often be "make-or-break" acceptance criteria. Some projects can also benefit from automated tests that ensure UI standards are being followed (where they may not be easily enforced through UI frameworks).
- Performance tests - running a suite of tests to ensure various non-functional metrics are met is critical for many apps. If your app must meet stringent performance metrics, you need to tackle this up front, during the initial architecture and design phases. Before building the entire app, ensure the performance benchmarks are being met: typically, you build a thin slice of the application and conduct stress testing, possibly running simulated sweeps across system-critical parameters. For example, simulate 1 to 100 users simultaneously accessing 100 to 10,000 records with varying payloads of 100K to 1GB, where the benchmark criterion may be a response time of 1 second or less. The performance benchmarks are usually run automatically on at least every "major" build - maybe once a week on Friday, or at the end of each iteration. I like to keep running tables and graphs of these benchmark results so you can spot trends.
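To make the parameter-sweep idea concrete, here is a minimal Python sketch. The parameter ranges and the 1-second criterion echo the example above; everything else (the function names, the dummy workload standing in for a real thin slice of the app) is invented for illustration.

```python
import itertools
import time

def handle_request(users, records):
    """Stand-in for exercising a thin slice of the app; returns elapsed seconds."""
    start = time.perf_counter()
    _ = sum(range(records)) * users   # dummy work instead of real I/O
    return time.perf_counter() - start

def run_sweep(user_counts, record_counts, max_seconds=1.0):
    """Run every parameter combination and flag any that miss the benchmark."""
    results = []
    for users, records in itertools.product(user_counts, record_counts):
        elapsed = handle_request(users, records)
        results.append((users, records, elapsed, elapsed <= max_seconds))
    return results

if __name__ == "__main__":
    # Keep the raw numbers so you can build running tables and spot trends.
    for users, records, elapsed, ok in run_sweep([1, 10, 100], [100, 10_000]):
        print(f"{users:>4} users, {records:>6} records: "
              f"{elapsed:.4f}s {'PASS' if ok else 'FAIL'}")
```

In a real suite, `handle_request` would drive the actual system (and you would persist each row of results so the trend graphs come for free).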
If you are practicing test-first development, you can start with an acceptance test for the feature you are implementing. But you will quickly get involved with writing lower-level tests to deal at simpler, more granular levels of the system.
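As a small (and entirely hypothetical) illustration of that test-first flow, here is a Python sketch: the acceptance test expresses the client's scenario for a discount feature, while the unit tests pin down the lower-level behavior as you code. All names and the business rule itself are invented for the example.

```python
# Hypothetical feature: "orders of $100 or more get a 10% discount."

def apply_discount(total):
    """Return the payable amount after any volume discount."""
    if total >= 100:
        return round(total * 0.90, 2)
    return total

# Unit tests: granular checks the developer builds up while coding.
def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99

def test_discount_at_threshold():
    assert apply_discount(100) == 90.0

# Acceptance test: the client-specified scenario that defines "done".
def test_customer_with_150_dollar_order_pays_135():
    assert apply_discount(150) == 135.0

if __name__ == "__main__":
    test_no_discount_below_threshold()
    test_discount_at_threshold()
    test_customer_with_150_dollar_order_pays_135()
    print("all tests passed")
```

When the acceptance test passes, the feature is "good enough"; the unit tests stay behind to catch unintended consequences of later changes.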
Personally, I don't use tests to derive the architecture and major classes involved in the design, though I can see where some people do. Much depends on your habits, and more likely on how your brain is wired. Mine is pretty loose :-), and I tend to visualize objects.

What test techniques are useful?
Though automation is key, it doesn't have to be the only kind of testing you do! I generally like to have a mix of automated and manual tests, and for the automated tests you can tune how frequently they are run. Here are various testing techniques and the role each can play in an agile project:
- "Smoke" Tests - essentially a handful of critical tests that ensure the basic build functionality works. Some can be automated, but others can be manual. If it is easy to run the smoke tests, it can help the development team know that the daily build is probably useful. When manual tests are involved (that is, a bit more expensive to conduct), this technique is often reserved for "major" builds that might be ready for undergoing more formal QA testing (to prevent you from doing expensive and exhaustive testing on a "bad" build).
- Test Harness - frequently a good way to "institutionalize" exposing functionality of the system (e.g., via Web Services) for the coarse-grained functionality typical of acceptance tests and major system scenarios (and even external invocation). Wiring up a test harness makes it easy to extend the breadth of the tests as new functionality is added. You can build test harnesses that are largely automated: capturing the user input (via a record mode), capturing the output, and getting a user's (typically a tester's) acknowledgement that the output is correct. Each captured test case is added to a test database for later playback orchestration.
- Automated stress test "bots" - if you do a good job of designing a layered architecture with clean separation of concerns, you may be able to build in some testing robots. This is especially easy when dealing with service-based systems. You can use XML-style config files to control the tests being run, and build tests that exercise each layer (e.g., persistence, business, messaging, presentation) of the system, building up to tests that replicate actual user usage of the system (except for the UI part). We have built such bots that can be distributed on dozens of systems and even spawned in multiples on each system. This allows for very simple concurrent testing to "hammer" the server from all over the world, with the config files controlling the intensity, breadth, and duration of the tests.
- Manual tests - these are reserved for those aspects of the system best left for human testing. That is, to use a tester for boring, mundane, tedious, and monotonous testing is to be very wasteful of human potential. Not only is the likelihood of having errors in the testing high, but you will probably not have time to catch the more elusive bugs. Instead, allow the testers to focus on being more exhaustive with using the application in complex ways that may not be easy to automate.
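The "smoke test" idea above can be sketched in a few lines. Everything here is hypothetical - the three checks stand in for whatever handful of critical build-sanity tests your project needs.

```python
# Hypothetical smoke suite: a handful of fast, critical checks that tell
# the team whether today's build is worth testing further at all.

def build_is_present():
    """Stand-in for 'the build artifact exists and loads'."""
    return True

def login_works():
    """Stand-in for 'a known user can log in'."""
    return True

def home_page_renders():
    """Stand-in for 'the main screen comes up'."""
    return True

SMOKE_CHECKS = [build_is_present, login_works, home_page_renders]

def run_smoke_suite():
    """Return (passed, failures); any failure marks the build 'bad'."""
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    return (not failures, failures)

if __name__ == "__main__":
    ok, failures = run_smoke_suite()
    print("build OK" if ok else f"bad build, failed: {failures}")
```

The payoff is the gate itself: a failing smoke suite stops you from spending expensive manual or exhaustive QA time on a bad build.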
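And here is a toy sketch of a config-driven stress bot in the spirit described above. The XML schema (`intensity`, `requests`, `layer` elements), the layer names, and the tally logic are all invented for the example; a real bot would issue actual service calls instead of bumping counters.

```python
import threading
import xml.etree.ElementTree as ET

# Hypothetical XML config: it controls the intensity (worker count),
# the load (requests per worker), and the breadth (which layers to hit).
CONFIG = """
<stresstest intensity="4" requests="25">
    <layer name="persistence"/>
    <layer name="business"/>
</stresstest>
"""

def exercise_layer(name, requests, tally, lock):
    """Stand-in for a bot hammering one layer of the system."""
    for _ in range(requests):
        with lock:
            tally[name] = tally.get(name, 0) + 1

def run_bots(config_xml):
    """Spawn one bot thread per (worker, layer) pair, per the config."""
    root = ET.fromstring(config_xml)
    intensity = int(root.get("intensity"))
    requests = int(root.get("requests"))
    layers = [layer.get("name") for layer in root.findall("layer")]
    tally, lock, threads = {}, threading.Lock(), []
    for _ in range(intensity):
        for name in layers:
            t = threading.Thread(
                target=exercise_layer, args=(name, requests, tally, lock))
            threads.append(t)
            t.start()
    for t in threads:
        t.join()
    return tally  # requests completed per layer

if __name__ == "__main__":
    print(run_bots(CONFIG))
```

Distributing copies of such a bot across many machines, each driven by its own config file, gives you the "hammer the server from everywhere" effect with no changes to the bot itself.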
All of the test results run against the build machine should be visible in summary form, with drill-down as required. A good way to do this is with a special page on the wiki (or other common place) to show current build results. We also always send out emails when new (successful) builds are available. And we send emails to the folks who may have errors in their code, and create defect issues as required!
There are many tools to help in this publishing process.

Putting it together
So you should have a mix of manual and automated tests that go something like this during development:
- Grab a well-defined feature
- Write the acceptance test, write some code, write some unit tests and more code as needed, get the tests to pass, check in the code
- Is this feature performance critical?
  - Write a performance test to measure a critical benchmark
  - Ensure the test passes as you are doing development
  - Add it to the benchmark suite
- Does the feature have unique UI aspects?
  - You may need manual testing to cover usability and complex functionality that is too difficult to automate
- Add the acceptance test to the functional test suite (use an automated tool for this)
- If the feature passes the acceptance test and the (optional) performance tests, you can declare victory on the feature, close it, and move on to the next feature!
If you are using other testing techniques (a test harness, test bots, or helpful tools such as my company's functional testing, load testing, and code coverage tools (DevPartner), system-level analysis tools (Vantage Analyzer), or my friend Bob Martin's ObjectMentor FitNesse (http://fitnesse.org) app), you will need to keep those tools in sync.

Constantly improve
Start small. Do not overdo the process from the get-go. The tests and techniques should grow over time to meet your needs. If you get some nasty bug reported that could be prevented in the future by a test, add a new acceptance or unit test(s)! If users complain about some performance aspect of the system (like it takes 2 minutes to open up a large project file), add a benchmark test to your performance suite. This will be used to precisely document the poor performance, and then to show the improvement once the developer makes the fixes. If some tests are no longer worth executing, comment them out (and eventually delete them).
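For instance, a complaint like "it takes 2 minutes to open a large project file" can be turned directly into a benchmark test. This sketch is entirely hypothetical - the loader, the path, and the target threshold are invented - but it shows the shape: the test documents the poor performance first, then proves the improvement once the fix lands.

```python
import time

MAX_OPEN_SECONDS = 5.0   # invented target, tightened from the 2-minute complaint

def open_large_project(path):
    """Stand-in for the real project-file loader."""
    time.sleep(0.01)      # simulate a fast, fixed-cost load
    return {"path": path, "items": 10_000}

def test_large_project_opens_quickly():
    start = time.perf_counter()
    project = open_large_project("examples/huge-project.dat")
    elapsed = time.perf_counter() - start
    assert project["items"] > 0                  # it actually loaded something
    assert elapsed < MAX_OPEN_SECONDS, f"too slow: {elapsed:.1f}s"

if __name__ == "__main__":
    test_large_project_opens_quickly()
    print("benchmark passed")
```

Added to the performance suite, this test keeps the fix honest: any future regression past the threshold fails the build instead of waiting for another user complaint.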
The point is to use your brain!

Agile is a state of mind!
Testing in an agile project can vary in depth and style, but not in its intent. Build a mix of tools, techniques, process, and automation to make the right tests work for you.
Post any comments on my blog.

About the author:
Jon Kern is a software engineering evangelist, Agile Manifesto co-author, speaker, agile coach, practitioner, and author. His experience is wide-ranging across varied problem domains and technology platforms, and he is constantly learning from colleagues and friends. From jet engine R&D (he's an aerospace engineer, after all) to real-time flight simulator design and development, from TogetherSoft's and OptimalJ's commercially successful modeling tools to building IBM's Manufacturing Execution System software - Jon has seen and done a lot in his 20 years.

This article originally appeared on TheServerSide.com.