Software organizations know it's important to test, but understanding the types of tests and how to ensure quality during rapid development cycles is a challenge for most software teams. Agile expert Lisa Crispin explained some useful techniques, including the Agile testing quadrants and the test automation volcano, during her presentation, "It Takes a Village," at McKesson's World Quality Month Kickoff event, which took place in October 2012 in Westminster, Colo.
Crispin's presentation addressed issues that every software organization faces: how to develop code faster and with higher quality. The Agile testing quadrants give teams a method for categorizing, for planning purposes, the types of tests they'd like to perform. The test automation volcano helps teams plan for automation and keep an eye on technical debt before they are caught in a bed of hot lava. In this tip, I explain Crispin's Agile testing quadrants and test automation volcano.
Agile testing quadrants
Crispin explained the concept of Agile testing quadrants, a matrix that organizes tests along two axes: whether they are business facing or technology facing, and whether they support the team or critique the product. The quadrants are labeled Q1 through Q4, but Crispin pointed out that the numbering does not indicate the order in which the tests are to be performed.
"There are no hard and fast rules about what goes in what quadrant," said Crispin, who co-authored with Janet Gregory, Agile Testing: A Practical Guide for Testers and Agile Teams. The idea is that teams think through the types of tests they want to execute so that they can staff properly and plan their test efforts.
Here are examples of the types of tests that might fall into each quadrant:
- Q1 -- Technology-facing tests that support the team:
- Unit tests
- Component tests
- Q2 -- Business-facing tests that support the team:
- Functional tests
- Story tests
- Q3 -- Business-facing tests that critique the product:
- Exploratory tests
- Usability tests
- UAT (User Acceptance Tests)
- Alpha / Beta tests
- Q4 -- Technology-facing tests that critique the product:
- Performance and load tests
- Security tests
- "ility" tests
Crispin also noted that technology-facing tests (in Q1 and Q4) are typically automated or executed using tools, while business-facing tests (in Q2 and Q3) are often manual, but might be automated.
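As a concrete illustration of the Q1 category, here is a minimal sketch of a technology-facing, team-supporting unit test in Python. The shopping-cart class and its tests are invented purely for illustration; they are not from Crispin's presentation.

```python
# Hypothetical Q1 example: fast, automated unit tests that support the
# team by catching regressions in low-level logic on every build.

class Cart:
    """A toy shopping cart, invented for this illustration."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        # Sum the price component of each (name, price) tuple.
        return sum(price for _, price in self.items)


def test_empty_cart_total_is_zero():
    assert Cart().total() == 0


def test_total_sums_item_prices():
    cart = Cart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    assert cart.total() == 12.5


if __name__ == "__main__":
    test_empty_cart_total_is_zero()
    test_total_sums_item_prices()
    print("all tests passed")
```

Tests like these run in milliseconds with no external dependencies, which is why they can be executed on every build.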
The test automation volcano
A second tool that Crispin presented was one adapted from Mike Cohn's test automation pyramid. The test automation pyramid is an Agile technique that splits automation into three layers that represent the return on investment. There are variations on what different experts believe should be in each layer, but the bottom layer, which includes automated unit tests, yields the biggest return on your automation investment. In Agile environments, these automated tests are often written using test-driven development techniques and are executed with each build.
The next layer includes API-layer tests. The top layer includes the GUI-level tests, considered the most fragile automated tests. These tests usually demand a high degree of maintenance because they typically require updates any time the GUI changes. Crispin also included integration tests among the upper-layer tests and placed exploratory tests above them, as a cloud at the top of the pyramid.
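To make the middle layer concrete, here is a hedged sketch of an API-layer test. The request handler and its discount rule are invented for illustration; the point is that such a test exercises business behavior through the service interface rather than by driving the GUI, so it survives GUI changes.

```python
import json

# Hypothetical "API layer": a request handler, invented for illustration,
# that applies a 10% discount to orders of 100 or more.

def handle_discount_request(body: str) -> str:
    data = json.loads(body)
    total = data["total"]
    discount = 0.1 * total if total >= 100 else 0.0
    return json.dumps({"discount": round(discount, 2)})


# API-layer tests: they check behavior at the service boundary, below
# the GUI, so a redesigned screen does not break them.
def test_discount_applies_over_threshold():
    response = json.loads(handle_discount_request('{"total": 120}'))
    assert response["discount"] == 12.0


def test_no_discount_under_threshold():
    response = json.loads(handle_discount_request('{"total": 50}'))
    assert response["discount"] == 0.0


if __name__ == "__main__":
    test_discount_applies_over_threshold()
    test_no_discount_under_threshold()
```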
In Crispin's variation, the pyramid is a volcano that can erupt, pouring lava on the development team if it doesn't keep up with automation. She explained that manual regression tests will continue to grow and become unmanageable if the team doesn't automate early and throughout the development cycle. She warned that a team that doesn't properly manage its technical debt and maintain discipline with automated unit tests will soon face dire consequences.
Dealing with legacy code
This all sounds fine in theory, but development teams often inherit legacy code that needs some refactoring before it can be tested. That is challenging for teams that may not be sure of the original requirements or design of the legacy code and don't want to risk inadvertently causing more problems. Crispin described two ways to deal with this:
- Rescue the legacy code by carefully refactoring it, bit by bit. The go-to guides for this are Michael Feathers' Working Effectively with Legacy Code and Martin Fowler's "strangler" pattern.
- Leave the legacy code as is, but develop all new features in a new architecture that is designed to be testable. My previous team took this approach, and over eight years were able to reduce the amount of legacy code by about 70%.
Even with the strangler approach, you may need to make changes in the legacy code. In my team's case, we were able to protect the most critical areas of the application with GUI smoke tests, which were effective in keeping major regression issues out of production.
The biggest obstacle with any code base, old or new, is learning to write automated unit tests effectively and mastering test-driven development. It's just hard. It takes a lot of time, training and experimenting before it becomes second nature for the programmers.
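For readers new to the practice, one test-driven development cycle can be sketched in a few lines. The "slugify" requirement here is invented purely to show the red-green-refactor rhythm.

```python
# A minimal sketch of one TDD cycle, using an invented requirement:
# turn an article title into a URL slug.

# Step 1 (red): write the test first, before the production code exists,
# and watch it fail.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Agile Testing") == "agile-testing"


# Step 2 (green): write just enough code to make the test pass.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")


# Step 3 (refactor): clean up the code with the passing test as a safety
# net, then repeat the cycle for the next small piece of behavior.

if __name__ == "__main__":
    test_slugify_lowercases_and_hyphenates()
```

The discipline is less about the code than the rhythm: each tiny failing test drives the next small piece of implementation.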