The project that I'm working on is to create regression test scripts for applications that are migrating from another location. The test team has mainly functional (system test) scripts, and there are many applications that we are taking over. How would you approach the regression scripting, considering we have many applications and limited resources?
I follow the same basic steps when I think about regression testing as I do for any other type of testing. I try to force myself to think about coverage, risk and cost. I'm always looking to evaluate what tests can best address my most important risks with the most coverage given the time, tools and resources I have.
Understanding what you have to cover
I would encourage you to start off by making an outline of all the things you could potentially cover in your regression testing. Start with large areas of coverage (in your case those may be specific applications) and then drill down into more focused areas within each of those (for example: core functionality, performance and compatibility, among others), followed by subsequent levels of detail until you get down to specific testable units.
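One lightweight way to capture such an outline in a reviewable form is a nested structure: application, then area, then specific testable units. This is only a sketch; the application and area names below are invented for illustration, not taken from any real inventory.

```python
# Coverage outline as nested dicts: application -> area -> testable units.
# All names here are hypothetical placeholders.
coverage_outline = {
    "BillingApp": {
        "core functionality": ["create invoice", "apply credit", "void invoice"],
        "performance": ["month-end batch run"],
        "compatibility": ["IE browser", "Firefox browser"],
    },
    "CalendarApp": {
        "core functionality": ["create appointment", "update appointment"],
    },
}

def count_units(outline):
    """Total number of specific testable units across all applications."""
    return sum(len(units) for areas in outline.values() for units in areas.values())
```

A quick count of the leaf units gives a first rough sense of how much potential regression scope you are facing before any prioritization.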
Understanding your risks
After you have a list of what you might want to cover in your regression testing, start thinking about the specific risks that you're concerned about. Surprisingly, I find that the FIBLOTS mnemonic that Scott Barber developed for modeling system usage for performance testing works wonderfully for evaluating regression testing risk:
- Frequent: What risks are associated with the most used features?
- Intensive: For the application's most resource-intensive features, which constraints are a concern, and how have the supporting platform, code base, or usage patterns changed over time?
- Business-critical: Which features are most critical to the business or the purpose of the application?
- Legal: What has to be tested because of legal requirements or SLAs?
- Obvious: What will get you bad press if it doesn't work?
- Technically risky: Where's the technical risk and how has it changed over time?
- Stakeholder-mandated: What have you been told to test?
I would encourage you to again make a list of the different risks. In your specific case, perhaps a list again by application, but it's also possible you may have a "general" list that goes across applications.
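To make those lists easier to compare, you can tag each candidate risk with the FIBLOTS factors it touches and use the count as a crude ranking heuristic. This is my own illustrative sketch, not part of Barber's mnemonic; the example risks are invented.

```python
# Tag each candidate risk with the FIBLOTS factors it touches;
# score = number of distinct factors. A simple, hypothetical heuristic.
FIBLOTS = {"frequent", "intensive", "business-critical", "legal",
           "obvious", "technically-risky", "stakeholder-mandated"}

def risk_score(factors):
    """A risk touching more FIBLOTS factors ranks higher."""
    unknown = set(factors) - FIBLOTS
    if unknown:
        raise ValueError(f"unknown factors: {unknown}")
    return len(set(factors))

# Invented example risks for one application.
risks = [
    ("time zone conversion errors", ["frequent", "obvious", "technically-risky"]),
    ("audit log retention", ["legal"]),
]
ranked = sorted(risks, key=lambda r: risk_score(r[1]), reverse=True)
```

A raw factor count treats every factor as equally important, which is rarely true; in practice you would weight legal or business-critical factors more heavily, but even the crude count is a useful conversation starter for the team.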
Putting your risks and coverage together (chartering your work)
Once you know what you want to test (coverage) and why you want to test it (risk) you're ready to charter your work. Chartering is the activity where you put it all together in meaningful terms. Think in terms of little testing missions that take on the general form of, "I want to test these areas for this problem." As you charter your work, think both about how you would actually do that testing, and how long it might take you to do it. Shoot for 45- to 60-minute testing missions.
For example, "I'd like to test creating, updating, and deleting appointments from different time zones from the desktop, Web, and mobile interfaces to our calendar program." There are two types of coverage specified: interface (desktop, Web, and mobile) and function (create, update, and delete). The risk is that time zone conversion may not function correctly. That testing might take about 45 minutes, depending on the number of time zones tested.
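A charter is simple enough to record as a small structured object, which makes the later prioritization and scheduling steps easier. The sketch below is one possible representation, using the calendar example from above; the field names are my own choice.

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A small testing mission: 'test these areas for this problem.'"""
    areas: list = field(default_factory=list)  # coverage: interfaces, functions
    risk: str = ""                             # why: the problem being hunted
    minutes: int = 45                          # time-box, ideally 45-60 minutes

# The calendar example expressed as a charter.
charter = Charter(
    areas=["desktop", "Web", "mobile", "create", "update", "delete"],
    risk="time zone conversion may not function correctly",
    minutes=45,
)
```

Keeping charters this small is deliberate: a one-line mission plus a time-box is enough to plan and track the work without turning each charter into a heavyweight test-case document.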
Prioritize the work and figure out what you can do
Once you've chartered your work, you should have some idea of how much work might be in front of you if you wanted to test it all (or at least everything you could think of). You don't have unlimited time or resources, so you can't test everything; forget about trying. Now you have to prioritize the work. I like to prioritize as a team exercise, since it gets good discussion going, but sometimes that's not possible. If you can't do it as a team, at least get people to peer review the prioritized lists.
Once you have the work prioritized, evaluate what you can realistically cover given the time, tools and resources you have. Some of the testing you might be able to automate; more than likely, most of it will be manual. Some of the high-priority tests you may not be able to do because of a need for specialized tools, environments or data. If you don't have the resources, note it and move on. At some point you'll want to review what you are and are not testing with your boss. You'll also want to keep on hand a list of reasons why you aren't testing something. If it's important to your boss that you test it, you'll want to be able to articulate what you'll need to move forward; people, time and tools are the most common needs in my experience.
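The "fit what you can into the time you have" step can be sketched as a simple greedy pass over the prioritized charters: take each charter in priority order while it still fits the budget, and defer the rest to the "not testing" list you review with your boss. The charter names and durations below are invented for illustration.

```python
# Greedily fill a time budget from a prioritized charter list.
# Charters are (name, minutes) pairs, highest priority first.
def plan(charters, budget_minutes):
    selected, deferred, used = [], [], 0
    for name, minutes in charters:
        if used + minutes <= budget_minutes:
            selected.append(name)
            used += minutes
        else:
            deferred.append(name)  # goes on the "not testing" list
    return selected, deferred

# Hypothetical prioritized charters and a two-hour budget.
prioritized = [("time zones", 45), ("recurring events", 60), ("printing", 50)]
todo, skipped = plan(prioritized, 120)
```

A strict greedy fill like this respects priority order; it may leave budget unused when a lower-priority charter would have fit, which is usually an acceptable trade for keeping the ordering honest.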
At the end of all this, you'll have a list of charters for each application that you should be able to execute given the resources you have. Following some sort of schedule that makes sense for your team, get in the habit of incrementally reviewing the coverage, risk, charters and priority you have for each application. You'll find that what you want to test and why you want to test it will change over time.
This was first published in August 2008