Granularity in test case design

Test case granularity is a complex issue in test design. Expert John Overbaugh explains how to determine how detailed your test cases should be.

I want to develop a series of test cases that will then be used to guide the creation of automated scripts. I'd like to know how detailed the description of the process should be. Should I include detailed steps, or a high-level overview of what I'd like the scripts to test?

Test case granularity will probably be one of those bones of contention in the QA industry that never goes away. It's a great discussion, though -- I'll answer it philosophically first and then offer some guidelines I like to use. From a Zen approach, the granularity to use is the granularity that serves you best. I know that is sort of frustrating, but it is true. Agile testing is all about this approach -- write as little as you need to achieve the highest quality. It comes down to my mantra and my personal mission statement: effectiveness and efficiency. To some degree these are complementary; to some degree they can be mutually exclusive. On any test project, I aim to maximize both our effectiveness (how well we test) and our efficiency (how many resources -- time, equipment and so forth -- are used). Write test cases that allow you to achieve the highest level of quality with the least amount of time, people, and resources.

OK -- now for more concrete answers to your question. In my experience, the granularity needed in writing your cases depends a lot on two main factors: how many times the cases will be executed, and who will run them. There may be other factors at play here, such as management's top-down process, client requirements and so forth, but in the end these are the two key factors for me. Let's look at each one briefly.

How many times will the cases be executed? This is, in my opinion, the single most important factor in deciding how granular to go (coupled with the next factor, which is who will execute them). In my current role, working on a content-heavy Web site, I look at the project being worked on. If it's a one-time, content-heavy project with just a few templates, I recognize that 1) it needs to go out at the highest possible quality, and 2) it will probably never be touched again. In these cases, I encourage my team to write "titles-only" test cases, as in the sketch below. I focus on capturing our thoughts and demonstrating our coverage in the least burdensome way. As long as I know that the author, or another member of the team who is familiar with the technology and the goals of the project, will be running the case, I trust that a title-only test case will be executed as intended.
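To make that concrete, here is a minimal sketch of what a titles-only checklist might look like. The suite and the titles below are hypothetical examples, not cases from an actual project.

# Hypothetical "titles-only" cases for a one-time, content-heavy site.
# Each entry captures intent and demonstrates coverage with no steps or
# expected results, on the assumption the author will run the cases herself.
TITLES_ONLY_CASES = [
    "Home page template renders hero image and headline",
    "Article template displays byline, date, and body copy",
    "Article template handles a missing author gracefully",
    "Navigation links resolve from every template",
    "Templates pass basic accessibility checks (alt text, heading order)",
]

if __name__ == "__main__":
    # Print the list as a manual checklist for a single execution pass.
    for number, title in enumerate(TITLES_ONLY_CASES, start=1):
        print(f"{number:>2}. [ ] {title}")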

At the same time, when I'm working on a long-term project that I know will undergo several releases which modify the core code as well as add new functionality (especially true for software products and services I've worked on like Microsoft Exchange ActiveSync, Microsoft Office Live Meeting, or Microsoft Office), I want to be sure my cases are quite clear. I emphasize capturing the goal of the case (the justification for why we should "spend" time and money executing it), clear preparation requirements, clear steps, and a single expected outcome (see the sketch below). I make this up-front investment because I know that people's memory of one case out of 1,000 or 10,000 or more simply won't be fresh during the next release. Documentation pays off here. The same goes for subject-matter-dependent cases (for instance, the cases we wrote on Circuit City's retail management system transformation project). If a case requires a high degree of subject matter expertise, and the case will probably be exercised by someone without that knowledge, every bit of information needed to execute it had better be in the case itself.
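As an illustration, a fully documented case might carry fields like the ones in this sketch. The field names and the sync-flavored scenario are assumptions made up for the example, not a template or case from any of those projects.

from dataclasses import dataclass, field

# A sketch of a "full" test case record: a goal that justifies the cost of
# executing it, preparation requirements, explicit steps, and one outcome.
@dataclass
class FullTestCase:
    title: str
    goal: str                       # why the case is worth time and money
    preparation: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_outcome: str = ""      # a single, unambiguous pass/fail check

sync_case = FullTestCase(
    title="Calendar edit on the server syncs to the device",
    goal="Protects a core sync scenario that is easy to regress across releases",
    preparation=[
        "Provision a test mailbox and pair a device with the sync service",
        "Create a calendar item from the web client",
    ],
    steps=[
        "Edit the item's start time on the server",
        "Trigger a sync on the device",
        "Open the item on the device",
    ],
    expected_outcome="The device shows the updated start time after one sync",
)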

Another key factor to consider is who will be running the test case. If the author, or someone equally experienced, equally familiar with the technologies involved, and equally familiar with the product or service being tested, will be executing the cases down the road, I'm more inclined to allow titles-only cases -- as long as the project is a one-time thing. If the case is likely to be outsourced, or there's a strong chance the executor will not be as knowledgeable as the author, I fall back to my "full test case" approach and require justification, steps, and so forth.

If my team is responsible for both the test case design and the automation, I'm again open to titles-only, as long as the author is comfortable that she or he can remember the intent of the case when it comes time to write the automation. In more agile environments this is more likely, simply because the test case is written in the same general time frame as it will be automated. If another person will be automating the cases (in a test environment, for example, where non-technical SMEs write the cases and hand them to a couple of full-time automators), I'm again going to require that full cases be written.

At a second level, write the cases in such a manner that you personally will be best served. I'm not the world's strongest coder, so when I sit down to write automation for a given test case, I generally start out with a series of comments outlining the steps I have to perform. Writing these in the test case may or may not be helpful; generally, the steps of manual execution differ from the steps required for automation. So I try to keep the concept of test case granularity separate from automation procedures.
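For example, when I say I start with comments, the first draft of an automated case often looks something like this sketch. The function name and steps are placeholders, not a real framework's API.

# First draft of an automated case: step comments first, with code filled in
# beneath each comment later. Nothing here calls a real test framework yet.
def test_article_template_shows_byline():
    # Step 1: navigate to a published article page
    # Step 2: locate the byline element under the headline
    # Step 3: verify the author name and publish date are present
    pass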

I recently tackled this subject on my personal blog ThoughtsonQA.blogspot.com. You can read the article here.
