
Does a tester actually need test cases?

Discover whether or not test cases are necessary in this expert answer by consultant Robin Goldsmith.

The short answer is yes, testers need test cases. But the test cases they need might not be what you're thinking of.

From time to time, I encounter the seemingly implausible argument that testers don't need test cases. My informal analysis suggests two related and, I believe, mistaken premises that underlie the belief that testers don't have or need test cases.

First, many testers believe a test case must be in a particular format. The premise is that only test cases written in the right format are "true test cases" (like the "true Scotsman"). By that reasoning, if testers don't have anything in the correct format, it must mean they don't have test cases.

The format that seems most widely accepted as "necessary" for a test case is a step-by-step written script that includes extensive, often keystroke-level, procedural detail. The script also may be accompanied by additional written descriptive information, such as a test case identification number, short title, longer description of purpose, context, owner, various categorizations, related test cases, priority, change history and more.

Each step in the script describes a typical user input action or condition, followed by an expected intermediate result. The script consists of a series of steps to be carried out in the prescribed sequence, which ultimately produces an expected end result. Both end and intermediate results could be displayed values, reports, transmissions, signals, changes of state, additions or modifications of values in a database, stopping or starting some other program or action, and so forth, along with combinations thereof.
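The scripted-test structure described above can be sketched as data; the banking screens, actions and expected results below are purely hypothetical illustrations, not anything from the article.

```python
# A minimal sketch of a step-by-step test script as data. Each step pairs a
# user action with its expected intermediate result; the last step's result
# is the expected end result. All screen names and values are made up.
script = [
    ("open the accounts screen",  "account list is displayed"),
    ("select account 1234",       "account detail view opens"),
    ("enter a deposit of 50.00",  "pending balance shows +50.00"),
    ("press Confirm",             "saved balance increases by 50.00"),
]

for step_no, (action, expected) in enumerate(script, start=1):
    print(f"Step {step_no}: {action} -> expect: {expected}")
```

Writing the procedure out like this also shows why maintenance is costly: any change to the user interface means editing every affected action and expected value.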

The advantage of such test scripts is that they can be repeated precisely, even when they're used by someone with little or no knowledge of the system being tested. Such test scripts also have several downsides, starting with the sheer amount of time it takes to create and maintain them. The more time testers spend writing test script documents, the less time they have to execute the tests. Moreover, precisely following a script could interfere with testers' ability to detect defects the script didn't specifically provoke.

To overcome these weaknesses, exploratory testing advocates not writing anything and thereby using all of the testers' time to execute many more -- and presumably more thorough -- tests that they come up with, based on the context as they execute them. Here's where the second false premise comes in. Testers who believe a test case must be written falsely believe exploratory testing does not have test cases.

Let me suggest instead that a test case should consist of inputs or conditions and expected results, period. Inputs are the commands explicitly entered by a user. Conditions are not explicitly entered, but must often be created by a tester to carry out a given test. For instance, a condition might be that the database is full, and an input might be to add a record to the database.

That's all you need to carry out a test. Tests have inputs or conditions and expected results, regardless of whether they are in some written form or made up at the moment of execution. Writing tests offers some advantages, including helping the user to avoid forgetting things and facilitating repetition and refinement. However, a written test is not required, and a written test case does not have to be in a particular format -- especially not that of a script with extensive procedural detail. Inputs or conditions and expected results suffice.
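The "inputs or conditions plus expected results" definition can be sketched in a few lines of code; the toy `add_record` function, its capacity limit and the result strings are illustrative assumptions, not anything prescribed by the article.

```python
# Sketch: a test case is just a condition (starting state), an input and an
# expected result -- no keystroke-level procedural script required.
# add_record and its fixed capacity are hypothetical stand-ins.

def add_record(db, record, capacity=3):
    """Append a record to a toy in-memory 'database' with a fixed capacity."""
    if len(db) >= capacity:
        return "error: database full"
    db.append(record)
    return "ok"

test_cases = [
    # empty database, add a record -> succeeds
    {"condition": [],              "input": "rec1", "expected": "ok"},
    # full database (the article's example condition), add a record -> error
    {"condition": ["a", "b", "c"], "input": "rec2", "expected": "error: database full"},
]

for tc in test_cases:
    actual = add_record(tc["condition"], tc["input"])
    assert actual == tc["expected"], (tc, actual)
```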

Next Steps

How to automate regression test cases

How to select test cases for regression testing

Design test cases with use cases


Join the conversation



Do you think test cases are necessary?
It was explained to me that the step-by-step process you are detailing here is called a test procedure, which explains how the test case is to be verified. It was further explained that the test case is an indication of what needs to be verified.
Test cases are mandatory. They allow you to track execution, build metrics and so on.
However, they are not just exact actions to perform; they are more like tags calling attention to the points of the test. For example, a test case might define starting some process and making sure it starts successfully. That does not mean merely launching a command and waiting for it to finish; the tester is also expected to check additional cases, such as starting the process from a different path, as a different user, and so on.
CO, thank you for your comments, with which I generally agree. People often use the same terms to mean different things and use different terms to mean the same things, frequently without informed or consistent differentiation. A test script format consists of a series of inputs and/or conditions and expected results, which generally are called steps and can be considered a single complex test case or a set of simple test cases. Many people, mistakenly IMHO, embed test procedure in the test script steps. Keeping the procedure separate from the inputs and/or conditions and expected results usually makes the script much more manageable. Regardless, one does not have to have a script in order to have test cases.
I believe the drive should be to make a system or systems testable. In that case, the test cases (if created to a standard and maintained through all changes) become a test asset with inherent value. They are effectively intellectual property. If done properly, according to a standard, anyone with testing skill should be able to pick up the test cases and test the system effectively.
Test cases are not, or should not be, just about the project; they should look ahead to the life of the systems involved.
I agree with this. I believe we need test cases, but they should be lean test cases that capture the functionality rather than exhaustive ones. Especially in an agile (Scrum) environment, we do not have weeks to write and test a feature, since we have to ship a release after every sprint or two. Hence, we need to make sure the test cases cater to all the features, not just one feature.
As this article mentions, we should give more time to exploratory testing and to finding unexpected, non-happy-path issues by creating lean test cases that are more challenging to execute than a step-by-step script.
Thank you Anushkaw and GianniR for your comments. The discussion question is a bit misleading, since I contend test cases are what testers execute and thus are present whether one recognizes it or not. These comments really relate more to the form that people feel test cases must take; I contend that test cases need not be written in a particular form, or even written at all. However, writing provides several advantages; and the IMHO mistaken implication that unwritten exploratory tests are somehow superior to written tests instead suggests weak test planning and design, and probably ineffective and inefficient ways of writing.
While many test cases operate with much redundancy we still think that test cases are important. The information gleaned from the studies and test cases help to prove the efficacy of the campaign or strategy to be implemented. The test cases help to prevent going off in unwanted directions for campaigns and launches, thereby saving the company time and money. We will continue to use test cases to ensure campaigns go smoothly.
Robin, personally I agree with your article. Perhaps I'm reading my own thoughts into yours, but I think testers traditionally spend too much time describing what they think they will test when they get to that point. I much prefer a write less/test more approach (by test I mean specifically test execution). Your contention of a leaner test case makes sense to me and is something I have been working on rolling out at work. I think Carol482 is correct that test cases are important, but the form they take and the evidence of execution do not have to conform to "traditional" practices.
I think that you're right about many testers conflating test cases with test scripts, and that test scripts are certainly not necessary and moreover can inhibit valuable testing.

However, you mention that exploratory testing "advocates not writing anything". No definition I've ever heard, including the one you linked, indicates that you don't write anything when you use exploratory testing. Whether or not you write anything about the testing you perform is unrelated to whether the testing is exploratory.

That said, you're right that writing is not required to perform a test. It is often useful to take notes on what you did so that you can accurately describe your testing work and perhaps decide to create written test cases around your work later, but it's not a mandatory component of testing. Testing consists of doing the testing, not writing it down.
Thank you carol482, pauls63 and CarolBrands for your comments. Again it seems we're all agreeing. Perhaps what's being overlooked in the emphasis on test cases is the generally greater value that comes from effective test design's ability to prevent errors, even though some or perhaps many of those identified conditions to test never actually get created, let alone executed as test cases. This is another major Proactive Testing™ benefit that exploratory testing's and test-driven development's often sole emphasis on execution easily misses.
Test cases devoid of context or logical thought are next to useless. Test design that emphasizes interaction with the product and getting answers from a product are, in my opinion, considerably more important. My personal preferred definition of testing is to ask a product a question, and based on the answer(s) we get back, continue to ask more and more specific questions. Rinse and repeat. To that end, dynamic and evolving test cases and a design that takes that approach in mind is helpful. Having a batch of non-deviating test questions that we already know the answers to does not.
1. To add to the points mentioned: test cases are useful in tracking project progress. When test cases are documented and linked to associated requirements, it is easy to keep track of which requirements are not covered yet, and test cases also play an important part in the traceability matrix.
2. I find test cases very useful in keeping evidence of what has been tested.
Test cases are necessary, but not necessarily in a formal format. Identifying test cases early is a good brainstorming technique, and it gives us the pre-steps any user might take in the future. It provides a base for understanding a product before release and clarifies the idea when a build is released.

This is indeed a topic for which both Yes and No are correct answers. When the testing team has new members or is working on new technology, test cases will definitely help inform judgment.

In the case of experienced test resources and a familiar development technology, a requirement validation workflow (RVWF) is more helpful and faster to execute than test cases.

RVWF is a workflow in which the tester uses different input parameters and validates the workflow. Completing the workflow means the requirement is fully met in all normal, alternative and exceptional conditions. This approach also helps reduce regression testing effort through impact analysis of code changes.

Of course, for tracking purposes we need an artifact showing that all requirements are tested, either through RVWF or through a test case document.


Vinod Vaya

Test cases are just artifacts. What's necessary is testing.
Good testing is risk-based. The focus is on finding important problems earlier. Test case driven testing tends to be risk-oblivious and focused on confirming stuff that is already known to be working.

Does a tester need test cases? In any case, a tester needs realistic test cases to achieve adequate test coverage.

To get better test execution results, the quality of the test cases is really important, whether execution is manual or automated.

I use, and suggest that agile teams use, a simple technique to identify test cases for any kind of testing (unit/API/functional) using three words: ACTION - SCENARIO - EXPECTED RESULTS.

It forces developers and testers to think holistically and identify possible test cases for good test coverage.
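A minimal sketch of the commenter's three-word ACTION-SCENARIO-EXPECTED RESULTS convention; the login examples and field names are hypothetical illustrations, not from the comment.

```python
# Each test case is captured in three fields, per the comment above.
from collections import namedtuple

TestCase = namedtuple("TestCase", ["action", "scenario", "expected_result"])

cases = [
    TestCase("login", "valid username and password", "user reaches dashboard"),
    TestCase("login", "wrong password",              "error shown, no session created"),
    TestCase("login", "empty username",              "validation error before submit"),
]

for c in cases:
    print(f"{c.action} | {c.scenario} -> {c.expected_result}")
```

Note how little writing this requires compared with a keystroke-level script, while still recording what must be verified.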


Thank you @AlbertGareev and @AniketMhala for your comments. Test cases are what is executed, regardless of whether they are documented in a particular way or even documented at all. I think everyone agrees that test cases can vary considerably in quality. Hopefully we all recognize that people's perceptions of what makes a test case high quality can and probably do differ. Less likely to be recognized is that what one person gets all concerned about may in fact have little if anything to do with the test case's adequacy.

All testing is risk-based, again regardless of whether or not risk is evaluated in some particular way. In all my years in testing, I believe this is the first time I've encountered the term "test case driven testing." I have no idea what it's supposed to mean, but I encourage re-examining the conclusion that it tends to be risk-oblivious and focused on confirming stuff that is already known to be working.

In my experience, there is a major difference between how traditional (what I refer to as reactive) testing and my Proactive Testing™ approach risk. Traditional reactive testing tends to create test cases, analyze and prioritize them based on the risks they address, and run the highest-risk test cases more. Such an approach is not wrong, but it tends to miss a lot of risks and has little awareness that it has missed them. In contrast, Proactive Testing™ uses powerful techniques to more fully identify risks, including many ordinarily overlooked risks, then prioritizes and creates and executes test cases for the highest risks. Not only can Proactive Testing™ enable running more of the more important tests, but many can also be created and run earlier, when it is easier to fix detected defects.

This is a chicken-or-egg question. Fundamentally, the answer is "no, a tester does not need test cases to do testing", but that begs the question "what testing are we doing without test cases?" In general, when we interrogate a product and we want to see how it responds, we are already doing testing, even if we have not defined a test case to follow. Much of the exploration of a product is done with a vague charter at first, and with familiarity, we create more concrete examples to focus on. Very often, we test first to define test cases to look at later.

Getting back to the main question. Are test cases necessary to begin testing? No. Are they helpful in guiding testing efforts? They certainly can be. Will better testing be performed with readily defined test cases? Not necessarily, especially if those defined test cases get in the way of questions you might ask were they not so readily defined.
This is a loaded question. One first has to define what one means by test cases. If the answer is "oh, step-by-step procedures to flex a piece of functionality", I would say no, it is not necessary to write test cases. In fact, doing so will actually lead the people who execute those test cases later to turn off their brains, become unengaged and focus only on the written script. I'd argue such scripts do not involve testing at all. They are mindless, clueless and a leftover from a bygone era in which people think written test cases look like documentation of testing done (which is far from the truth).

You can test an application many ways without any kind of written test cases, so I assert, that no they aren't needed, and in fact they may be a colossal waste of time, because like all documentation, they require updates periodically and fall behind and out of date.
Michael, thank you for your comments. My point is that a test case is what is executed by a tester and consists of input(s) and/or conditions and expected results, regardless whether it is written or written in some particular format.
Veretax, thank you for your comments. You seem to be falling into the common traps of believing that a written test case must have keystroke level procedural detail and that the only alternative is to go to the opposite side of the pendulum and write nothing. Writing indeed provides value, such as, avoiding forgetting and enabling sharing, review, reuse, and refinement. The key is to write in an economical fashion that gains the benefits without incurring excessive overhead. In turn the key to that is realizing that a test case consists of input(s) and/or conditions and expected results.
Conversations between project members might go as below:

Without documented test cases:
A - Hey B, did you do that test where input 1 is false and input 2 is True ?
B - No I did that test where input 1 is false twice and input 2 is True only once. Should I do the test where input 1 is false and input 2 is True too?
Team Lead - Hey B, you just sent me an email saying testing-status - completed, however customer complained that the software crashed with input 1 as False and input 2 as True.
B - hmm..I don't remember if I did that test..I just returned from vacation, you remember ?
customer - ?????

With documented test cases:
A - Hey B, how about test case no-4 ?
B - I just did test cases 1 till 3. Have to do 4.
Team Lead - Hey B, you just sent me an email saying testing-status - completed, however customer complained that the software crashed with input 1 as False and input 2 as True.
B - hmm..I don't remember if that's part of our test cases we planned to test with. Let me check our docs and can confirm and we can proceed accordingly.

Customer - Good to know my reported issue is being addressed in a controlled fashion.

Bottom line: the above is just a small sample of the many advantages of having test cases documented. This shouldn't be seen as orthodox practice but rather as a way to manage deliverable quality easily, or at least better than in the testing-neanderthal era.
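The bookkeeping advantage in the dialogue above can be sketched by numbering the combinations of the two boolean inputs; the dictionary layout below is an illustrative assumption, not a prescribed format.

```python
# Enumerate every (input_1, input_2) combination and number it, so the
# question "did we run case 4?" has a checkable answer.
from itertools import product

documented_cases = {
    n: {"input_1": i1, "input_2": i2, "executed": False}
    for n, (i1, i2) in enumerate(product([False, True], repeat=2), start=1)
}

# As in the dialogue: cases 1-3 have been run, case 4 is still pending.
for n in (1, 2, 3):
    documented_cases[n]["executed"] = True

pending = [n for n, c in documented_cases.items() if not c["executed"]]
print("pending test cases:", pending)  # prints: pending test cases: [4]
```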
No matter how well documented the testing, I have found things that slip through the cracks. If we think logically as developers and test the application as written, we get different results than if the outside world did the testing. We have found issues where data or rules that, thinking logically, should never happen get entered by a user and cause a failure. Users key in data that logically would never occur, like mixing data types or backdating dates. It also comes down to validating data. In some cases data has been uploaded from an outside source that may not have had the rules applied as if it were entered manually.
@RobinGoldsmith I don't think that's what I was saying. Why is it that when people start pushing back on test cases, people immediately think it means zero documentation? There are ways to map and plan testing that do not take the form of test cases in the traditional sense.

I also fail to see why reuse is such a big issue where test cases are concerned. Because of the pesticide paradox, running the exact same prescriptive test case is unlikely to find new defects if your dev team is careful in what it changes. I feel the focus on the ability to repeat test cases is misplaced.
There are a couple things about this article that don't quite fit my understanding of testing.

1) "To overcome these weaknesses, exploratory testing advocates not writing anything" This statement is a poor model of my understanding of exploratory testing. While documentation is not mandated to fit any form, I can't imagine testing without writing anything down. Instead, my understanding is that exploratory testing just means that while you may start with a general idea in mind, most of your planning of your testing choices happens 'as you go', allowing for a great deal of flexibility in deciding what and how to test, using whatever form of documentation you see fit. For me, this usually consists of a series of Excel tables documenting the trials that best represent what was tested and interesting results.

2) "Let me suggest instead that a test case should consist of inputs or conditions and expected results, period." I think even this definition of a test case may be too limiting, because we do NOT always have 'expected' results. There are many occasions where I select input specifically because I'm not sure what's going to happen, due to conflicting expectations.
Hi Carol, thanks for chiming in. I like that this comments thread has a variety of conflicting viewpoints. I think it gives us all a lot to think about and improves our ability to apply these concepts appropriately to our own testing.

If you mean "some artifact" as a "test case", then no, those are absolutely not necessary for good testing.

If you mean "some test ideas" as a "test case", then yes, testers absolutely should have their thinking "on" while testing.