
Four tips for effective software testing

Application testers must compare actual to expected results. (Source: iStock)

I'm frequently amazed by how often application testers correctly define the right expected results, get actual results by running tests, and then don't take the final comparison step to make sure the actual results are correct (i.e., what was expected).

Of course, the most common reason this key comparison of actual to expected results is skipped is that the right expected results were not defined adequately. When expected results are not externally observable, who knows what the application testers are comparing against? Sometimes testers mistakenly assume the actual results are correct simply because they don't appear outlandish. Or perhaps the tester makes a cursory comparison of mostly correct results but misses the few exceptions whose actual results differ from expected results.

I appreciate that comparing actual software testing results to the expected results can be difficult. Large volumes of tests can take considerable effort and become tedious, which increases the chances of missing something. Complex results can be very hard to compare accurately and may require skills or knowledge that the tester lacks.

Such situations can be good candidates for automation. A computer tool won't get tired and can consistently compare all elements of complex results. However, an automated test tool requires very precise expected results. An additional downside of automated tools is that they won't pick up on certain types of results that a human application tester might notice.
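To make this concrete, here is a minimal sketch in Python of what an automated actual-versus-expected comparison might look like. The compare_results function and the sample order totals are hypothetical, purely for illustration; they are not taken from any particular test tool.

# Minimal sketch: automate the actual-vs-expected comparison so that every
# element is checked consistently. All names and data here are hypothetical.

def compare_results(expected, actual):
    """Return a list of mismatches between expected and actual results."""
    mismatches = []
    for key, expected_value in expected.items():
        if key not in actual:
            mismatches.append(f"{key}: missing from actual results")
        elif actual[key] != expected_value:
            mismatches.append(f"{key}: expected {expected_value!r}, got {actual[key]!r}")
    for key in actual.keys() - expected.keys():
        mismatches.append(f"{key}: unexpected result {actual[key]!r}")
    return mismatches

expected = {"subtotal": 100.00, "tax": 8.25, "total": 108.25}
actual = {"subtotal": 100.00, "tax": 8.25, "total": 108.26}  # quietly off by a cent

for problem in compare_results(expected, actual):
    print(problem)  # total: expected 108.25, got 108.26

A tool like this never gets tired, but it is only as good as the precision of the expected results fed to it.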


Join the conversation


Have you ever skipped the step of comparing actual to expected results? Why or why not?
"Ultimately, tests need to demonstrate that products not only work as designed but in fact satisfy the real business requirements, which are the basis for the "right answers" and should include most quality factors. "

Wow, a product can work as designed, satisfy the written requirements and still be so far wrong.  How many projects fail at delivering real value? Far too many.  The focus on the big design and spec up front IMO is part of the problem.
This is probably the most confusing section of this article. It sounds like you're saying sometimes testers don't compare results against oracles of any kind. This seems improbable. I could imagine that they don't compare against enough oracles to identify problems, but that's not an issue of failing to compare as much as it is an issue of failing to identify appropriate oracles. I could also imagine that they don't identify test cases sufficient to reveal problems, but again this is not a matter of failing to compare as much as a matter of failing to analyze the risks of the feature under test.

If you're saying that when a feature generates 1,000 data points we need to individually compare all 1,000 data points to an expected result, I think that is a poor decision. Instead, we can organize and analyze those data points categorically to reveal potential problems, work to identify heuristically which data points are most likely to contain failures, and analyze that subset of data points individually.
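A minimal sketch of the categorical approach described in the comment above; the category names, tolerance and data points are all hypothetical:

from collections import defaultdict
from statistics import mean

# Hypothetical result data points, grouped by category instead of being
# compared one by one.
data_points = [
    {"category": "domestic", "value": 102.0},
    {"category": "domestic", "value": 98.5},
    {"category": "international", "value": 340.0},  # suspicious outlier
    {"category": "international", "value": 101.2},
]

by_category = defaultdict(list)
for point in data_points:
    by_category[point["category"]].append(point["value"])

EXPECTED_MEAN = 100.0  # hypothetical expected value per data point
TOLERANCE = 10.0       # hypothetical acceptable deviation

# Flag whole categories whose summary looks wrong; only the flagged
# categories get their individual data points inspected.
for category, values in by_category.items():
    if abs(mean(values) - EXPECTED_MEAN) > TOLERANCE:
        print(f"{category}: mean {mean(values):.1f} is outside tolerance; inspect these points individually")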
Yes, I have skipped comparing actual with expected results in the following scenarios:
1. When I perform a retest or regression, meaning whenever I am repeating a test case.
2. When I know what the functionality is supposed to do. For example, if I click a dropdown/list box, the expected results might read that "the list should expand below the field," but if the list expands above the field with no UI issue, I might still pass the test case even though it is not on par with the expected results.
Big design no doubt is part of the problem, but in my experience the bigger problem is that design of any size is not appropriately responsive to REAL business requirements, usually because they have not been defined adequately.
@CarolBrands, indeed both professional and nonprofessional testers mistakenly take for granted that they can spot incorrect results.
Insufficiently identifying conditions to test is a different but very important issue, and it is the primary focus of most testing training and writing. Guiding such identification by actual results is one reason for its insufficiency.
Yes, I've skipped the step of comparing actual results to expected results. Typically, the reason is that I don't have expected results. Of course, that's an entirely different problem in and of itself. In a perfect world, we'd always have expected results defined. However, testers tend to get put in a lot of uncomfortable situations with crappy, old legacy software that someone has requested a band-aid fix for. Life is full of compromises, I suppose.
Why bother testing if not to compare actual to expected results...? That would seem to be the point of testing. To see how far results have dropped (or risen) from expectations so they can be adjusted accordingly.

When our results are too divergent from our expectations, we review our testing procedures. And then retest. Once confirmed, good or bad, we have to believe the results. No doubt we've had a few surprises, but they'd be meaningless without comparison.
Exactly my point, @ncberns; there's not much reason to bother testing if one does not compare actual to expected results. Yet it happens more than folks are likely to realize, often because they take for granted, or are otherwise deluded into believing, that they can tell an incorrect result when in fact they can't. When expected results have not been defined, of course, comparison is impossible; yet it happens often, and running a test without reliably defined expected results makes the test doubly wasted effort.

Yet another discussion thread..

@RobinGoldsmith

Can you elaborate on the process you keep talking about, "comparing actual to expected results"? Can you give me the full mechanics of it? I want to make sure that you can explicitly describe everything that must be predefined to guarantee non-wasted effort in testing.

I'll respond back with my analysis.





Let's say the program is multiplying 4 times 3 and should give a result of 12. However, the software instead displays 13 and continues to operate normally. Some testers miss such an error because they don't know to look for 12, and the software hasn't done anything like blowing up or giving some other blatantly outlandish result that catches their attention, so they don't even question the result.
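A minimal sketch of that multiplication example in Python, with a deliberately buggy, hypothetical multiply() standing in for the program under test:

# Hypothetical, deliberately buggy implementation: off by one, but it never
# crashes, so nothing "blows up" to catch the tester's attention.
def multiply(a, b):
    return a * b + 1

actual = multiply(4, 3)

# Only an explicit comparison against the predefined expected result of 12
# reveals the quiet defect.
expected = 12
assert actual == expected, f"expected {expected}, got {actual}"  # AssertionError: expected 12, got 13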
   



This makes testers seem like a bunch of idiots. Or could it be that a tester is smart enough to be sensitive to change, sees a result, and then questions whether the oracle they would be using as the expected outcome is really 'correct'? I think those are smart questions to be asking, actually.
Thanks for the feedback, Veretax. I'm sure Robin has the utmost respect for software testers. I don't think the intent here is to say all testers are making these mistakes all the time. I think it's just a common human error that he's seen in his career. Making human mistakes doesn't make testers idiots. It just makes them humans.
I'm sure we could all come up with common scenarios when a slightly different actual result than the expected result does not necessarily mean there's a defect there. But I think in general it's a big red flag that we should take a closer look to figure out if we were expecting the wrong thing, if we've got a bug, or if there's something else going on here.
I agree with the point made about automation. If you are sitting and checking through results to the point that it becomes tedious and you find yourself more likely to miss something, that's a problem. I don't really find myself in that situation because most of the time, most functionality is covered by automated tests. That allows me to concentrate more narrowly on the specific functionality that I'm looking for. 
Thank you @James. I’m using “tester” broadly, not just for those who have the term or something similar in their title. The fact is that testers of all stripes miss defects their tests actually revealed. Too many take too much for granted, such as by relying on the system’s highlighting the defect by blowing up or otherwise acting so outlandishly they can’t miss it. When the defect is less apparent, it’s easy to miss when actual results are not compared conscientiously to expected results. I’m intrigued that @Veretax now is acknowledging my prior article’s point about having to know what the right answer is. Automation can help with tedious comparisons but often is not suitable for detecting some of the other types of “ilities” errors @Veretax alluded to in prior sections.
The article spectrum covers a lot, and I must say that the way ITKE broke them up may actually do Robin's series a bit of a disservice, because each piece begins to look distinct and is easy to take out of context of the whole. At least, that's what I feel looking back at this a few weeks later.
It begs the question: who cares about your expectations?
Testers must observe the behavior of the product under test and constantly ask, "Is there a problem?" Testers must develop critical thinking and bug advocacy skills.
I would suggest that answering, “Is there a problem?” relies on comparing the behavior to some expectation of what that behavior should be. When there is no expectation, it can be easy for wrong behavior to occur without its even catching the tester’s attention, let alone prompting the tester to ask and hopefully correctly answer the question, “Is there a problem?”
@RobinGoldsmith
The problem with the approach you articulated - "When there is no expectation, it can be easy for wrong behavior to occur without its even catching the tester's attention" - is rooted in the phenomenon scientists call "bounded awareness."* When focused on expectations, people ignore the actual information and fail to recognize newly emerging problems.

* Reference: http://www.prioritysystem.com/reasons1c.html
I've been in IT for 30+ years. I have never worked for a company where we had testers. We were always responsible for testing our own code. Then we might occasionally pass it to a fellow developer to review. If you are trying to match apples to apples, that may be fine. What happens if someone throws in a banana that did not exist on either side?
@ToddN2000, thank you for your comment. I don't really understand your banana point or what you mean by "did not exist on either side." As mentioned above, I'm using the term "tester" broadly to include anyone who does a test, including the developer. The article is pointing out that many people rely on the code doing something outrageous in order to detect a defect. Less flamboyant errors can go undetected because whoever is testing doesn't look closely enough at the actual results to realize they differ from expected results.
In my i-Series days, I found things in code written by others like allowing invalid dates: you could enter Feb. 30th. Something like this may only get caught by bad data entry. There may be no test scenario for it, and it may never be caught until something blows up. These are the little things I try to test above and beyond the obvious. If you use standard thinking, like a phone number should be xxx-xxx-xxxx, what if the customer is international? It won't work.
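A minimal sketch of that invalid-date check in Python; the is_valid_date helper is hypothetical, and it relies on the standard datetime constructor rejecting impossible dates such as Feb. 30th:

from datetime import datetime

def is_valid_date(year, month, day):
    """Return True only if the year/month/day combination is a real date."""
    try:
        datetime(year, month, day)
        return True
    except ValueError:
        return False

print(is_valid_date(2024, 2, 29))  # True  (2024 is a leap year)
print(is_valid_date(2023, 2, 30))  # False (there is no Feb. 30th)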
