Four tips for effective software testing

Application testers must compare actual to expected results

I'm frequently amazed at how often application testers correctly define the right expected results, obtain actual results by running tests, and then fail to take the final comparison step to confirm that the actual results are correct (i.e., what was expected).

Of course, the most common reason this key comparison of actual to expected results is skipped is that the right expected results were not adequately defined in the first place. When expected results are not externally observable, who knows what the application testers are comparing against? Sometimes testers mistakenly assume the actual results are correct simply because they don't look outlandish. Or a tester makes a cursory comparison of mostly correct results and misses the few exceptions where actual results differ from expected results.
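
To make the point concrete, here is a minimal sketch of defining an expected result explicitly, before the test runs, so the comparison step has something observable to check against. The function and values are hypothetical illustrations, not taken from any particular project:

```python
# Hypothetical example: the expected result is written down explicitly
# and up front, so the final comparison cannot be skipped or fudged.
def apply_discount(price, percent):
    """Toy function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

expected = 89.99                    # defined before the test runs
actual = apply_discount(99.99, 10)  # actual result produced by the test
assert actual == expected, f"expected {expected}, got {actual}"
```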

I appreciate that comparing actual software testing results to expected results can be difficult. Large volumes of tests take considerable effort and become tedious, which increases the chance of missing something. Complex results can be very hard to compare accurately and may require skills or knowledge that the tester lacks.

Such situations can be good candidates for automation. A computer tool won't get tired and can consistently compare every element of a complex result. However, an automated test tool requires very precise expected results. An additional downside is that automated tools won't notice problems outside the defined comparison that a human application tester might catch.
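
As one illustration, the sketch below shows a hypothetical comparator (not any specific tool's API) that recursively checks a complex nested result against precisely defined expected results and reports every mismatch, the kind of exhaustive, untiring comparison an automated tool can provide:

```python
# Hypothetical automated comparator: walks nested dicts and lists and
# reports every point where actual results differ from expected results.
def diff(expected, actual, path="result"):
    """Recursively compare expected vs. actual; return all mismatches."""
    mismatches = []
    if isinstance(expected, dict) and isinstance(actual, dict):
        for key in expected.keys() | actual.keys():
            if key not in actual:
                mismatches.append(f"{path}.{key}: missing from actual results")
            elif key not in expected:
                mismatches.append(f"{path}.{key}: present in actual but not expected")
            else:
                mismatches += diff(expected[key], actual[key], f"{path}.{key}")
    elif isinstance(expected, list) and isinstance(actual, list):
        if len(expected) != len(actual):
            mismatches.append(f"{path}: length {len(actual)}, expected {len(expected)}")
        for i, (e, a) in enumerate(zip(expected, actual)):
            mismatches += diff(e, a, f"{path}[{i}]")
    elif expected != actual:
        mismatches.append(f"{path}: got {actual!r}, expected {expected!r}")
    return mismatches

expected = {"status": "ok", "items": [{"id": 1, "qty": 2}]}
actual = {"status": "ok", "items": [{"id": 1, "qty": 3}]}
for mismatch in diff(expected, actual):
    print(mismatch)  # -> result.items[0].qty: got 3, expected 2
```

Note that this comparator only flags differences from the expected data it is given; anything not captured in the expected results goes unchecked, which is exactly the limitation of automation mentioned above.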
