
Four tips for effective software testing


To ensure success, follow software testing concepts

Source: iStock
Visual Editor: Sarah Evans/TechTarget

Regardless of the development methodology or the type of software testing, multiple factors determine how effective software testing is. Testers generally do not pay conscious attention to these key software testing concepts, and too often that lack of attention means essential factors are overlooked, even by experienced testers who may take too much for granted. Failing to apply these concepts not only makes testing less effective; it can also leave the tester oblivious to the test's diminished effectiveness.

Here are four fundamental factors that determine effective software testing.


Join the conversation

7 comments


"What makes the execution a test, rather than production, is that we get the actual results so we can determine whether the software is working correctly. To tell, we compare the actual results to expected software testing results, which are our definition of software testing correctness."

No, that's not testing, that's Checking!

Should we have some notion of how something works? Sure, but there is more to testing than a binary assertion by a tester that some thing matches an expectation. (And what if the expectation is wrong? What if it matches, but there are other issues, such as performance, usability, or security, that such a test doesn't account for?)

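The expected-vs-actual comparison being debated here can be sketched as a minimal unit test. The apply_discount function is a hypothetical stand-in, not anything from the article:

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical function under test, used only to illustrate the idea.
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    # Each test defines the expected result before execution,
    # then compares it to the actual result the code produces.
    def test_typical_discount(self):
        expected = 90.00  # the definition of correctness for this case
        actual = apply_discount(100.00, 10)
        self.assertEqual(expected, actual)

    def test_zero_discount(self):
        self.assertEqual(50.00, apply_discount(50.00, 0))

if __name__ == "__main__":
    unittest.main()
```

As the commenters point out, a passing comparison only covers the conditions the expected results define; it says nothing about performance, usability, or security.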
From a high level, these are good tips for a newer tester, and they would also be good for experienced testers to review every now and again. It seems as though these concepts should go without saying, but for months I've been working with an entry-to-mid-level tester and find myself checking over her work because I have found some mistakes that seem like they should have been obvious. It could be a case of not having a tight enough process for defining expected results and checking them against actual results.

Probably since the beginning of time, the purpose of testing has been to give confidence something works. Such confidence comes from determining that it does what it is supposed to do and does not do what it’s not supposed to do, which in turn is based on comparing actual to expected results. Confidence is diminished when testing detects defects—failing to do what it is supposed to do and/or doing what it’s not supposed to do. A test must include an evaluation of correctness, or it’s not a test.

Testing’s biggest challenge always has been adequately identifying what it’s supposed to do and what it’s not supposed to do. There often are many such conditions, including various “ilities,” and many often are overlooked or misidentified, including mistakes defining expected results. Other big challenges involve creating necessary conditions and evaluating results. Any given test is unlikely to address all relevant conditions, but that doesn’t reduce its value for the conditions it does address.

Exploratory testing generally involves executing a system somewhat spontaneously, largely reacting to context without explicitly defining actions to be taken or their expected results prior to execution. While some exploratory tests are merely to ascertain how the system does work, most involve the tester’s evaluation of whether the actual results seem appropriate. In other words, even though exploratory testers may not realize it, they too are comparing actual results to some sort of expected results, albeit ones that have not been defined explicitly. Exploratory testers also can make mistakes identifying conditions to be demonstrated and expected results. An “I’ll know it when I see it” approach seems especially prone to mistakes.

Some exploratory testing “gurus” have attempted to appropriate the term “testing” to mean only the type of testing they do, without explicitly defined inputs/actions and expected results. In addition, they have attempted to relegate what has essentially always been called “testing” to the presumably pejorative term “checking.” I encourage everyone to reject this self-serving, specious ploy.

Please recognize the hyperbole of the article title, "To ensure success, follow software testing concepts." These four concepts are necessary but not sufficient for effective testing. Skipping any of them will impede test effectiveness.

Robin hit the magic word dead on: "Confidence." Until we reach the Holy Grail of fault-free source code (and many assume that's impossible) true confidence cannot be obtained. As Lawrence Paulson put it in an article for the Association for Computing Machinery: "The software development industry claims it is simply too difficult to build correct software. But such a position looks increasingly absurd." Readers might want to check out this White Paper published by the Information and Privacy Commission of the Government of Ontario. https://www.ipc.on.ca/images/Resources/pbd-fault-free-software.pdf

It certainly makes sense that there should be an expected vs. actual results check. Like Tim says, however, there are expected results that don't line up with reality, and conditions that intermittently make the actual results and the expected results not line up (as well as aspects of "Expected" that we actually hadn't considered).


@RobinGoldsmith -

"Probably since the beginning of time, the purpose of testing has been to give confidence something works. Such confidence comes from determining that it does what it is supposed to do and does not do what it’s not supposed to do, which in turn is based on comparing actual to expected results."

Oh, how interesting! Probably, you'd be surprised to find out that one of the highly recognized classics disagrees with you.

From "The Art of Software Testing" by Glenford Myers, 1979
Chapter 2: The Psychology and Economics of Program Testing

One of the primary causes of poor program testing is the fact that most programmers begin with a false definition of the term. They might say:
• “Testing is the process of demonstrating that errors are not present.”
or
• “The purpose of testing is to show that a program performs its intended functions correctly.”
or
• “Testing is the process of establishing confidence that a program does what it is supposed to do.”


These definitions are upside-down.

When you test a program, you want to add some value to it. Adding value through testing means raising the quality or reliability of the program. Raising the reliability of the program means finding and removing errors.
Therefore, don’t test a program to show that it works; rather, you should start with the assumption that the program contains errors (a valid assumption for almost any program) and then test the program to find as many of the errors as possible.

Thus, a more appropriate definition is this:

Testing is the process of executing a program with the intent of finding errors.
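Myers' error-seeking definition can be illustrated with a test that deliberately probes boundaries and malformed input rather than confirming one happy path. The parse_percentage function below is a hypothetical example, not from the book or the article:

```python
def parse_percentage(s):
    # Hypothetical function under test: returns an int in 0..100,
    # or raises ValueError (int() itself rejects malformed strings).
    value = int(s)
    if not 0 <= value <= 100:
        raise ValueError(f"percentage out of range: {value}")
    return value

def is_rejected(s):
    # A check written with the intent of finding errors:
    # it passes only if the bad input is actually rejected.
    try:
        parse_percentage(s)
    except ValueError:
        return True
    return False

# Boundary values should be accepted...
assert parse_percentage("0") == 0
assert parse_percentage("100") == 100
# ...while just-outside and malformed inputs should be rejected.
for bad in ["101", "-1", "abc", "", "2.5"]:
    assert is_rejected(bad), f"accepted bad input: {bad!r}"
```

Myers' orientation is baked into the case selection: the test assumes errors exist at the edges and goes looking for them, rather than demonstrating that one typical input works.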
