
Four tips for effective software testing


Follow software testing guidelines to avoid oversights


The fourth key to effective software testing deals with a common experience: letting things "fall through the cracks." The simple, but not always easy, way to reduce such oversights is to follow software testing guidelines that help the tester be more thorough. Such guidelines include checklists and templates meant to guide development or testing.

Consider the difference between going to the supermarket with and without a shopping list, an everyday analogue of a testing guideline. Without a list, you tend to spend more yet come home without some of the groceries you needed. With the list, you get what you need and spend less, because you're less likely to make impulse buys.

Software testing guidelines also help detect omissions that exploratory testers are much more likely to miss. By definition, exploratory testing is guided by executing the software as built. That tends to channel one's thinking in line with what's been built, which easily can lead away from realizing what hasn't been built that should have been. Guidelines can prompt attention to items that following the product as built tends to obscure.
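To make the idea concrete, here is a minimal, hypothetical sketch (not from the article) of a guideline expressed as data that a test harness walks through, so an item that was never built fails loudly instead of slipping through. The behavior names and the COVERAGE mapping are invented for illustration.

```python
# Minimal sketch of a checklist-style guideline driving tests (all names invented).
# Each required behavior comes from the guideline, not from the code as built,
# so behaviors that were never implemented still show up as failures.
import pytest

REQUIRED_BEHAVIORS = [           # the guideline, maintained outside the test code
    "rejects an order with no line items",
    "applies tax to the subtotal",
    "handles a currency with no decimal places",
    "records a failed payment for support follow-up",
]

COVERAGE = {                     # which checklist items current tests exercise
    "rejects an order with no line items": True,
    "applies tax to the subtotal": True,
    # the last two items were overlooked; the checklist surfaces that
}

@pytest.mark.parametrize("behavior", REQUIRED_BEHAVIORS)
def test_guideline_item_is_covered(behavior):
    assert COVERAGE.get(behavior, False), f"Nothing tests: {behavior}"
```

Run with pytest, the two overlooked checklist items fail immediately, which is the point: the guideline, not the built software, decides what gets checked.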


Join the conversation


Do you feel more confident about testing software now that you've read the four tips for increasing effectiveness?
I call BS. Software testing guidelines... are you kidding me? Templates? There isn't a template made that guides a tester to write better test cases or design better tests. And guidelines will only ever be a starting point. Not all software is created equal; the needs and risks involved greatly affect what needs to be tested and how it needs to be tested.
Good article. It gives the basic principles that can easily be overlooked.
The part that mentions "realizing what hasn't been built that should have been," for me, usually comes before testing. When someone first requests work from our team, that's when I try to discover what their goals are, and if I come up with any ideas about alternative solutions that may be easier and/or better, I bring them up with the business owner.

Occasionally, though, those ideas don't come to me until I'm testing. I'm not opposed to using guidelines, but I don't think that they would necessarily help for this particular use. 


Please see my comment to the main article clarifying that it's independent guidelines, not software testing guidelines, that help prevent things from falling through the cracks. While they sometimes spot big oversights, more commonly they catch smaller things that nonetheless would have caused problems had they been missed.

A key point about these four tips for increasing test effectiveness is that they pertain to fundamental concepts that easily and often reduce effectiveness when they are taken for granted by both beginning and experienced testers. The impact is exacerbated because the effects seldom are recognized.
Just to add onto Robin's comment: when you consider his recent clarification, it changes a lot of the meaning of the article, in my opinion.
Overall, the most important thing we can do as testers is check our biases at the door, or at least as much as we possibly can. Some of the ideas of expected vs. actual results can help a bit in that regard.
I think @MichaelLarsen is right; we need to be aware of our biases and work diligently to leave them behind. For example, the article points out that exploratory testing "tends to channel one's thinking in line with what's been built, which easily can lead away from realizing what hasn't been built that should have been." A less exploratory approach, by contrast, may channel one's thinking in line with what was specified, which can easily lead away from realizing what has been built that shouldn't have been.
I feel more confident that there's still a lot of work to do educating people about what software testing really is. And more confident in my own job prospects, since as a practice lead I'm often called in to help recover testing.
What this article fails to do is explain where these software testing guidelines come from. Oh, "checklists" and "lists."

In my experience, checklists and lists are a little like the laundry: constantly piling up, constantly being done, yet never finished. I'm afraid the terms here are too similar to really show what the author means. Frankly, when I see templates in use, I immediately suspect weak testing is in progress. On highly scripted projects (which lean on templates), 80 percent or more of the bugs I've found came from away from the script, i.e., ideas I had while executing a "checklist," if you will, that were inspired by it but demonstrated something its intent hadn't captured.

The fallacy here is that these prewritten, or long-kept, test cases are some kind of god that will save us from all testing problems. But alas, they are false gods. They may help to some small degree, but they cannot save you from weak thinking about software.
I think a shopping list works for some people, but it presumes people never have that aha moment and realize, "I forgot to put milk and butter on the list." People often go off list, and only the most disciplined will ever stick to their lists.
@Veretax, thank you for your comments on each of the sections and discussions of this article. I’m answering this one first because the issue you raise is different from those in the other article and discussion sections and warrants being addressed right away. I’ll address your other comments as I’m able to get to them.

A lot of the value that TechTarget.com provides comes from their very capable editors, who turn copy from authors like me into the finished articles you read online. Ninety-nine percent of the changes editors make improve the articles. Every once in a while, though, an editorial change has an undesired effect.

My term is “independent guidelines,” which has a considerably different meaning from “software testing guidelines.” Independent guidelines generally relate to content, as opposed to relating to testing. They usually exist already for purposes other than guiding testing, so you look for them in the content domain space. Sometimes they are templates or checklists, or just lists of important things that the business needs to know to do whatever it does.

As a by-product, they can guide testing to help spot things that have been overlooked. That doesn’t mean there can’t be additional things not currently identified in the independent guideline. If things change, or new or different things are identified, update the independent guideline.
I think it may be important for the testing team to develop and manage their own list of areas ripe for testing that may have been overlooked. This can be drawn from marketing materials, development meetings like standups, feature reviews, and discussions with support. All these sources of information can be used to draw up a list of testing topics that may not be explicitly addressed by feature/requirements documents.
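One way to keep such a list actionable, sketched below under purely hypothetical names, is to store it as data along with where each item came from, and periodically diff it against whatever areas the current test suite claims to cover, so neglected areas get reported rather than forgotten.

```python
# Hypothetical sketch: a team-maintained list of testing areas (gathered from
# marketing material, standups, feature reviews and support) checked against
# the areas the current test suite is tagged as covering. All names invented.

TEST_AREAS = {
    "upgrade from the previous release": "support tickets",
    "concurrent editing":                "feature review",
    "trial-expiry messaging":            "marketing material",
    "bulk import limits":                "standup discussion",
}

COVERED_AREAS = {"concurrent editing", "bulk import limits"}  # tags on existing tests

def untested_areas(areas, covered):
    """Return list items that no current test claims to cover, with their source."""
    return {area: source for area, source in areas.items() if area not in covered}

if __name__ == "__main__":
    for area, source in untested_areas(TEST_AREAS, COVERED_AREAS).items():
        print(f"No test covers '{area}' (raised in {source})")
```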
Hello,
It looks like I owe some apologies on this one. The phrase "Software testing guidelines" was added to make this page more search-engine friendly. Out of context, "Independent guidelines" could refer to guidelines for education, healthcare, manufacturing, or a number of other topics that are not related directly to software testing. My sincere apologies if any meaning was lost or altered.
Guidelines and templates are not necessarily a bad thing. I use them, myself. I've created them from experience, because when testing, there are just so many things to keep in mind. Obviously, you are going to do a quick smoke test and you are going to verify the functionality. But there's so much more - inefficient database interactions and environmental issues (tiny differences in QA environment vs production environment) are things that I have to admit have slipped by us more than once and gone on to cause headaches in production. So they go on the checklist. Live and learn.
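The environment-differences item in particular lends itself to a small automated check. Below is a hedged sketch, with made-up setting names and values, that diffs QA and production configuration so "tiny differences" show up during the checklist run instead of in production.

```python
# Hypothetical sketch: flag QA/production configuration differences that are not
# on the list of settings expected to differ. All keys and values are invented.

QA_CONFIG   = {"db_pool_size": 5,  "cache_ttl_seconds": 60,  "payments_sandbox": True}
PROD_CONFIG = {"db_pool_size": 50, "cache_ttl_seconds": 300, "payments_sandbox": False}

EXPECTED_TO_DIFFER = {"db_pool_size", "payments_sandbox"}

def unexpected_drift(qa, prod, expected):
    """Return settings that differ between environments but are not expected to."""
    keys = set(qa) | set(prod)
    return {k: (qa.get(k), prod.get(k))
            for k in keys
            if qa.get(k) != prod.get(k) and k not in expected}

if __name__ == "__main__":
    for key, (qa_value, prod_value) in unexpected_drift(QA_CONFIG, PROD_CONFIG,
                                                        EXPECTED_TO_DIFFER).items():
        print(f"Check '{key}': QA={qa_value}, production={prod_value}")
```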
CarolBrands, I think that's an excellent suggestion and that is exactly what we do. I have my own personal list, and it comes from experience and having dealt with the consequences of missed problems in the past. I have been in QA on my team the longest, and so I share my experiences and my lists with the other testers as well, though they have no obligation to follow them exactly. I encourage them to come up with their own lists based on their own ideas and insights. 
Thanks for the update, Robin. I think that helps me better understand what you meant.
