Buyer's Guide

Choosing the best software testing tools for your business

A collection of articles that takes you from defining technology needs to purchasing options

Exploring the three major types of software testing tools

Application testing tools make enterprises' app development more efficient. Learn more about automation, coverage and bug tracking tools.

Software testing tools exist to help staff members conduct the most effective tests possible and do more with less. These tools eliminate repetitive operations -- replacing the human element -- and do what might not otherwise be possible, such as cataloging, searching and combining information in ways that are common in test and software development organizations. Application testing helps organizations find issues in their product before the customers do. The number of combinations one has to test for -- even in the most trivial of programs -- can be staggering. A pair of nested for loops, for example, can produce unique test cases numbering in the millions.

Software testing tools themselves do not perform actual testing. Humans test with attentive minds, as well as the ability to discern differences and interesting details based on the information they receive. Testing tools can be programmed to run a series of operations and check for expected results. In a skilled person's hand, these tools can extend the reach of the tester. In this feature we talk about three major categories of test tools: automation, bug tracking and coverage.

The distinction between quality assurance and software testing

Before covering the major categories of application testing tools, it is important to distinguish between quality assurance (QA) and testing, to give you a better idea of what these tools should and should not be doing. QA is building it right; testing ensures you built the right thing. QA means ensuring that the steps of a manufacturing process are followed correctly and in the right order to prevent problems, resulting in the same product every time. Testing is the mass inspection of the parts after they have gone through the manufacturing process. The two functions are distinct, and so are the tools used to perform them.

QA ensures that no code is created without a requirement; that all code is reviewed -- and approved -- before final testing can begin; and that the tests that will run are planned upfront and are actually run. The company defines its work process model and someone in a QA role either checks off each step, or, perhaps, audits after the fact to make sure the team performed each step and checked the right boxes.

If software QA tools make sure the product was built right, software testing tools help ensure that the team built the right product. Because each software change request is different from the others, software QA tends to fall short -- it can make sure that a requirements document exists, but not that the requirements were done well.

Application testing tools can help the software team determine the actual status of the software as it is built.


Automation

The most well-known kind of software application testing tool is automation, which attempts to replace human activities -- clicking and checking -- with a computer. The most common kind of test automation is driving the user interface, where a human records a series of actions and expected results. Two common kinds of user-interface automation are record/playback -- where an automated software testing tool records the interactions and then automates them, expecting the same results -- and keyword-driven -- where the user interface elements, such as text boxes and submit buttons, are referred to by name. Keyword-driven tests are often created in a programming language, but they do not have to be; they can resemble a spreadsheet with element identifiers, commands, inputs and expected results.
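The spreadsheet-style table described above can be sketched in a few lines of Python. This is illustrative only: the element names, commands and the FakeUI driver are invented stand-ins for whatever a real keyword-driven tool would provide.

```python
# Minimal sketch of a keyword-driven test runner. Each row names a UI
# element, a command and its data, like the spreadsheet the article
# describes. FakeUI stands in for a real UI driver.

class FakeUI:
    """Hypothetical UI driver; stores widget text in a dict."""
    def __init__(self):
        self.widgets = {"welcome_banner": "Welcome, jdoe"}

    def type_into(self, element, text):
        self.widgets[element] = text

    def click(self, element):
        pass  # a real driver would dispatch a click event here

    def read_text(self, element):
        return self.widgets.get(element, "")

def run_table(ui, steps):
    """Interpret each (element, command, input, expected) row."""
    failures = []
    for element, command, value, expected in steps:
        if command == "type":
            ui.type_into(element, value)
        elif command == "click":
            ui.click(element)
        elif command == "check":
            actual = ui.read_text(element)
            if actual != expected:
                failures.append((element, actual, expected))
    return failures

steps = [
    ("username_box",   "type",  "jdoe",   None),
    ("password_box",   "type",  "secret", None),
    ("submit_button",  "click", None,     None),
    ("welcome_banner", "check", None,     "Welcome, jdoe"),
]
failures = run_table(FakeUI(), steps)
```

The point of the pattern is that the table can be edited by non-programmers while the dispatch logic lives in one place.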

Nearly every program that runs in a browser now has a mobile counterpart. Because of this, mobile test tooling is quickly becoming as important, if not more so, than testing in a web browser. Sometimes this automation takes control of the mobile device by launching an app or mobile browser and performing some actions. Other times this testing happens just below the surface by working at the API level.

Automation tools perform a series of preplanned scenarios with expected results, and either check exact screen regions -- in record/playback -- or only what they are told to specifically check for -- in keyword-driven. A computer will never say "that looks odd," never explore or get inspired by one test to have a new idea. Nor will a computer note that a "failure" is actually a change in the requirements. Instead, the test automation will log a failure and a human will have to look at the false failure, analyze it, recognize that it is not a bug and "fix" the test. This creates a maintenance burden. Automated testing tools automate only the test execution and evaluation.

Another term for this kind of automation is something Michael Bolton and James Bach call checking, a decision rule that can be interpreted by an algorithm as pass or fail. Computers can do this kind of work, and do it well. Having check automation run at the code level -- unit tests -- or user interface level can vastly improve quality and catch obvious errors quickly before a human even looks at the software.
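A minimal example of a check at the code level, using Python's standard unittest module. The apply_discount function is a made-up example, not from the article; the point is that each assertion is a decision rule an algorithm can score as pass or fail.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: percent off, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class DiscountCheck(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(50.00, 10), 45.00)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)
```

Run with `python -m unittest`; a CI server can run the same checks on every build.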

Bug tracking

For very simple software, bug reports might be tracked with sticky notes or spreadsheets. But when the software is more complex, these become unwieldy, and companies need to turn to software designed for the task. Professional bug trackers typically record bug severity, priority, when the defect was discovered, exact reproduction steps, who fixed it and what build it was fixed in, along with searching and tagging mechanisms that simplify finding a defect. These tools don't just assist programmers and project managers; customer service staff and existing users can use them to find out whether an issue is known and scheduled for a fix, to escalate known issues and to enter new ones. Bug tracking tools can also help with workflow: bugs can be assigned to programmers, then to testers to recheck, then marked ready to deploy and, after the release, marked as deployed.
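The fields the article lists can be sketched as a simple record. The field names below are illustrative, not the schema of any particular bug tracking product.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BugReport:
    """Sketch of the data a professional bug tracker typically records."""
    title: str
    severity: str                  # e.g. "critical", "major", "minor"
    priority: int                  # 1 = fix first
    discovered_on: str             # date the defect was found
    reproduction_steps: list
    tags: list = field(default_factory=list)   # supports search/tagging
    fixed_by: Optional[str] = None
    fixed_in_build: Optional[str] = None
    status: str = "open"  # open -> assigned -> fixed -> rechecked -> deployed

bug = BugReport(
    title="Login fails with non-ASCII password",
    severity="major",
    priority=2,
    discovered_on="2017-03-01",
    reproduction_steps=["Open login page", "Enter a non-ASCII password", "Submit"],
    tags=["login", "unicode"],
)
```

The status field carries the workflow the article describes, from assignment through recheck to deployment.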


Coverage

When we discuss coverage in software testing, we are looking at two specific ideas.

The first area is code coverage, which focuses on the percentage of software that is exercised by tests. The most common type of code coverage is statement coverage, which is the percentage of statements that are run through during the test process -- manual, automated or both.
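The arithmetic of statement coverage is simple: statements executed divided by total statements. In the sketch below, the line numbers stand in for the output of a real tracer (tools such as coverage.py collect them automatically).

```python
# Every statement in a hypothetical module, by line number.
all_statements = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

# Statements the test run (manual, automated or both) actually executed.
executed = {1, 2, 3, 4, 5, 6, 7, 8}

coverage_pct = 100 * len(executed & all_statements) / len(all_statements)
print(f"statement coverage: {coverage_pct:.0f}%")  # statement coverage: 80%
```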

The second area, application coverage, looks at the test process from other directions -- typically, the percentage of the requirements that are "covered." One common application coverage tool is a traceability matrix -- a list of which tests cover which requirements. Typically, test case management software records all the planned tests and allows testers to mark that a test case "ran" for any given release, which allows management to determine what percentage of tests were "covered." This is a sort of quality assurance look at the test process -- a management control that should ensure each part of the application is covered.
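A traceability matrix can be as simple as a mapping from requirements to the tests that cover them. The requirement and test IDs below are made up for illustration.

```python
# Toy traceability matrix: requirement -> tests that cover it.
traceability = {
    "REQ-1 login":          ["TC-101", "TC-102"],
    "REQ-2 password reset": ["TC-110"],
    "REQ-3 audit log":      [],   # no test covers this yet
}

covered = [req for req, tests in traceability.items() if tests]
pct = 100 * len(covered) / len(traceability)
print(f"requirements covered: {pct:.0f}%")  # requirements covered: 67%
```

The uncovered entries are exactly what management wants surfaced: requirements no planned test exercises.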

Alone, each of these three categories of tools can help a software team manage issues and code changes. When they are combined, that team has a fairly robust suite of tools that can help with finding bugs, debugging the code and freeing up the team to think about areas that need to be tested.

Infrastructure and support

There is a set of testing tools that should be addressed but is too varied to fit under one category. Test automation assumes the latest version of the application is installed on the computer or web server, but the application still needs to be compiled and installed, the automation needs to be started, and someone needs to be told to check the results. All of these secondary tasks fall under support -- and they can all be automated. Continuous integration tools are support tools that notice a check-in of new code, perform a build, create a new virtual web server (or update a staging server), push the new code to the target machine, run the automation to exercise the program, examine the results, and email relevant team members about failures.
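The check-in-to-notification flow described above can be sketched as a pipeline of steps. Every function here is a stub with an invented name; a real CI server runs the equivalent commands on each check-in.

```python
def build():
    """Compile the new code (stub); returns a hypothetical artifact."""
    return {"ok": True, "artifact": "app-1.2.3.tar.gz"}

def deploy(built):
    """Push the artifact to a staging server (stub)."""
    return {"ok": built["ok"], "server": "staging-01"}

def run_tests(deployed):
    """Run the automation against the deployed build (stub)."""
    return {"ok": deployed["ok"], "failures": 0}

def notify(result):
    """Summarize results; a real pipeline would email the team here."""
    return "all green" if result["failures"] == 0 else "tests failed"

def on_checkin():
    """The chain a CI tool runs when it notices new code."""
    built = build()
    deployed = deploy(built)
    result = run_tests(deployed)
    return notify(result)
```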

Support also includes the tools that testers use to move faster or extend their reach. Software that generates random names, or test data in general, falls into this category, as does software that creates screen captures and videos. So do tools that record all of a tester's interactions with the application, simulators for mobile devices, and note-taking tools that blend into the background and pop up on command.
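A random test-data generator of the kind mentioned above can be written with the standard library alone. The name lists here are arbitrary samples chosen for illustration.

```python
import random

FIRST = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]

def random_name(rng=random):
    """Return a random 'First Last' string for use as test input."""
    return f"{rng.choice(FIRST)} {rng.choice(LAST)}"

# Generate a handful of names to feed into a form or API under test.
names = [random_name() for _ in range(3)]
```

Real test-data tools add locale-aware names, addresses and edge cases (long strings, non-ASCII characters), but the principle is the same.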

Monitoring also plays a large role in supporting software testers. These tools provide real-time information about what is happening in production environments, notification when problems occur and guidance on how to improve testing and development in the areas where customers are discovering problems.

Next Steps

Consider Agile software testing principles when planning your next tests.

How do you tackle system integration testing? It's simple

Here are four testing tips to ensure success.

This was last published in March 2017





Join the conversation



What kind of application testing tools does your company need?
Well, it's not exactly a "tool", but the article mentions infrastructure, and we are sorely lacking in that area. We do have a full QA environment for nearly everything that we work on, which is a vast improvement from the past. However, we have no staging/UAT/pre-production environment that mirrors production more closely, and that has caused us problems in the past. 
Hrms, the article went a slightly different direction, so I'll propose some other types of tools that may not fall into the four categories here.

Debugging Tools: IDEs like Eclipse, RubyMine, or Visual Studio
Reverse Proxies: Charles, Fiddler, JMeter
Performance Testing: Tools like JMeter, LoadRunner, Grinder, etc.
Developer Tools: Chrome Dev Tools, IE Dev Tools, Firebug, SysInternals: Process Monitor, Process Explorer, etc.

Note that, among the most important tools listed in this post, CI (deploy) tools and coverage may be very important for both devs and testers.
Thank you for your comments, abuell and Veretax. There are many other avenues we could have explored with this and yes, dozens of other possible tools. For now, to keep this a bounded effort, we chose the areas we did and the tools. There is always the option of future series with more tools and different specialties :). 
I'd be remiss if I didn't say: Don't forget about security testing! This site has a lot of resources in that area, not to mention general application quality/testing content. Here are direct links to some pieces I've written on the subject of application security testing as well:
Why yes, add accessibility testing! "Checker"-class tools like the WAVE toolbar tend to catch only obvious problems, but they're fast and effective. For complete accessibility testing, combine skills and tools, and learn to operate a screen reader.
I believe the more common (though still not necessarily correct) expression is that Quality Assurance concerns building the right thing whereas Testing is confirming it was built right. Also, I hope the coming articles distinguish functional from structural test automation and distinguish both of them from the types of tools that developers use for test-first development.
I'm not sure this is true: "QA ensures that no code is created without a requirement; that all code is reviewed -- and approved -- before final testing can begin; and that the tests that will run are planned upfront and are actually run. "

I do not believe that QA -- the process type of QA people -- looks that deeply at code. If they do, they may be looking at outputs from style automation checkers like PMD or StyleCop, for example, but they are often more concerned with how check-ins are made and commented, and whether processes for building software are followed in building the thing 'right'.

I don't believe you see that kind of sign-off happening in many companies, and if it is happening, it's probably not with QA professionals. (I see this sort of QA person more as an auditor in an environment with CMMI or the like implemented.)

"Two common kinds of user-interface automation are record/playback -- where a tool records the interactions and then automates them, expecting the same results -- and keyword-driven -- where the user interface elements, such as text boxes and submit buttons, are referred to by name."

Correction. It is keyword driven because of action keywords: Type, Click, etc.

Referencing by logical name is a benefit of having a GUI mapping component.

Overall, I classify GUI automation in the following way.


Seems like my comment was too long and the rest was truncated.

Front-End Test Automation Practices – Data-Driven Framework:

Front-End Test Automation Practices – Keyword-Driven Framework:

Front-End Test Automation Practices – Model-based Hybrid Keyword/Data Driven Framework: