Application testing tools combine human judgment with machine efficiency, eliminating repetition, speeding up the work itself and automatically creating reports and summaries as people perform the work. Many of these tools provide a paper trail for determining what has been done, what needs to be done and what should be prioritized next.
Determining a need for application testing tools is like determining any need for your enterprise: assess the level of pain the current problems cause, then weigh the cost of buying and running the software against the cost of the manual labor it replaces. There are three main categories of software testing tools: automation, bug tracking and coverage. This article discusses the problems these tools solve, to help you decide whether those problems matter to your organization and which tools might be worth implementing now.
Note that while software testing tools accelerate or aid the work, they do not perform actual testing in and of themselves. Testing is performed by people who are looking to gather information from the product -- many of whom do so with the tools described below. The team uses that information to determine what to do next: go live with known issues, fix the issues, change the expected behavior or take another course of action.
Do you need automation?
Several of the teams we've worked with in the past have found themselves with a six-week test/fix/retest cycle. During that time, the technical staff was producing no new features. With three releases a year, the technical staff was testing 18 weeks -- more than a third of the year. Long retest cycles make rolling out experiments essentially impossible. Test automation is a natural fix: have the computer run automated checks, at least overnight, and you could release every day.
At the user interface (UI) level, whether in a web browser, mobile browser or native mobile app, there are many reasons to use automated tools for software testing. Automated testing can run a small set of checks frequently -- sometimes as often as every hour -- building the system, checking whether any major path of functionality fails and emailing the team on failure. This tightens the feedback cycle, so programmers who introduce a major bug can find and fix it the same day. Having these smoke tests in place can reduce the effort testers spend on routine checks, add confidence and vastly reduce the cost of a test cycle without requiring years of automation work.
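The shape of such a smoke suite can be sketched in a few lines. This is a minimal, hypothetical illustration: the check functions here are stand-ins for real UI steps (for example, Selenium scripts), and the function names are invented for the example.

```python
# Sketch of a smoke-check runner. The check functions are hypothetical
# stand-ins for real UI checks against major paths of functionality.
def check_login():
    return True  # stand-in: would drive the login page and assert success

def check_search():
    return True  # stand-in: would run a search and assert results appear

def check_checkout():
    return True  # stand-in: would walk through a purchase

SMOKE_CHECKS = [check_login, check_search, check_checkout]

def run_smoke_suite():
    """Run every check; return the names of the checks that failed."""
    failures = []
    for check in SMOKE_CHECKS:
        try:
            if not check():
                failures.append(check.__name__)
        except Exception:
            failures.append(check.__name__)
    return failures

failures = run_smoke_suite()
if failures:
    # In a real pipeline, this is where the team would be emailed.
    print("SMOKE FAILED:", ", ".join(failures))
else:
    print("smoke suite passed")
```

A scheduler (cron, a CI server) would run this after every build; the email-on-failure step is where the tight feedback loop comes from.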
With mobile software testing, there are often more combinations of operating systems, operating system versions, hardware platforms and screen sizes than anyone will ever have time to test. Automated mobile testing relieves some of that pain, because a single test can be run on multiple real or emulated devices fairly quickly.
Once the automated checks are running at the GUI level, teams often find a different problem: their tests find too many bugs. When software breaks that often, it's a sign the team needs automated unit tests -- very small, technical tests at the code level that programmers can put in place -- or tests just below the UI at the API level. Programmers who run a unit test suite before check-in can prevent defects from escaping to the build.
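For readers unfamiliar with the term, a unit test looks something like the following sketch, using Python's built-in unittest module. The function under test, apply_discount, is a hypothetical example, not from any particular codebase.

```python
# A minimal unit test (sketch): a small, fast check at the code level
# that a programmer can run before check-in.
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(50.00, 10), 45.00)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Because tests like these run in milliseconds, a programmer can run the whole suite before every check-in, which is what keeps regressions out of the build.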
The big problems test automation addresses are compressing test time, finding defects faster and, in the case of unit tests, preventing regressions, where a feature that worked a day ago fails after a new check-in. If the product tends to fail in ways that are different and unpredictable, or the UI is undergoing a massive change, checking the same things may have limited value. For example, a new UI that adds required buttons will break the test suite simply because the existing scripts never knew to check those buttons. If the success factors are less about functionality and more about usability -- if the product needs to be viral -- then focusing on test automation might not be the right approach. In those areas, direct interaction with humans matters more than automation.
Do you need bug tracking software?
There are projects that quickly grow to a dozen bugs, then a hundred, then a thousand and so on. At that size, sticky notes and Excel spreadsheets won't do. There are also projects where bug fixes land in different branches, or need to be tested on very specific platforms. For example, when projects are supported in a browser like Internet Explorer, support for an earlier version of the browser will eventually end. If bugs are created in a bug tracker, support staff can search for and find issues when customers call in, and when the team stops supporting the earlier browser version, the test manager can run a simple search for bugs that existed only in that version.
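That kind of search is trivial once bugs are structured records rather than sticky notes. The sketch below uses invented bug records and field names to show the idea; a real tracker exposes the same filtering through its search UI or query language.

```python
# Sketch: the kind of structured query a bug tracker makes trivial.
# The bug records and their fields are hypothetical.
bugs = [
    {"id": 101, "title": "Menu overlaps logo",   "browser": "IE",     "version": 9,  "status": "open"},
    {"id": 102, "title": "Checkout total wrong", "browser": "Chrome", "version": 88, "status": "open"},
    {"id": 103, "title": "Tooltip missing",      "browser": "IE",     "version": 9,  "status": "fixed"},
]

def bugs_only_in(browser, version, records):
    """Find bugs filed against one specific browser version."""
    return [b for b in records
            if b["browser"] == browser and b["version"] == version]

# Bugs that can be retired when IE 9 support ends.
ie9_bugs = bugs_only_in("IE", 9, bugs)
print([b["id"] for b in ie9_bugs])  # -> [101, 103]
```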
Those are some incredibly powerful features. Bug tracking software solves the problem of tracking, searching for and structuring bugs, while making sure required fields are filled in. Many organizations find it valuable to keep tabs on areas like severity, priority, target release and time to fix. Additionally, bug trackers allow organizations to see patterns of issues over time. By examining the descriptions of bugs, or seeing which files are modified regularly -- sometimes referred to as application heat -- programmers and testers can get a sense for where problem areas are, or might be.
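Spotting that "heat" can be as simple as counting which files bug-fix commits touch. The file names and the modification log below are invented for illustration.

```python
# Sketch of "application heat": counting which files bug fixes touch most.
from collections import Counter

# Hypothetical log of files modified by bug-fix commits.
modified = ["checkout.py", "cart.py", "checkout.py",
            "auth.py", "checkout.py", "cart.py"]

heat = Counter(modified)
print(heat.most_common(2))  # -> [('checkout.py', 3), ('cart.py', 2)]
```

Files that keep turning up at the top of the list are candidates for extra testing attention, or for refactoring.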
Organizations tend to add bug tracking tools when the cost of tracking manually exceeds the cost of installing and running the tool. To see if your enterprise needs bug tracking software, look to the decisions the team is making (e.g., Are we fixing the right bug?), the existing cost to maintain the bug spreadsheet, the time lost searching for bugs, and cases where a known bug is not fixed because, for example, a sticky note fell to the floor and was picked up by a cleaning crew. Hey, it has happened.
Do you need coverage tools?
Some companies implement continuous integration, unit tests and user interface tests. With a robust testing environment like this, the vast majority of problems can be found before an app hits production -- many of them through unit tests alone. Programmers use code coverage tools not for the coverage percentages they report, but to find out what code was not covered by unit tests, so they can add unit tests over time. This allows the unit tests to find more bugs, which means higher first-time quality and fewer retest cycles.
Coverage is generally split into two areas: code coverage, or what lines of code the unit tests exercise, and application coverage, in which customer-facing tests are checked to see if they hit all the requirements. By writing a unit test for every function and for each path through its options, it is possible to reach a high percentage of code coverage. Code coverage tools analyze what is missing, which can help programmers add tests or find new, unconsidered paths through the software. It's also possible to track whether all the code is exercised during customer-facing testing, to see what lines of code have not run, and to change the test strategy to address them. Setting this up can be expensive, but it is particularly popular for avionics and medical systems, where code that is not tested could be life-critical.
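What a code coverage tool actually measures can be demonstrated in miniature. The sketch below uses Python's built-in tracing hook to record which lines of a hypothetical function ran; real tools such as coverage.py do the same job far more robustly and report it as annotated source and percentages.

```python
# Sketch of what a code coverage tool measures: which lines of a
# function actually ran under a given set of tests.
import sys

def shipping_cost(weight, express):
    cost = weight * 2.0
    if express:
        cost += 10.0    # this line runs only when express is True
    return cost

def trace_lines(func, *args):
    """Run func, recording which of its lines executed (relative numbers)."""
    executed = set()
    code = func.__code__
    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

# A test run that never uses express shipping misses the express line;
# adding a second run with express=True closes the gap.
covered = trace_lines(shipping_cost, 3.0, False)
covered |= trace_lines(shipping_cost, 3.0, True)
```

The missing-line report is the useful part: it tells the programmer exactly which test to write next.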
Application coverage asks whether all features in the system were tested. It is possible to track this manually, with a traceability matrix that lists every requirement and the tests that exercise it. Those tests could be stored in a wiki or in Word documents. The pain happens when an auditor asks whether the tests have been run, by whom, when and against what version of the code. Test case management tools answer those questions: testers track what they do in the tool and, at the end of each release, the organization can determine whether coverage is high enough, whether the compiled test cases have been run and so on.
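At its core, a traceability matrix is just a mapping from requirements to tests, and the interesting query is which requirements have no tests at all. The requirement and test names below are invented for the example.

```python
# Sketch of a traceability matrix: requirements mapped to the tests
# that exercise them. Names are hypothetical.
matrix = {
    "REQ-1 user can log in":         ["test_login_ok", "test_login_bad_password"],
    "REQ-2 user can reset password": ["test_reset_email_sent"],
    "REQ-3 admin can export report": [],   # no test yet -- a coverage gap
}

def uncovered_requirements(m):
    """Return the requirements that no test exercises."""
    return [req for req, tests in m.items() if not tests]

print(uncovered_requirements(matrix))  # -> ['REQ-3 admin can export report']
```

A test case management tool maintains the same mapping, plus who ran each test, when, and against which build -- the answers the auditor wants.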
Like other tools, the main question is how much pain is the team experiencing from the problem, and what is the potential gain. In the case of test case management, the pain might be losing customers -- both existing and potential -- or a finding from an internal audit. The gain might be reducing risk -- but beware. Because software is a complex system, there is rarely one root cause. Coverage tools can improve the chance that a problem is found or, at least, that management has the information it needs to make an informed decision about how much testing is good enough.
Monitoring tools can provide clues about what types of coverage are missing. Each error or resource problem discovered in production points to something the team may not have thought about before the software was shipped.
If management doesn't have the tools to make the decision on what to test and how much, the decisions are made ad hoc by technical staff, which is not optimal. Coverage tools can add transparency to help guide decisions.