Coverage tools, when run with the application under test, tell you how much code is covered by the executed test cases. These tools perform runtime analysis and report statistics such as line coverage, branch coverage and statement coverage for the given set of tests executed.
How do they work?
The coverage tools first instrument the DLLs and executables of the application under test. Instrumentation is the process of inserting additional code into the compiled program for the purpose of collecting measurement data while the program is running.
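Real tools instrument compiled binaries, but the idea can be sketched in a few lines of Python using the interpreter's tracing hook as a simplified stand-in for binary instrumentation (the `classify` function here is purely illustrative):

```python
import sys

def trace_coverage(func, *args):
    """Run func under a line tracer, recording which of its lines execute."""
    covered = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # Record the line offset relative to the function's def line.
            covered.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return covered

def classify(n):
    if n > 0:                  # offset 1
        return "positive"      # offset 2
    return "non-positive"      # offset 3

# Only the branch actually taken is marked as covered:
print(sorted(trace_coverage(classify, 5)))   # → [1, 2]
```

Running `classify` with a negative argument instead would cover offsets 1 and 3, showing how different use cases light up different lines.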
Running the use cases (test cases) on the application
Once the instrumentation process is over, the coverage tool will bring up the application under test. The next step is to execute all the different use cases on the application under test; these use cases can be either automated or manual. While the user runs the different use cases, the coverage tool analyzes in the background which parts of the application code are covered by them.
Getting the statistics
Once the above step is over, the application needs to be closed. The coverage tool will then compile the different statistics resulting from executing the given tests on the application.
Coverage tools can give different views of the coverage for analysis: both high-level summaries and detailed line-by-line information, for single runs as well as for multiple runs. If the use cases are run multiple times, the tool can merge the different runs and produce a consolidated coverage report.
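Merging runs amounts to combining per-line results — the union of covered lines, with hit counts summed. A minimal sketch, assuming each run is represented as a mapping from line number to hit count (the run data below is hypothetical):

```python
from collections import Counter

def merge_runs(*runs):
    """Combine per-line hit counts from several runs into one report.
    Each run is a dict mapping line number -> times that line executed."""
    total = Counter()
    for run in runs:
        total.update(run)  # Counter.update adds counts rather than replacing
    return dict(total)

run1 = {10: 3, 11: 3, 14: 0}   # hypothetical counts from a manual session
run2 = {10: 1, 12: 2, 14: 0}   # hypothetical counts from an automated session

merged = merge_runs(run1, run2)
covered = {line for line, hits in merged.items() if hits > 0}
print(merged)    # line 10 hit 4 times in total; line 14 still never executed
print(covered)   # lines exercised by at least one run
```

A line counts as covered in the consolidated output if any run executed it, which is why merging multiple sessions can only increase the reported coverage.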
Where to use coverage tools?
Coverage tools can instrument and report statistics for .NET, Java, Visual C/C++ and Visual Basic applications.
Why use coverage tools?
The advantages of using the coverage tools in the project include the following:
- Identifying dead code: After running all the defined use cases, if the coverage output shows that some functions are never called, analysis can determine whether the code is untouched because no use case requires it, or whether it is dead code (i.e., not required).
- Identifying missing test cases: Coverage output is also useful for identifying additional tests (e.g., for exceptional cases) that were missed earlier, which analysis of the coverage report reveals.
- Function and line coverage: Coverage output reports how many functions and how many lines have been covered in the application code. It also gives a "hit" count for each function, i.e. how many times a particular method has been called, which shows which code is accessed most often.
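Per-function hit counts like these can be gathered with a simple counting wrapper. A sketch — the decorator approach and the function names are illustrative, not how a binary-level tool actually works:

```python
from collections import Counter
from functools import wraps

hits = Counter()  # function name -> number of calls observed

def counted(func):
    """Decorator that records how many times func is called."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        hits[func.__name__] += 1
        return func(*args, **kwargs)
    return wrapper

@counted
def load_settings():
    return {}

@counted
def save_settings(data):
    return True

# Simulate a session where settings are read more often than written:
for _ in range(3):
    load_settings()
save_settings({})

print(hits.most_common())  # → [('load_settings', 3), ('save_settings', 1)]
```

Sorting by hit count immediately shows which code is accessed most, which is the extra information the report described above provides.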
What should be the coverage target?
Ideally the entire code base should be covered by one or more use cases, i.e. one should achieve 100% code coverage. But because some exceptional input conditions cannot be simulated, the entire code base cannot always be reached.
However, 100% code coverage does not mean that all the code is 100% correct. Coverage only indicates that no existing code goes unexercised; there could still be functionality that was missed during implementation. So a reasonable target is 100% function coverage and greater than 80% line coverage. Our practical experience, described below, illustrates that these coverage targets are achievable.
We have used the coverage tool from Rational, named Rational PureCoverage, as part of testing many of the MR applications. The figure below shows the coverage report for the Control Parameter Editor (CPE) module developed as part of the SCSD project.
The achieved coverage for the CPE module when all the use cases are executed is as follows:
- Function coverage -- 100%
- Line coverage -- 82%
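Percentages like these are simple ratios of covered items to total items. A trivial sketch — the item counts below are hypothetical, chosen only to reproduce the reported percentages:

```python
def coverage_pct(covered, total):
    """Coverage as a percentage, rounded to the nearest whole percent."""
    return round(100 * covered / total)

# Hypothetical counts consistent with the CPE figures:
print(coverage_pct(120, 120))    # function coverage → 100
print(coverage_pct(4920, 6000))  # line coverage → 82
```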
With the above achieved coverage, we concluded that all the methods reachable with the possible inputs were reached. With 100% function coverage, we concluded that the test cases are complete w.r.t. the code base. As indicated earlier, this coverage output shows only that there is no dead code and that no test cases required to cover the existing code are missing.