Requirements-based function testing is a powerful and effective testing approach that can significantly reduce the number of undetected software defects released into production. David W. Johnson explains how it works.
The IEEE standards describe evaluating the functionality of the system against a required condition to determine the readiness of the system under test. The required condition is often described as a business or functional requirement, and the analysis process as testing. The IEEE provides these definitions for testing:
- The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component (IEEE Std 610.12).
- The process of analyzing a software item to detect the difference between existing and required conditions and to evaluate the features of the software items (IEEE Std 829).
The first phase of testing that your testing organization owns is usually functional testing -- often referred to as function test.
What is function test?
The objective of function test is to measure the quality of the functional (business) components of the system. Tests verify that the system behaves correctly from the user/business perspective and functions according to the requirements, models, storyboards or any other design paradigm used to specify the application.
The function test must determine whether each component or business event performs in accordance with the specifications, responds correctly to all conditions that may be presented by incoming events/data, and transports data correctly from one business event to the next (including data stores). It must also make sure business events are initiated in the order required to meet the business objectives of the system.
What is a requirement?
A requirement is a capability or function that must be delivered by a system component or components. A functional requirement is a specific business need or behavior as seen by an external user of the system.
Test cycle for requirements-based function testing
An effective test cycle must have a defined set of processes and deliverables. The primary processes/deliverables for requirements-based function test are as follows:
- Test planning
- Partitioning/functional decomposition
- Requirements definition/verification
- Test case design
- Traceability (Traceability Matrix)
- Test case execution
- Defect management
- Coverage analysis
Which processes and deliverables apply to a given testing situation depends on available resources (people, source materials, time, etc.) and the mandate of the test organization.
Test planning
During planning the test lead, with assistance from the test team, defines the scope, schedule and deliverables for the function test cycle. The test lead delivers a test plan (document) and a test schedule (work plan), both of which often undergo several revisions during the testing cycle.
Partitioning/functional decomposition
Functional decomposition of a system (or partitioning) is the breakdown of a system into its functional components or functional areas. Another group in the organization may take responsibility for the functional decomposition (or model) of the system, but the testing organization should still review this deliverable for completeness before accepting it into the test organization.
If the functional decomposition or partitions have not been defined or are deemed insufficient, then the testing organization will have to take responsibility for creating and maintaining the partitions.
There are several commercial, shareware and freeware products available that aid in the functional decomposition of a system and the formal delivery of the functional partitions.
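Where formal tooling is not available, even a small script can record and report the decomposition. The following is a minimal sketch of representing functional partitions as a tree; the system and partition names are hypothetical examples, not part of any standard.

```python
class Partition:
    """A functional area of the system; may contain sub-partitions."""

    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, name):
        """Create a sub-partition and return it for further nesting."""
        child = Partition(name)
        self.children.append(child)
        return child

    def walk(self, depth=0):
        """Yield (depth, name) pairs for every partition, top-down."""
        yield depth, self.name
        for child in self.children:
            yield from child.walk(depth + 1)

# Hypothetical decomposition of an order-entry system.
system = Partition("Order Entry")
orders = system.add("Order Management")
orders.add("Create Order")
orders.add("Cancel Order")
system.add("Customer Management")

for depth, name in system.walk():
    print("  " * depth + name)
```

Requirements and test cases can then be itemized under the partition names this tree produces.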
Requirements definition is often the weakest deliverable in the software development process. Many development shops go directly from software concept to functional specification or worse -- from software concept to code -- without any preliminary software design deliverables. The testing organization needs these requirements to proceed with function testing. That means if the development team is not going to deliver the requirements for verification by the testing team, then the test team must create its own set of testable requirements. These requirements need to be itemized under the appropriate functional partition.
Test case design
The test designer/tester designs and implements test cases to make sure the product performs in accordance with the requirements. These test cases need to be itemized under the appropriate functional partition and mapped or traced to the requirements being tested.
Traceability (Traceability Matrix)
Test cases need to be traced/mapped back to the appropriate requirement. Once all aspects of a requirement have been tested by one or more test cases, then the test design activity for that requirement can be considered complete.
A common mistake made during this process is to map every test case that exercises a particular requirement to that requirement. In fact, only those test cases that are specifically created to test a requirement should be traced to that requirement. This approach gives a much more accurate picture of the application when coverage analysis is done: failure of a test case does not mean failure of all the requirements exercised by (as opposed to tested by) that test case.
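The matrix itself can be as simple as a mapping from each requirement to the test cases designed for it. The sketch below, with hypothetical IDs, shows how such a mapping supports both checks described above: finding requirements whose test design is incomplete, and finding which requirement a failing test case actually tested.

```python
# Each requirement maps only to the test cases specifically designed to
# test it -- not to every case that happens to exercise it.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                      # test design not yet complete
}

def untraced_requirements(matrix):
    """Requirements for which test case design is not yet complete."""
    return [req for req, cases in matrix.items() if not cases]

def requirements_for(case, matrix):
    """The requirement(s) a given test case was designed to test."""
    return [req for req, cases in matrix.items() if case in cases]

print(untraced_requirements(traceability))
print(requirements_for("TC-002", traceability))
```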
Test case execution
As in all phases of testing, the appropriate set of test cases needs to be executed and the results recorded. Which test cases are to be executed should be defined within the context of the test plan and the current state of the application being tested. If the current state of the application does not support the testing of one or more requirements, then that testing should be deferred until the application reaches a state that justifies the expenditure of testing resources.
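Recording results, including deferrals, can be done with a simple status log per test case. This is a minimal sketch assuming three hypothetical statuses; real test-management tools track considerably more detail.

```python
results = {}

def record(test_case, status):
    """Record the outcome of one executed (or deferred) test case."""
    if status not in {"pass", "fail", "deferred"}:
        raise ValueError(f"unknown status: {status}")
    results[test_case] = status

record("TC-001", "pass")
record("TC-002", "fail")      # will be traced to a defect report
record("TC-003", "deferred")  # blocked by the current state of the application
```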
Defect management
Any defects detected during test execution need to be both recorded and managed by the testing organization. (See "The role of a software test manager.") During function testing each defect should be traced to the specific requirement or requirements that are not performing to specification.
Coverage analysis
During function test the testing organization should deliver a periodic progress report to the project team. The report provides coverage analysis of the requirements against test cases and outstanding defects. The objective of this analysis is to determine the percentage of requirements that are untested, performing to specification (executed successfully) and not performing to specification (defects).
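The three percentages in that report fall out directly from combining the traceability matrix with the execution results. The sketch below, using hypothetical IDs and statuses, classifies each requirement as untested, passing, or failing and prints the percentage breakdown.

```python
def classify(traced_cases, results):
    """Classify one requirement from the results of its traced test cases."""
    statuses = [results[tc] for tc in traced_cases if tc in results]
    if not statuses:
        return "untested"               # no traced case has been executed
    return "failing" if "fail" in statuses else "passing"

traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],
}
results = {"TC-001": "pass", "TC-002": "pass", "TC-003": "fail"}

summary = {req: classify(cases, results) for req, cases in traceability.items()}
total = len(summary)
for state in ("untested", "passing", "failing"):
    count = sum(1 for s in summary.values() if s == state)
    print(f"{state}: {count / total:.0%}")
```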
There are several commercial, shareware and freeware products available that can be used to expedite the creation of all these deliverables while streamlining the testing process.
Managing function tests
Function (integration) testing can be an overwhelming task for an inexperienced testing organization. To assure success at the test organization and project level, the scope of the testing effort needs to be rigorously defined and followed, and the definition of scope must be understood by both the test organization and the project team. If the scope of the testing effort is redefined, that change must be communicated. A realistic work plan with clear deliverables and dependencies needs to be drafted and updated whenever an event occurs that impacts it. The key to success is to manage the expectations of the testing team and the project team while clearly communicating the current status of the testing effort on an ongoing basis.
About the author: David W. Johnson is a senior computer systems analyst with over 20 years of experience in IT across several industries. He has played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments and support of business solutions. David has developed specific expertise over the past 10 years on implementing "Testware," including test strategies, test planning, test automation and test management solutions. You may contact David at DavidWJohnson@Eastlink.ca.
This was first published in July 2007