A continuous quality process ensures quality tasks are not only deployed across every stage of the software development life cycle (SDLC), but also ingrained into the team's workflow. It can be achieved by taking a policy-based approach that embeds automated policy monitoring "sensors" across the SDLC. This involves defining policies that capture the organization's expectations around quality and security, then leveraging automation as a sensor that checks first if the policies are applied correctly and second if they are achieving the expected results.
Rather than constantly incurring the costs of testing quality and security defects out of the software (auditing), organizations invest in establishing a system that helps the team build quality and security into the software.
With such an infrastructure in place, team productivity increases dramatically. By following clearly defined expectations for building quality and security into code, development is freed from the constant interruption of having to review, reproduce, and remediate defects reported by QA. Moreover, with so many defects being prevented, QA resources can be reduced or reallocated into tasks that deliver increased business value, such as performing a more extensive high-level "functional audit" of the application and helping the team monitor and improve its continuous quality process.
Reduce defects and debugging
Writing code without heed for quality and security, then later trying to identify and remove all of the application's defects, is not only resource-intensive but also largely ineffective. To have any chance of exposing every defect nested throughout the application, you would need to identify every possible path through the application and then rigorously test each and every one.
On average, there is one branch for every five lines of code (LOC). An application with 1 million LOC would therefore contain approximately 200,000 branches, and because each branch can double the number of ways execution can proceed, there could be as many as 2^200,000 distinct paths through the code. Worse, detecting all of the latent defects often requires exercising paths multiple times, testing with different parameters and data.
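The arithmetic behind this claim can be checked in a few lines of Python. The one-branch-per-five-LOC ratio comes from the text; the doubling per branch is an upper bound, since real control flow shares and prunes paths:

```python
import math

# Assumptions stated above: ~1 branch per 5 LOC, and each branch
# at most doubles the number of possible execution paths.
loc = 1_000_000             # lines of code
branches = loc // 5         # ~200,000 branches
path_bound = 2 ** branches  # upper bound on distinct paths

print(f"branches: {branches:,}")

# The bound itself is astronomically large; even its decimal length
# shows why exhaustive path testing is infeasible.
digits = math.floor(branches * math.log10(2)) + 1
print(f"2**{branches:,} has about {digits:,} decimal digits")
```

Even at a billion test executions per second, covering a number of paths with tens of thousands of digits is not remotely feasible, which is the article's point: testing defects out cannot substitute for building quality in.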
Moreover, any problem found at this point would be difficult to fix, considering that the effort, cost, and time required to fix each bug increases exponentially as the development process progresses. Most important, the bug-finding approach fails to address the root cause of the problem. As other industries figured out long ago, the key to quality is implementing and enforcing a quality process that builds quality into the product — not searching for better ways to find and fix defects as the products come off the "assembly line."
Building quality and security into an application involves designing and implementing the application according to a policy in order to reduce the risk of defects and security vulnerabilities, then verifying that the policy is implemented and operating correctly.
For example, establishing a policy to apply user input validation immediately after the input values are received guarantees that all inputs are sanitized before they are passed down through the countless paths of the code to wreak havoc. By implementing policies that enforce immediate validation, there is no need to search for SQL injection vulnerabilities throughout each and every path in your code. And there is no risk that this type of vulnerability will slip through your testing efforts, exposing the organization to litigation and/or penalties.
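As a minimal sketch of this policy, the function below handles user input at the boundary with a parameterized query, using Python's built-in sqlite3 module. The `users` table and `find_user` function are invented for illustration:

```python
import sqlite3

def find_user(conn, username: str):
    # Policy: never interpolate raw input into SQL. With a parameterized
    # query, the driver treats the value strictly as data, so an
    # injection-shaped username cannot alter the query's structure.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Hypothetical schema for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

print(find_user(conn, "alice"))          # matches the real row
print(find_user(conn, "x' OR '1'='1"))   # no rows: injection neutralized
```

Because the rule is applied where input enters the system, no downstream path ever sees an unsanitized value, which is exactly why the vulnerability class cannot reach the rest of the code.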
Another example: If you establish and enforce a policy that prohibits developers from modifying a loop's index inside the loop body, you don't need to look for non-terminating loops caused by index modifications; they simply can't occur.
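A minimal sketch of that policy, with hypothetical code: the commented pattern violates it, because resetting the index inside the body can loop forever, while the compliant form lets the loop construct own the index so the body can never touch it:

```python
# Policy violation (shown as a comment, not executed):
#
#   i = 0
#   while i < len(items):
#       if needs_retry(items[i]):
#           i = 0            # index modified inside the body: may never end
#       i += 1
#
# Policy-compliant form: the for-statement manages iteration, and the
# body has no way to rewind it.
def total_length(items):
    total = 0
    for item in items:       # index is owned by the loop construct
        total += len(item)
    return total

print(total_length(["ab", "cde"]))  # 5
```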
Traditionally, QA time follows development time and is a rather lengthy process. This significantly handicaps the organization's ability to deliver new and modified software efficiently. Compounding the problem, every time a defect is discovered, the team returns to this resource-intensive QA process.
By implementing the following software verification methods as part of a continuous quality process, QA time can be significantly optimized:
- Static analysis: QA time is largely wasted chasing after simple defects: defects that could easily be prevented by ensuring that developers write code according to the team's policy for establishing code security, reliability, performance, and maintainability. By adhering to the policy of conducting static analysis during development, you guarantee that many categories of defects will not occur, and you free resources that would otherwise be needed to identify, diagnose, and resolve these defects later in the process.
Enforcing a coding policy through static analysis is like ensuring that each stage of the production line produces parts to spec. If you do that, you can rest assured that you have quality parts without having to inspect each and every part. On the other hand, if you don't ensure the consistency of production, QA needs to spend a tremendous amount of time trying to identify and resolve defects as each product comes off the production line — and some defects are liable to be overlooked.
Static analysis is the shortest path to implementing proper, consistent group behavior. With appropriate implementation and training, team members will come to accept the policies and will adopt policy adherence as a natural part of their day-to-day workflow.
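To illustrate the mechanism (not any particular product), a toy policy "sensor" can be built on Python's standard ast module. The single rule below, flagging bare `except:` clauses, is an assumed example policy; real static analysis tools enforce hundreds of such rules:

```python
import ast

POLICY_RULE = "no bare except clauses"  # hypothetical example rule

def check_policy(source: str):
    # Walk the parsed syntax tree and record every handler that
    # catches all exceptions indiscriminately (a bare `except:`).
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            violations.append((node.lineno, POLICY_RULE))
    return violations

sample = """\
try:
    risky()
except:
    pass
"""
print(check_policy(sample))  # [(3, 'no bare except clauses')]
```

Run automatically on every build, a sensor like this reports violations the moment code diverges from the policy, before the defect ever reaches QA.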
- Code review during development: Considerable QA time is also spent trying to identify functional defects, instances where the application does not do what it's supposed to do. The only way to identify such defects is with the human brain. However, asking reviewers to find those defects after development is difficult because so much disparate information must be consumed at once. It's better to conduct peer reviews during development, when functional defects are fastest, easiest, and least costly to identify and resolve. Again, by reducing the number of defects that need to be addressed during QA, you reduce the length and cost of the QA cycle.
- Automated regression testing: The ability to create an automated regression test suite and run it completely automatically is essential for overlapping QA with development. Such a test suite should leverage technologies including static analysis, unit testing, and protocol testing. Moreover, it should be driven by an automated infrastructure so that the suite runs on its own each night (after the build) and immediately alerts the team if modifications have an unexpected or negative impact on existing functionality. With that type of automated regression testing, QA can focus on overseeing the execution and extension of the test suite: if it stops running automatically, QA gets it back on track, and as application functionality is added or intentionally modified, QA ensures that the suite remains in sync with the application. Essentially, the QA role morphs from "product inspector" to "quality supervisor."
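A minimal sketch of such a regression suite, using the standard unittest module; `normalize` is a hypothetical stand-in for real application code, and in the workflow described above a nightly job would run the suite after the build and alert the team on failure:

```python
import unittest

def normalize(name: str) -> str:
    # Stand-in for application logic under regression test.
    return name.strip().lower()

class RegressionSuite(unittest.TestCase):
    def test_existing_behavior_preserved(self):
        # Locks in current, known-good behavior; a change that breaks
        # this expectation fails the nightly run and alerts the team.
        self.assertEqual(normalize("  Alice "), "alice")

    def test_empty_input(self):
        self.assertEqual(normalize(""), "")

# Run the suite programmatically, as a scheduled job would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner().run(suite)
```

The scheduler and alerting layer are deliberately omitted; any CI system that runs this script nightly and reports a non-successful result fills that role.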
About the author: Wayne Ariola, vice president of strategy at Parasoft, oversees the company's business development team as well as the SOA/Web solutions team. He has 15 years' strategic consulting experience in the high tech and software development industries. Prior to working at Parasoft, he was senior director of business development at Fasturn and a principal consultant for PricewaterhouseCoopers, where he was recognized as a leader in the strategic change practice. He has a BA from the University of California, Santa Barbara, and an MBA from Indiana University.