Internet applications are the new target of choice for criminals seeking restricted information and unauthorized access to companies' protected assets. The most effective way to combat these threats is to develop secure applications. That requires developers who are trained in security, and it is the primary long-term solution to the problem. However, such developers are in short supply and difficult to find.
What other options are open to security managers? The number and type of assessment and protection measures for these applications are growing. However, the selection of an appropriate application security risk management solution should take into account business requirements and return on investment (ROI) considerations.
An effective approach to securing applications entails:
- Performing some level of vulnerability assessment
- Implementing a Web application firewall (WAF) to address common and newly found vulnerabilities
- Remediating the source code to eliminate the vulnerabilities
The appropriate combination of these options will give the best result for mitigating risks. Those responsible for the security of their environments need to understand the capabilities and limitations of each option. Armed with this knowledge, they can develop an appropriate risk management strategy with prioritized actions to reduce these threats.
What you don't know can hurt you
What application security assessment methodologies are available? They fall into automated and manual analysis, conducted from an external (black box) or internal (white box) perspective.
External Web application scanning
Application scanning involves interacting with a running application (essentially using and attacking the application) as a black box to identify points of vulnerability.
The strength of external application testing is that because the application is actually attacked, the resulting proof of vulnerability is usually quite concrete and compelling. If you can see another user's account data or display the structure of the database, it is hard to argue against the existence of the vulnerability. Also, some external testing tools integrate their results with WAF configurations to address those vulnerabilities found in the scan, making the WAF that much more effective.
The weakness of external application scanning is that it identifies only a limited range of vulnerabilities and requires a highly skilled practitioner. Since the application user interface is the attack vector, the approach is ill-suited to examining business component, back-end, or external service vulnerabilities. For example, if sensitive data such as Social Security numbers are not encrypted, if third-party services operate without proper protection, or if critical security events such as failed logins are not adequately logged, these vulnerabilities are likely to go undetected. Also, these analyses are conducted within a limited time frame -- usually one to three weeks. A motivated attacker has the time and patience to keep up attacks until he finds a way in, if vulnerabilities exist.
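To make the black-box approach concrete, here is a minimal sketch of how a scanner probes a running application: it appends an injection payload to a request and inspects the response for telltale database error strings. The payload, marker strings, and helper names are illustrative assumptions, not any particular scanner's implementation; real tools use hundreds of payloads and far smarter response analysis.

```python
import urllib.parse

# Hypothetical sketch of one black-box probe: craft a URL carrying a
# classic SQL tautology payload, then check the response body for raw
# database errors leaking into the page.

SQLI_PAYLOAD = "' OR '1'='1"
ERROR_MARKERS = ("SQL syntax", "ODBC", "ORA-", "SQLSTATE")

def build_probe_url(base_url, param):
    """Append the injection payload as a query-string parameter."""
    return f"{base_url}?{urllib.parse.urlencode({param: SQLI_PAYLOAD})}"

def looks_vulnerable(response_body):
    """Crude heuristic: a database error surfacing in the page is strong
    evidence the parameter reaches a query without sanitization."""
    return any(marker in response_body for marker in ERROR_MARKERS)
```

The concreteness the article mentions comes from exactly this kind of evidence: if the probe makes the application dump a database error or another user's data, the vulnerability is hard to dispute.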
Automated static analysis of source code
Static analysis involves a tool-based review of the application code for vulnerabilities. Most tools analyze the source code; a few work on the binary code. This is considered a white box assessment, as nothing is hidden from the analyst. The application code is a much larger and richer analysis target than the user interface addressed by external, or black box, application scanning. Therefore, a broader range of vulnerabilities can be identified.
The best-of-breed static analysis tools utilize sophisticated compiler technologies such as data flow analysis, control flow analysis, and pattern recognition to identify security vulnerabilities. The results of automated analysis generally include a high rate of false positives, requiring a highly skilled security engineer to analyze the results with the source code in hand to distinguish the truly from the falsely reported vulnerabilities.
Static analyzers are best at identifying vulnerabilities that can be represented as identifiable patterns. Examples of these risks include the following:
- A missing entry in an XML configuration file
- The use of a dangerous function
- The inclusion of unvalidated user input in Web page output (cross-site scripting)
- The inclusion of unvalidated input data in the construction of a database query (SQL injection)
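The SQL injection pattern listed above is worth seeing side by side with its fix. The sketch below uses the standard-library sqlite3 module; the table and column names are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern static analyzers flag: user input concatenated
    # directly into the query string can alter the query's structure.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value separately, so
    # the input can never change the statement itself.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With the payload `' OR '1'='1`, the unsafe version returns every row in the table, while the parameterized version matches nothing.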
Most static analysis tools can also identify a range of poor programming practices, such as the use of uninitialized variables or the lack of error handling.
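The pattern-matching core of such tools can be sketched in a few lines. The rules below are hypothetical simplifications: real analyzers layer data-flow and control-flow analysis on top of signature matching, which is also why their raw output includes false positives that a human must triage.

```python
import re

# Toy pattern-based static analyzer: scan source lines for signatures
# of the risks listed above. Each finding is only a candidate issue.
RULES = [
    (re.compile(r"\bgets\s*\("),
     "dangerous function: gets() performs no bounds checking"),
    (re.compile(r"\bstrcpy\s*\("),
     "dangerous function: unbounded strcpy()"),
    (re.compile(r"execute\(.*[\"'].*\+"),
     "possible SQL injection: query built by string concatenation"),
]

def scan_source(lines):
    """Return (line_number, message) candidates for human review."""
    findings = []
    for lineno, line in enumerate(lines, start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```

Running this over a few lines of code flags the dangerous calls and concatenated queries while ignoring ordinary arithmetic, mirroring how analyzers surface candidates rather than confirmed vulnerabilities.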
The main strength of automated static analysis is that the analyzers reliably identify candidate issues (which may turn out to be false positives) and can do so in the face of highly complex application structure and control flow that would daunt most humans. Relative to the software expense and the skilled labor required, the results can be quite cost-effective.
The main limitation of these automated tools is that they currently can find only approximately 30% of the types of security vulnerabilities that should be evaluated in a security assessment to provide a comprehensive view of risks present in an application. With the current state of the technology, automated analyzers are generally not capable of testing algorithms, security policy adherence, and issues that may be derived from the application domain. Examples of these areas include the following:
- Disclosure of confidential data
- Audit logging
- Cross-site request forgery (XSRF)
- Identification of application "backdoors"
Manual static analysis
Manual static analysis involves a review of the application architecture and source code by highly skilled software security engineers. The resulting analysis is comprehensive and is, overall, the most reliable of the approaches. Thus it has been the method of choice where application security is of paramount concern, such as in financial services.
The strength of manual analysis is the level of depth and thoroughness of the assessment. The full range of security vulnerabilities can most readily be identified with high reliability. Specific attributes of the application domain (credit card numbers, account numbers, classified data, etc.) can be taken into account.
The main drawback of manual analysis is that engineers with the necessary skills and experience -- extensive enterprise application development experience coupled with deep security knowledge -- are scarce and in high demand. The time required and the level of effort involved make this approach more costly than the other options.
Combined automated/manual source code analysis with external testing
The most effective assessment combines manual and automated source code analysis with some level of external vulnerability analysis. As noted above, there are some things automated tools do well and some that require manual methods of assessment. Adding the ability to verify findings from the outside establishes whether something found in the source code is truly exploitable, or lets the analyst see whether mitigating controls are in place that reduce the risk posed by the code.
Of course this would require that the application be set up and accessible in a "production-like" test environment, which is not always possible and can be expensive.
What about WAFs?
While code reviews and external scanning of applications can provide some level of risk understanding, WAFs can provide actual risk mitigation defense. The most advanced WAFs use statistical analyses to "learn" appropriate requests and responses to and from target applications. Once these patterns have been learned, the firewall can be told to block or alert when anomalous requests occur.
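The "learning" behavior described above can be sketched as a positive security model: during a training window, the firewall records which parameters each endpoint receives and what their values look like; afterward, requests that deviate from the learned profile are flagged. The class, method names, and the coarse value-shape abstraction below are all illustrative assumptions, not any vendor's design.

```python
import re
from collections import defaultdict

class LearningWAF:
    """Toy anomaly-detecting WAF: learn normal request shapes, then
    flag requests that do not match the learned profile."""

    def __init__(self):
        # endpoint -> parameter name -> set of observed value shapes
        self.profile = defaultdict(lambda: defaultdict(set))

    @staticmethod
    def _shape(value):
        # Abstract a value into a coarse character-class signature.
        if re.fullmatch(r"\d+", value):
            return "digits"
        if re.fullmatch(r"[A-Za-z]+", value):
            return "alpha"
        return "other"

    def observe(self, endpoint, params):
        """Training phase: record what legitimate traffic looks like."""
        for name, value in params.items():
            self.profile[endpoint][name].add(self._shape(value))

    def is_anomalous(self, endpoint, params):
        """Enforcement phase: unseen parameters or unexpected value
        shapes are treated as anomalies to block or alert on."""
        known = self.profile[endpoint]
        for name, value in params.items():
            if name not in known or self._shape(value) not in known[name]:
                return True
        return False
```

After observing only numeric `id` values on an endpoint, a request carrying `1 OR 1=1` or an unexpected `debug` parameter no longer fits the profile and would trigger a block or an alert.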
However, the cost of procuring, implementing, and maintaining these firewalls can be significant. The low-end per-box cost may be around $5,000 with the high end being close to $100,000 for firewalls used to protect large, high-traffic financial portals. When considering the personnel requirements to maintain logs and configurations, you may be looking at an additional $100,000 per year or more.
Of course, there is no "one size fits all" approach to application security. A sound risk management strategy will make the most appropriate use of any available technology or process.
The best approach to application security is to develop code that has no security vulnerabilities. This is the way of the future: developers are being trained to program securely and to recognize security issues in applications. Until that is normal practice, or when an application is already in production, many agree that the most effective risk mitigation strategy is to assess the source code as comprehensively as possible, then implement a WAF configured to address the risks found in the assessment while the code is remediated to eliminate those vulnerabilities. Costs and business drivers may lead to lower levels of assessment and protection, but that is a business decision, and business decisions are best made by each individual company.
About the author: Greg Reber is the CEO of AsTech Consulting, a 10-year-old information security consulting firm based in San Francisco. The company's clients include nationwide financial institutions, identity theft assistance organizations, and online retail service providers. AsTech's main focus is to assess the security posture of Internet applications and develop security strategies to mitigate risks. Greg has an engineering degree from the University of Maryland.