Ryan Berg, Ounce Labs
Software is generally created from a "functionality first" perspective, with quality as a part of the standard software development lifecycle (SDLC) but with security as a distant third (or nonexistent) priority. This is an unfortunate reality. Designing an application is an exercise in meeting a business objective. The application design and development stage is the ideal time to consider how security requirements and business needs intersect. Building security into the SDLC is a sound business decision -- there may be a cost in securing your vulnerabilities, but allowing yourself to be exposed to malicious activities has costs as well. Prevention is a more reasonable cost to justify and ultimately a much lower cost for an organization to absorb.
Studies have repeatedly shown that detecting and preventing code flaws early in the software development life cycle leads to significant cost savings. Unfortunately, the path to securing an application too often begins with rigorous testing for vulnerabilities, to ensure the application will not compromise, or allow others to compromise, data privacy and integrity. This is already too late.
Developing secure code must begin during requirement definition and continue throughout design and development, as well as during testing and deployment. If you wait until testing, you are almost guaranteed to find security flaws, and all too often you will not find all of them and may even miss the most critical ones. Secure coding is, admittedly, a cultural shift for many organizations, because it is such a fundamental, nonstrategic area, yet it has the most intrinsic relationship with data privacy and integrity, and it is the most effective way to verify that the security requirements set forth during design have been met. The best way to ensure code security is through a secure development process that includes source code review and accomplishes three things:
- Consistency: Consistent processes and policies build a culture of improved security.
- Multidimensional analysis: When it comes to dangerous vulnerabilities, large-scale design flaws can be more dangerous than the individual coding errors that are more traditionally associated with application vulnerability, such as buffer overflows. Fixing individual vulnerabilities will have little effect if data is not encrypted, authentication is weak, or there are open backdoors in an application.
- Mitigation prioritization: When reviewing existing code, developers must identify all vulnerabilities in the code, prioritize and triage those vulnerabilities in the context of the organization, and then remediate the greatest risks first.
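The mitigation prioritization step above can be sketched as a simple risk-scoring triage. This is a hypothetical illustration, not a prescribed model: the findings, the severity and exposure scales, and the multiplicative scoring are all assumptions chosen to show the idea of remediating the greatest risks first.

```python
# Hypothetical triage sketch: rank findings by a simple risk score
# (severity x exposure) so the greatest risks are remediated first.
# The findings and scales below are illustrative, not a real scan result.

findings = [
    {"issue": "SQL injection", "severity": 9, "exposure": 3},
    {"issue": "verbose error message", "severity": 3, "exposure": 2},
    {"issue": "hardcoded credential", "severity": 8, "exposure": 1},
]

def risk_score(finding):
    """Combine impact (severity) with reachability (exposure)."""
    return finding["severity"] * finding["exposure"]

# Highest-risk findings come first in the remediation queue.
triaged = sorted(findings, key=risk_score, reverse=True)

for f in triaged:
    print(f"{f['issue']}: risk {risk_score(f)}")
```

In practice the scoring function would fold in business context (data sensitivity, compliance exposure), which is exactly the "in the context of the organization" qualifier above.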
Developing secure source code requires vigilance in examining all of the places vulnerabilities may exist, not just those where we expect them to exist -- for example, through penetration testing. Even with the use of automated tools, the development community needs to validate implementation and design practices, including native code and code-reuse practices, and whether or not they could result in vulnerabilities. Along the way, to effectively measure the risk posed by any given application, security analysts or developers should watch especially for the two primary categories of errors:
- Coding errors: These "quality-style" defects are usually small and self-contained: once identified, each can be remediated on its own. They are characterized by loose programming practices such as buffer overflows and call-timing mismatches.
- Design flaws: This category includes security mechanisms that, when defined properly from the outset, can be part of the positive security in an application, as opposed to an area of risk. These include authentication, encryption, the use of insecure external code types, and validation of data input as well as application output. However, if poorly implemented, they can open up the application to just as much risk as a buffer overflow.
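To make the design-flaw category above concrete, here is a minimal sketch of one such mechanism done properly: credential storage. Storing or comparing plaintext passwords is a design flaw regardless of how clean the surrounding code is; salted, iterated hashing with a constant-time comparison is the positive-security counterpart. The function names and iteration count are illustrative assumptions, not part of the original article.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: a design-level security mechanism (credential
# storage) done properly, as opposed to a point coding error.

def store_password(password):
    """Return (salt, digest) for storage; never keep the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("s3cret")
print(verify_password("s3cret", salt, digest))  # True
print(verify_password("wrong", salt, digest))   # False
```

A fix of this kind changes the application's architecture for handling secrets, which is why design flaws cannot be patched the way an individual buffer overflow can.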
Finding these kinds of vulnerabilities in your applications is one part process and nine parts detective work -- it is not just about finding a better way to define the need for security in the development process, but about looking at all of the places where vulnerabilities of all types can lurk and identifying the potential risk to your organization if those vulnerabilities were to be exploited.
Paging Sherlock Holmes
The most common approaches to vulnerability detection are manual code review and penetration testing. Each method approaches the analysis in a different way. Manual code reviews can thoroughly analyze an application across a wide matrix of criteria, but are time-consuming and expensive and do not scale to be a part of the development process given the complexity of most software code. Often manual code review is only performed on those areas in an application that are believed to present the greatest risk, leaving large areas of the application wide open. Vulnerabilities have no prejudice, however, and can exist anywhere.
Penetration tests, when automated, are easily repeated, but by necessity they fall at the end of the lifecycle, when the application is complete, rather than serving as a tool employed from the start. Additionally, they cover a more narrowly defined set of vulnerabilities than source code analysis, which identifies a broader array of potential vulnerabilities beyond the expected ones.
Automated source code analysis, while a comparatively new tool in the security analyst's detective kit, arms organizations with the ability to evaluate every application -- both existing applications as well as code under development -- against critical classes of code vulnerabilities, including:
- Security-related functions
- Input/output (I/O) validation and encoding errors
- Error handling and logging vulnerabilities
- Insecure components
- Coding errors
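The input/output validation class above is worth a concrete sketch. The example below, a hedged illustration using Python's built-in sqlite3 module, contrasts a vulnerable string-concatenated query with a parameterized one; the table, the data, and the injection string are assumptions made for demonstration.

```python
import sqlite3

# Hypothetical I/O validation sketch: parameterized queries keep
# attacker-controlled input from being interpreted as SQL.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query,
# so the OR clause matches every row.
vulnerable = conn.execute(
    "SELECT count(*) FROM users WHERE name = '" + user_input + "'"
).fetchone()[0]

# Safe: the driver binds the value as data, so it can only match literally.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(vulnerable, safe)  # 1 0
```

Source code analysis can flag the concatenated query statically, before any test data ever reaches the application, which is the advantage the article claims over end-of-lifecycle testing.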
Following the path of security-related issues through the source code of an application can dramatically reduce the vulnerability of the application and the critical data it processes and protects.
Companies today must treat every existing and under-development application as a security risk until it is proven otherwise, simply because of the risk such vulnerabilities can pose to the business. No single tool will be the silver bullet to make all software secure, simply because the breadth of legacy code in use is so vast, but tools do help developers and code reviewers assess applications to quickly identify the most potentially damaging vulnerabilities and triage those applications for remediation. Taking a risk-based approach to remediating the code base, starting with the most critical problems first, is the most effective means to developing secure applications. Companies able to efficiently and effectively integrate this analysis into their software development lifecycle practices will not only improve their own security state but reap substantial business benefits for themselves and all those that rely on their software.
About the author: Ryan Berg is a co-founder and chief scientist for Ounce Labs Inc. Berg is a popular speaker, instructor and author in the fields of security, risk management and secure development processes. He holds multiple patents and has patents pending in multilanguage security assessment, kernel-level security, intermediary security assessment language and secure remote communication protocols. Prior to Ounce Labs, he co-founded Qiave Technologies, a pioneer in kernel-level security, which was later acquired by WatchGuard Technologies.