Any non-trivial software has vulnerabilities. It's a sad and daunting fact of life for IT practitioners. What's even more frustrating is that no matter how much money we spend making code more resilient, fortifying the network, or throwing tomatoes at developers who write bad code, we just don't know how to make software that's 100% secure. And most vendors and businesses don't want to make "near perfect" software because it takes too long and costs too much.
The most direct way to reduce risk and minimize vulnerability is to seed security into the development process through security education, threat modeling, source code audits and penetration tests. These preventative measures address most of the obvious and some of the more insidious vulnerabilities software harbors.
Unfortunately, as an industry we don't take enough responsibility for the inevitable remaining vulnerabilities that elude our best efforts at prevention. Combating these vulnerabilities requires building "patchability" and maintainability into the software development life cycle.
As early as product inception and requirements gathering we should be asking tough questions about post-deployment patching. Some of these questions will drive developers to create a product that is patchable and maintainable. The others will help build an incident response process that handles vulnerabilities and disseminates patches in a way that minimizes operational risk.
There are a number of questions that any development organization committed to building a resilient software product should ask.
Patchability design questions
These questions should be asked early in the development process. The answers should be factored fundamentally into the choices that are made during development and design.
- What patch deployment mechanisms/tools will we have in place?
- Are these tools scalable and centrally manageable?
- Do they integrate with common frameworks?
- What facilities do we offer to help customers test patches in their deployment environments?
- What impact does the patching process have on the running application? (reboot, application restart, downtime)
- Can incremental patches be sent, or do we need to send completely new binaries/images?
- Are we concerned about patch reverse engineering to discover vulnerabilities (for closed-source applications), and what mechanisms are in place to prevent/deter this?
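The incremental-patch question above has real bandwidth and deployment implications: shipping only the changed bytes can be far cheaper than redistributing an entire image. As a toy illustration only (this is not any vendor's actual mechanism; production patch systems use far more sophisticated delta algorithms such as bsdiff), here is a naive byte-level delta in Python:

```python
# Toy binary-delta sketch: record (offset, replacement-bytes) pairs where a
# new image differs from the old one, then replay them to reconstruct the
# new image. Illustrative only -- real delta encoders handle insertions,
# deletions, and compression far better than this byte-wise comparison.

def make_delta(old: bytes, new: bytes) -> list[tuple[int, bytes]]:
    """Return (offset, replacement) runs where `new` differs from `old`."""
    delta = []
    run_start = None
    limit = max(len(old), len(new))
    for i in range(limit):
        if old[i:i + 1] != new[i:i + 1]:
            if run_start is None:
                run_start = i          # a differing run begins here
        elif run_start is not None:
            delta.append((run_start, new[run_start:i]))
            run_start = None
    if run_start is not None:          # run extends to the end of the image
        delta.append((run_start, new[run_start:limit]))
    return delta

def apply_delta(old: bytes, delta: list[tuple[int, bytes]], new_len: int) -> bytes:
    """Rebuild the new image by patching the changed runs into the old one."""
    image = bytearray(old.ljust(new_len, b"\x00")[:new_len])
    for offset, chunk in delta:
        image[offset:offset + len(chunk)] = chunk
    return bytes(image)

old_image = b"header|code:v1|footer"
new_image = b"header|code:v2|footer"
delta = make_delta(old_image, new_image)
print(delta)  # [(13, b'2')] -- only one changed byte travels over the wire
```

The trade-off the bullet asks about is visible here: the delta is tiny, but it is only valid against one exact baseline image, which is why delta patching complicates deployments with mixed installed versions.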
Response process and infrastructure
These questions should be factored into the creation and refinement of the incident response process.
- What types of vulnerabilities are we committed to fixing if they are discovered?
- How are we connected with the user/research community to encourage disclosure to us versus the general public?
- Do we provide enough details in vulnerability bulletins so that users/operations can make informed choices about urgency of patch deployment?
- How do we provide severity rankings for operations/users, and do we use common frameworks such as the Common Vulnerability Scoring System (CVSS)?
- Do we provide traceability for public vulnerabilities to users (using CVE IDs, for example)?
- Do we conduct regular dry runs for the response process?
- What internal testing is done to verify that there is no (or minimal) functional impact from security patches?
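The severity-ranking question above mentions CVSS, which defines a concrete base-score equation rather than an ad hoc ranking. Here is a minimal sketch of the CVSS version 2 base formula, with metric weights taken from the v2 specification (the function and table names are mine, not a published API):

```python
# CVSS v2 base-score sketch. Weight tables come from the CVSS v2 spec;
# this covers only the base score, not temporal or environmental metrics.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector: Local/Adjacent/Network
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity: High/Medium/Low
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication: Multiple/Single/None
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf/Integ/Avail impact: None/Partial/Complete

def cvss2_base(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    """Compute the CVSS v2 base score (0.0-10.0) from six metric letters."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176      # f(Impact) per the spec
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C -- unauthenticated remote full compromise
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
```

Publishing a score computed this way, alongside the vector string itself, gives operations teams a defensible basis for deciding patch-deployment urgency.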
By thinking proactively about the patching process throughout the software development life cycle, it is possible to implement policies and procedures that limit the damage of the most costly vulnerabilities in software: the ones that actually get discovered and exploited during operations. This type of failure planning is de rigueur for development groups so that when disasters happen they won't be so disastrous.
Some companies have staved off potential stock value calamities with proper incident response planning. Consider the infamous Windows Metafile Vulnerability that played out publicly in early 2006. While Microsoft and its customers both wish the vulnerability had never existed, what could have been one of the worst publicity disasters for the company instead became a media-praised showcase of Microsoft's incident response process.
The key is that good risk management for software means pairing security in the software development life cycle with a well thought out, tested and planned-for incident response process. It's really the only effective and practical way to deal with the most elusive and frustrating fact of security: no one cares about the 99 vulnerabilities you prevented when the 100th becomes public and is poorly handled.
About the author: Herbert H. Thompson, Ph.D., is chief security strategist at Security Innovation Inc. and chairman of the Application Security Industry Consortium.