Dr. Herbert H. Thompson, chief security strategist, Security Innovation Inc.
Baking security in means integrating security into each phase of the software development life cycle. It all starts with management buy-in; without the backing of key stakeholders in the business, any security improvement is bound to be sporadic and unsustainable. The next step is awareness; you're likely to have a smart and dedicated development organization that would make more security-savvy decisions if it understood the right things to do for security.
Each stage of the development lifecycle needs to consider security.
In [the] requirements [stage], we need to understand not just the functional needs of customers, but their security needs as well. Some of these needs may be driven by legislation; others may only be uncovered by probing customers on what their biggest risks are. Different types of customers may require different security qualities, and it may come out in requirements analysis that you need "tunable" levels of security.
One of the best things you can do at the design stage to improve security is to build threat models that help identify what the risks are, based on the design decisions made for that software. Instituting a security design review process can yield huge benefits as well and can point out issues like, "We've got this entire 40,000-line component running as a high-privileged user when only 20 lines require high privilege." Separating that component into two components -- a big one running as an unprivileged user and a tiny one running as "administrator" or "root" -- could substantially reduce risk with little effort.
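The component split Thompson describes can be sketched in code. The following is a minimal illustration, not his design: the helper names, file path and uid/gid values are placeholder assumptions, and a production separation would put the privileged helper in its own small process.

```python
import os

def privileged_open(path):
    """The tiny privileged piece: the only code that needs root.

    In a real separation this would live in a separate small process
    started as root, which opens the file and hands the descriptor
    to the unprivileged worker (e.g. over a Unix socket).
    """
    return open(path, "rb")

def drop_privileges(uid=65534, gid=65534):
    """Permanently give up root, e.g. switching to 'nobody' (placeholder ids)."""
    if os.getuid() == 0:      # only meaningful when the process started as root
        os.setgid(gid)        # drop the group first, then the user
        os.setuid(uid)

def process(data):
    """Stand-in for the big 40,000-line component: runs unprivileged."""
    return data.count(b"\n")
```

Once the handful of privileged lines have run and `drop_privileges()` has been called, the bulk of the code -- bugs included -- no longer carries root's authority.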
At the development stage, security is again largely an awareness problem. It requires developers to have some minimal level of security awareness through training, as well as basic secure coding guidelines.
During testing, it's critical for testers to incorporate "abuse case" testing, as well as traditional "use case" and specification-driven testing. They need to ensure that the system behaves according to the specification and that behavior is constrained.
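The contrast between use-case and abuse-case testing can be sketched with a toy input validator. This is an illustrative example under assumed requirements (a user-supplied order quantity from 1 to 999), not a test plan from the interview:

```python
import re

def parse_quantity(text):
    """Hypothetical parser for a user-supplied order quantity (1-999)."""
    if not re.fullmatch(r"[0-9]{1,3}", text):
        raise ValueError("quantity must be 1-3 digits")
    value = int(text)
    if value < 1:
        raise ValueError("quantity must be at least 1")
    return value

# Use case: the specified behavior works.
assert parse_quantity("42") == 42

# Abuse cases: hostile or out-of-spec input is rejected, never acted on.
for hostile in ["", "-1", "0", "1000", "9" * 80, "42; DROP TABLE orders"]:
    try:
        parse_quantity(hostile)
        raise AssertionError(f"accepted bad input: {hostile!r}")
    except ValueError:
        pass  # constrained behavior, as the spec-plus-abuse tests demand
```

The first assertion is the traditional specification-driven test; the loop is the abuse-case half, checking that behavior outside the specification is constrained rather than undefined.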
Finally, during deployment, baking security in means providing secure deployment guidance for the product or application, as well as establishing a vulnerability response process that is efficient, effective and minimizes user exposure. Making software that is 100% secure is impossible, and making it even 90% secure may lead to an unusable product or one that is cost-prohibitive to develop. To compensate, development organizations need to plan for security failure and make sure the plumbing is in place to respond to inevitable vulnerabilities.

What are the key benefits of baking in security, compared with how software security has traditionally been handled?
By baking security into the process, vendors gain a strategic advantage in the marketplace. In many ways security is becoming a key discriminator, and right now it's evaluated somewhat anecdotally. As metrics surface, though, it will be one of the key buying criteria for customers.
For corporate development, baking security in means reducing risk, plain and simple. Traditionally security has been "handled" by adding stuff to the network. IT spending has gone to defenses that are understood, not defenses that reduce the most risk. For example, many people understand what a firewall is; the problem, though, is that these devices only mitigate certain threats. With the proliferation of service-oriented architecture, and the fact that more and more data is moving through port 80, the threat of a weak application is huge no matter how many firewalls you put in front of it.

Does it require that developers be trained in secure software development best practices? Are universities and/or corporations addressing this yet in an adequate way?
Training developers at least minimally on software security can have ridiculously high returns in terms of improving the security quality of applications. Developers are generally smart people who want to do the right thing, and training just arms them with the ability to make better decisions.
Unfortunately, universities have been slow to address the problem of software security. Most undergraduate degree programs in computer science or software engineering do not include a class, module or even an elective on secure coding. This leaves a big knowledge gap for developers, and many vendors and corporations are creating secure coding training programs to fill it. At Microsoft this type of training is mandatory, and many other vendors are heading that way too.

Will secure software development slow the time it takes to get an application out?
Integrating security into the development life cycle does increase time to market, as do quality, reliability and all of the other characteristics that customers expect from software. When you get right down to it, security is about mitigating risk at a cost. The question then becomes: Is improving the security of the development process worth more than it costs?
Each development organization needs to evaluate that question individually. While the cost of a security-improving activity may be known (such as the cost of taking two hours of time from each developer to attend a security awareness seminar), the "benefit" part has been tricky to compute. With legislation-imposed audits and penalties, though, and real, visible costs stemming from security breaches at companies, we may not know how to calculate the benefit directly, but we know that it can be substantial. Just asking the question, "Would this bug have been here had the developer who wrote this code known what SQL injection is?" can start an organization down the path to justifying the cost of improving security.

What can companies/developers do now to begin baking security in? What guidelines should they follow?
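The SQL injection bug Thompson alludes to is easy to make concrete. The sketch below uses an in-memory SQLite table invented for illustration; it shows the string-concatenation bug a trained developer would avoid next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_vulnerable(name):
    # BUG: concatenating user input lets the input rewrite the query.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

hostile = "nobody' OR '1'='1"
print(lookup_vulnerable(hostile))  # the OR clause leaks every row in the table
print(lookup_safe(hostile))       # no user named "nobody' OR '1'='1" exists
```

A developer who has seen this once reaches for the parameterized form by habit, which is exactly the kind of return on minimal training the interview describes.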
The best first step is an awareness campaign in the organization. This may take the form of a seminar, an e-learning module or a class on security that "influencers" -- the more senior members of development teams -- are required to attend. Security knowledge can be infectious in an organization, and once there is buy-in from influential technical folks and management, you're on your way to secure coding standards, security testing and threat modeling!

You've written a novel in addition to many technical articles and books. How much fun was it to write the novel? Will there be a sequel?
It was great! When Spyros Nomikos and I wrote The Mezonic Agenda we wanted to give people a great story but also teach them about software security and expose them to the underground culture of "hacking" along the way. It was so much fun and a great change of pace from the technical books on security I usually write. I'm sure there will be more to come! In the next novel there's a big twist right at the end where the Linux kernel actually turns out to have a huge ...
This is the second part of a two-part interview with Dr. Herbert H. Thompson. The first part, "Software buyers forcing changes in application security" discusses how regulations and customer expectations are affecting application security.
Dr. Herbert H. Thompson is chief security strategist at Security Innovation Inc., a provider of assessment and training services in Wilmington, Mass., and a world-renowned expert in application security. He has co-authored or edited 12 books, including How to Break Software Security (Addison Wesley, May 2003) and The Software Vulnerability Guide (Programming Series) (Charles River Media, June 2005). Thompson is chair of the Application Security Industry Consortium Inc. (AppSIC), an association working to establish cross-industry application security guidance and metrics. He also co-wrote the cybercrime novel The Mezonic Agenda: Hacking the Presidency (Syngress, October 2004).