We've made a lot of progress. I'm really optimistic about the trajectory we're on as an industry. In the beginning we had a hard time convincing anybody that software was a big problem in computer security. Now everybody believes that. Then we had a hard time convincing people who should work on this problem, and now everybody is coming around to realize that the only people who can work on that problem are software people. So both of those are excellent trends.

What about progress in terms of building security into the software development life cycle?
You can characterize organizations into three phases of maturity. The first stage is organizations that don't have much of a clue about the problem. In a place like that, there's one guy whose job it is to do software security, he has a budget of maybe $30K, and he's looking at an organization of maybe 20,000 people, trying to figure out how to make software security happen. That's a level 1 or a level 0 organization, and that's very common.
The next stage is a place where there's already what I call a "software security fire department." There are maybe 20-30 people working to address the problem, and what they're doing is running around performing code reviews with tools and helping people look for bugs while they're building software. That's a good thing, but it's pretty much reactive. In a fire department situation, it's very important to think about how you teach the developers to handle security themselves instead of running around throwing water on their fires.
The next level of maturity is organizations that begin to adopt best practices for software security throughout the development organization and actually adjust their software development life cycle to use the best practices, like the ones in my new book. I think it's very rare to be at that stage, but it's also pretty obvious that everyone needs to get there.
I tried to identify things that don't require you to go out and completely change your software development life cycle. Software development processes are kind of like religion, and the last thing you want to do is try to convince people that first they need to change their religion and then they need to add more stuff. The reason I call them touchpoints is that the term implies lightness: they can be applied no matter what religion you're following.

Your first touchpoint is code review with a tool. Let's talk about the tool space.
There are two basic kinds of tools for software security that are widely available now. Security testing tools can help show you that you're in trouble. They run canned black box tests, but if you run them and don't find problems, it doesn't really mean you're secure. It just means you didn't find any problems. I like to refer to those tools as "badnessometers": they go from "big trouble" to "who knows." I wish everyone were using them, because most people are not aware that they're in deep trouble and that their software needs to be fixed. The problem is, if you treat those badnessometers like securityometers, which they're not, you might get a false sense of being done.
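The badnessometer idea can be sketched in a few lines of Python. Everything here is illustrative rather than drawn from any real tool: `handler` is a hypothetical vulnerable app, and `CANNED_PAYLOADS` stands in for a testing tool's canned black box tests.

```python
# Hypothetical app code being black box tested.
def handler(query):
    # Echoes user input straight into HTML -- a reflected XSS bug that
    # a canned test can catch.
    return f"<p>You searched for: {query}</p>"

# Stand-in for a tool's library of canned attack inputs.
CANNED_PAYLOADS = ["<script>alert(1)</script>", "'; DROP TABLE users; --"]

def badnessometer(app):
    """Return the payloads the app reflects back unescaped -- evidence of trouble."""
    return [p for p in CANNED_PAYLOADS if p in app(p)]

findings = badnessometer(handler)
# A non-empty list is real evidence of "big trouble"; an empty list only
# means "these canned tests found nothing", never "the software is secure".
assert findings == CANNED_PAYLOADS
```

The asymmetry is the whole point: a finding proves badness, but the absence of findings proves nothing, which is why treating this as a securityometer gives a false sense of being done.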
The other category is code scanning tools that do static analysis, looking at the code itself. They help developers find and remove common software security bugs while they're writing and compiling code. My belief is that if you are not using a tool like that, you are, in fact, negligent.

What other touchpoints are easy to implement?
The other one that's going to be easy, not for developers but for software architects, involves architectural risk analysis. We need to get architects to understand that their architectural decisions have a huge impact on software security, so they need to think about their architecture from a security perspective, using things like Microsoft's STRIDE model [a framework for general security concepts] or our architectural risk analysis touchpoint, to understand how certain attacks might be carried out against that architecture.
They will have to make some changes in how they think, but the way software architects already work should be amenable to doing architectural risk analysis. They're already creating the right sorts of artifacts that can be looked at. And my touchpoints are all about looking at the artifacts you're creating.
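As a sketch of what "looking at the artifacts" can mean in practice, here is a minimal STRIDE-style enumeration in Python. The components and dataflows are hypothetical, and this is only one way to mechanize the idea: each dataflow from an architecture diagram gets interrogated against every threat category.

```python
# The six STRIDE threat categories.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical dataflows taken from an architecture diagram: (source, sink).
dataflows = [
    ("browser", "web server"),
    ("web server", "database"),
]

def enumerate_threats(flows):
    """Yield one question per (dataflow, STRIDE category) pair for review."""
    for src, dst in flows:
        for category in STRIDE:
            yield f"{category}: how could an attacker achieve this on {src} -> {dst}?"

threats = list(enumerate_threats(dataflows))
assert len(threats) == len(dataflows) * len(STRIDE)  # 12 questions to walk through
```

The value isn't in the code, it's in the discipline: the architect answers every generated question, and "we never thought about that" becomes visible early, while the design can still change.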
Touchpoint three is penetration testing. One of the problems with penetration testing as it's practiced today is people are treating it as a badnessometer. The problem is, penetration testing is only as effective as the people doing it. So if you're just using a canned tool, it's not going to be very sophisticated. It turns out pen testing is the most useful way to help make sure you don't screw up the environment in which you put secure software. Unfortunately, a lot of people use penetration testing as a feel-good exercise in security.
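The difference between a canned tool and a skilled tester can be sketched as follows. `serve_path` is a hypothetical handler under test, and the attack strings are the kind a human tester derives from knowing that this particular handler maps untrusted input onto the filesystem.

```python
import os

ROOT = "/srv/app/public"  # hypothetical document root

def serve_path(requested):
    """Hypothetical handler under test: map a URL path to a filesystem path."""
    full = os.path.normpath(os.path.join(ROOT, requested.lstrip("/")))
    if os.path.commonpath([ROOT, full]) != ROOT:
        raise PermissionError("request escapes the document root")
    return full

# Functional check: the mechanism works for a normal request.
assert serve_path("css/site.css") == os.path.join(ROOT, "css", "site.css")

# Attacker-style probes: directory traversal attempts that a tester
# writes because the handler is known to touch the filesystem.
for attack in ["../../etc/passwd", "a/../../../etc/shadow", "../" * 8 + "boot.ini"]:
    try:
        serve_path(attack)
        raise AssertionError(f"traversal not blocked: {attack!r}")
    except PermissionError:
        pass  # expected: the probe was rejected
```

A canned tool might try the first string; a good tester keeps inventing variants based on how this system is actually put together, which is the sophistication being described here.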
Another kind of testing, touchpoint four, is risk-based security testing. Let's say you've done a risk analysis. You can use the results to drive an effective security testing program that goes beyond testing your security mechanisms and probes your system the way an attacker would, based on its architecture. You need to put on a black hat and really go after the software like a bad guy would.

Is this easily incorporated into the way folks work today?
The first three [touchpoints] are really no-brainers. From then on it gets a little trickier, and there's going to be more work involved. We're really talking about pushing left into the software development life cycle, and it's going to be easier for people to start at the right and move to the left over time, through code and architecture, all the way to requirements. I recommend starting with code review with a tool, followed by architectural risk analysis. I think those are the first two that everybody should be doing today. So if you're only going to do two, do those two.

What is another more difficult touchpoint?
Abuse case development is what's going to come in the future. [Use and abuse cases are touchpoints 5 and 6.] That means knowing who your enemy is. A lot of people who build software don't think they have any enemies. Abuse cases can help you think about those kinds of possibilities ahead of time.

And the last touchpoint?
Security operations, something that normal security guys are good at. It's about firewalls and the environment, about patching your system, about setting up intrusion detection right, knowing your enemy, and about monitoring, vigilance, and feedback loops. It turns out all that stuff is really important for software security, too. It's just that we can't solve the problem by doing only that.

What makes a successful security initiative?
You need to marry two things. One is real leadership at the top, so that you have buy-in, clarity, and budget. From the bottom you also need grassroots support: some developers who are excited and psyched to get this going. Put those together and you build a very powerful program.
Gary McGraw, Ph.D., is the chief technology officer at Cigital Inc., a software quality management consulting company in Dulles, Va. McGraw consults with major software producers and consumers, and he functions as principal investigator on grants from Air Force Research Labs, DARPA, National Science Foundation and NIST's Advanced Technology Program. McGraw is co-author of five best-selling books, including Exploiting Software (Addison-Wesley, 2004), Building Secure Software (Addison-Wesley, 2001), and Software Security: Building Security In (Addison-Wesley, 2006).