I found a great article earlier this week on static analysis tools by Mary Brandel. In the article, “How to choose and use source code analysis tools,” she cites some statistics on the static analysis market, including:
- “The entire software security market was worth about US $300 million in 2007”
- “The tools portion of that market doubled from 2006 to 2007 to about $180 million”
- “About half of that is attributable to static analysis tools, which amounted to about $91.9 million”
In the article, Brandel also offers some evaluation criteria to consider when you start looking at source code analysis tools: language support and integration, assessment accuracy, customization, and knowledge base. She also provides some dos and don’ts for source code analysis. I think the most valuable tidbits from that list are:
- DO consider using more than one tool: The article tells a good story about Lint vs. Coverity, and in my experience different static analysis tools do find different issues. Each vendor has its own focus in terms of the vulnerabilities and warnings it targets.
- DO retain the human element: While I’ve yet to work with a team that thinks adding automated tools like this will let them remove people, the marketing materials certainly give the impression that the results are intuitive. That’s typically not the case. You often need to know what you’re looking at, or you’ll miss the subtleties in the data. I agree with the “truly an art form” quote. This stuff is hard, and while tools make it easier, it’s still brain-engaged work.
- DO consider reporting flexibility: At some companies this is a big deal. When working with smaller software development organizations, it doesn’t matter what the reports look like; the only people reading them are the people working in the code. However, at a larger company, a Fortune 500 firm for example, this kind of information normally needs to be summarized and reported up the management chain.
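To make the multi-tool point concrete, here’s a contrived C snippet of my own (not from Brandel’s article) with a deliberate defect. In my experience, a compiler’s lint-style warning pass and a deeper interprocedural analyzer tend to flag different subsets of problems in code like this, which is exactly why running more than one tool pays off.

```c
#include <stdio.h>

/* Contrived example: count newline characters in a file.
 * It contains a deliberate defect -- the FILE handle is never
 * closed on the success path. A resource leak like this is the
 * kind of finding a deeper analyzer will often report, while a
 * simple warning pass may stay silent (and vice versa for other
 * defect classes, like suspicious format strings). */
int count_lines(const char *path) {
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;  /* caller sees -1 on open failure */

    int lines = 0;
    int c;
    while ((c = fgetc(f)) != EOF) {
        if (c == '\n')
            lines++;
    }
    /* BUG: missing fclose(f) here -- a leak on every call. */
    return lines;
}
```

Neither tool’s report is likely to be a superset of the other’s on code like this, so comparing the two result sets is where the overlap, and the gaps, become visible.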