I recently read the article “Penetration Testing: Dead in 2009” by Bill Brenner. In the article Mr. Brenner follows a small debate around the idea that over time penetration testing will be largely replaced by preventative checks.
The debate opens with some quotes from Brian Chess from Fortify Software. Fortify creates code analysis tools that scan for security concerns and adherence to good secure coding practices. That potential bias aside, I suspect that Mr. Chess’ statement — that “Customers are clamoring more for preventative tools than tools that simply find the weaknesses that already exist […]. They want to prevent holes from opening in the first place” — is absolutely true. I know I clamor for those tools, and I’m just a lowly test manager.
I’m a big fan of the work companies like Fortify, IBM and HP are doing in this space. If my project team can find a potential issue before we deploy the code, I’m all for it. It saves us time and helps us focus on different and potentially higher-value risks. However, I’ve yet to see a tool that can deal with the complexity of a deployment environment (setup, configuration, code, etc.), and while I’m a big believer in doing everything you can up front (design, review, runtime analysis, etc.), I believe there will always be a role for a skilled manual investigation of what gets deployed.
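To make the up-front value concrete, here is a minimal sketch (in Python, purely illustrative and not tied to any particular vendor's tool) of the kind of flaw static analyzers reliably flag before deployment: user input concatenated directly into a SQL statement. The table, function names, and payload are all made up for this example.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern analyzers flag: untrusted input concatenated into SQL,
    # allowing injection (e.g. username = "x' OR '1'='1").
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles the value safely.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # injection returns every row: 2
print(len(find_user_safe(conn, payload)))    # no user literally named that: 0
```

A scanner catches this because the vulnerable pattern is visible in the code itself; that is exactly the class of problem worth eliminating before anything ships.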
Testing (penetration or otherwise) is about applying skill and judgment to uncover quality-related information about the product. That’s not just code; it’s more than that. A typical penetration tester today covers more ground than today’s automated tools can. While there are different tools for testing various components (some focus on code, some on the network, etc.), and they should absolutely be used, those tools will never uncover all the potential issues with a system. And, sometimes worse, they can lead to a false sense of security.
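For contrast with the injection example above, here is a hypothetical sketch of the kind of flaw that tends to slip past automated tools: a missing authorization check. The code below is syntactically clean, uses no "unsafe" APIs, and would pass most scanners, yet any logged-in user can read any account. The data and function names are invented for illustration; a human tester finds this by simply changing an ID in a request.

```python
# Toy in-memory "database" for the sketch.
ACCOUNTS = {
    101: {"owner": "alice", "balance": 500},
    102: {"owner": "bob", "balance": 900},
}

def get_account(requesting_user, account_id):
    # Logic flaw: nothing verifies the requester owns this account.
    # No dangerous API, no tainted string, so a code scanner sees
    # nothing to flag here.
    return ACCOUNTS[account_id]

def get_account_fixed(requesting_user, account_id):
    # The fix is a judgment call about intent, not syntax.
    account = ACCOUNTS[account_id]
    if account["owner"] != requesting_user:
        raise PermissionError("not your account")
    return account
```

Whether an ID lookup *should* be restricted is a question about the system's intended behavior, which is exactly the sort of judgment a skilled manual tester applies and an automated pattern-matcher cannot.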