This article can also be found in the Premium Editorial Download "Business Information: Advanced analytics, in-memory technology push information limits."
I was shocked the first time I heard the words "security" and "software lifecycle" used in the same sentence. Wasn't security something that happened after the development process, not during it? My understanding of what security entailed changed forever 10 years ago, when, as a reporter covering software development, I was assigned the brand-new application security beat.
And what a great beat it was. Venture capital firms were investing serious money in security startups such as Fortify Software. Along with SPI Dynamics and Watchfire, among others, Fortify advanced an idea most software professionals hadn't heard before: Instead of waiting for the security team to erect a fortress around Web applications and data, developers and testers could rely on tools -- source code analyzers and dynamic pen testers -- to help create code that was inherently harder to attack. Surely this new approach of building security into the software lifecycle could help stem the tide of high-profile data thefts that kept making headline news -- or so the thinking went.
There was great optimism and excitement around emerging application security tools and techniques in 2003. It seemed like just a matter of time before they took hold and secure development and testing became standard operating procedure for software teams. Analysts predicted those practices would find a formal place in the software lifecycle once the big development toolmakers snapped up the application security startups.
We were wrong about all of that.
IBM acquired Watchfire in 2007. HP bought SPI Dynamics that same year, and in 2010 HP also bought Fortify. But none of these acquisitions got much attention, and there is no discernible evidence they had any impact on software teams.
Here we are in 2013. Data breaches still make headline news. Software teams remain unsure about where application security fits into the development lifecycle and who is responsible for testing. Why are we still talking about these same old things at a time when software professionals should be focused on the new security challenges posed by mobile applications and those that run in the cloud?
Lack of leadership, training, time and money -- each has hindered the widespread adoption of application security development and testing practices. People and corporate culture, not technology concerns, derailed application security efforts.
In this first installment of Quality Time, I offer four reasons why app security hasn't worked out the way we thought it would.
Reason 1: The answer to the question 'Who is responsible for security testing?' is complicated.
Should the security group run source code analysis and dynamic pen testing scanners? These tools didn't align well with the skill sets of security pros, who were accustomed to building walls around networks to keep intruders out. Developers' skills were well-suited to source code analysis, but their initial experience with the tools didn't go well (see Reason 2). Quality assurance testers appeared to be the right fit for dynamic scanners, but QA testers weren't accustomed to dealing with security issues and weren't trained to run the tools. Add to all this a leadership void: no one stepped up to tell developers, testers or security pros what to do about application security. So no one did anything.
Reason 2: App security tool makers made marketing mistakes.
Source code analyzers were initially marketed to software developers. And developers hated them from the get-go. They rightly complained that the scans took too long to run and turned up far too many false positives, each of which had to be triaged by hand. The false-positive problem has long since been fixed, but among many developers the poor first impression of app security tools lingers.
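To make the complaint concrete, here is a minimal sketch of the two kinds of findings an early source code analyzer would surface. The function and table names are hypothetical, not taken from any vendor's tool: the first function is a true positive (user input concatenated into SQL), while the second is the sort of false positive naive taint analysis produced in volume, flagging a parameterized query simply because user input reaches a database call.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # True positive: user input is concatenated into the SQL string,
    # so a crafted value can rewrite the query -- classic SQL injection.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver escapes the input. Early taint
    # analysis often flagged code like this anyway, because "tainted"
    # data still reaches the database call -- a false positive.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Tiny in-memory database to show the difference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injection leaks every row
print(find_user_safe(conn, payload))    # parameterized query matches nothing
```

Sorting the second case from the first is exactly the triage work developers resented, and early tools left far too much of it to them.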
Reason 3: Source code analyzers pinpointed poorly constructed code -- and developers took it personally.
Aiming to sell developers on the power of source code analysis, toolmakers ran demos on developers' own code. It was a terrible idea. Developers -- and who could blame them? -- didn't appreciate being told the code they had written was vulnerable to attack. That got application security off to a bad start with the very audience that had the most to gain from adopting these tools.
Reason 4: Application security tools matured just before the economy's 2008 downward spiral.
Security testing was just one more task on developers' and QA pros' to-do lists. Many teams were already struggling to do more with less following staff cutbacks. With no sign of new hires on the horizon, application security was not a priority.