Does this sound familiar to you? The team has worked through three milestones, building new functionality on top
of older functionality. With all of the new features tested and accepted, it's time to start a regression pass. The release date is just four weeks away, and the pressure is mounting. Suddenly someone from the sales organization asks your director, "What are we doing about application security?" and everyone looks like a deer in headlights.
Teams often save their security work for the last phase of a release and pay a high price in last-minute, drawn-out efforts. The key is instead to address security in each milestone or phase of the lifecycle.
Start with planning
Let's start with the planning milestone. This is a phase of complete optimism (sometimes illusion, but that's beyond the scope of this article) -- the sky is the limit and planners are in seventh heaven. But are product planners considering key security questions such as privacy, secure feature design or security feature work? Or is the team focused solely on checking features off the queue? Many security issues can be solved with good up-front planning. Privacy issues, for example, can be largely mitigated in the planning phase. A simple rule of thumb: if the company doesn't need a piece of private information, don't collect it.

Are you building a B2B application that shares sensitive information with a third party (perhaps a shared shopping cart)? Bring in security experts to discuss how to encrypt information in transit, how to limit the information shared, and how to handle shared session management. An afternoon of planning can prevent weeks of throwaway work during the development phase. Bringing in a tester to evaluate product plans from an OWASP Top-10 perspective (yes, some exploits can be spotted as early as planning) will also save time in the long run.
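That rule of thumb can even be made mechanical at planning time. Here is a minimal sketch, assuming a hypothetical field inventory in which every field a feature plans to collect must record the business need behind it; the field names and needs are illustrative, not from any real schema:

```python
# Hypothetical planning-phase check: every field a feature plans to collect
# must be justified by a declared business need, or it gets flagged for removal.
PLANNED_FIELDS = {
    "email":        "account recovery",
    "shipping_zip": "tax calculation",
    "birth_date":   None,   # no stated need -- a candidate to drop
    "ssn":          None,   # no stated need -- definitely drop
}

def unjustified_fields(planned):
    """Return the fields with no documented business need behind them."""
    return sorted(name for name, need in planned.items() if need is None)

if __name__ == "__main__":
    for field in unjustified_fields(PLANNED_FIELDS):
        print("Do not collect '%s': no business need recorded" % field)
```

Walking a review like this during planning forces the "do we actually need it?" conversation before any code exists.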
Scan during development
Not many people think about the security work that happens during the development phase, but there is one key task that should be performed frequently while developers write code: static code or binary analysis. Numerous commercial tools can scan a codebase for common security flaws. The best tools go beyond simply grepping the code, evaluating the execution path to make sure a flaw is actually exploitable. These automated tools can be configured to reduce false positives and to provide frequent feedback on the security of new and existing code. In my experience, the lion's share (well over 75%) of OWASP Top-10 flaws can be discovered by automated tools while developers are writing code.
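To make the kind of flaw these scanners hunt for concrete, here is a minimal sketch of the classic SQL injection pattern an analyzer would flag, next to the parameterized fix. The table and data are illustrative, not from any real codebase:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # String-built SQL: exactly the injection pattern static analyzers flag
    query = "SELECT id FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles escaping, closing the hole
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

# Demo on an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                     # classic injection input
leaked = find_user_unsafe(conn, payload)    # leaks every row in the table
safe = find_user_safe(conn, payload)        # returns nothing, as it should
```

A good analyzer traces the user-supplied `username` into the string-built query and reports it, while recognizing the parameterized version as clean.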
Test in two phases
I'm going to break the testing phase into two parts: integration testing of new features and regression testing. Our nightmare scenario above leaves all security testing to the regression phase, but a team can gain confidence and schedule time by front-loading many of those tests into the integration phase. All new functionality should undergo an OWASP Top-10 review -- especially pages that have been around for a long time but are being updated in this release. Threats change all the time, and what was acceptable a few years ago may present a significant vulnerability today. As a manager, I try to schedule at least a day of deep security testing during each integration milestone (a day may not seem like much, but my most recent organizations have all been agile, with more frequent, shorter milestones). That day-by-day approach makes tractable an amount of testing that would otherwise seem overwhelming. Suppose your site has 30 different pages -- how will your team ever run every OWASP Top-10 test against each one? If your organization touches each page at least once a year through ongoing development, and you depth-test each page as it comes up in the integration phase, then over the course of a year you will evaluate every page at least once.
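The once-a-year coverage math can be sketched as a simple rotation. This is an illustrative scheduler under stated assumptions (a 30-page site, twelve milestones a year), not a prescribed process:

```python
def rotation_schedule(pages, milestones_per_year):
    """Round-robin pages across milestones so each page lands in exactly
    one depth-testing slot per year."""
    schedule = [[] for _ in range(milestones_per_year)]
    for i, page in enumerate(pages):
        schedule[i % milestones_per_year].append(page)
    return schedule

pages = ["/page%02d" % i for i in range(30)]   # the 30-page site from the text
plan = rotation_schedule(pages, 12)            # e.g. one milestone per month
```

Each milestone's bucket becomes that milestone's depth-testing list; with 30 pages spread over 12 milestones, no single milestone carries more than three pages.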
There's nothing special about the security-related testing that occurs during this phase. Standard OWASP Top-10, SANS Top 25 and other checklists are common activities that add high value to the security testing effort. If you are testing a more mature application (or have tools that automate much of the OWASP and SANS testing), you can invest in fuzzing and other more advanced efforts.
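Fuzzing at its simplest means throwing hostile input at a handler and watching for anything other than a clean rejection. Here is a toy harness; the parser and its planted bug are illustrative, not from any real application:

```python
import random

def parse_age(raw):
    """Toy input handler with a planted bug (illustrative only)."""
    text = raw.decode("utf-8", errors="replace").strip()
    if text[0] == "-":                       # bug: IndexError on empty input
        raise ValueError("age must not be negative")
    if not text.isdigit():
        raise ValueError("age must be digits")
    return int(text)

def fuzz(target, runs=500, seed=7):
    """Feed boundary cases plus random bytes to `target`; anything other
    than a clean ValueError is a finding worth a bug report."""
    rng = random.Random(seed)
    corpus = [b"", b" ", b"-1", b"0" * 64]   # deterministic boundary cases
    corpus += [bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
               for _ in range(runs)]
    findings = []
    for data in corpus:
        try:
            target(data)
        except ValueError:
            pass                             # expected rejection of bad input
        except Exception as exc:
            findings.append((data, type(exc).__name__))
    return findings
```

Running `fuzz(parse_age)` surfaces the empty-input crash immediately -- the empty byte string slips past validation and raises an unexpected `IndexError` instead of a clean rejection.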
The next test phase is, of course, the regression phase. During this phase the application is complete and the team is focused on fit-and-finish work -- making sure the application is stable after new functionality has been implemented. This is an excellent time to run through a series of checklists: spot-testing pages for OWASP Top-10 exploits (not every page, but representative samples of core page classes), validating deployment configurations, confirming team best practices, and so on. In our worst-case scenario, security testing is concentrated in this phase and takes days, if not weeks, to complete and stabilize. With a more rational distribution of testing across the lifecycle, this phase can be completed in a day or two.
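Sampling "representative samples of core page classes" can be sketched as follows; the page classes and URLs below are hypothetical:

```python
import random

# Regression spot check: test one representative page per page class rather
# than every page. Classes and URLs are illustrative assumptions.
PAGE_CLASSES = {
    "login":    ["/login", "/sso/login"],
    "checkout": ["/cart", "/checkout", "/checkout/confirm"],
    "search":   ["/search", "/search/advanced"],
}

def spot_sample(page_classes, seed=0):
    """Pick one page from each class; vary the seed between releases so
    different representatives get exercised over time."""
    rng = random.Random(seed)
    return {cls: rng.choice(pages) for cls, pages in page_classes.items()}

sample = spot_sample(PAGE_CLASSES)
```

Because pages within a class share templates and code paths, one representative usually exposes a flaw common to the whole class.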
Deployment and maintenance
A final phase in the lifecycle is deployment, stabilization and support. Many core vulnerabilities are actually introduced during deployment -- ports left open, default passwords left in place, SecureCookies settings reverted to defaults, and so on. It's critical that the team's security experts are available and given time to secure the deployment, in conjunction with the company's IT security administrators.
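Part of that deployment checklist can be automated. Here is a minimal sketch that inspects Set-Cookie header values for the Secure and HttpOnly attributes; the header strings are illustrative samples, not from any real deployment:

```python
# Deployment spot check: session cookies should ship with Secure and HttpOnly.
REQUIRED_FLAGS = ("secure", "httponly")

def missing_cookie_flags(set_cookie_header):
    """Return the required attributes absent from a Set-Cookie header value."""
    attrs = {part.split("=", 1)[0].strip().lower()
             for part in set_cookie_header.split(";")}
    return [flag for flag in REQUIRED_FLAGS if flag not in attrs]

hardened = "session=abc123; Path=/; Secure; HttpOnly; SameSite=Lax"
default  = "session=abc123; Path=/"   # the kind of default that slips into prod
```

A check like this, run against the live deployment's responses, catches the configuration regressions that creep in between staging and production.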
Many look at security work scattered throughout the entire product lifecycle and think it's just too costly. However, just like testing itself, the mantra "the earlier, the better" can be rewritten as "the earlier, the cheaper." By discovering security flaws in the planning and development phases, teams reduce lost time and increase effectiveness. This distributed approach also reinforces a culture of security, which in and of itself cuts down on flaws and poorly planned features.