Ten steps to better application security testing strategies

Address app testing strategy concerns at each stage of the application lifecycle and learn about tools and techniques to boost security.

Most software and test professionals believe security should be addressed after, not during, the application development process, according to industry experts. While developers and test professionals are familiar with app testing and security concepts, most work for organizations that lack comprehensive application security strategies. Without a security mandate from management, developers and testers fall prey to business pressures to deliver apps faster.

"Just get something out there, and we will take care of security later -- that is the mind-set," said Brian Bertacini, president and founder of AppSec Consulting, an application security consultancy in San Jose, Calif.

"It is rare to see people design security in all the right areas of the application," added Kevin Beaver, owner of independent information security consultancy Principle Logic LLC in Acworth, Ga. "Software developers and QA professionals don't have the time or resources to dedicate to application security. So they wing it, do the best they can and they fix it on the back end," he said.

SearchSoftwareQuality.com asked application security experts to identify and address security concerns at each stage of the application lifecycle and to suggest tools and techniques to boost security. Here is the advice they offered.

1. Conduct threat modeling at the outset of an app development project. Threat modeling refers to the process of figuring out the different ways an attacker could harm an application before that application is actually developed, said Wendy Nather, research director for the enterprise security practice at 451 Research LLC, a research firm based in New York. "Can you break into it, commit fraud, steal from it? That is what you are trying to answer," she said. The best threat models graphically depict things such as how data will flow and how it will be stored, said Dan Cornell, a principal at security consultancy Denim Group Ltd. in San Antonio. "The idea is to proactively determine what kinds of security things can go wrong." It's crucial to understand these issues at the outset of the development process because it's cheaper to address security concerns when an app is "just a drawing on a whiteboard," he said.
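The review Cornell describes can start very simply. A minimal sketch, assuming the STRIDE threat categories and two hypothetical data flows taken from a whiteboard diagram, is to pair every flow with every threat category so nothing is skipped:

```python
# A minimal threat-model sketch: review each data flow on the diagram
# against the STRIDE threat categories. The flows below are
# hypothetical examples, not a complete model.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Data flows from the architecture diagram: (source, destination)
data_flows = [
    ("browser", "web server"),
    ("web server", "database"),
]

def enumerate_threats(flows):
    """Pair every data flow with every STRIDE category to review."""
    return [(src, dst, threat) for src, dst in flows for threat in STRIDE]

for src, dst, threat in enumerate_threats(data_flows):
    print(f"{src} -> {dst}: consider {threat}")
```

The output is a checklist, not a verdict; each line still needs a human to ask Nather's question -- can you break in, commit fraud, steal -- for that flow.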

2. Define basic requirements that address security. Developers today -- even those without specialized security training -- do a decent job of dealing with the rudimentary aspects of application security: role management, authentication, password-based access control. But there are still things to watch out for, Beaver said. Managing user roles -- who can see what and what they can do with that information -- is particularly complicated for multi-tenancy and cloud applications, he said. "You have to architect them correctly." A good security practice is to do what Bertacini calls "instill a philosophy of least privilege." When specifying who can see what info and what they can do with it, it is best to grant privileges only on an as-needed basis. "Your starting point is no privileges," he said. Even security concepts as basic as password access can get you in trouble if you don't design them properly from the outset. Beaver said he has seen apps that work with any password as well as ones that don't allow the use of special characters in passwords. "Really? In 2012? You've got to be kidding." These things are hard to go back and redo later on, he said.
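Bertacini's "starting point is no privileges" translates directly into a default-deny access check. A brief sketch, with hypothetical role and permission names:

```python
# A least-privilege sketch: privileges start empty and are granted
# explicitly per role. Role and permission names are hypothetical.

ROLE_GRANTS = {
    "viewer": {"read_report"},
    "editor": {"read_report", "edit_report"},
}

def is_allowed(role: str, action: str) -> bool:
    """Default-deny: unknown roles and ungranted actions are refused."""
    return action in ROLE_GRANTS.get(role, set())

assert is_allowed("editor", "edit_report")
assert not is_allowed("viewer", "edit_report")   # not granted
assert not is_allowed("intern", "read_report")   # unknown role: denied
```

The design choice that matters is the direction of the default: an unrecognized role or action falls through to "no," so forgetting to configure something fails safe rather than open.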

3. Come up with abuse cases. Abuse cases, or possible attack scenarios, are at the heart of the requirements phase, and yet many companies today overlook this step. "Teams are accustomed to coming up with a list of functions an app should carry out, but a key aspect of security is specifying what an app should not do," Cornell said. To compile a list of abuse cases, he advised companies to think about how an attacker could misuse functionality. "For example, Amazon.com account holders can cancel their own orders, but they cannot cancel those created by other account holders," he said. Nather offered another example: "A customer can look at the bank balance in his accounts but cannot look at balances for other people's accounts." Your job is to make sure none of these abuse scenarios come to pass, Cornell said. "These are things you can find only in manual testing -- if you wait until later to fix them, it's usually an attack."
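An abuse case can be written down as a test alongside the ordinary use case. A sketch of Cornell's order-cancellation example, using a hypothetical Order class standing in for the real application code:

```python
# An abuse case expressed as a test: an account holder may cancel
# their own order but not one placed by someone else. The Order
# class is a hypothetical stand-in for the real application code.

class Order:
    def __init__(self, order_id, owner):
        self.order_id = order_id
        self.owner = owner
        self.cancelled = False

    def cancel(self, requester):
        if requester != self.owner:
            raise PermissionError("cannot cancel another user's order")
        self.cancelled = True

# Use case: owners can cancel their own orders.
order = Order(1001, owner="alice")
order.cancel("alice")
assert order.cancelled

# Abuse case: make sure this scenario cannot come to pass.
other = Order(1002, owner="alice")
try:
    other.cancel("mallory")
except PermissionError:
    pass
assert not other.cancelled
```

Writing the "should not" as an executable assertion keeps the abuse case from being forgotten once the requirements document is shelved.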

4. Define rules for input validation. Nather views this process as figuring out the trust zones in your application. "What you want to know is which parts of the system trust each other, and should they trust each other?" Once you figure that out, you can define rules such as the following:

Don't trust data that is coming in from the Internet.

If you pass data inward to a second tier of architecture -- from the Web server to a database, for example -- check the data before accepting it.

Validate all data moving in both directions, in and out of the application.
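The rules above can be sketched as validation at each trust boundary: the outer tier rejects anything from the Internet that fails an allowlist, and the inner tier checks again before passing data to the database. The field rule below is a hypothetical example:

```python
# A sketch of the input-validation rules: data crossing a trust
# boundary is checked before it is accepted, at each tier.
# The username rule is a hypothetical example.

import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Allowlist check: reject untrusted input that doesn't match."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def store_username(raw: str, db: list):
    """The inner tier re-checks the data before accepting it."""
    name = validate_username(raw)    # validate again at this boundary
    db.append(name)

db = []
store_username("alice_01", db)
try:
    store_username("alice'; DROP TABLE users;--", db)
except ValueError:
    pass
assert db == ["alice_01"]
```

Note that both tiers validate, matching the rule that a second tier should not simply trust what the Web server hands it.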

5. Use source code analyzers. Source code analyzers scan apps as code is written, looking for vulnerabilities that an attacker could exploit to steal data. The idea behind them is to help developers write apps that are inherently more secure at the outset, in addition to addressing security concerns later in the application lifecycle. Early on, this class of tools, available from commercial software vendors as well as open source projects, got a bad rap because they tended to produce false positives. Source code analyzers identified as flaws things that weren't flaws at all, Bertacini said. "For instance, a source code analyzer might flag code that implements a routine for input-output filtering," he said. "When things like this happened, developers got overwhelmed and wondered whether they were wasting their time." But today these tools produce better results. "They are especially useful if you are just getting into secure development," Nather said. "You need to see the types of problems source code analyzers find, and, of course, couple this type of analysis with other types of testing." Cornell noted that source code analyzers are incredibly powerful tools. But they work most effectively when "you tone down the verbose rule set to produce less comprehensive but more manageable results."
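To make the class of findings concrete, here is the kind of flaw such tools commonly report and its fix -- a sketch, not the output of any particular analyzer. A query built by string concatenation would typically be flagged; the parameterized version would not:

```python
# The kind of flaw a source code analyzer flags, and the fix:
# string-built SQL (shown commented out as the anti-pattern)
# versus a parameterized query.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # Flagged: query assembled by string concatenation.
    # return conn.execute(
    #     "SELECT name FROM users WHERE name = '" + name + "'").fetchall()
    # Clean: a parameter placeholder keeps data out of the SQL text.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

assert find_user("alice") == [("alice",)]
```

Tuning the rule set, as Cornell suggests, means deciding which such patterns to report first rather than drowning developers in every match at once.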

6. Guide developers to write secure code. Another way to boost application security at the coding stage is to provide pre-existing libraries that implement common tasks in a secure fashion, Cornell said. Essentially, you are supplying code templates that model "here's how we do database access; here's how we build webpages that avoid cross-site scripting errors," he said, referring to a well-known vulnerability attackers use to steal data. "Developers have broad latitude to do all sorts of things when writing an application, and secure coding libraries can help keep things in check." In addition, providing libraries can help ease the often adversarial relationships between security professionals and software developers. "Security can say, 'Use these libraries, and I'll leave you alone,'" he said.
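A secure-by-default helper of the kind such a library might provide could look like the following sketch, where every value interpolated into a page is HTML-escaped so individual developers do not have to remember to do it. The render_fragment function is hypothetical:

```python
# A sketch of a secure coding library helper: all values placed into
# a page are escaped, blocking cross-site scripting by default.
# render_fragment is a hypothetical example, not a real library API.

import html

def render_fragment(template: str, **values) -> str:
    """Fill a template, HTML-escaping every supplied value."""
    safe = {k: html.escape(str(v)) for k, v in values.items()}
    return template.format(**safe)

page = render_fragment("<p>Hello, {name}</p>",
                       name="<script>alert(1)</script>")
assert "<script>" not in page    # the payload arrives inert
print(page)
```

The point of the template approach is exactly what Cornell describes: the safe path is the easy path, so security can say "use this and I'll leave you alone."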

7. Use dynamic scanners to simulate attacks during the QA cycle. Also known as black box testing tools, dynamic scanners "attack" an application in much the same way a hacker would, in order to pinpoint code that could be exploited. Commercial software vendors, as well as open source projects, offer these tools, which are designed to identify code that is vulnerable to SQL injections and other known security vulnerabilities, Cornell said. A SQL injection can occur when an attacker includes portions of SQL statements in the entry field of a Web form, instructing the database to dump its contents to the attacker, for example. The job of QA professionals using dynamic scanners is to pinpoint the vulnerability, not pull thousands of records from the database, Cornell said.
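To show the mechanics a scanner is probing for, here is a deliberately vulnerable lookup and the classic payload that exploits it -- a self-contained demonstration against an in-memory database, not production code:

```python
# How the SQL injection described above works: a payload in a form
# field extends the query. The lookup is deliberately written with
# string concatenation to demonstrate the flaw.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a1"), ("bob", "b2")])

def vulnerable_lookup(form_value):
    # VULNERABLE: user input is spliced directly into the SQL text.
    query = ("SELECT name, secret FROM users WHERE name = '"
             + form_value + "'")
    return conn.execute(query).fetchall()

# Normal input returns one row...
assert vulnerable_lookup("alice") == [("alice", "a1")]
# ...but the classic payload rewrites the WHERE clause to match
# every row, dumping the table to the attacker.
assert vulnerable_lookup("x' OR '1'='1") == [("alice", "a1"),
                                             ("bob", "b2")]
```

A QA engineer with a dynamic scanner stops at demonstrating that the second call returns extra rows; as Cornell notes, the job is to pinpoint the hole, not to pull the records.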

8. Test the application against the deployment environment. The test environment should mirror the environment in which the app will be deployed as closely as possible, Nather said. A key thing to look at here is whether access to data sources is secure.

9. Test for general resiliency. Can the app recover easily when the connection is disrupted? Or when part of your cloud goes down? Or a batch job fails? These are things you need to look at, Nather said. "Things will go wrong, so make sure the system can recover."
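One common recovery measure worth testing is retry with backoff, so a briefly disrupted connection does not fail the whole job. A sketch, with the failing service simulated:

```python
# A sketch of one resiliency measure: retry a flaky operation with
# exponential backoff. The failing service below is simulated.

import time

def with_retries(operation, attempts=3, delay=0.01):
    """Retry an operation, backing off between attempts."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                          # out of attempts
            time.sleep(delay * (2 ** attempt))  # exponential backoff

calls = {"n": 0}
def flaky_service():
    """Fails twice with a dropped connection, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection disrupted")
    return "ok"

assert with_retries(flaky_service) == "ok"
assert calls["n"] == 3
```

Resiliency testing in Nather's sense means deliberately injecting failures like this one -- dropped connections, failed batch jobs -- and verifying the system recovers rather than hoping it does.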

10. Retest apps in production on a regular basis. Even when an app gets the go-ahead from security experts, keep on testing, Nather said. "New security vulnerabilities come up all the time." In addition, older components of an application can get redeployed as part of a build, potentially introducing vulnerable code, she said. Application security professionals need to work closely with operations on a continual basis, Cornell added. "Even if an app with a SQL injection vulnerability, for example, makes it into production, some compensating control -- such as secure access to the database -- can prevent an attacker from getting in."
