Why should QA professionals play a bigger role in security testing? They understand how an application fits together and where the connection points are. This big-picture perspective is crucial for carrying out security testing basics, said Jeffery Payne, CEO of software consultancy Coveros. "Testers are good at figuring out at which points an application may be at risk," he said. They are well suited to looking at an application and asking: "Where are the crown jewels hidden and what path will an attacker take to reach them?"
It's appropriate for testers to develop some basic test scenarios around security, said Gerie Owen, QA lead for a utility company. "They can look at a user interface and ask, 'What happens when you enter your credit card number? What happens when you start the transaction, put in your credit card number and then exit the application?'" Testers are well suited to coming up with questions like this, she said.
Having test professionals assume some responsibility for security testing basics is important for two reasons. First, application security is a growing concern for all software and test organizations as security breaches continue to make headline news. Second, getting testers involved can help solve a problem that plagues most software development organizations today, said Payne. "Where in the application lifecycle does security testing fit?"
Payne did not suggest that testers assume sole responsibility for application security or that they conduct static analysis testing, which involves examining the code itself. "It's difficult for testers to play a huge role in code review," he said. Owen was quick to agree: "I would hesitate to sign off on security based on testing carried out by my team," because aspects of security, such as finding code vulnerabilities, fall outside the team's collective skill set.
Code review remains the responsibility of developers or the security team, said Payne. "But once you get out of the code and into the interfaces, the APIs, the design characteristics, testers can play a lead role in security," he said. "Testers are holistic thinkers; they see the big picture."
In this article, security and QA experts offer advice for getting testers involved in application security.
Get on board early and work to improve requirements from a security perspective
"Testers are not always invited to the table when requirements planning gets underway. But that shouldn't stop you from getting involved," Payne said. "Make your way into the meeting and say you are there for test planning purposes."
Your job is to help developers and business stakeholders specify security requirements with enough precision that they can be tested. Requirements are often described in such broad, vague terms that they are subject to interpretation, said Payne. "'A strong password shall be chosen when registering with the system,'" he said, offering an example. "But what is the definition of 'strong'? How many characters; how many special characters? And is the password case-sensitive?" he wondered.
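Pinning the requirement down turns it into something a tester can verify. As a minimal sketch, suppose the team agrees that "strong" means at least 12 characters with upper- and lowercase letters, a digit, and a special character (these thresholds are illustrative assumptions, not a standard):

```python
import re

# A hypothetical, testable refinement of "a strong password shall be chosen":
# at least 12 characters, with upper- and lowercase letters, a digit, and a
# special character. The exact thresholds are assumptions for illustration.
def is_strong_password(password: str) -> bool:
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

# Each precise rule yields an unambiguous test case.
assert not is_strong_password("password")          # too short, no digit/special
assert not is_strong_password("Passw0rdPassw0rd")  # no special character
assert is_strong_password("Correct-Horse-9ty")
```

Once the rules are this explicit, "strong password" is no longer up to each developer's interpretation; every clause maps to a test case.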
By nature, testers are good at insisting on specificity because they understand that vague requirements can't be tested. But developers have a different mindset altogether, said Payne. "They think they know how to code the requirement anyway and believe it is up to them to determine exactly what 'strong password' means," he said, noting that he sees this happen all the time.
Start by testing the user interfaces
User interfaces are the doors that attackers attempt to open when they set out to steal data from an application, said Payne. "You can't let them gain access." The key way to prevent access is to make sure that the application validates all data entered by a user. "Data must be clearly designated as data, so it cannot be executed and used to steal information," said Payne. According to the Open Web Application Security Project (OWASP), failure to test for data validation is the cause of almost all major vulnerabilities in Web applications, including cross-site scripting errors, SQL injections, and buffer overflows.
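The "data must stay data" point is the crux of SQL injection. A small self-contained sketch, using Python's built-in sqlite3 module and a toy table, shows the difference a tester is probing for when feeding hostile input into a form field:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

malicious = "nobody' OR '1'='1"

# Unsafe: user input is spliced into the SQL string, so it executes as code.
unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()
assert unsafe == [("hunter2",)]   # the injected OR clause leaks every row

# Safe: a parameterized query keeps the input as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
assert safe == []                 # no user is literally named that string
```

A tester doesn't need to write the fix, but typing payloads like `' OR '1'='1` into input fields and watching for unexpected results is exactly the kind of UI-level validation check described above.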
Figure out where the crown jewels are and secure all possible pathways
Payne said QA professionals should approach security testing with a risk management mindset. This enables you to focus on and test the areas of an application that are most vulnerable, he said. "If this application is attacked, what is the attacker looking for? What are the crown jewels and where are they hidden?" Crown jewels are things like passwords or credit card and Social Security numbers. More often than not, they are hidden in older databases that have been around for a very long time, he said. "They live in legacy COBOL applications, which were never intended to be exposed to the Web." That means it's important to do dynamic pen testing on this older code, Payne said. "A lot of vulnerabilities are being discovered there. Some were known and not corrected. They weren't deemed important enough and nobody got around to fixing them."
Dynamic pen testing falls squarely in the domain of the QA team, noted Dan Cornell, CTO of the Denim Group, a Texas-based software consultancy that focuses on security. "These tools produce results that most testers are able to understand with some training." The tools often run automatically when code is checked in to a repository, so testing teams responsible for running and maintaining a continuous integration environment can help with security by incorporating security testing tools into their toolchains, he said.
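In practice the CI toolchain would invoke a real scanner such as OWASP ZAP, but the shape of an automated dynamic check is easy to sketch. The toy server and the "leaky Server header" finding below are invented for illustration; the point is that a check like this can run on every commit and fail the build:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A toy endpoint standing in for the application under test. A Server header
# that leaks a version string is the kind of finding a dynamic scan flags.
class Handler(BaseHTTPRequestHandler):
    server_version = "DemoServer/1.0"   # deliberately leaky for the demo

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):       # keep CI output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

# The CI-side check: flag the build if response headers reveal version details.
resp = urlopen(url)
server_header = resp.headers.get("Server", "")
findings = []
if any(ch.isdigit() for ch in server_header):
    findings.append("Server header leaks a version: %r" % server_header)

assert findings, "expected the demo server to be flagged"
server.shutdown()
```

Wiring a check like this (or a full scanner) into the pipeline is how a test team turns dynamic security testing from an occasional audit into a routine gate.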
Understand the strengths and weaknesses of each developer and test accordingly
When Payne conducts one-day security assessments for his consulting clients, the first thing he does is talk with the developers on the software team. His goal is to size up each developer's broader sense of the application and apply that knowledge to the security testing strategy. "You're trying to figure out if they understand all the interfaces, all the connection points," he said. "You ask questions, get them to talk about their work, and you can quickly determine which developers know what they are doing and which ones don't."
Payne recommends that test professionals take a similar approach, even though they don't have the same authority as an outside consultant engaged by the company. "But testers know their developers -- which ones are strong and which ones are weak. And they know which parts of the application each has written," he said. "You map that knowledge to the application and scrutinize parts written by weaker developers more closely."
Test what the application can't do and pay close attention to error messages
QA pros are accustomed to testing functional requirements -- what the application can do. But it's also crucial to consider what the application can't do, said Payne. "Security is all about 'the system shall not do this.'" He offered some examples. "The user can access the application with a valid ID and password. After three attempts, [for example], the user is locked out," he said. The system should not allow the user to try forever. If it does, an attacker can apply an automated program to it, trying out different passwords until the code is cracked, he said.
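The lockout rule is a classic negative test. A minimal sketch of the "shall not" requirement, with illustrative class and method names, might look like this:

```python
# A minimal account model with the "lock after three failed attempts" rule,
# plus the negative test a QA engineer would write. Names are illustrative.
class Account:
    MAX_ATTEMPTS = 3

    def __init__(self, password):
        self._password = password
        self.failed_attempts = 0
        self.locked = False

    def login(self, password):
        if self.locked:
            return False
        if password == self._password:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= self.MAX_ATTEMPTS:
            self.locked = True
        return False

acct = Account("s3cret!")
for _ in range(3):
    assert not acct.login("wrong-guess")
assert acct.locked
# Crucially, even the correct password is refused once locked; otherwise
# an automated program could simply keep guessing forever.
assert not acct.login("s3cret!")
```

The last assertion is the one functional testing tends to miss: it verifies what the system must not do, not what it can.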
Also important for testers to evaluate are the error messages the application produces when a login fails. "You never want to say, 'Your password is invalid,' because you are giving away information," said Payne. A better approach is, "Your user ID or password is invalid," he said. "Don't ever present information that is useful to an attacker."
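A tester can turn Payne's rule into an automated check: the failure message must be identical whether the user ID or the password was wrong, so an attacker can't enumerate valid accounts. A small sketch with invented names:

```python
# Toy credential store and login routine; names are illustrative.
USERS = {"alice": "hunter2"}
GENERIC_ERROR = "Your user ID or password is invalid."

def login_message(user_id, password):
    if USERS.get(user_id) == password:
        return "Welcome!"
    return GENERIC_ERROR   # same message whether the ID or password was wrong

# The test: "unknown user" and "known user, wrong password" must be
# indistinguishable, or an attacker can enumerate valid user IDs.
assert login_message("no-such-user", "x") == login_message("alice", "x")
assert login_message("alice", "hunter2") == "Welcome!"
```

The same indistinguishability check should also cover response timing in a real application, since a slow "wrong password" path versus a fast "no such user" path leaks the same information.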
Error messages crop up in other parts of an application as well. A key place to look is at connection points with databases and servers. "When an application links to a third-party system that is not available, an error message will appear," said Payne. "Many of the default error messages contain information that can aid an attacker." For example, Apache has been known to provide the name of the server, the version of the software, and the trace of calls it followed and could not resolve, he said. "That information can help an attacker get in."
This was first published in January 2013