There are two basic potential issues here: security controls slow down your application, or flaws in your application allow an attacker to unacceptably degrade application performance. The first scenario -- security controls causing application performance to degrade -- is a common concern.
Whenever I think about potential performance issues, I'm reminded of a quote from the great computer scientist and mathematician Donald Knuth: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." Most of the time, being proactively concerned about the performance impact of security controls is unwarranted. Instead, teams need to design security controls first and make optimization decisions later. Security controls should be optimized only if needed and should be based on quantitative testing rather than vague "rules of thumb" and developer intuition.
Encoding is a security control that should be applied in a context-aware manner wherever untrusted data crosses a trust boundary -- for example, when user input is written into an HTML page or a URL. Identifying where inputs cross these trust boundaries is critical for designing proper security controls, and selective application of encoding helps avoid the "over-encoding" problem many applications have. Encoding is a fast operation on most modern platforms and shouldn't be treated as a performance concern unless measurements show otherwise.
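To illustrate what "context-aware" means, here is a minimal sketch using Python's standard library. The same untrusted string needs different encodings depending on where it lands -- HTML-entity escaping in an HTML body, percent-encoding in a URL query string. The input value and the example.com URL are hypothetical.

```python
import html
import urllib.parse

# Hypothetical untrusted input crossing a trust boundary.
user_input = '<script>alert("xss")</script>'

# HTML body context: escape &, <, > and quotes as HTML entities.
safe_html = html.escape(user_input)
print(safe_html)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;

# URL query-parameter context: percent-encode reserved characters instead.
safe_url = "https://example.com/search?q=" + urllib.parse.quote(user_input)
print(safe_url)
```

Applying the HTML encoder to the URL context (or vice versa) would be exactly the kind of mismatch -- or "over-encoding," if both are applied -- that context-aware design avoids.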
Encryption is another common area of concern. It might be mandated by regulation or contract, so teams may not have a choice about whether -- or how -- to encrypt. Where there is a choice, developers should be selective about what they encrypt, and those decisions should be based on the organization's data classification policy. If encryption does create performance issues, it can often be offloaded to hardware.
The second scenario is that application flaws allow an attacker to degrade performance -- an event known as a denial-of-service (DoS) attack. Confidentiality, integrity and availability make up the CIA triad of security, but many teams focus on confidentiality and integrity at the expense of availability -- until they find they have an attacker-exploitable performance problem or some other attacker-induced outage. Many application performance and availability issues caused by such attacks are out of the hands of development teams.
Distributed denial-of-service attacks often target weaknesses in infrastructure and protocols that developers have no control over. Addressing these issues often requires specialized networking equipment and other services.
What should teams look for?
Generally, teams want to identify places where attackers can consume an asymmetric amount of resources -- where it costs the attacker very little to make your application do a lot of work. There are a number of specific examples of this. Conditions that are expensive for systems to handle, such as error handlers that send emails to alert system administrators, can overload servers if attackers can trigger the error condition cheaply.
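One common mitigation for the expensive-error-handler case is to throttle the costly side effect rather than remove it. Below is a minimal sketch (class and parameter names are my own, not from the article): at most one alert is actually sent per cooldown window, and suppressed repeats are coalesced into a count.

```python
import time

class ThrottledAlerter:
    """Send at most one alert per `cooldown` seconds; coalesce the rest."""

    def __init__(self, cooldown=300.0, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock            # injectable for testing
        self._last_sent = None
        self.suppressed = 0           # alerts swallowed since last send

    def alert(self, message, send=print):
        """Invoke `send` (e.g., an email function) unless still cooling down."""
        now = self.clock()
        if self._last_sent is None or now - self._last_sent >= self.cooldown:
            if self.suppressed:
                message += f" ({self.suppressed} similar alerts suppressed)"
            send(message)
            self._last_sent = now
            self.suppressed = 0
            return True
        self.suppressed += 1
        return False
```

With this in place, an attacker who triggers the error condition thousands of times per minute still costs you only one email per cooldown window, while administrators still learn how many events occurred.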
There are also situations where attackers can use inexpensive and renewable network bandwidth to consume non-renewable and finite resources such as server memory. A common example of this is systems that accept XML documents from the network and then process them using document-based methods, such as constructing an in-memory DOM rather than using event-based parsing, like SAX. Another example would be where attackers cause "expensive" queries to be executed against the database with simple requests.
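The DOM-versus-SAX trade-off mentioned above can be shown with Python's standard library: an event-based `xml.sax` handler processes elements as they stream past, so memory use stays roughly constant regardless of document size, whereas a DOM parser must hold the whole tree. This is a sketch, not hardened code -- for XML from untrusted sources, you would also want protection against entity-expansion attacks (the `defusedxml` package is one commonly used option).

```python
import io
import xml.sax

class ItemCounter(xml.sax.ContentHandler):
    """Count <item> elements without building an in-memory tree."""

    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        # Called once per opening tag as the parser streams the document.
        if name == "item":
            self.count += 1

# A document with many elements; a DOM parse would allocate a node per tag.
doc = b"<items>" + b"<item/>" * 1000 + b"</items>"
handler = ItemCounter()
xml.sax.parse(io.BytesIO(doc), handler)
print(handler.count)  # 1000
```

An attacker sending a large document still forces the event-based parser to do work proportional to the bytes received -- but not to allocate memory proportional to the element count, which is the asymmetry the DOM approach exposes.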
Security testing procedures for applications should include checks for application-level denial-of-service vulnerabilities. How much effort is spent on this particular area is up for debate. Teams with particular concerns should make sure this sort of testing is included. This will help identify potential issues early on and allow for optimizations or other countermeasures, such as rate limiting, to be applied.
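Rate limiting, mentioned above as a countermeasure, is often implemented with a token bucket: requests spend tokens, tokens refill at a fixed rate, and bursts up to the bucket's capacity are tolerated. A minimal sketch (names and parameters are illustrative, not a specific library's API):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)  # start full
        self.clock = clock             # injectable for testing
        self.last = clock()

    def allow(self):
        """Return True if the request may proceed, spending one token."""
        now = self.clock()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In practice the same idea is usually applied per client (keyed by API token or source address) so that one abusive caller exhausts only its own bucket, not the shared capacity of the application.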
The interaction of application performance and application security can be complicated. Teams should include proper security controls and avoid premature optimization while still planning to test applications for security and performance issues. This allows IT to focus resources on areas where there are actual issues to be addressed rather than relying on guesswork, which can lead to premature optimization and performance blind spots.