Our business is relying more and more on smart process applications to manage business-critical processes, which makes application security more important than ever. What's your advice for ensuring not only that the code is secure, but that the business logic is secure as well? I want to ensure that the functionality isn't exploitable as designed.
A smart process application typically requires the integration of a variety of disparate systems. Threat modeling is critical for applications like these because vulnerabilities tend to crop up based on the interaction and communication between different systems.
Automated application- and code-level scanning tools are typically powerless to identify security problems in an application's business logic and its interactions. Therefore, a program of manual inspection and review of the business logic is required to identify potentially serious issues. Threat models can help inform this assessment process by explicitly enumerating the various components and data flows in the system to identify potential areas of weakness. Creating "abuse cases" of potential attacks against a system is another effective technique for augmenting threat models with out-of-the-box thinking.
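To make the enumeration step concrete, here is a minimal sketch of a threat-model inventory in Python. The component names, trust-zone labels, and data flows are hypothetical illustrations, not a real modeling framework; the idea is simply that flows crossing a trust boundary are the first candidates for manual business-logic review.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    trust_zone: str  # e.g. "internal", "partner", "public"

@dataclass(frozen=True)
class DataFlow:
    source: Component
    dest: Component
    data: str

def crossing_trust_boundaries(flows):
    """Flag flows whose endpoints sit in different trust zones --
    prime candidates for manual business-logic review."""
    return [f for f in flows if f.source.trust_zone != f.dest.trust_zone]

# Hypothetical components of a smart process application
crm = Component("CRM", "internal")
rules = Component("RulesEngine", "internal")
partner = Component("PartnerPortal", "partner")

flows = [
    DataFlow(crm, rules, "customer records"),
    DataFlow(partner, rules, "proposed rule changes"),
]

risky = crossing_trust_boundaries(flows)  # only the partner-originated flow
```

Even a toy inventory like this gives reviewers a checklist of interactions to walk through when writing abuse cases.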
These assessments can be challenging because they have to be performed by teams with both an understanding of the underlying business purpose of the system and experience viewing systems in an adversarial manner. This combination of background and skillset can be very difficult to find in a single individual, and there is typically a learning curve involved in getting security testers up to speed on the system's possible failure modes.
Manual testing also tends to be time-consuming and does not scale as well as automated tools. Development teams working on a smart process application need to account for the time needed for these activities in order to include them in the development lifecycle.
If collaboration on business rules is a big part of a smart process application, it is critical to control who is allowed to collaborate and what impact their participation might have. Collaboration that is not properly regulated is particularly dangerous because it can create a situation where an adversarial "bad actor" is given some level of access to the system and potentially some ability to modify its behavior.
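One common way to regulate that kind of collaboration is role-based permission checks with a separate approval step. The roles, permission names, and functions below are hypothetical, a minimal sketch of the pattern rather than any particular product's API:

```python
# Hypothetical role-to-permission mapping for rule collaboration
PERMISSIONS = {
    "analyst": {"view_rules"},
    "rule_author": {"view_rules", "propose_rule"},
    "administrator": {"view_rules", "propose_rule", "approve_rule"},
}

def can(role, action):
    """Return True if the given role is granted the given permission."""
    return action in PERMISSIONS.get(role, set())

def propose_rule_change(role, rule_text, pending):
    """Queue a rule change for later approval; reject unauthorized roles."""
    if not can(role, "propose_rule"):
        raise PermissionError(f"role {role!r} may not propose rules")
    pending.append(rule_text)  # changes still require separate approval

pending_changes = []
propose_rule_change("rule_author", "cap discounts at 20%", pending_changes)
```

Keeping "propose" and "approve" as distinct permissions means no single collaborator can both introduce and activate a malicious rule.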
In addition to instituting a testing program for smart process applications, these systems should be designed so that their behavior can be audited to detect potentially malicious activity, and so that they retain a forensic record for post-incident analysis. Building these systems with auditability and forensics in mind is key, and possibly more important than pre-incident security assessments: development teams need to assume that complicated systems like these, driven by a variety of actors and inputs, are going to be subject to fraud and other misuse.
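A forensic record is only useful if it can be trusted after an incident. One illustrative approach, sketched below with Python's standard library (the `AuditLog` class and its fields are hypothetical), is an append-only log where each entry is hash-chained to its predecessor so that later tampering is detectable:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit trail; each entry is hash-chained to the
    previous one so post-incident tampering can be detected."""

    def __init__(self):
        self._entries = []

    def record(self, actor, action, detail):
        prev = self._entries[-1]["hash"] if self._entries else "0" * 64
        entry = {"ts": time.time(), "actor": actor,
                 "action": action, "detail": detail, "prev": prev}
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(entry)

    def verify(self):
        """Recompute the chain; any edited or reordered entry breaks it."""
        prev = "0" * 64
        for e in self._entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice the entries would be shipped to write-once storage as well, but even the chaining alone raises the bar for an insider quietly rewriting history.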
Security logging is also a challenge because it is different from typical application logging. Developers normally set up logging with a view toward capturing the state of the system at a given point in time so that they can debug errors. However, security analysts typically need to watch how the system changes over time, and under whose direction, in order to identify malicious trends.
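A simple illustration of that "changes over time, under whose direction" view: given security events that record the actor behind each change, an analyst can flag anyone making an unusual number of changes within a time window. The event fields, actor names, and threshold here are hypothetical, a sketch of the trend-detection idea rather than a real monitoring tool:

```python
from collections import Counter
from datetime import datetime, timedelta

def suspicious_actors(events, action, threshold, window, now):
    """Return actors who performed `action` at least `threshold` times
    within `window` of `now` -- a trend a point-in-time debug log
    would never surface."""
    recent = [e for e in events
              if e["action"] == action and now - e["ts"] <= window]
    counts = Counter(e["actor"] for e in recent)
    return {actor for actor, n in counts.items() if n >= threshold}

# Hypothetical security events: who changed what, and when
base = datetime(2024, 1, 1, 12, 0)
events = [
    {"ts": base - timedelta(minutes=m), "actor": a, "action": "rule_modified"}
    for m, a in [(5, "partner_user"), (10, "partner_user"),
                 (20, "partner_user"), (90, "analyst")]
]

flagged = suspicious_actors(events, "rule_modified",
                            threshold=3, window=timedelta(hours=1), now=base)
# flags only "partner_user": three rule changes in the last hour
```

The key design point is that each event carries the actor's identity, so the log answers "who is steering the system?" rather than just "what state was it in?"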
Related Q&A from Dan Cornell