
Software development cycle best practice: Threat modeling

Early in the software development cycle, ask: Who might attack the application? How would they do it? What are they after? This is threat modeling.

Matthew Heusser

Early in the software development cycle, it's important to consider who might attack the application and how they might do it. This process is known as threat modeling. Most teams are familiar with the concept, yet for many the nuts and bolts of threat modeling remain elusive and hidden, the work of experts in locked rooms.

Threat modeling does not have to be that complex or formal. In this tip, we'll cover what threat modeling is and how to do it, and also explore what to do next.

The basics

For our purposes, a threat is something an attacker, or adversary, might steal from our system. When we amass a list of threats against a system, they collectively create a threat profile. When we take that list of threats and consider the existing safeguards and protections in place, that constitutes a threat model. Threat modeling is the process of figuring out where we stand and where our vulnerabilities are. Once we understand that, we can address those vulnerabilities with other techniques.
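To make the terms concrete, here is a minimal sketch of these ideas as data structures. The class names, fields and the sample threat are all invented for illustration; they simply mirror the definitions above: a list of threats forms the profile, and the profile plus safeguards forms the model.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    asset: str        # what the attacker is after
    entry_point: str  # how they might get in
    exit_point: str   # how the data might leave

@dataclass
class ThreatModel:
    # The threat profile: the amassed list of threats against the system.
    profile: list = field(default_factory=list)
    # Existing safeguards, keyed by the exit point they protect.
    safeguards: dict = field(default_factory=dict)

# One entry for the fictional health insurance provider:
model = ThreatModel()
model.profile.append(Threat("personal health information",
                            "call-center workstation", "USB stick"))
model.safeguards["USB stick"] = "endpoint device control"
```

Nothing about the sketch is prescriptive; the point is that a threat model is just a structured inventory you can reason over, not a formal artifact reserved for experts.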

Let's step through this process, using a fictional health insurance provider as an example.

The process

"Think like a hacker" is a great sound bite, but it doesn't actually provide much concrete guidance. Many hackers, including Kevin Mitnick, the infamous social engineer, simply want to play, to get access to something they shouldn't. But other hackers are after valuable business information.

With threat modeling, you start by identifying what information is valuable, what might the attacker want to steal. You can do this for the entire organization or just the current project. In the case of a bank, the valuable information might be actual financial transactions. For an insurance company, it might be personal health information (PHI) that could be used in a court case.


The second thing to consider is the attack vector: How is the attacker going to get into the organization? Another term for this is the entry point. For physical security, a thief would do this by "casing the joint" -- walking around looking for unlocked doors, open windows, unattended gates. Back at our health insurance company, the entry point could be the internal network, the wireless network, the VPN, the website, or any FTP or SSH ports your company has open. Or it could be an employee, someone in the call center or a systems administrator.

Now, each potential entry point is a piece of information that exists for a purpose; someone should be able to get to that data. At the insurance company, claims administrators need access to claim records to make corrections; customer service workers need access to claims and personal information. Production support needs read access to production, and IT development will want some sort of test system, which is a model of production, to do their job. In threat modeling, it's important to look at how security could be breached by an insider.
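One way to look for insider entry points is to write down who can reach what and review the roles that touch sensitive data first. This is a hypothetical access matrix for the insurance example; the roles and data sets come from the paragraph above, and the review logic is just one possible heuristic.

```python
# Hypothetical access matrix: each internal role is itself a potential
# entry point, so list what each one can reach.
access = {
    "claims administrator": {"claim records"},
    "customer service":     {"claim records", "personal information"},
    "production support":   {"production data (read-only)"},
    "it development":       {"test system"},
}

# Review first the roles that can reach the most sensitive data.
sensitive = {"claim records", "personal information"}
review_first = sorted(role for role, data in access.items()
                      if data & sensitive)
print(review_first)  # ['claims administrator', 'customer service']
```

The matrix makes the tension explicit: every role listed has a legitimate reason to reach its data, which is exactly why insider access belongs in the threat model.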

A final consideration when looking at the threat model is what are the potential exit points. In 2006 an employee of the Veterans Administration walked out of the building with an unencrypted hard drive full of personal information on 26 million veterans, including social security numbers, full names and dates of birth. Today the unencrypted hard drives are long gone, but an insider could, for example, download key data on a USB stick, which holds gigabytes of data and costs just a few dollars.

Another potential exit point to consider is employees stealing data when they leave the company. For instance, a sales rep might walk out the door with a customer list. If you find this is a credible threat, entry point analysis won't do much, but exit analysis might.

Take it in STRIDE

Once we know the threats, we want to assess how much risk they create for the organization. One list of measures is DREAD, a mnemonic that stands for Damage potential, Reproducibility, Exploitability, Affected users and Discoverability. For example, a small internal application that only employees use has few affected users and low discoverability, so a denial-of-service threat against it would score low.
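A common way to use DREAD is to rate each of the five factors on a simple scale and average them. The function below is a minimal sketch of that idea; the 1-to-10 scale and the example ratings are invented for illustration, not an official scoring rubric.

```python
# A minimal DREAD scorer: rate each factor 1-10 and average them.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# The small internal app: few affected users, low discoverability.
internal_dos = dread_score(damage=3, reproducibility=8, exploitability=5,
                           affected_users=2, discoverability=2)
# A leak of personal health information from a public-facing system.
public_phi_leak = dread_score(damage=9, reproducibility=6, exploitability=6,
                              affected_users=9, discoverability=7)
print(internal_dos, public_phi_leak)  # 4.0 7.4
```

Even a crude average like this is enough to rank threats against each other, which is all the risk assessment step needs.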

A second mnemonic, called STRIDE, was introduced in the book Writing Secure Code, by Michael Howard and David LeBlanc. STRIDE is short for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege. It helps measure the effect of the threat, or what the breach would allow the attacker to do.
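STRIDE works well as a checklist: walk each threat through the six categories and record which effects a breach would allow. Here is a sketch using the stolen-hard-drive scenario from earlier; which categories apply is my own reading of that incident, offered for illustration.

```python
# STRIDE as a checklist of what a breach would allow the attacker to do.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

# For the stolen-hard-drive scenario, only some categories apply:
effects = {category: False for category in STRIDE}
effects["Information disclosure"] = True  # personal data leaves the building
effects["Repudiation"] = True             # no record of who copied the data

applicable = [c for c, hit in effects.items() if hit]
print(applicable)  # ['Repudiation', 'Information disclosure']
```

Running every threat through the same six questions keeps the analysis systematic rather than ad hoc.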

What to do next

The nuts and bolts -- the basic how-to -- of threat modeling are straightforward. At the outset of the software development cycle, find out what the attackers might want (that's the threat in threat modeling), then figure out how they might get in (entry points) and out (exit points). From there, assess the risk with DREAD and STRIDE analysis to determine whether each threat is credible. Put all of these steps together, and we have a risk model.
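The steps above can be combined into the simplest possible risk model: a list of threats ranked by score. All the descriptions and numbers here are invented for illustration; the point is the shape of the artifact, not the specific values.

```python
# A tiny risk model: threats ranked by their (hypothetical) DREAD scores.
threats = [
    ("PHI copied to USB stick by insider", 7.4),
    ("Denial of service against internal-only app", 4.0),
    ("Departing sales rep exports customer list", 6.2),
]

risk_model = sorted(threats, key=lambda t: t[1], reverse=True)
for description, score in risk_model:
    print(f"{score:>4}  {description}")
```

The ranked list is what hands off to the next phase: the highest-scoring threats are the ones to attack first with code inspection, penetration testing and hardening.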

That model will focus on the dangers, not how to manage or mitigate them. Once you've created the risk model, other techniques take over to address those risks, such as code inspection, penetration testing and system hardening.

This tip covered the basics, but there's certainly room for more. The Microsoft Press book on threat modeling has some excellent details, including examples and a detailed process based on data flow analysis. Another Microsoft book, Improving Web Application Security, also has a chapter on threat modeling; it goes into more depth and is available free online.

We welcome your comments and updates on how it turns out, and good luck!
