Should I be writing security requirements?
Security is a big deal for software. Companies providing software and services to consumers have to worry about someone breaking into their systems and getting at users' private information. Teams building software for their employers have the same worry, plus another: competitors hacking their systems and gaining information that could hurt the employer or help the competitor. So how do you write requirements to prevent this?
If you're lucky, as a business analyst or product manager, you have an easy answer. A security expert somewhere has already decided exactly which security protocols and policies have to be enforced. All you have to do is copy, reference, or translate those policies into requirements that drive what your team builds and how your team tests. The challenge, in that case, is completeness: making sure everything the team builds complies with the defined security policies and procedures.
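When the policies do exist, that completeness check is mechanical enough to automate as part of requirements review. A toy sketch of the idea -- the policy IDs, requirement records, and `covers` field are invented for illustration, not taken from any real tool:

```python
# Toy traceability check: every published security policy must be
# covered by at least one requirement that references it.
# Policy IDs and requirements below are invented for illustration.

policies = {"SEC-1", "SEC-2", "SEC-3"}

requirements = [
    {"id": "REQ-10", "covers": {"SEC-1"}},
    {"id": "REQ-11", "covers": {"SEC-2", "SEC-3"}},
]

def uncovered(policies, requirements):
    """Return the set of policies no requirement traces back to."""
    covered = set().union(*(r["covers"] for r in requirements))
    return policies - covered

# An empty result means every policy is accounted for; anything left
# over is a gap to raise with the security expert.
```

Run against the sample data above, `uncovered(policies, requirements)` comes back empty, which is exactly the condition you want your requirements set to satisfy before sign-off.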
If you're not lucky, then you don't have an expert telling you what to do. You have a vague mandate to "make it secure" from one of your stakeholders.
Ultimately, your choices are (1) find an expert, (2) become an expert, (3) wing it or (4) stick your head in the sand and cross your fingers. Part of the problem is that security defense and penetration are a cat-and-mouse game: you build better defenses, and attackers devise craftier attacks. Sticking your head in the sand and not requiring any security measures is like leaving your car unlocked with the windows down -- you won't stop anyone. If you try to wing it with whatever security "sounds good to you," all you're doing is locking the doors.
You can become an expert, or find one. If you are able to become one, great. You might even really love it. One question: Who's going to be doing your job while you're busy "experting"? There's a saying: "Make your own luck." That's what you're doing when you go find an expert to tell you what needs to be done.
What about another approach: can you make sure there's nothing worth finding? Many companies refuse to store your credit card information, perhaps to avoid the cost of complying with PCI DSS requirements. Either way, it makes sense: if they don't store the information, and they let everyone know they don't store it, then they don't have to worry about crackers trying to penetrate their systems to get it.
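A sketch of what "not storing it" can look like in practice: persist only an opaque token issued by the payment processor, plus the last four digits for display. The token format and the guard below are assumptions made for illustration; real processors (Stripe, Braintree and others) issue vault tokens along broadly similar lines.

```python
# Hedged sketch: keep no card numbers on your side. Only an opaque
# processor-issued token (useless to a thief without the processor's
# cooperation) and the last four digits ever get persisted.

from dataclasses import dataclass

@dataclass
class StoredPayment:
    """What we persist: a token, never the card number itself."""
    customer_id: str
    processor_token: str  # issued by the payment processor
    last_four: str        # safe to store for display ("ending in 4242")

def save_payment_method(customer_id, processor_token, last_four):
    # Defensive guard (an assumption of this sketch): a raw card
    # number is all digits, so refuse anything that looks like one.
    if last_four.isdigit() and len(last_four) == 4 and not processor_token.isdigit():
        return StoredPayment(customer_id, processor_token, last_four)
    raise ValueError("refusing to store anything that looks like a card number")
```

The point of the guard is requirements-level, not cryptographic: it turns "we don't store card numbers" from a policy statement into something the code actively enforces.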
Credit card numbers aren't the only valuable information you may hold about your users. Users' email addresses and passwords are valuable in themselves. Recent reports have found that 73% of users reuse the same password across multiple systems. So even if users don't put any (other) valuable information into your system, the password they use with it is likely to be useful to a cracker trying to get into some other system. Just as you outsourced transaction processing (and credit card handling) to a third party, you would have to outsource authentication to make sure you don't hold people's user names and passwords either.
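The same idea applied to authentication, as a minimal sketch: the identity provider (stubbed out here, with an invented token format) is the one that ever sees the password, and your system stores only an opaque subject identifier -- so there is no password table to steal. In a real system the stub would be replaced by, for example, validating an OpenID Connect ID token.

```python
# Hedged sketch of delegated authentication. The provider is a stub
# and the "idp|<subject>" token format is invented for illustration;
# the shape of the flow is what matters.

USERS = {}  # our side: subject id -> profile; no credentials anywhere

def identity_provider_verify(token):
    """Stand-in for a call to a real identity provider (e.g. OpenID
    Connect ID-token validation). Returns a stable subject id, or
    None if the token doesn't check out."""
    if token.startswith("idp|") and len(token) > 4:
        return token.split("|", 1)[1]
    return None

def login(token):
    """Accept a provider-issued token; never see a password."""
    subject = identity_provider_verify(token)
    if subject is None:
        raise PermissionError("authentication failed")
    # First login creates a profile keyed by the opaque subject id.
    return USERS.setdefault(subject, {"subject": subject})
```

Note what's absent: no password field, no hash, no salt. A breach of `USERS` yields nothing reusable against other systems, which is exactly the property the paragraph above is after.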
Apply the same thought process. If you can't eliminate everything that might be useful (and you can't), then you need to find a security expert, and get busy thinking about completeness in protecting software.