Medical apps that store sensitive health care data in the cloud pose steep challenges for software testers. Testers need to make sure that cloud-based data is secure, while also proving compliance with HIPAA requirements around data privacy.
How do software test managers ensure that their teams are adequately prepared to test security in the cloud? What layers do they need to cover? Security testing is more complicated than just testing usernames and passwords, and it's significantly different from traditional testing. For example, your software may function flawlessly and meet all its specifications and requirements. However, that doesn't mean your application is secure. How can your test team use their experience to help validate your application's security effectively?
In this tip, I discuss the basics of the cloud and explain a security-testing approach for medical apps that store health care data -- "attacking" those apps through the user interface.
What is the cloud?
For most companies, the cloud is a set of services, data, resources and networks located outside the organization. In other words, another entity provides the organization with a data center, servers, networks and a working infrastructure. Rather than building and maintaining their own data centers, organizations enlist cloud services to provide that infrastructure for them. There are different types of service offerings in the cloud, including Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Examples of SaaS applications include salesforce.com, Google Apps and Facebook. Each of these is individually a full-service application that can be accessed from anywhere on the Internet. PaaS examples include Windows Azure and Google App Engine, which are distributed development platforms for building applications, webpages and additional services within the cloud. Finally, IaaS providers are companies that offer the building blocks that make up the cloud. The building blocks include virtualization layers, databases, Web servers, firewalls, server load balancers, routers and switches, among others.
Many medical apps exist in this space. And as a result, test team managers typically must provide some level of security testing. So, what testing can your team perform to help ensure that your application is as secure as possible in this shared server space?
Medical apps: Checking for overflow input buffers
The team's focus is on testing the application user interface. Testers are comfortable with this approach, so this is a good place to start. Most security bugs originate from undocumented or unexpected user behavior, or behavior outside the scope of the requirements. Unexpected input can compromise the application, the platform it runs on or the data it's using -- resulting in data access for an unauthorized user. Testers of medical apps must test for these types of data and unauthorized access vulnerabilities.
First, develop tests for overflow input buffers. Testers need to verify that the application constrains input lengths on fields. Honestly, this may not sound like much of a security threat, but it is. An unconstrained field may allow malicious code to be executed if it's interpreted as code. Create a series of tests for each field in your application that allows alphanumeric input from users. As testers, we expect users to enter certain data, like plain text comments. For example, a physician orders a medication for a patient with instructions to the nurse on the drug's administration. The application is designed, say, to expect the physician to enter alphanumeric input of no more than 250 characters, but often the character limit is not enforced. In that case, there's nothing to stop someone from copying and pasting code into the field -- code the server then interprets and executes. When that happens, the application and its data are at the mercy of code the organization doesn't control. That's a data breach incident.
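The boundary tests described above can be sketched in a few lines. In this example, `validate_instructions()` is a hypothetical stand-in for the app's real input validator, and the 250-character limit comes from the scenario above:

```python
# Sketch: boundary tests for a field with a documented 250-character limit.
# validate_instructions() is a hypothetical stand-in for the app's validator.
MAX_LEN = 250

def validate_instructions(text: str) -> bool:
    """Hypothetical validator: accept input up to MAX_LEN characters."""
    return len(text) <= MAX_LEN

def boundary_cases(limit: int) -> dict:
    """Classic boundary values: just under, at, just over, and far over the limit."""
    return {
        "under": "a" * (limit - 1),
        "at": "a" * limit,
        "over": "a" * (limit + 1),
        "far_over": "a" * (limit * 100),  # simulates a large pasted payload
    }

results = {name: validate_instructions(s)
           for name, s in boundary_cases(MAX_LEN).items()}
# A correctly constrained field accepts "under" and "at"
# and rejects "over" and "far_over".
```

If the application under test accepts the "over" or "far_over" cases, the limit isn't being enforced and the field deserves deeper security scrutiny.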
If the application under test allows the user to enter an unconstrained amount of text, the next thing to do is test for error messages. If the app returns error messages that echo the offending text string, that's a sign the data is being stored. Continue testing by entering longer strings. If the application still doesn't crash, then it's likely the data is being truncated internally and no buffer overflow security hole exists.
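That escalation can be automated. The sketch below sends progressively longer strings through a submit function and records whether each payload comes back whole in the response; `fake_submit()` is a hypothetical app under test that truncates input at 1,000 characters:

```python
# Sketch: escalate input length and watch the response. If the error message
# echoes the full offending string, the input was stored or processed intact;
# if long inputs stop being echoed, the data is likely being truncated.
def probe_lengths(submit, start=256, factor=4, rounds=4):
    """submit(text) -> response string. Returns (length, echoed?) per round."""
    findings = []
    length = start
    for _ in range(rounds):
        payload = "A" * length
        response = submit(payload)
        findings.append((length, payload in response))  # True = echoed intact
        length *= factor
    return findings

# Hypothetical app under test: truncates input at 1,000 chars before echoing.
def fake_submit(text):
    return "error: bad input: " + text[:1000]

findings = probe_lengths(fake_submit)
# [(256, True), (1024, False), (4096, False), (16384, False)]
# Short input echoed intact; longer input truncated -- overflow unlikely here.
```

Against a real application, `submit` would wrap an HTTP request to the form under test; a crash or freeze during this probe is the red flag to investigate.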
Check for character sets and commands
The next step is to get team members using input data again, but this time with escape characters, character sets and commands. The goal is to find out which characters your application treats differently. Here, testers try to force the application to process characters as command strings. For example, attackers commonly insert SQL commands into user input fields -- a technique known as SQL injection -- to steal data as it passes through the application. Testers do the same: they insert SQL commands into user input fields. If these inputs are not filtered by the application, data used by your application may be overwritten or written out to an external source. That's another data breach. How would you determine if your application security is compromised? Usually, bad data input running to a remote server eventually causes the application or the entire system to crash or freeze.
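A minimal, self-contained demonstration of the injection technique -- and the standard defense -- can be run against an in-memory SQLite database. The table and column names here are hypothetical:

```python
# Sketch: SQL injection against an in-memory SQLite table, then the
# parameterized-query fix. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES ('Alice', '111-22-3333')")

# A classic injection payload a tester might type into a name field.
malicious = "nobody' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL string.
unsafe = conn.execute(
    f"SELECT ssn FROM patients WHERE name = '{malicious}'"
).fetchall()  # the OR '1'='1' clause bypasses the filter -- every row leaks

# Safe: a parameterized query treats the input as data, not as SQL.
safe = conn.execute(
    "SELECT ssn FROM patients WHERE name = ?", (malicious,)
).fetchall()  # no rows -- the payload is matched literally and finds nothing
```

If typing a payload like this into an input field returns data, an error that exposes the underlying query, or a crash, the application is not filtering its inputs.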
Examining defaults and options
Oh, options -- don't we all love options? Customizable configuration options are wonderful for user satisfaction but detrimental to a test organization because it's impossible to test all configuration combinations with every functional part of an application. However, security-testing an application's obscure configurations often yields security defects.
Focusing on nondefault configurations offers two advantages. First, the team is testing code that is rarely or never tested. The test team can develop and execute tests that check these areas for the same basic user interface failures noted previously -- overflow input buffers and character handling. Second, testing these gaps improves the security of your application by ensuring that known security holes are closed regardless of how often that code is actually accessed. After all, a security hole is a security hole no matter where it's found.
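A quick sketch shows why exhaustive configuration testing explodes, and how a team might select the nondefault configurations to target. The option names and values here are hypothetical:

```python
# Sketch: enumerate configuration combinations to see the combinatorial
# explosion, then select only nondefault configurations to target.
# Option names, values and defaults are hypothetical.
from itertools import product

options = {
    "audit_logging": ["on", "off"],          # default: "on"
    "session_timeout": [15, 60, 240],        # default: 15
    "export_format": ["pdf", "csv", "hl7"],  # default: "pdf"
}
defaults = {"audit_logging": "on", "session_timeout": 15, "export_format": "pdf"}

all_combos = [dict(zip(options, combo)) for combo in product(*options.values())]
# 2 * 3 * 3 = 18 combinations from just three options; real apps have dozens,
# so testing every combination with every feature is impossible.

# Target the configurations that differ from the default -- the rarely
# exercised code paths where security defects tend to hide.
nondefault = [combo for combo in all_combos if combo != defaults]
```

Each nondefault configuration can then be run through the same input-buffer and character-handling attacks described earlier.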
Security testing through the user interface is just the tip of the iceberg. But attacking an application through the user interface is typically fruitful because it's the most common avenue available to malicious users. Security testing is challenging, but it's important for a test professional to conduct security testing through the user interface to keep the health care data in medical apps secure. As software testers, we must do the best we can to ensure the integrity of these apps.