You’ve heard of black-box testing and white-box testing. What about completely out-of-the-box testing? In a half-day tutorial at SQuAD Conference 2011, embedded software expert Jon Hagar showed attendees that testing embedded software requires creativity, since traditional tools and resources may not be available. Through hands-on experience testing a hand-held 20-questions game, we learned techniques for testing, or better yet, “attacking,” embedded software in an effort to uncover critical defects.
Hagar started by giving groups the hand-held games with the task of “testing.” “Where do you start?” he asked. The group had various answers: “happy path testing,” “turning the device on,” “pushing all the buttons to see what happens.” The answer Hagar was looking for? “Ask questions. Find out what documentation is available. Is there a user guide? What is the device supposed to do? Is the source code available?”
When asked whether documentation was available, Hagar produced the instruction manual for the groups. With the manual in hand, testers can do what’s referred to as “black-box testing”: without looking at source code, we see what the user documentation says the device is supposed to do. From there, the tester can do “happy path” testing. Does the device do what the documentation says it’s supposed to?
The instructions, worded as though the game is personified, state, “I ask a series of questions before I guess what you’re thinking.” Hagar did not produce the source code for us, so we could not see the programming logic of the device. Our test group started by thinking of an object and answering the questions the device asked, to see whether the game would come up with the right answer. (It did.) But without knowing anything about the code paths or the database, we had no idea whether we were exercising the program’s logic, or testing enough of the data to gauge how often it produces the “right” answer.
One could argue that white-box testing should happen at the developer level, ideally in an automated fashion. Using unit tests, emulators and APIs, automation should be able to push the massive amounts of data through the variety of code paths needed to validate the logic. It wouldn’t be realistic for testers to manually step through each object in the database to check the accuracy of the logic.
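Since the game’s actual source was not available, here is a minimal sketch, in Python, of what such automated white-box testing might look like. Everything in it is a stand-in: a toy decision tree plays the role of the 20-questions logic and database, and the path enumerator shows how automation can walk every code path rather than a tester stepping through objects by hand.

```python
# Hypothetical sketch: the real game's source was not available.
# A toy decision tree stands in for the 20-questions logic.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question, or the final guess if a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def play(root, answers):
    """Walk the tree with a sequence of True/False answers; return the guess."""
    node = root
    for ans in answers:
        if node.is_leaf():
            break
        node = node.yes if ans else node.no
    return node.text

# Tiny stand-in "database"
tree = Node("Is it alive?",
            yes=Node("Does it bark?", yes=Node("dog"), no=Node("cat")),
            no=Node("Is it electronic?", yes=Node("phone"), no=Node("rock")))

def all_paths(node, prefix=()):
    """Enumerate every answer sequence, so automation covers each code path."""
    if node.is_leaf():
        yield prefix, node.text
    else:
        yield from all_paths(node.yes, prefix + (True,))
        yield from all_paths(node.no, prefix + (False,))

# Automated white-box check: every path reaches the expected guess.
for answers, expected in all_paths(tree):
    assert play(tree, answers) == expected
```

A real device database would have thousands of objects, but the same idea holds: generate the answer sequences mechanically and let the machine verify each path, instead of a human playing the game once per object.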
However, having insights into the code may give the embedded software tester some good ideas of scenarios or attacks to apply to test the device once the software has been integrated with the hardware.
Hagar is a big proponent of exploratory testing; that is, looking for ways to “attack” or break the software. I would best describe this as “out-of-the-box thinking.” Some of the ways Hagar suggested going about coming up with exploratory tests included examining risks, looking at users, looking at environment and looking at heuristics.
Risks might include the application being too slow or giving a wrong answer. Also, what is this application going to be used for, and who is going to use it? The group spent some time brainstorming about typical users: a junior high student, a party player and a bored adult were some suggestions. There was some discussion about finding users in these categories and observing their reactions to the game.
One controversial suggestion was the potential blind user. Since the game required the ability to read and had no audio, a blind person would not be able to play. Does this mean a bug was found? Not necessarily. Hagar suggested that the question be put to the product group: should this product be available to those who can’t see (or read)? What about language considerations? Should it be translated? Hagar said that it’s good to ask the questions, but if the answers come back that these things aren’t important, then the tester needs to let it go.
Hagar reminded us that the “user” didn’t necessarily have to be a human. Either hardware or software may integrate with the embedded software, creating an embedded system. When considering your tests, think about what users, even if those aren’t human, may interface with the software.
If a tester only runs tests that validate requirements, then unexpected behaviors like system crashes or a hung system will never be caught. No one writes a requirement that says “Bad things that make the users mad shouldn’t happen.” Part of exploratory testing is figuring out what might make “bad things” happen and then seeing whether the system survives the “break-it” testing. We aren’t talking about physically destructive behavior. Obviously, if you drive your car over the hand-held game, it’s going to break, and most developers will not accept that kind of breakage as a bug. However, if you use the device in an unusual way and it breaks, that might be a valid defect.
Part of determining good attacks is understanding heuristics, the taxonomy of common problems found in a given domain. In embedded software, two areas with a high failure rate are timing issues and issues arising from the integration of software and hardware.
When determining attacks on the hand-held game, one idea was to push the various buttons as quickly as possible, or in unexpected combinations. Even if a typical user would not do this, if something “bad” happens, it may uncover a timing or logic defect that could also affect a typical user and needs to be corrected.
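As a sketch of what such an attack might look like when automated against a test rig, the harness below hammers buttons rapidly in random order and then tries every ordered button combination of a given length. The `FakeDevice`, the button names and the `press()`/`responsive()` methods are all assumptions for illustration; a real rig would drive the hardware through whatever interface it exposes (GPIO, serial, an emulator).

```python
# Hypothetical attack harness: the real device exposes no such API.
import itertools
import random

BUTTONS = ["yes", "no", "back", "power"]   # assumed button set

class FakeDevice:
    """Trivial stand-in so the harness can run without hardware."""
    def press(self, button):
        pass
    def reset(self):
        pass
    def responsive(self):
        return True

def rapid_press_attack(device, presses=1000, seed=42):
    """Hammer buttons as fast as possible in random order,
    checking after each press that the device hasn't hung."""
    rng = random.Random(seed)    # fixed seed so any failure reproduces
    for i in range(presses):
        device.press(rng.choice(BUTTONS))
        assert device.responsive(), f"device hung after press {i}"

def combination_attack(device, length=3):
    """Try every ordered combination of button presses of a given length,
    resetting the device between combinations."""
    for combo in itertools.product(BUTTONS, repeat=length):
        device.reset()
        for b in combo:
            device.press(b)
        assert device.responsive(), f"device hung after combo {combo}"
```

The fixed random seed matters: a timing bug found by a random hammering run is only useful if the same sequence can be replayed for the developers.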
In many ways, testing embedded software is similar to testing traditional software. Black-box testing, white-box testing and exploratory testing are all techniques that can be used regardless of whether the software is run on a computer or is embedded in a device. However, when working with embedded software, you need to think more creatively about how you test. Consider the risks, the users, the environment and the product history. Test out of the box.