Configuration testing continues to grow more challenging due in part to what IBM Rational's Brian Bryson calls "the iceberg problem."
"A couple of years ago the problem started and ended with making sure the application worked on different browsers and operating systems. Today applications are more interconnected; that's why I call it the iceberg," said Bryson, Rational Quality Management solution manager. "You see stuff on the surface, but it's the whole underlying infrastructure that makes [configuration testing] challenging."
Expense and time are two big challenges to configuration testing, which is the testing of an application on all supported hardware and software platforms. "If you want to validate all the different permutations and combinations, you need to invest in a hardware infrastructure," Bryson said. "You can reduce that expense by using a virtualization strategy, but even with that, having a virtual machine reduces hardware expenses but it doesn't reduce the cost of running the tests. You have the execution time, then you get into the management and overhead costs of reporting and managing these different infrastructures. Even with a virtual system, you're keeping track of who uses what, when, for which test."
He continued, "It takes long enough to validate functionality on your primary configuration, but to repeat the tests with different combinations can be a time vampire. And then if you get down to the level of patches and security fixes, I don't think configuration testing can be done manually; there needs to be some automation in there."
The rising popularity of new mobile apps is another challenge for configuration testers—it's making that "iceberg" even bigger with the various smart devices and their OS versions and interfaces.
"It's hard to get your hands on a lot of devices, and it's expensive," said Karen Johnson, an independent tester. "I think the market is very siloed, and I don't know if the mobile development group is as focused on making sure something works in multiple environments as they are [with applications] on the Web. People care, but it's hard to get there and expensive."
Industry experts agree that it's impossible to test for every configuration, but there are some strategies to cut as wide a swath as is practical. Joseph Ours, national software testing and quality assurance lead with Centric Consulting LLC, and a tester for uTest, suggested identifying what the mainstream user will be using and making that the primary focus of testing.
For Web applications that he's tested for uTest clients, he'll ask about the target audience, and start with primary platforms like IE with XP. "Then you go down from there. You can have Windows XP with Service Pack 1, 2, or 3; you can have different results with IE. There are all the combinations and nuances between browsers and the OS, and then you have to take into consideration things like 32- and 64-bit Vista, and the same issues with Windows 7. Basically you need to do an old school equivalency class approach, so test IE with XP SP3, then Firefox with XP, then Chrome, then the dominant version of Vista. Then you have to have some combinations that aren't that prominent—or not do that. You make a knowing choice; otherwise there are too many combinations."
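The equivalence-class approach Ours describes can be sketched in a few lines. This is a minimal illustration, not anything from a real test suite; the browser and OS names are stand-ins for whatever platforms a team actually supports:

```python
from itertools import product

# Hypothetical platform matrix -- the entries are illustrative.
browsers = ["IE", "Firefox", "Chrome"]
os_versions = ["XP SP1", "XP SP2", "XP SP3",
               "Vista 32-bit", "Vista 64-bit",
               "Win7 32-bit", "Win7 64-bit"]

# Exhaustive testing would mean every browser/OS pairing:
all_combos = list(product(browsers, os_versions))
print(len(all_combos))  # 21 combinations

# An equivalence-class pass instead picks one representative per class:
# each browser on the dominant XP patch level, plus the dominant
# Vista and Windows 7 variants with the dominant browser.
representatives = [
    ("IE", "XP SP3"),
    ("Firefox", "XP SP3"),
    ("Chrome", "XP SP3"),
    ("IE", "Vista 32-bit"),
    ("IE", "Win7 64-bit"),
]
print(len(representatives))  # 5 configurations actually run
```

The point of the exercise is the ratio: even this toy matrix shrinks from 21 runs to 5, and the combinations left out are the "knowing choice" Ours refers to.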
Bryson agreed that testers have to make a choice, and recommended a risk-based strategy. "Every test team can't test everything. Is the risk of not testing greater than the cost of testing? It may not be optimal, but it's the type of decision you have to make in the business environment."
With a risk-based strategy, the team identifies priorities, Bryson said. "Most test teams identify key platforms or targets and build test cases for that in a risk-adjusted manner. From that you tag a subset of those tests to run across multiple configurations."
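One way to picture the tagging Bryson describes is a small run-matrix builder: high-risk tests fan out to every supported configuration, while the rest stay on the primary platform. The test names, risk scores, and threshold below are all hypothetical, chosen only to show the mechanics:

```python
# Hypothetical test inventory with team-assigned risk scores (0-10).
tests = [
    {"name": "checkout_flow", "risk": 9},
    {"name": "login", "risk": 8},
    {"name": "help_page", "risk": 2},
    {"name": "footer_links", "risk": 1},
]
configs = ["IE/XP SP3", "Firefox/XP SP3", "Chrome/Win7 64-bit"]
PRIMARY = configs[0]
RISK_THRESHOLD = 5

def build_run_matrix(tests, configs, threshold):
    """Return (test name, config) pairs: tests at or above the risk
    threshold run on all configurations, the rest only on the primary."""
    runs = []
    for t in tests:
        targets = configs if t["risk"] >= threshold else [PRIMARY]
        runs.extend((t["name"], c) for c in targets)
    return runs

matrix = build_run_matrix(tests, configs, RISK_THRESHOLD)
# 2 high-risk tests x 3 configs + 2 low-risk tests x 1 config = 8 runs
print(len(matrix))
```

In practice the same idea is usually expressed with test-framework tags or markers rather than a hand-rolled matrix, but the trade-off is identical: the risk score decides which tests earn the cost of running everywhere.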
After testing the priorities, it becomes what Ours calls the "scream process—you wait until someone is using the application and screams about an issue and then you work to address it, or maybe not. It's possible you might irritate a customer, but the cost of changing the application may be too much for a single customer. It's a business decision to make … do you want to spend time on an arcane browser like Opera? If it's not a revenue-generating customer you have to ask yourself if it's worth it."
In the mobile apps arena, Johnson said developers today tend to be focused on a single smart device that the organization is standardizing on. "In some ways that makes testing easier, but I don't think people have got the mobile thing necessarily figured out. It reminds me of when cross-browser testing was new and we were trying to figure that out. We'll figure out this frontier too."
Today the majority of testing for smartphones is done through emulators accessed through Web browsers, Bryson said. "You won't pound the keys on an iPhone, but you will drive it through emulators." IBM Rational partners with DeviceAnywhere, maker of a mobile application testing platform, which has an interface to Rational Quality Manager.
Johnson said she has a client with a DeviceAnywhere account that she has been working with. "You're accessing real physical phones with distinct phone numbers and you can see and interact with them through a Web browser. It's a little slow and awkward but you do have an opportunity to see what the Web site looks like on different phones."
She also recommends weighing the risks and benefits, though. She said there are differences in some cases when you switch between types of phones, say from a Nokia to a Samsung. However, "in other cases you've gone to a lot of expense [to configuration test] and the differences are not that tremendous."
Another way to ease testing for mobile apps is to utilize a development/deployment framework like Titanium, said Frank Cohen, CEO of test automation company PushToTest. Titanium Mobile, for example, allows the development, testing and deployment of both iPhone and Android apps from one code base. "Say you test on an iPhone; the same test will run on an Android and other Titanium-supported devices," he said. Cohen said PushToTest used Titanium to deploy a new PushToTest feature built with Ajax, and PushToTest also supports the Titanium platform.
Increasing communication across the whole software development lifecycle—development, QA, operations—is also a strategy to help with configuration testing in general, Bryson said. "Often with configuration testing there's knowledge in the developer's head about where a piece of code or a certain service might fail based on configuration information. If you can extract that [information from the developer] and get it to the QA team it can help with the risk-based strategy."
Also, getting back to the iceberg, "you got a whole hardware infrastructure underneath [an application]. For a lot of test teams that's inaccessible; you have to work with the ops people to get at that."
Ours agreed that the hardware aspect adds another dimension to configuration testing. "On client-side devices, processor speed and memory have a huge impact on end-user performance, which complicates load and performance testing."
Testers need to go through a process similar to the one they used for configuration testing identified browsers and operating systems, he said. "If you have minimum requirements to support, and this user base is using XP, then tackle the minimum hardware specs and performance they should expect to receive. You develop the lowest common denominator and go from there."
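The "lowest common denominator" idea can be made concrete with a tiny sketch: take whatever hardware-profile data exists for the supported user base and derive the floor to target in performance tests. The survey figures below are invented for illustration:

```python
# Hypothetical hardware survey of the supported user base.
user_machines = [
    {"cpu_ghz": 1.6, "ram_gb": 1},
    {"cpu_ghz": 2.4, "ram_gb": 2},
    {"cpu_ghz": 3.0, "ram_gb": 4},
]

# The lowest common denominator: the minimum of each spec becomes
# the baseline machine that load and performance tests must pass on.
baseline = {
    "cpu_ghz": min(m["cpu_ghz"] for m in user_machines),
    "ram_gb": min(m["ram_gb"] for m in user_machines),
}
print(baseline)  # {'cpu_ghz': 1.6, 'ram_gb': 1}
```

If the application meets its performance targets on that baseline, faster configurations come along for free; anything below the baseline is explicitly unsupported rather than silently untested.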