
Mobile Web applications: Monitoring test triggers

Learn how to build out a system for monitoring test triggers for a mobile Web application and its supporting systems.

When a large company takes on an initiative to deliver its first mobile app to its users in the field, the testing problem space can get large in a hurry. There are also testing considerations for the infrastructure supporting the mobile app. How do you manage this much testing demand with limited resources? This three-part article outlines the mountain of testing that was requested and the cost-effective, flexible strategy that was developed in the face of constrained resources.

The first installment described the problem space. The second installment outlined an initial cost-conscious strategy for shrinking the problem space and establishing feedback mechanisms. This final installment will build out a system to monitor test triggers for the application and its supporting systems.

Now that we had come up with a mobile application testing strategy to keep the scope of testing manageable from a device perspective, we needed a way to manage testing requests and expectations. Without a plan, we might not be aware of changes that needed to be tested and could miss critical testing events. The worst-case scenario, from a strictly cost-and-resource perspective, would be the test organization testing "just in case" for every change, no matter how small. (Fear-based testing is a reality, but not much of a strategy.)

Clearly, we needed a plan to make the rest of our plan work.

One of the five key points in the mobile testing strategy discussed in Part 2 of this series was to develop a list of specific thresholds and events that trigger testing. Having a list like this means you can get the right people in a room before those events happen in the real world to discuss the triggers and the risks they represent.

The great thing about having this stakeholder discussion away from the events themselves is that people predisposed to extreme caution (the "Test everything!" mindset) are removed from the immediacy and pressure of the moment, and can be part of a more rational discussion of risks. At the same time, members of the group who see little risk anywhere (the "What could go wrong?" mindset) get to hear real concerns raised and discussed. All in all, the things that trigger testing get a lot more buy-in when all of the key players have the discussion together.

Before you have the discussion, the first step is to think through the application from the ground up. In Part 1, we spent some time looking at the problem space of the mobile devices themselves, so we'll just hit the high notes here:

  • New device hardware (and all the variations in display and user input approaches)
  • New OS versions
  • New browser versions (if updated separately from the OS)
  • New versions of any software your Web application expects to be present on the device to function properly (e.g. a PDF reader, integrated email client, etc.)

After that, how do we manage the infrastructure that supports our Web app? If we're using a fully hosted service like Amazon EC2, then we've outsourced much of that risk; we might expect the host to manage most or all of it. In our case, the company was hosting the solution in its own data center, which meant we needed to consider things like:

  • Application servers: updating OSs with new versions or service packs, and perhaps provisioning new hardware (either to replace an old app server or add a new one to a server farm)
  • Web servers: same concerns here as the application servers
  • Database servers: same concerns here as the application servers, plus concerns around updates to the database engine
  • Routers, firewalls, load balancers: connectivity issues could arise as these are updated or replaced
  • Supporting platforms: in our case, we had a third-party CMS that served as the backbone for content, user-specific data, and role-based security. There were also Active Directory and Group Policy to consider in this particular Microsoft-based environment.

Quite a list of things to consider. To organize this information, I collected it into a table that mapped each trigger to a threshold, the testing coverage it would call for, and who was responsible for monitoring it.
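As a minimal sketch of what such a table might look like, here it is expressed as a simple Python structure; the specific thresholds, coverage levels, and owners below are illustrative assumptions drawn from the lists above, not the values from the actual project:

```python
# Hypothetical sketch of a test-trigger table. Each entry pairs a trigger
# event with the threshold at which testing kicks in, the planned coverage,
# and the group responsible for monitoring it. All values are illustrative.

TEST_TRIGGERS = [
    {
        "area": "Mobile devices",
        "trigger": "New device hardware appears in the field",
        "threshold": "Device reaches ~5% of active users",
        "coverage": "Smoke test on the new device",
        "monitored_by": "Web analytics / support",
    },
    {
        "area": "Mobile devices",
        "trigger": "New OS or browser version released",
        "threshold": "Any major version release",
        "coverage": "Core user scenarios on a representative device",
        "monitored_by": "Test organization",
    },
    {
        "area": "Infrastructure",
        "trigger": "App or Web server OS update / service pack",
        "threshold": "Every production rollout",
        "coverage": "Connectivity and login smoke test",
        "monitored_by": "Server operations",
    },
    {
        "area": "Infrastructure",
        "trigger": "Database engine update",
        "threshold": "Every production rollout",
        "coverage": "Data-driven regression subset",
        "monitored_by": "DBA team",
    },
    {
        "area": "Infrastructure",
        "trigger": "Router, firewall or load balancer replaced",
        "threshold": "Every replacement or firmware update",
        "coverage": "End-to-end connectivity check",
        "monitored_by": "Network team",
    },
    {
        "area": "Supporting platforms",
        "trigger": "CMS, Active Directory or Group Policy change",
        "threshold": "Changes touching content or role-based security",
        "coverage": "Role-based security and content checks",
        "monitored_by": "Platform owners",
    },
]
```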

Having a document like this is helpful in three ways.

  1. It gives everyone an easily understood overview to begin the discussion. Instead of having to hold everything in memory or wonder whether a critical system will be mentioned, a table can be scanned. If there's a miss, it stands out.
  2. It gives everyone something to shoot at. In my case, it allowed me to set initial thresholds for every trigger I could think of, which let me set the tone for the, um, frugality of the approach in a uniform way. (You'll recall that one of the five points of our strategy was to start small.)
  3. It gives everyone an indication of the scope of testing for the different events. Stakeholders need to know that not everything gets tested every time. The test organization demonstrates, at a high level, that coverage is a lever for controlling costs as well as managing risk.

The first draft of the table was not perfect. I missed some systems that needed to be included. Some things were eventually considered not worth addressing and were removed. I discovered during the meeting that there would be some process issues to work out around establishing monitoring for some of the hardware and network elements. Ultimately there was work to be done on the table itself and among the groups that would be most familiar with some of the systems being monitored.

The additional work didn't become a roadblock, though. One of the other five key points to the strategy was to establish a quarterly review process to adjust the strategy as needed. This made some uncertainty around thresholds, triggers, monitoring and notification a little easier to accept. Great didn't become the enemy of good.

After updating the table with points from the meeting (and follow-up discussions with a few decision-makers and SMEs), we had a starting point that everyone could live with. An email distribution list consisting of people with monitoring responsibilities was created to support notifications, and the first version of our full plan was ready.
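To make the notification piece concrete, here is a minimal sketch in Python of what a trigger notification might look like; the distribution-list address, mail host and example event are assumptions for illustration rather than details from the actual project:

```python
import smtplib
from email.message import EmailMessage

# Hypothetical addresses and host -- the real distribution list and mail
# server would come from the organization's own environment.
MONITORING_LIST = "mobile-test-triggers@example.com"
SMTP_HOST = "smtp.example.com"

def notify_trigger(area: str, trigger: str, details: str) -> None:
    """Send a notification to the monitoring distribution list when a
    trigger event is observed (for example, a new OS version reaching
    a meaningful share of users)."""
    msg = EmailMessage()
    msg["From"] = "monitoring@example.com"
    msg["To"] = MONITORING_LIST
    msg["Subject"] = f"Test trigger: {area} - {trigger}"
    msg.set_content(details)

    # Hand the message to the mail server for delivery to the list.
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

# Example usage (hypothetical event):
# notify_trigger(
#     "Mobile devices",
#     "New OS or browser version released",
#     "New mobile OS version released; early analytics show rapid uptake.",
# )
```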

In conclusion

In our situation, we were able to effectively trim scope from “all mobile devices” to something that made sense in our immediate context. Risk, triggers, monitoring, notification, and testing coverage were all acceptable and didn't break the budget. Periodic review would ensure that as those factors evolved over time, the business could adjust the strategy, keeping it scaled to only what the facts on the ground dictated.

 
