The Web 2.0 vision promises many wonderful things, including rampant social networking, grassroots content creation and broad-based collaboration. On the tech side, Web 2.0 promises access to all the riches of the Internet with the blazing performance of the fastest desktops. Consider Google Maps. You can "speed drag" satellite maps of your neighborhood, city, state or country at will because Ajax anticipates your movements and makes server calls behind the scenes.
But as organizations strive to deliver exhilarating experiences like those, they may be relinquishing control of the Web experience. Earlier in this decade, a company's IT organization owned its Web site experience and had complete control of the infrastructure, presentation logic, business logic and data tier. In 2007, ownership is distributed, and the experience is assembled in the browser. There are a million ways for an application to break in the Web 2.0 era.
Times were simpler a few years ago, when the vast majority of Web users ran Internet Explorer (IE) on Windows. This single delivery platform led to few surprises when applications hit production. Since developers were using the same platform as their users, problems showed up sooner rather than later. However, the overall Web experience -- a function of reliability, appearance and performance -- was mediocre (but not bad for a first attempt). Since I like to give grades, here's a report card that explains why:
Yesterday's Web Experience
- Reliability: B+ (one consensus platform was easy to troubleshoot)
- Appearance: C- (quite gray)
- Performance: B (expectations were lower and applications were simple)
- Overall: Mediocre
Although tomorrow's Web 2.0 experience promises wondrous things, we are currently going through a painful "rock bottom" trough first. Users are on a host of different browsers (IE, Safari, Firefox, Opera, iPhone, BlackBerry and more) on myriad operating systems (Windows, Mac, Linux, and mobile OSes).
More logic than ever runs inside the browser. Meanwhile, more content than ever is beyond the host organization's control -- advertisements, analytics and content delivery networks, for starters. It's a composite world, but companies typically haven't figured out how to test anything besides their own stuff. The Web experience report card now looks like this:
Today's Web Experience
- Reliability: F ("You forgot to test for my particular browser and OS!" "There's a hole where that Web service should be!")
- Appearance: A- (pretty slick, I must say)
- Performance: C (expectations are up, and so are numbers of parts that break)
- Overall: Poor
So what is it going to take to get these reliability and performance grades up, deliver a consistent user experience despite all the component pieces, and deliver on the promise of Web 2.0?
Since we are dealing with an entirely new development and delivery paradigm, we must update our software development and testing strategies. Here are some suggestions:
- Engineer Web experience quality into your product. Don't just make changes on the fly like we all did in the past. (If you've ever built a Web application in Perl, you know what I'm talking about.) Take the entire experience into account up front, from the moment you conceive the application. Ensure that your release criteria include specific performance and reliability metrics that you can measure often during and after development.
- Know (and manage) what feeds into your customer's experience. Many organizations understand the concept of third-party Web services but still test only the content they themselves serve up. Keep tabs on every factor that affects your customers' experience, including third-party data and services. And remember: just because a third-party Web service works well for some of your customers doesn't mean it will for all of them.
- Know your customers, their profiles and their usage patterns. What kind of browsers do they use? What kind of machines? How do they connect to the Internet? Where in the world are they located? What are their usage patterns (e.g., days, nights, weekends or certain paths through the application)? All of those factors affect the customer experience, so make sure your application will work well for your customers. And just because your third parties deliver well in one geographic area doesn't mean they will in all of them. You can't assume your third parties are as consistent as you are.
- Create a browser compatibility lab consisting of all the possible browser/operating-system combinations users could have, including cell phones (I plan on being one of the first in line for an iPhone) and the BlackBerry. The open source Selenium testing tool is a good way to automate tests on any browser/OS combination. Selenium Remote Control lets developers code tests in their favorite language and drive those browser/OS combinations remotely. An alternative to Selenium is Watir. Both are hosted at OpenQA.org. And Firebug, an open source Firefox extension, is the Swiss Army knife for Web 2.0 developers and QA engineers alike.
- Capture screenshots and movies of actual tests on those platforms so you can gain real insight into any problems, their impact and how to fix them. This is an emerging functionality available in very few commercial testing tools, but it can really help when trying to determine why an automated test failed.
- Capture logged activity in the browser during automated tests and during production. There's too much application logic in the UI to ignore. Firebug provides this support for Firefox, and Safari has built-in support. Consider Firebug Lite for IE and other browsers. The hardest trick is transferring logs from the browser to persistent storage, but the payoff is well worth it.
- Incorporate the browser into your continuous integration (CI) processes. Most CI implementations test server code but don't account for the increasing amount of activity occurring in the browser. Incorporating the browser takes a bit more time and resources, but it ensures you test the real end-user experience, which is critical these days.
- Consider "on-demand" testing. Testing on real multiple browser/OS combinations and capturing gigabytes of performance data require much more testing infrastructure than most organizations want to invest in. Using on-demand testing (Software as a Service) lets you leverage someone else's testing horsepower, architecture and setup investment. Then you can just rent a browser lab as needed.
- Refactor tests as Web applications evolve. Ajax has changed how Web applications are built, and in turn it has made their automated tests more tightly coupled to the code. The tighter the coupling, the more attention data consistency and test fixtures require. In the old days, defining an automated test on a Web application was easy: every step in a use case (Joe Surfer visits www.xyz.com and clicks on the log-in link, etc.) corresponded to a new page view. With Ajax, things are more complicated, so refactoring is critical.
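To make the first suggestion's release criteria concrete, here is a minimal Python sketch of a measurable latency criterion. The helper name and the two-second budget are illustrative assumptions, not taken from any particular tool.

```python
import time

# Hypothetical release criterion: a user action must complete in under
# two seconds. Wrapping the check in a helper lets you measure it
# often, during development and after release.
def meets_latency_budget(action, budget_seconds=2.0):
    """Time a callable and report (passed, elapsed_seconds)."""
    start = time.time()
    action()
    elapsed = time.time() - start
    return elapsed <= budget_seconds, elapsed
```

In a real suite, `action` would drive the browser through the page in question; here it can be any callable, which keeps the criterion runnable in development, in CI and in production monitoring alike.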
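For keeping tabs on third-party feeds, one hedged approach is a periodic dependency probe. Everything in this sketch is illustrative: the endpoint names are made up, and `fetch` is injected so the same check can run against a stub in tests or a real HTTP client in a monitor.

```python
# Probe each third-party dependency and report which ones are failing.
# `fetch(url)` should return an HTTP status code, or raise on error.
def check_dependencies(endpoints, fetch):
    failures = []
    for name, url in endpoints.items():
        try:
            status = fetch(url)
        except Exception:
            status = None  # a network error counts as a failure
        if status != 200:
            failures.append(name)
    return failures
```

Run from several geographic locations, a probe like this also catches the case where a third party works for some of your customers but not all of them.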
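Profiling your customers' browser mix can start with tallying the User-Agent strings already sitting in your access logs. This is a rough sketch only; the substring matching rules below are deliberately simplistic.

```python
from collections import Counter

# Map User-Agent substrings to browser families (circa 2007; crude on
# purpose -- real User-Agent parsing has many more special cases).
FAMILIES = [("MSIE", "Internet Explorer"), ("Firefox", "Firefox"),
            ("Safari", "Safari"), ("Opera", "Opera")]

def browser_profile(user_agents):
    """Count how many visits came from each browser family."""
    counts = Counter()
    for ua in user_agents:
        for token, family in FAMILIES:
            if token in ua:
                counts[family] += 1
                break
        else:
            counts["Other"] += 1
    return counts
```

The resulting counts tell you which browser/OS combinations your compatibility lab must cover first.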
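A compatibility lab starts with the matrix of combinations to cover. This sketch enumerates pairs using Selenium Remote Control's browser launcher strings (e.g. "*firefox"); the skip list is an illustrative assumption, and the remote launch itself is left out.

```python
from itertools import product

BROWSERS = ["*iexplore", "*firefox", "*safari", "*opera"]
PLATFORMS = ["Windows", "Mac", "Linux"]

# Pairs assumed not to exist in practice (illustrative, not exhaustive).
SKIP = {("*iexplore", "Mac"), ("*iexplore", "Linux"),
        ("*safari", "Windows"), ("*safari", "Linux")}

def lab_matrix():
    """Every browser/OS pair the lab should exercise."""
    return [(b, p) for b, p in product(BROWSERS, PLATFORMS)
            if (b, p) not in SKIP]
```

Each pair would then be handed to a Selenium RC server running on a machine with that operating system, so one test script exercises the whole matrix.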
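On transferring browser logs to persistent storage: one hypothetical design is a beacon endpoint that the page POSTs batched log entries to. This helper is only the server-side piece, appending entries as JSON lines for later analysis; the client-side batching is assumed.

```python
import json

def persist_log_entries(entries, path):
    """Append browser log entries (dicts) to a JSON-lines file."""
    with open(path, "a") as f:
        for entry in entries:
            f.write(json.dumps(entry) + "\n")
```

One JSON object per line makes the file easy to grep during triage and easy to load back for analysis after a failed automated run.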
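Refactoring Ajax tests usually means replacing "wait for the next page load" with "wait for a condition to become true" (Selenium exposes this idea as waitForCondition). A minimal version of that polling loop:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` elapses."""
    deadline = time.time() + timeout
    while True:
        if condition():
            return True
        if time.time() >= deadline:
            return False
        time.sleep(interval)
```

A test would pass a condition such as "the search results div is populated" instead of waiting for a page transition, which is exactly the kind of coupling to application internals that makes fixtures and data consistency matter more.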
Adopting these strategies will ensure that your QA testing is sophisticated, thorough and agile enough to keep up with the evolving complexity of Web 2.0 applications. Good, clean, reliable, high-performing code will help transform the Web 2.0 vision and its enormous potential into a reality that raises the value of the Internet.
About the author: Patrick Lightbody is QA solutions product manager and chief open source evangelist for Gomez Inc., provider of on-demand web application experience management services. You may contact him at firstname.lastname@example.org.