
The state of performance testing

Some might say 2007 was the year the software industry started taking performance testing seriously. In this month's Peak Performance column, Scott Barber takes a look at what happened in the performance testing field in 2007 that made that so, and offers some predictions for 2008.

Scott Barber, software tester

Since I started writing a monthly column almost three years ago, I've made a habit of writing an annual "year in review" piece that summarizes the trends of the previous year and offers some thoughts on what the next year is likely to bring for performance testing.

Throughout most of 2007, I was thinking that I wouldn't have much to say this year, but by the end of the year, there were plenty of significant events to discuss. In fact, 2007 might just end up being remembered as the year that the software development industry as a whole started taking performance testing seriously (similarly to, but not nearly as dramatically as, Y2K, when testing seemed to become a standard part of Web development). Let's take a look at why I'm willing to pose that possibility.

Public awareness
Even with the extremely high-visibility issues related to TurboTax as the tax season closed and the ticket-purchasing issues when the Colorado Rockies advanced to the World Series, 2007 had a fairly typical number of newsworthy performance-related failures. What was different from previous years was that, for the most part, the companies experiencing those failures had a completely reasonable-sounding performance testing and/or capacity and scalability planning program in place. So if these companies had programs in place, why were there still such noteworthy performance issues? Based on the data points I have, there were four main reasons.

Decision makers continue to do the following:

 

  • Put faith in performance data extrapolated from not-very-production-like performance test environments.

     

  • Stop testing performance when systems are promoted into production, instead of using production data to validate or refine the assumptions made during performance testing and thereby improve the accuracy of the performance test results.

     

  • Trust numbers obtained from somewhere other than studies of actual users of the system under test as the basis for deciding whether end users will be satisfied with the application's perceived performance.

     

  • Believe that the back end is the source of the majority of user-perceived response-time-related issues.

Facts that these decision makers seem to consistently fail to take into account:

 

  • Subtle differences between test and production environments and usage continue to be at the root of a large percentage of performance failures.

     

  • Successful performance testing and management programs continue testing performance, with much of the core testing and tuning team in place, through release into production. They do not complete the hand-off to the application maintenance team until after the first production patch release.

     

  • The only way to know if your end users will be satisfied with application performance is to ask them.

     

  • The back end is the source of the majority of capacity- and scalability-related issues. The front end (UI design, content delivery and distribution, effective use of browser caching, etc.) is responsible for many more user-perceived response time issues than the back end; the sketch below shows one example of the kind of front-end fix involved.
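
To make the front-end point concrete, here is a minimal, hypothetical sketch of one tactic the bullet alludes to: sending long-lived cache headers for static assets so that returning visitors' browsers reuse them instead of re-downloading unchanged files. It is not taken from the column or from any particular product; the handler name, port and file-extension list are illustrative assumptions, and it relies only on Python's standard http.server module.

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Hypothetical illustration: serve static assets with far-future cache headers
    # so browsers can reuse them instead of re-fetching on every page view.
    class CachingStaticHandler(SimpleHTTPRequestHandler):
        CACHEABLE = (".css", ".js", ".png", ".gif", ".jpg")

        def end_headers(self):
            if self.path.endswith(self.CACHEABLE):
                # Static assets: cache for one year; rename the file to bust the cache.
                self.send_header("Cache-Control", "public, max-age=31536000")
            else:
                # HTML and everything else: make the browser revalidate each time.
                self.send_header("Cache-Control", "no-cache")
            super().end_headers()

    if __name__ == "__main__":
        # Serves the current directory at http://localhost:8000/ for demonstration.
        HTTPServer(("localhost", 8000), CachingStaticHandler).serve_forever()

The particular server technology doesn't matter: whether the headers come from Web server configuration or from application code, cutting repeat downloads of unchanged assets is one of the cheaper user-perceived response time improvements available, and it is exactly the kind of front-end concern Souders' book addresses.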

Of all the high-profile performance failures during 2007 that I have reliable information about, only one was not instigated by one (or more) of those four items. In that one case, the application experienced a growth of users that was nearly an order of magnitude greater than its creators' seven years' worth of adoption rate trend data (plus a safety factor of 2) predicted. I guess even the best performance testing and management programs can't protect applications from being victims of massive, unanticipated success.

Availability of information and training
Last year will probably be best remembered by performance testers for an explosion in the availability of non-vendor-centric, tool-independent, process-neutral information and training directly relevant to their work.

Ever since I first started thinking of myself as a performance tester and began looking beyond the people in the company I was working for, I've been extremely disappointed in the lack of publicly available information and training directly relevant to testing software system performance during the software development cycle. In fact, with the exception of a few fabulous articles by Alberto Savoia, the only information and training I found back then that was directly related to what most performance testers do was created by performance test tool vendors. Naturally, those vendor-centric books and courses focused on teaching someone how to use the vendor's tool and were heavily biased toward making the tool look good rather than actually trying to teach people how to do useful performance testing.

Some of the most significant new books and courses for performance testers that became available during 2007 are listed below. To be fair, I was significantly involved in creating many of these books and courses. While that may make me more excited about them, it does not change the fact that they exist, that they are new, and that over the past 10 years (at least) no single year has introduced nearly this much new material related to software performance testing.

New books

 

  • High Performance Web Sites: Essential Knowledge for Front-End Engineers by Steve Souders.
    This is the best single reference I have found for helping software testers learn what has a significant negative impact on the user-perceived response time of a Web site.

     

  • Performance Testing Guidance for Web Applications, a Microsoft patterns & practices book by J.D. Meier, Scott Barber, Carlos Farre, Prashant Bansode and Dennis Rea. (Also available as a free PDF download and in Web format.)
    Don't let the Microsoft patterns & practices branding fool you; this is a tool-, technology- and process-neutral book that provides sound guidance for performance testers of all experience levels.

New training courses not affiliated with vendors

 

Tools and vendors
Overall, despite steady industry interest in the performance test tool market, 2007 was a slow year for tool vendors; most of the major vendors were still recovering from mergers and acquisitions. A few events are worthy of mention, though. The situation can be summarized as follows:

 

  • Over the past two years, many of the major performance test tool vendors were bought by or merged with large software companies.

     

  • We are still at least a year out from knowing who will emerge as the "premier" enterprise-grade performance testing tool vendor.

     

  • Microsoft eased into the performance test tool market in 2007, but it is still about a year out from becoming a significant player outside of existing Visual Studio development shops. (Its tool is currently sound, but feature-poor.)

     

  • JMeter made a lot of headway as a widely applicable, credible free tool.

     

  • RadView launched a free version of WebLOAD.

As far as I can tell, LoadRunner retained the top spot in the performance test tool market during 2007 mostly due to inertia. That isn't a judgment of whether or not it belongs in the top spot, but rather an observation that so little has changed in the enterprise-grade performance test tool market this year that it seems most likely that whoever led the market at the end of 2006 would continue to lead it today regardless of actual qualifications.

I believe the performance test tool market is in a state where it's equally likely that LoadRunner will solidify its position or that any of several vendors will dethrone it with the next round of major releases. Be that as it may, I don't expect to see any major releases before late Q3/early Q4 2008, and I don't anticipate that the market will understand their impact before Q2 2009.

Looking toward 2008
This past year proved to be one of increased interest and awareness in the area of performance testing, and 2008 is poised to experience more of the same. As noted above, a wealth of new books and training is available, and upcoming events underscore this interest: the tenth meeting of the Workshop on Performance and Reliability has announced "How can we teach performance testing?" as its spring 2008 theme. I am hopeful that we'll see significant advances in the state of the practice of performance testing by the close of 2008.

By late 2008/early 2009, I strongly suspect that one or more of the performance test tool vendors will get its act back together, likely resulting in a battle for the top spot in the tool vendor market. Once the vendors get settled in with their next releases, I will be interested to see whether they start training people to be effective performance testers, thereby acknowledging that their training heretofore has been about their tools rather than about the craft. Or will they keep doing what they have done in the past, leaving those of us who actually care about helping people become effective performance testers to figure out how to get our message heard over vendors with multi-million-dollar marketing budgets?

Whether the particulars of these predictions come true or not, 2008 is unlikely to be a boring year for folks involved in the software performance testing industry.

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.

