
The state of performance testing

By Scott Barber


Since I started writing a monthly column almost three years ago, I've made a habit of writing an annual "year in review" piece that summarizes the trends of the previous year and offers some thoughts on what the next year is likely to bring for performance testing.

Throughout most of 2007 I thought I wouldn't have much to say this year, but by year's end there were plenty of significant events to discuss. In fact, 2007 might just end up being remembered as the year the software development industry as a whole started taking performance testing seriously (similarly to, though not nearly as dramatically as, Y2K, when testing seemed to become a standard part of Web development). Let's take a look at why I'm willing to suggest that possibility.

Public awareness
Even with the extremely high-visibility issues related to TurboTax as the tax season closed and the ticket purchasing issues when the Colorado Rockies advanced to the World Series, 2007 had a fairly typical number of newsworthy performance-related failures. What was different from previous years is that, for the most part, the companies experiencing those failures had a completely reasonable-sounding performance testing and/or capacity and scalability planning program in place. So if these companies had programs in place, why were there still such noteworthy performance issues? Based on the data points I have, there were four main reasons.

Decision makers continue to do the following:

 

Facts that these decision makers seem to consistently fail to take into account:

 

Of all the high-profile performance failures during 2007 that I have reliable information about, only one was not instigated by one (or more) of those four items. In that one case, the application experienced a growth of users that was nearly an order of magnitude greater than its creators' seven years' worth of adoption rate trend data (plus a safety factor of 2) predicted. I guess even the best performance testing and management programs can't protect applications from being victims of massive, unanticipated success.
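To make that arithmetic concrete, here is a minimal sketch, using purely hypothetical numbers of my own rather than figures from that case, of how a capacity target built from adoption-trend data plus a safety factor of 2 can still be dwarfed by order-of-magnitude growth:

    def capacity_target(predicted_peak_users, safety_factor=2.0):
        # Peak concurrent users the system is sized and performance tested for:
        # the trend-based prediction multiplied by a safety factor.
        return predicted_peak_users * safety_factor

    predicted_peak = 10_000                    # hypothetical peak from years of adoption trend data
    target = capacity_target(predicted_peak)   # 20,000 users planned and tested for
    actual_peak = predicted_peak * 10          # roughly an order of magnitude more than predicted

    print(f"Planned-for peak: {target:,.0f} users")
    print(f"Actual peak:      {actual_peak:,.0f} users")
    print(f"Overrun:          {actual_peak / target:.1f}x beyond the tested capacity")

Even with the safety factor, the system in this sketch faces five times the load it was ever tested against, which is the kind of gap no reasonable test program can be faulted for missing.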

Availability of information and training
Performance testers will probably remember last year best for an explosion in the availability of relevant information and training that is non-vendor-centric, tool-independent and process-neutral.

Ever since I first began looking beyond my own company, around the time I started to think of myself as a performance tester, I've been extremely disappointed by the lack of publicly available information and training directly relevant to testing software system performance during the development cycle. In fact, with the exception of a few fabulous articles by Alberto Savoia, the only information and training I found back then that was directly related to what most performance testers do was created by performance test tool vendors. Naturally, those vendor-centric books and courses focused on teaching someone how to use the vendor's tool and were heavily biased toward making the tool look good rather than actually teaching people how to do useful performance testing.

Some of the most significant new books and courses for performance testers that became available during 2007 are listed below. To be fair, I was significantly involved in creating many of them. While that may make me more excited about them, it does not change the fact that they exist, that they are new, and that no single year in at least the past 10 years has introduced nearly this much new material related to software performance testing.

New books

 

New training courses not affiliated with vendors

 

Tools and vendors
Overall, despite steady industry interest in the performance test tool market, 2007 was a slow year for tool vendors, with most of the major players still recovering from mergers and acquisitions, though a few events are worth mentioning. The situation can be summarized as follows:

 


As far as I can tell, LoadRunner retained the top spot in the performance test tool market during 2007 mostly due to inertia. That isn't a judgment of whether or not it belongs in the top spot, but rather an observation that so little has changed in the enterprise-grade performance test tool market this year that it seems most likely that whoever led the market at the end of 2006 would continue to lead it today regardless of actual qualifications.

I believe the performance test tool market is in a state where it is equally likely that LoadRunner will solidify its position or that any of several vendors will dethrone it with the next round of major releases. Be that as it may, I don't expect to see any major releases before late Q3/early Q4 2008, and I don't anticipate that the market will understand their impact before Q2 2009.

Looking toward 2008
This past year proved to be one of increased interest and awareness in performance testing, and 2008 is poised to bring more of the same. As noted above, a wealth of valuable new books and training is available, and upcoming events underscore this interest: the tenth meeting of the Workshop On Performance and Reliability has announced that its spring 2008 theme is "How can we teach performance testing?" I am hopeful that we'll see significant advances in the state of the practice by the close of 2008.

By late 2008 or early 2009, I strongly suspect that one or more of the performance test tool vendors will get its act back together, likely resulting in a battle for the top spot in the tool market. Once the vendors settle in with their next releases, I will be interested to see whether they start training people to be effective performance testers, thereby acknowledging that their training heretofore has been about their tools rather than about effective performance testing. Or will they keep doing what they have done in the past, leaving those of us who actually care about helping people become effective performance testers to go back to figuring out how to get our message heard over vendors with multimillion-dollar marketing budgets?

Whether the particulars of these predictions come true or not, 2008 is unlikely to be a boring year for folks involved in the software performance testing industry.

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.


20 Jan 2008
