
When the flag drops, will your software perform?

As with race cars, once you get the green flag to release software into production, there's no going back. Now's the time when you see how good your testing and preparation were.

By Scott Barber, software tester

Countless hours of development are now in the past. Testing indicates that everything is ready for the big day. The whole team is on hand, and the world is watching. It's the moment of truth; time to find out if all of the hard work is going to pay off. Anticipation builds until the command is given…

"Gentlemen, start your engines!"

The cars come to life. They take a few pace laps and at last, the green flag drops. In fewer than 90 seconds the cars are back on the front stretch approaching speeds of 200 mph -- the pinnacle of stock car performance.

This summer I worked on a project in Indianapolis. Usually when I travel to remote client sites I fly home on the weekends, but there was one weekend that I chose to stay. I chose to stay for two reasons. First, the flights for that weekend were insanely expensive and second, I have some friends in Indianapolis whom I'm always happy to have an excuse to visit. As luck would have it, the flights were expensive because that was the weekend of the Brickyard 400, and one of the friends I wanted to spend time with had a spare ticket, which I shamelessly accepted when he offered.


During the pomp and circumstance leading up to the start of the race I realized what a fabulous example the race was of one of my most-quoted sound bites related to performance testing: "Don't confuse delivery with done."

Take a moment to think about it. These cars are designed, built and tested in excruciating detail from the ground up. In some cases, a new car is built explicitly to be used on a particular track. In other cases, a car is built with enough variability to be competitive on several tracks with similar characteristics. The cars are built and tuned based on historical data about the track, this year's weather predictions for race time, driver preferences, which tire Goodyear is providing, etc.

I don't know how many hours are spent by how many people to prepare a car to qualify for a race such as this, but I suspect it's a lot. I also don't know how far in advance preparations begin, but I do know that the teams have to have cars ready for races the week before and the week after any given race, so I suspect that preparation during the week leading up to the race is rather intense.

I imagine that the engine crew, for example, must feel the same kind of pride and relief when their car qualifies well and makes the field as we do when our application performs successfully during our series of performance tests. I also imagine that they feel the same kind of nervous excitement leading up to the green flag as we feel as our application is promoted into production and users start to access it for the first time.

The dropping of the green flag is delivery. In both cases, there is no going back. If there are performance issues that weren't found, everyone is going to know about it. The difference is that on many software development teams, the moment of delivery into production equates to the developers and testers being done. By done I mean that they are assigned to another development project. They may be called back for a future release or if there is a significant enough failure, but in most cases the team is truly done with the project.

For race teams, that simply isn't the case. All of the people who designed, built and tuned the car for the race go right back to work as soon as that car's engine is fired up. They go to work collecting, processing and interpreting data they are receiving from all of the monitoring tools that are built into the car.

The race team has about 20 minutes to calculate additional adjustments for the car, because when the car comes into the pits there is an opportunity to further tune for better performance. For example, virtually every pit stop involves changing the tires on the car. If you are not familiar with the sport, you might be surprised to find out that a difference of one-half of one pound of air pressure in one tire can have a dramatic impact on the car's performance on the track. With that kind of precision in play, these folks must be absolute masters at setting up and configuring various monitors and at collecting and interpreting that monitoring data in real time.

For the software development team, that would be roughly equivalent to the following scenario:

The application is promoted into production late Sunday night. At 8 a.m. on Monday, the first production users begin accessing the system. Between 8 and 9 a.m. the team is told to monitor production performance closely because between 9 and 9:05 a.m. there will be a pause in production usage while they adjust configuration settings on the production system to further optimize performance.

While that, of course, simply isn't the way software development works, some teams do have very advanced production monitoring strategies that feed into periodic, scheduled hot fixes to optimize production performance. Teams that do this well tend to avoid the crashes and early retirements that plague applications (or cars, for that matter) that are not well monitored and optimized while in production.
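To make that feedback loop a little more concrete, here is a minimal sketch of what a production monitoring check might look like in code. It is written in Python and assumes a hypothetical metrics endpoint that reports 95th-percentile response time as JSON; the URL, threshold and polling interval are illustrative placeholders rather than a reference to any particular monitoring product.

# Illustrative only: a tiny monitoring loop that samples a hypothetical
# metrics endpoint and flags when p95 latency drifts past a target --
# the software equivalent of the race team watching telemetry and
# planning what to adjust at the next pit stop.
import json
import time
import urllib.request

METRICS_URL = "http://localhost:8080/metrics"   # assumed endpoint; adjust to your stack
P95_THRESHOLD_MS = 800                          # assumed service-level target
CHECK_INTERVAL_SECONDS = 60

def fetch_p95_latency_ms():
    # Assumes the endpoint returns JSON shaped like {"p95_ms": 412}.
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        data = json.load(resp)
    return data["p95_ms"]

def main():
    while True:
        try:
            p95 = fetch_p95_latency_ms()
            if p95 > P95_THRESHOLD_MS:
                # In a real setup this would page someone or open a tuning
                # ticket feeding the next scheduled optimization pass.
                print(f"ALERT: p95 latency {p95} ms exceeds {P95_THRESHOLD_MS} ms target")
            else:
                print(f"OK: p95 latency {p95} ms")
        except Exception as exc:
            print(f"Monitoring check failed: {exc}")
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()

The point is not the particular script but the habit it represents: someone is still watching the application after delivery, and what they see feeds directly into the next round of tuning.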

All of the race teams that ran in the Brickyard 400 understood that "delivery" occurs at the start of the race and that "done" is the checkered flag, and each of those teams employed performance monitoring and optimization strategies between delivery and done. Can your software development team say the same?

----------------------------------------
About the author: Scott Barber is the chief technologist of PerfTestPlus, vice president of operations and executive director of the Association for Software Testing and co-founder of the Workshop on Performance and Reliability.


This was last published in September 2007
