For many businesses the festive season is far scarier than Halloween.
Infrastructure and systems may be pushed to and past peak capacity. Web sites, victims of their own popularity, slow to a crawl or simply don't load. Order tracking, automated response, help and call centers can easily be swamped and become inefficient.
Of course from a technology standpoint these are all understandable, if regrettable and avoidable, issues. But few, if any, consumers will bother to come back to a site that doesn't load quickly or that crashes in the middle of their shopping session. And no one will ever forgive a business that doesn't deliver its holiday gifts on time, no matter what the reason.
The good news is that these reputation-killing issues can be thwarted with effective performance and quality testing. Here are 10 tips and suggestions to help ensure your system delivers the capabilities you need, when you need them.
Holiday prep processes
1. Plan ahead
The holidays are rapidly approaching, and while now may not be the best time to urge you to plan ahead for the seasonal deluge, make a note on your brand-new calendar to remind yourself not to leave next year's holiday scalability testing and planning to the very last minute. Late is better than never, but ideally you should start thinking about how to handle your most demanding business peaks a few months ahead. If your business has slow periods, use them to prepare for the busy ones.
2. Apply triage
If you haven't taken any precautionary measures yet, take the triage approach. Determine the areas that are most likely to be hit hard by increased demand -- yes, a basic risk analysis -- and focus your attention on getting those parts of your system shored up first.
3. Help from outside
Consider bringing in contractors, if necessary, to help staff cope with the holiday rush. Don't let your people (or you) take on so much that they become totally overwhelmed, exhausted and resentful.
4. The customer's perspective
Before you can conduct useful tests, you need to understand how your customers actually use your applications and Web site. All tests should be engineered from the customer's point of view. Too often, performance tests are built on best guesses or on behavioral patterns that bear little resemblance to how real customers behave.
For instance, a poorly designed end-to-end performance test may have its virtual users performing feature-based tasks in distinct, silo-like "workflows" -- continually adding products to the shopping cart, updating user profiles or running search after search -- rather than taking a transactional approach in which virtual users navigate the entire purchasing workflow. The resulting metrics don't reflect what the line of business actually needs to know, and decisions based on this flawed testing approach can go badly wrong.
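The contrast can be sketched in a few lines of Python. Everything here is illustrative -- the endpoints, the catalogue and the 70% conversion rate are assumptions standing in for whatever a real load tool's virtual user would actually exercise:

```python
import random

# Hypothetical catalogue; a real test would use real SKUs.
CATALOGUE = ["sku-100", "sku-101", "sku-102"]

def transactional_session(client, rng=random.random):
    """Walk one complete purchase workflow, the way a shopper would,
    instead of hammering a single feature in a silo.

    `client` is any callable that issues one request (in a load tool it
    would be an HTTP call); the endpoint strings are illustrative only.
    """
    steps = []
    client("GET /")                      # land on the home page
    steps.append("home")
    client("GET /search?q=gift")         # search for a product
    steps.append("search")
    sku = CATALOGUE[int(rng() * len(CATALOGUE))]
    client(f"POST /cart/add/{sku}")      # add it to the cart
    steps.append("add_to_cart")
    if rng() < 0.7:                      # assumed 70% conversion rate
        client("POST /checkout")         # complete the purchase
        steps.append("checkout")
    else:
        steps.append("abandon")          # abandonment is part of real traffic
    return steps
```

In a real load test, hundreds of these sessions would run concurrently, and the per-step timings across the whole workflow -- not raw single-feature throughput -- become the metrics the line of business cares about.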
5. Sync up with business initiatives
The marketing and sales groups should be working in tandem with the application and IT groups in order to properly size infrastructure in relation to the marketing and sales initiatives intended to drive customers to the application.
As an example, during last year's holiday season, Apple's iTunes music store was swamped by online shoppers attempting to redeem their iTunes gift cards and add music to their new iPods. Downloads of a single song took up to 20 times longer than usual, and users who were able to access the store -- many couldn't -- received a barrage of error messages. Similar slowdowns and shutouts were experienced by users of Amazon.com and Wal-Mart's Web site, when users lured by online sales crushed the companies' servers. And just last week Yahoo's shopping site experienced outages as shoppers took to the Web during "Cyber Monday."
All companies survived the onslaught and likely won back disgruntled consumers, but these are insanely popular sites, ones that virtually always offer good user experiences. Consumers may not be so forgiving elsewhere.
6. Keeping it real
Be sure to perform real-world end-to-end tests. Testing in a lab environment is valuable for establishing the best-case performance of an application. However, the lab environment and the target production environment usually differ considerably -- not only in the application's core infrastructure (i.e., hardware and software configurations), but also in the other networking devices and shared network links that may affect the target application's performance.
Ancillary networking devices may have quality-of-service or traffic-shaping policies that prioritize certain applications and/or protocols. Security devices may block certain messages. Shared load balancers may not be configured to manage user sessions the way the application expects. Underpowered network devices may limit the number of concurrent connections they can handle. Any of these factors can erect a logical brick wall in front of the application, making it unavailable even though its core infrastructure has not reached its limits.
7. Swimming downstream
End-to-end testing should also account for the downstream applications that provide services such as order fulfillment or customer service. It's quite plausible that the customer-facing application can support the holiday surge in consumer demand while the downstream applications were never validated. As a result, the order distribution process may become overwhelmed, or the rise in customer service-related calls might overload the customer support application or phone system.
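A back-of-the-envelope model shows why this matters. The rates below are made-up numbers, but the arithmetic is the point: any sustained gap between the front end's order intake and downstream fulfillment capacity compounds into a backlog rather than averaging out:

```python
def backlog_after(order_rate, fulfil_rate, minutes):
    """Estimate the order backlog that builds when the customer-facing
    site accepts orders faster than the downstream fulfillment system
    can process them. Rates are in orders per minute; this is a purely
    illustrative model, not a sizing formula."""
    backlog = 0
    for _ in range(minutes):
        # Each minute, new orders arrive and fulfillment drains what it can.
        backlog = max(0, backlog + order_rate - fulfil_rate)
    return backlog

# A site taking 100 orders/min against a 60-order/min fulfillment system
# is 1,200 orders behind after just half an hour of peak traffic.
```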
8. Talk the talk
When choosing a quality assurance and testing company, proficiency with the mainstream automated performance testing tools should certainly be a prerequisite. But it should not be the key qualifying characteristic, because it is very easy to use these automated solutions improperly. Solution providers need to demonstrate business knowledge: the ability to translate the needs of the business into realistic automated tests that properly emulate its end users and collect performance data that matters to the line of business.
Proper testing cannot be achieved within 30 minutes, which some advertisements have claimed. Proper testing needs to be carefully planned and implemented weeks or even months in advance of the particular holiday season in order to properly test and implement changes if necessary.
9. Follow the trends
Performance management should not stop after the application has been released. Synthetic business transactions should be continually executed against the production application in order to identify when there are potential performance-related issues. These metrics can also be collected and analyzed to understand if there are any patterns or trends related to business cycles, data growth, consumer demand, etc.
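One low-tech way to implement such synthetic checks is a script that runs a business transaction on a schedule and flags runs far slower than the recent baseline. The sketch below assumes the transaction is wrapped in a zero-argument callable; the 3x-median threshold is an arbitrary illustration, not a recommendation, and a real monitoring product would also persist the metrics for trend analysis:

```python
import time
import statistics

def run_synthetic_checks(probe, runs=5, slow_factor=3.0):
    """Execute a synthetic business transaction `runs` times and flag
    latencies more than `slow_factor` times the median -- a simple
    stand-in for a monitoring product's outlier detection.

    `probe` is any zero-argument callable that performs one end-to-end
    transaction (e.g. log in, search, add to cart, check out).
    """
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        probe()
        latencies.append(time.perf_counter() - start)
    baseline = statistics.median(latencies)
    outliers = [t for t in latencies if t > slow_factor * baseline]
    return baseline, outliers
```

Scheduling this to run every few minutes against production, and alerting on the outlier list, gives early warning of degradation long before customers start complaining.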
10. The elusive consumer
Ongoing analysis of application usage can assist in further understanding consumer navigational patterns and help to fine-tune any ongoing performance testing of the application. The ultimate goal is to provide an optimum experience year-round for every single one of your users.
About the author: Matthew Adcock is a certified performance testing expert and team lead in the Performance/Scalability Division of RTTS. Matt has more than 11 years of experience in modeling and executing scalability tests for Fortune 500 firms, including a history of helping firms effectively prepare for the holiday rush.