
Test automation: Investing in performance testing

Performance test automation can require special tools and skills, and organizations often don't know where to start. In this tip by Agile Testing co-author Lisa Crispin, you'll learn the steps to analyze your needs, evaluate solutions, build the environment and test your application's performance before it's deployed to production.

Adding the word “automation” to the phrase “performance testing” is probably unnecessary. It’s difficult to do performance testing without automation. “OK, everyone, at 3:00 pm we’ll all log into the application and do something” isn’t usually a very good approach.

Test automation by itself is a scary topic for most people, and most testers have even less experience with performance testing. Some well-known performance and load test tools are expensive. Some companies offer specialized performance testing services. Where do you even get started?

Analyze your needs

Any project has to start with requirements. What problems do you need to solve with performance testing? You need clearly defined expectations from your business and customers. My team’s web application had few concurrent users in the early years, but we hoped one day our company would grow enough to make system performance an issue, so we tried to stay ahead of that curve.

An internet retail site has different performance testing needs than an internal batch job. Ask your stakeholders for the performance goals. How many users will be logged in at any given time? What response time is required? How many items need to be processed in a batch job, and how fast must it complete?
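It can help to capture those answers somewhere executable so automated tests can fail when a goal is missed. The small sketch below is only one way to do that; every number in it is a made-up example for illustration, not a recommendation from the article.

```java
// Illustrative sketch: record stakeholder performance goals as constants
// so automated tests can assert against them. All numbers are made-up
// examples, not recommendations.
public final class PerformanceGoals {

    public static final int  CONCURRENT_USERS     = 200;    // logged in at once
    public static final long MAX_RESPONSE_MILLIS  = 2_000;  // per request
    public static final int  BATCH_ITEMS          = 50_000; // items per nightly run
    public static final long MAX_BATCH_RUN_MILLIS = 2 * 60 * 60 * 1000; // 2 hours

    private PerformanceGoals() { }

    /** Example check a load-test script might call after each transaction. */
    public static void assertResponseTime(long observedMillis) {
        if (observedMillis > MAX_RESPONSE_MILLIS) {
            throw new AssertionError("Response took " + observedMillis
                    + " ms, goal is " + MAX_RESPONSE_MILLIS + " ms");
        }
    }
}
```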

Consider the technology you’re already using. If you have a Java application, it might make sense to use a performance test tool whose scripts can be coded in Java. You may want to be sure your performance test tool plugs in easily to your continuous integration system.

Who’ll be responsible for actually writing and executing the performance tests? The needs of non-technical testers are different from those of programmers. Does your company have a team of specialists to do the performance, reliability and scalability testing, or does each development team need to do its own? If there is no in-house expertise, you’ll also have to plan time for learning the necessary skills to accomplish the performance testing.

Also consider the life span of the project. Is this a one-shot effort, or will you need to do performance testing as part of your development process for years to come?


There’s a lot of mystery around performance testing that stems from lack of general knowledge about it. A competent programmer can write her own performance test harness to create a system load via existing unit tests. However, being able to test an internet application with a realistic load of users from various parts of the country or the world requires specific infrastructure and expertise.
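For example, a home-grown harness can be as simple as reusing an existing test action and driving it from many threads at once while recording timings. The sketch below is purely illustrative, assuming plain JDK concurrency utilities; the test action is a hypothetical stand-in for whatever unit tests your team already has.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch only: reuse an existing unit-test-style action to
// generate concurrent load and report rough timing numbers.
public class SimpleLoadHarness {

    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;    // assumed load level
        int iterationsPerUser = 20;  // assumed repetitions per "user"

        // Hypothetical stand-in for an existing unit test's core action.
        Runnable existingTestAction = () -> {
            // e.g. new PlaceOrderTest().placeSingleOrder();
        };

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        AtomicLong totalMillis = new AtomicLong();
        AtomicLong maxMillis = new AtomicLong();

        for (int u = 0; u < concurrentUsers; u++) {
            pool.submit(() -> {
                for (int i = 0; i < iterationsPerUser; i++) {
                    long start = System.currentTimeMillis();
                    existingTestAction.run();
                    long elapsed = System.currentTimeMillis() - start;
                    totalMillis.addAndGet(elapsed);
                    maxMillis.accumulateAndGet(elapsed, Math::max);
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long calls = (long) concurrentUsers * iterationsPerUser;
        System.out.printf("calls=%d avg=%dms max=%dms%n",
                calls, totalMillis.get() / calls, maxMillis.get());
    }
}
```

A harness like this only exercises code paths your tests already cover on a single machine; distributing realistic traffic geographically is where the specialized infrastructure comes in.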

Evaluating solutions

You need lots of lead time to choose a performance testing tool, learn it, build a performance test environment, get a baseline of performance, and start running tests and evaluating results. Start early. My team wrote a user story to evaluate performance test tools months before we planned to do any testing. We budgeted time to research tools and get recommendations. When I posted a request to a testing mailing list, I not only got helpful tool references, but some people even offered to help us learn the tool.

Choose the top two or three tool candidates and plan time to give them each a trial. Once our team weighed in and chose two tools that looked most appropriate for us, we again wrote a user story and one tester took responsibility to do a trial run with each tool. He gave us a demo of each tool, listed the pros and cons of each, and we agreed on the best fit for our situation.

Building the environment

The trickiest part of performance testing is that it must be run in an environment where the results will be meaningful. When I worked for an internet retail company, the only meaningful place to run the test was in production. We had a specialist performance testing provider run a test one night per year, during which we closed our site off to all customer traffic. At my current financial services company, we had to invest in hardware and software (including a database) to replicate our production environment. We needed a lot of database tuning, so it was essential to have an exact copy of production. This performance test environment doubles as a mirror backup for production in case of dire emergency.

This is likely to be a significant investment for your company. That’s another reason you need lots of lead time: hardware, software, training and other needs must be included in the budget.

Doing the testing

The best place to start with performance testing is usually to get a baseline of your current production application performance. This gives you a benchmark against which you can compare performance of new or updated code in the future. Getting a benchmark generally involves writing test scripts that carry out realistic user activities, and executing them with ever-increasing numbers of “users” driven by the tool. Performance metrics for a Web-based application usually include data such as maximum time per transaction at each load level, maximum number of busy connections, and page load time. For batch jobs, we measure the maximum and average time per transaction or per item processed.
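As a rough illustration of that ramp-up, the sketch below drives a single hypothetical URL with ever-larger thread counts and prints average and maximum response time at each load level. Real tools capture far richer metrics (busy connections, page composition, error rates); the URL and the load steps here are placeholder assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: step through increasing "user" counts against one
// URL and print average/max response time per load level.
public class BaselineRamp {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-env.example.com/login")) // placeholder
                .build();

        for (int users : List.of(10, 25, 50, 100)) {  // assumed ramp-up steps
            ExecutorService pool = Executors.newFixedThreadPool(users);
            AtomicLong total = new AtomicLong();
            AtomicLong max = new AtomicLong();

            for (int u = 0; u < users; u++) {
                pool.submit(() -> {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        // a real script would record failures separately
                    }
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    total.addAndGet(ms);
                    max.accumulateAndGet(ms, Math::max);
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);

            System.out.printf("users=%d avg=%dms max=%dms%n",
                    users, total.get() / users, max.get());
        }
    }
}
```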

Profiling and monitoring tools can help identify and investigate performance issues, both in production and in test environments. For example, you can monitor application memory use and watch for leaks. You can watch the database pools and connections and see if tuning is required.
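As one small example of monitoring that needs no dedicated tool, the JVM exposes heap usage through its management beans; the sketch below simply samples it periodically during a test run, and a baseline that keeps climbing even after garbage collections is a hint worth handing to a full profiler. The sampling interval is an arbitrary assumption.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Illustrative sketch: periodically sample JVM heap usage during a test
// run to watch for steady growth that could indicate a memory leak.
public class HeapSampler {

    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%d MB committed=%d MB%n",
                    heap.getUsed() / (1024 * 1024),
                    heap.getCommitted() / (1024 * 1024));
            Thread.sleep(10_000); // assumed 10-second sampling interval
        }
    }
}
```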

On many projects, performance testing is left until just before release. This is usually a mistake. It’s too late at that point to do anything to improve performance. If your product is functionally perfect, but the response time is so slow that customers abandon it, nothing else really matters.

If your team is implementing new code architecture, start by doing a “spike,” writing code for a potential technical solution. This code won’t be kept as production code, at least in the form it takes for the spike, but you can deploy it and test its performance to make sure it scales well enough. When my team starts a new, complex project, we usually have a pair of programmers work on a spike for one or two iterations and verify the performance before we start working on the “real” code.

Other events may require performance testing as well. Last year, for example, we decided to implement a different application server in production, so we ran our performance test scripts to verify the change before relying on it.

Performance testing isn’t inherently difficult. You may not need a pricey tool or outside service. However, to truly predict production performance, you need a dedicated environment that produces results that can be extrapolated into what will happen in production. An investment in hardware and software may be unavoidable, but if the company’s bottom line depends on good performance, the investment will pay off.

