
Load testing with Microsoft Visual Studio Team System

An expert introduces technical tips for users of Microsoft's Visual Studio Team System (VSTS), making it more efficient to get started with VSTS and load test web applications.

Microsoft Visual Studio Team System 2008 has become a mature load testing tool for web-enabled applications. The combination of an integrated development environment and a testing framework that supports load testers who may or may not be .NET developers certainly makes VSTS a candidate for any organization moving into load testing of web-enabled applications, especially when Visual Studio .NET is the development platform of choice.

There are several papers available on the internet that address the process of creating a VSTS .webtest and using .webtests to create one or more VSTS .loadtests; I will not attempt to duplicate the content of those articles. I will begin with a brief overview of how to create a VSTS .webtest, followed by an overview of how to craft a VSTS .loadtest. Along the way I will introduce technical tips that will help you use VSTS more effectively, or at least provide alternative routes for successfully employing VSTS for load testing web applications.

VSTS .webtest

Recording a .webtest
VSTS Web Testing supports recording activity against an existing website. This is accomplished using a traditional record-and-playback process that allows us to record new web tests in a new or existing test project. The recording is saved once the browser window is closed; we can then instrument the recording from within VSTS.

Recording Tip 1: Comments

Insert comments during the recording process to capture the intent of each event as it occurs. The intent may not be so obvious once you are looking at a stream of HTTP requests without an interface to use as a reference.

Recording Tip 2: Think Time

Think Time is the property that defines the user wait time between requests. Always try to approximate typical user think time; these values can have a significant impact on the run characteristics of future test runs. If you don't believe the recorded think times are appropriate, adjust them after recording.

Instrumenting a .webtest

The recorded .webtest certainly provides the raw .webtest framework. This framework needs to be appropriately instrumented, modified, and customized to meet your load testing needs.

Instrument Tip 1: Transactions

Transactions allow you to package HTTP requests into logical groups. Transactions are used when reporting on a .loadtest run; a consistent, self-sorting naming convention can make future reporting across several .webtests extremely easy. For example:

  • <Application>_<BusinessEvent>_<Step>_<Page>
  • HelpApplication_Login_01_Homepage
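The value of this convention is that report rows sort themselves into business-event order. A minimal sketch (the helper and names are illustrative, not part of VSTS):

```python
# Build self-sorting transaction names: <Application>_<BusinessEvent>_<Step>_<Page>.
# Zero-padding the step number keeps lexical order equal to execution order.
def transaction_name(app, event, step, page):
    return f"{app}_{event}_{step:02d}_{page}"

names = [
    transaction_name("HelpApplication", "Login", 2, "Dashboard"),
    transaction_name("HelpApplication", "Login", 1, "Homepage"),
    transaction_name("HelpApplication", "Login", 10, "Logout"),
]
# Sorting the report rows alphabetically now matches step order.
print(sorted(names))
```

Without the zero padding, step 10 would sort before step 2 and scramble the report.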

Instrument Tip 2: Validation Rules

Validation rules examine the HTTP response and check that the application is behaving as expected. There are several validation rules provided with VSTS, but the most useful one, in my experience, validates the existence of a text string in the given response (ValidationRuleFindText). If a validation rule fails, the request will be marked as failed - the Details tab will provide an explanation of what failed.

The validation rules provide several mechanisms to ensure the expected behavior has occurred. It is important to remember that you are not functionally testing the application; you are testing the architecture. To date, the most useful and consistent are:

  • Maximum Request Time to validate the response occurred within X ms.
  • Response URL to validate the URL is the one that is expected (if this can be predicted).
  • Find Text to validate text contained within the response.
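Conceptually, these three rules reduce to three checks against each response. This sketch is my own illustration of the logic, not the VSTS API (the function and its parameters are assumptions):

```python
# Conceptual sketch of the three checks: max request time, response URL,
# and find-text. An empty failure list means the request passes.
def validate(response_body, response_url, elapsed_ms,
             expected_text, expected_url, max_ms):
    failures = []
    if elapsed_ms > max_ms:
        failures.append(f"request took {elapsed_ms} ms (limit {max_ms} ms)")
    if expected_url is not None and response_url != expected_url:
        failures.append(f"landed on {response_url}, expected {expected_url}")
    if expected_text not in response_body:
        failures.append(f"text {expected_text!r} not found in response")
    return failures

errors = validate("<html>Welcome, DJ</html>", "https://example.test/home",
                  850, "Welcome", "https://example.test/home", 1000)
print(errors)  # -> []
```

Note that a login page returning HTTP 200 with an error message would pass a status-only check; the find-text rule is what catches it.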

Instrument Tip 3: Extraction Rules

Extraction rules capture a response value to be used later, usually within a subsequent request. VSTS automatically adds an ExtractHiddenFields rule when hidden-field data is posted back in a later request. While this ability certainly helps support later executions of the .webtest, it does not address (catch) all correlation issues; fortunately, VSTS comes preloaded with several extraction rules and the ability to extend them.
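The correlation idea behind ExtractHiddenFields can be sketched as follows: pull a hidden input's value out of one response and stash it in a context so the next request can post it back. This is a conceptual illustration (the helper, regex, and field value are my own), not VSTS's implementation:

```python
import re

# Extract a hidden form field's value from an HTML response and store it in
# a context dictionary keyed by field name, for reuse in the next request.
def extract_hidden_field(html, field_name, context):
    match = re.search(
        rf'<input[^>]*name="{re.escape(field_name)}"[^>]*value="([^"]*)"', html)
    if match is None:
        raise ValueError(f"correlation failed: {field_name} not in response")
    context[field_name] = match.group(1)

context = {}
page = '<form><input type="hidden" name="__VIEWSTATE" value="dDwtMTA3" /></form>'
extract_hidden_field(page, "__VIEWSTATE", context)
print(context["__VIEWSTATE"])  # -> dDwtMTA3
```

Values that arrive some other way - in a redirect URL, a cookie, or a script block - are the correlation issues the built-in rule misses, and the reason custom extraction rules exist.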

Instrument Tip 4: Binding Test Data

Extraction and validation rules provide for data capture/entry within the Properties window. Test data can be pulled from a database, making it possible to create a data-driven .webtest that can be leveraged when creating a .loadtest. Each parameter is bound to a column in the data source; when a data row is consumed, the value of that row/column is assigned to the parameter. This is a very powerful tool when creating a data-driven .webtest, and it becomes even more important when the .webtest is included within a .loadtest.

Instrument Tip 5: Consuming Test Data

There are three ways to consume test data: random, unique, and sequential. Unique is very useful for standalone execution of the .webtest to detect and remove data issues before a performance run. Sequential supports formal performance runs; if the data set is large enough to support the user load, you should not encounter data collisions. Remember that the setting in the .webtest is what is used by the .loadtest.
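The difference between the three orders, and why unique needs enough rows, can be sketched like this (my own illustration of the semantics, not VSTS code):

```python
import random

# Conceptual sketch of the three data-consumption orders. "Sequential" cycles
# through the rows in order; "unique" hands out each row at most once, so it
# fails fast when the data set is too small; "random" samples with replacement.
def rows_for_run(rows, order, iterations, seed=None):
    if order == "sequential":
        return [rows[i % len(rows)] for i in range(iterations)]
    if order == "unique":
        if iterations > len(rows):
            raise ValueError("not enough rows for a collision-free run")
        return rows[:iterations]
    if order == "random":
        rng = random.Random(seed)
        return [rng.choice(rows) for _ in range(iterations)]
    raise ValueError(f"unknown order: {order}")

users = ["user01", "user02", "user03"]
print(rows_for_run(users, "sequential", 5))  # wraps back to user01
```

The wrap-around in sequential mode is exactly where data collisions appear when the data set is smaller than the user load.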

Executing a .webtest

Executing a .webtest as a data-driven test will enable you to detect issues before they are exposed during .loadtest execution. It should be noted that the .webtest is the framework upon which a .loadtest executes (transactions, extractions, validations, and data), so it is important to ensure the .webtest is stable before incorporating it into your .loadtest.

Execution Tip 1: Verification

Verification occurs only as it has been defined within the .webtest; this means that a .webtest with no verification rules will always pass, no matter what the actual behavior of the application. Thoroughly test the behavior of the .webtest, verifying that requests and responses behave as expected and that adequate verification rules have been applied.

VSTS .loadtest 

VSTS .loadtest is a performance/load/stress testing tool. Performance tests and load/stress tests determine the ability of the application to perform while under load. During stress/load testing, the tester attempts to stress or load an aspect of the system to the point of failure - the goal being to find weak points in the system architecture - and identifies the peak load conditions at which the system will fail to handle the required processing load within the required time span. During performance testing, the tester designs test case scenarios to determine whether the system meets stated performance criteria (e.g., a login request shall be responded to in 1 second or less under a typical daily load of 1,000 requests per minute). In both cases the tester is trying to determine the capacity of the system under a known set of conditions. The same set of tools and testing techniques can be applied for both types of capacity testing - only the goal of the test changes.

Constructing .loadtest 

The primary components of any .loadtest are the .webtests that simulate the behavior of the business. The .loadtest packages these .webtests into scenarios and then allows you to change the load profile of the group of .webtests contained within each scenario. Within the test project that contains your .webtests you can add a new .loadtest; this invokes the .loadtest wizard, which walks you through the process of instrumenting the scenario, counter sets, and run settings.

Construction Tip 1: Wizard

The .loadtest wizard is extremely useful when constructing a .loadtest the first few times, and it does an excellent job of describing the various instrumentation options available, but I found it more useful to view the .loadtest as a whole using the standard tree view.

Instrumenting a Scenario

Scenarios act to package .webtests into user groups that share a common load profile. Each scenario can be instrumented in terms of think time, load pattern, test mix, browser mix, and network mix. Once again, the .loadtest wizard will walk you through the actual instrumentation of the scenario, and all instrumentation can be adjusted once the scenario exists.

Instrumenting Scenario Think time

In terms of the .loadtest, the think time is the time taken to navigate to the next web page. Each scenario can use the actual think times recorded by the .webtest, apply a normal distribution centered on the recorded think times, or ignore think times between requests entirely. Certain types of test mix preclude (override) any think time instrumentation.
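The normal-distribution option can be pictured as sampling each virtual user's pause around the recorded value. The exact spread VSTS uses is internal to the tool, so the 20% relative deviation below is purely an assumption for illustration:

```python
import random

# Conceptual sketch: sample a per-request think time from a normal
# distribution centred on the recorded value. The 0.2 relative deviation is
# an assumed parameter, not VSTS's actual setting; negative samples are
# clamped because a think time cannot be negative.
def think_time(recorded_seconds, relative_deviation=0.2, rng=random):
    sampled = rng.gauss(recorded_seconds, recorded_seconds * relative_deviation)
    return max(0.0, sampled)

samples = [think_time(8.0) for _ in range(1000)]
print(min(samples) >= 0.0)  # -> True
```

Spreading think times this way avoids the lock-step request bursts you get when every virtual user pauses exactly the recorded number of seconds.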

Instrumenting Scenario Load pattern

In terms of the .loadtest, the load pattern is one way of controlling user load during .loadtest execution; the other is the test mix. The scenario can be instrumented to use a constant load or a step load. A constant load begins, executes, and ends with the same number of concurrent users. A step load starts with a minimum number of users, increases the number of concurrent users based on the step duration and step count, and eventually reaches the specified maximum concurrent user count if the duration of the execution run allows for it.
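The step-load arithmetic is straightforward and worth seeing once. This sketch (parameter names are mine, not VSTS's) computes the concurrent user count at a given elapsed second:

```python
# Conceptual sketch of a step load: start at an initial user count, add
# step_count users every step_duration seconds, and cap at the maximum.
def step_load_users(t, initial, step_count, step_duration, maximum):
    steps_completed = t // step_duration
    return min(initial + steps_completed * step_count, maximum)

# Start at 10 users, add 10 every 30 s, cap at 50:
print([step_load_users(t, 10, 10, 30, 50) for t in (0, 29, 30, 90, 300)])
# -> [10, 10, 20, 40, 50]
```

A constant load is just the degenerate case where the step count is zero.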

Instrumenting Scenario Tip 1: Load pattern

A step pattern usually places a more realistic load on the architecture and will allow you to observe the impact of a growing load within a single test execution - especially useful if you have a limited timeframe for performing the load test.

Instrumenting Scenario Test mix model

The .loadtest test mix model is the second component that controls user load during .loadtest execution, the first being the load pattern. There are three test mix models: "Based on the total number of tests", "Based on the number of virtual users", and "Based on user pace".

Instrumenting Scenario Tip 2: Test mix model

The "Based on user pace" option runs each test the specified number of times per hour. This, in conjunction with the step load pattern, seems to deliver the most predictable number of user events per hour - very useful when you want to simulate a particular production or stress load pattern.

Instrumenting Scenario Browser mix

The .loadtest browser mix simulates the behavior of various browsers; it must be noted that this is a simulation, not the invocation of any particular browser. I have not found a way to verify the accuracy of the browser simulations beyond the fact that certain browsers (e.g., Firefox) seem to have a faster response time than others (e.g., Internet Explorer 6.0), which coincides with my functional test automation experience with these browsers.

Instrumenting Scenario Tip 3: Browser Mix

For most situations, a mix of 80% IE 6.0, 15% IE 7.0, and 5% Firefox 2.0 simulates a "normal" browser population. These ratios will change based on the type of user community (corporate vs. consumer).
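Under the hood a browser mix is just a weighted assignment of browser profiles to virtual users. This sketch illustrates the idea with the 80/15/5 split above (the function is my own, not VSTS code):

```python
import random

# Conceptual sketch of a browser mix: assign each virtual user a browser
# profile according to the configured percentage weights.
def assign_browsers(user_count, mix, seed=0):
    rng = random.Random(seed)
    browsers, weights = zip(*mix.items())
    return rng.choices(browsers, weights=weights, k=user_count)

mix = {"IE 6.0": 80, "IE 7.0": 15, "Firefox 2.0": 5}
assignments = assign_browsers(1000, mix)
print(assignments.count("IE 6.0"), assignments.count("Firefox 2.0"))
```

Each profile then drives simulated request headers and timing characteristics; no real browser is launched.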

Instrumenting Scenario Network mix

The .loadtest network mix simulates the type of network the end user uses to access the architecture. The network landscape should be well understood by the network administrators; the .loadtest allows you to simulate this network mix and therefore the actual network speeds of the users.

Instrumenting Scenario Tip 4: Editing Scenarios

The scenarios contained within any .loadtest can be modified/edited at any time. The .webtests that make up a scenario can also be edited at any time; all of these changes will take effect once the .loadtest is rebuilt (compiled).

Instrumenting a .loadtest

The overall analysis and execution profile of a .loadtest can be controlled by selecting appropriate counter sets, setting counter thresholds, and selecting appropriate run settings.

Instrumenting .loadtest Counter sets

The counter sets define the values/counters that will be captured by VSTS during the load test run - there are default counters that are always captured. These can be observed during execution and can be viewed under the counter sets within the load test. More importantly, you can add machines/counters to the counter sets. This can be incredibly useful for observing the characteristics of architectural components under load.

Instrumenting .loadtest Tip 1: Counter sets

The number and variety of counter sets can be somewhat overwhelming; focus on the counters that best address the architecture under test. Partner with development, database administrators, and the architecture group when attempting to interpret the meanings behind these counters.

Instrumenting .loadtest Threshold rules

Threshold rules monitor the activity of counters within any given counter set and will raise an error when a particular threshold has been exceeded. VSTS comes with a predetermined set of threshold rules (e.g., for .NET garbage collection) and the ability to add additional rules. Threshold rules compare either against a constant value (milestone value) or against other counter values.

Instrumenting .loadtest Run settings

VSTS supports several run settings; in fact, one load test can have more than one configuration, but only one configuration is active at any one time. The run settings for the .loadtest contain the counter sets selected for each system and the common run settings provided in the last wizard section. The purpose of these settings is fairly self-evident, but there are a few "surprises" that you should be aware of:

Validation Level: The validation level affects how .webtest validation rules "fire" during the load test run. To get "High" level validation rules to fire, this parameter must be set to High.

Timing: The timing of the run can affect how many unnecessary errors are created. Allowing the application a warm-up and cool-down period, where virtual users are slowly ramped up and down, creates a more realistic and less error-prone load test.

Executing .loadtest

VSTS executes a .loadtest the same way it does a .webtest - the most significant difference being the type of real-time information supplied during execution. In the case of a .webtest the results focus on the request/response sequence, while in a .loadtest the focus is on the metrics being captured and the load being applied. The test editor will show progress during the test run, and the results of the run can be opened any time after the run is complete.

Analyzing .loadtest results

The results of a .loadtest run can be observed during execution and after execution is complete. These data can be viewed as a graph, as tables, and in an overall summary report.

Graph view

The graph gives a high-level view of the test result, but I find the supplied graphs to be somewhat misleading and difficult to read. If you are going to use Graphs to communicate the load test results, keep them simple and ensure the scales are clearly understood.

Table view

The table view of the load test results is much clearer and more precise than the graph view, but the amount of information can be somewhat overwhelming. I suggest using the table view to identify specific performance issues and to support, but not directly communicate, performance test findings.

Summary view

The summary view gives an overall performance report for the current .loadtest run. It reports on: Test Results, Page Results, Transaction Results, System under Test Resource consumption, Controller Resource consumption, and finally Agent Resource consumption.

Analyzing .loadtest Tip 1: Exporting to Excel

Many of the tables, all of the summary report, and the content of the graphs can be exported to Excel. This gives you the option of creating a "smart" spreadsheet that displays the results of the load test in a single document that can be used to communicate with both technical and non-technical audiences in a consistent way.

VSTS Closing Comments

As you can see, with a little training and experience, VSTS can become a useful performance testing tool. I anticipate that, with the continued evolution of this toolset and the growth of the Visual Studio user community, organizations moving toward development with Visual Studio will make it part of their overall performance assurance solution.

David W. Johnson ("DJ") is a Senior Test Architect with over 22 years of experience in information technology across several industries, having played key roles in business needs analysis, software design, software development, testing, training, implementation, organizational assessments, and support of business solutions. Over the past 12 years he has developed specific expertise in implementing "testware", including test strategies, test planning, test automation (functional and performance), and test management solutions.
