.NET applications come in all shapes and sizes -- from a tic-tac-toe Windows application that runs on your laptop,
to a major website running in Microsoft Azure -- and everything in between. Performance for those applications can be just as varied, from single-user response times for a desktop app to multiuser response times for a database application. That's where performance testing .NET applications comes into the picture.
Using an environment created by a single vendor can be an advantage because the pieces work seamlessly -- but it can be challenging to find the right tools for performance testing .NET applications.
In this article, we'll talk about how to find the right tools, and how to use them.
The simplest explanation of performance testing probably comes from the testing of the Gossamer Condor and, later, the Gossamer Albatross, the world's first human-powered, sustained, controlled, heavier-than-air aircraft.
While the first Gossamer Condor flight lasted only a few seconds because the plane was too heavy, the Albatross would eventually cross the English Channel. The basic performance test strategy was the following: Throw it off a cliff. See what breaks. Reinforce that part. Weaken everything that didn't break, to reduce weight. Only when it came time to create a model for the National Air and Space Museum did people realize that no actual blueprints existed for the craft.
Performance testing .NET applications (or, really, any type of application) isn't that different. Start by modeling how users will use the software, and identify each of the possible bottlenecks. Create a script to simulate that behavior -- or record real traffic and replay it -- at various levels of load. Then, track the actual response time, as well as the time spent in every subsystem, and find what breaks.
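That loop -- simulate user behavior, ramp up the load, measure response times -- can be sketched in a few lines of Python. Everything here is illustrative: `do_request` is a stand-in for a real call to the system under test, and the user counts are invented.

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def do_request():
    """Stand-in for a real HTTP call to the system under test."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated server work
    return time.perf_counter() - start

def run_load(users, requests_per_user):
    """Run `users` concurrent workers and collect response times."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(do_request)
                   for _ in range(users * requests_per_user)]
        return [f.result() for f in futures]

# Ramp the load up in steps and watch where response times break down.
for users in (1, 5, 10):
    timings = run_load(users, requests_per_user=20)
    p95 = sorted(timings)[int(len(timings) * 0.95)]
    print(f"{users:>3} users: median "
          f"{statistics.median(timings) * 1000:.1f} ms, "
          f"95th pct {p95 * 1000:.1f} ms")
```

The point of the ramp is to find the knee in the curve -- the load level at which response times stop degrading gracefully and start breaking.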
Do it with models
Imagine software that is incredibly simple -- a login form combined with the ability to create, edit, view, follow links between and search wiki pages. Here are a few questions:
- What percentage of logins fail with a bad username or password?
- What percentage of pages are edited? Viewed? Searched?
- How long do users stay on each page?
The answers are probably in the web server logs; parse them to find out. If the software isn't live yet, manually test it for a day and parse the resulting logs. Failing that, make the numbers up.
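As a sketch of that parsing step, here is a Python fragment that turns a handful of invented, simplified access-log lines into the usage percentages asked for above. Real log formats (and the wiki's URL scheme) will differ.

```python
from collections import Counter

# Hypothetical, simplified access-log lines: "METHOD /path STATUS"
log_lines = [
    "POST /login 401",
    "POST /login 200",
    "GET /wiki/view/Home 200",
    "GET /wiki/search?q=test 200",
    "POST /wiki/edit/Home 200",
    "GET /wiki/view/Home 200",
]

actions = Counter()
for line in log_lines:
    method, path, status = line.split()
    if path.startswith("/login"):
        actions["login-fail" if status == "401" else "login-ok"] += 1
    elif "/edit/" in path:
        actions["edit"] += 1
    elif "/search" in path:
        actions["search"] += 1
    else:
        actions["view"] += 1

total = sum(actions.values())
for action, count in actions.most_common():
    print(f"{action}: {count / total:.0%}")
```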
The best way to model performance is to write a program that understands the state of the software -- what is valid and what is invalid -- rolls dice to decide what to do next, waits a realistic think time, and then moves to the next state. Run many of these simulated users as concurrent tasks.
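A minimal sketch of such a model in Python -- the states, transition probabilities and think times are all invented for illustration:

```python
import random

# Hypothetical usage model: for each state, the weighted choices for the
# next action and how long a user dwells there (think time, in seconds).
MODEL = {
    "login":  {"next": [("view", 0.9), ("login", 0.1)], "think": 2.0},
    "view":   {"next": [("view", 0.5), ("edit", 0.2),
                        ("search", 0.2), ("logout", 0.1)], "think": 10.0},
    "edit":   {"next": [("view", 1.0)], "think": 30.0},
    "search": {"next": [("view", 0.8), ("search", 0.2)], "think": 5.0},
}

def simulate_session(seed=None):
    """Walk the model from login to logout; return the action sequence."""
    rng = random.Random(seed)
    state, path = "login", ["login"]
    while state != "logout":
        choices, weights = zip(*MODEL[state]["next"])
        state = rng.choices(choices, weights=weights)[0]
        path.append(state)
        # In a real test, sleep MODEL[state]["think"] seconds here and
        # run many of these sessions as concurrent tasks.
    return path

print(simulate_session(seed=42))
```

Because each simulated user follows the measured probabilities rather than a fixed script, the aggregate load looks much more like real traffic.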
About simulating load
When it comes to testing .NET applications for performance, the biggest problems tend to come from simulating the load, because it's genuinely hard to do. A laptop behind a cable modem will probably hit an artificial bottleneck at the laptop's central processing unit or in the cable modem itself, long before the server is stressed.
Even inside a data center, a single computer is unlikely to be able to simulate an enterprise-grade load. To generate more load, either coordinate a series of computers from a central controller or consider a performance tool that runs in the cloud.
Next, let's talk about Microsoft tools.
Tools for testing .NET applications
The Application Center Test and the Web Application Stress Tool are old Microsoft tools designed to simulate load from a single computer. Both are installable load simulators for web applications, and both are dated.
Windows Performance Monitor captures system performance, providing some of the instrumentation mentioned earlier: simple counters, like CPU and memory use, plus the ability to collect log files and aggregate them. So, for example, if web server performance drops when memory use hits 90%, it may be time to investigate expanding memory, or to find out when the system begins paging memory to disk.
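Performance Monitor data can be exported to CSV (relog.exe converts its binary logs, for example), which makes that kind of threshold check easy to script. A sketch, with an invented machine name and a single memory counter:

```python
import csv
import io

# Hypothetical excerpt of a Performance Monitor counter log exported to
# CSV; the machine name (WEB01) and sample values are invented.
perfmon_csv = r'''"Time","\\WEB01\Memory\% Committed Bytes In Use"
"10:00:00","72.5"
"10:01:00","88.1"
"10:02:00","91.4"
'''

MEMORY = r"\\WEB01\Memory\% Committed Bytes In Use"
alerts = []
for row in csv.DictReader(io.StringIO(perfmon_csv)):
    # Flag every sample where committed memory crosses the 90% line.
    if float(row[MEMORY]) > 90:
        alerts.append(f"{row['Time']}: memory at {row[MEMORY]}%")

print(alerts)
```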
Internet Information Server (or IIS) is the basic web server for .NET applications. IIS can create logs of which webpages are accessed, how long pages take to generate and so on, which can be useful to identify, isolate and fix bottlenecks. For example, if the web server takes 10 seconds to generate a page, and nine seconds of that are spent waiting for a database query, the performance bottleneck is the database, not IIS.
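As a sketch of that kind of analysis, the following Python fragment walks an invented excerpt of an IIS W3C extended log and flags slow pages. The time-taken field IIS records is in milliseconds, and the column layout follows the log's own #Fields directive.

```python
# Hypothetical IIS W3C extended log excerpt; real logs carry many more
# fields, but the #Fields directive always describes the layout.
iis_log = """\
#Software: Microsoft Internet Information Services
#Fields: date time cs-uri-stem sc-status time-taken
2024-05-01 10:00:01 /wiki/view/Home 200 120
2024-05-01 10:00:05 /wiki/search 200 9800
2024-05-01 10:00:09 /wiki/edit/Home 200 310
"""

fields, slow = [], []
for line in iis_log.splitlines():
    if line.startswith("#Fields:"):
        fields = line.split()[1:]        # learn the column layout
    elif line.startswith("#") or not line.strip():
        continue                         # skip other directives
    else:
        entry = dict(zip(fields, line.split()))
        if int(entry["time-taken"]) > 1000:  # slower than one second
            slow.append((entry["cs-uri-stem"], int(entry["time-taken"])))

print(slow)
```

From here, the 9.8-second search page is the place to dig deeper -- is the time spent in IIS, in application code or in the database?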
Microsoft's primary database server, SQL Server, can also generate logs and be monitored for CPU, disk and memory delays.
Visual Studio performance testing in Azure allows you to use Microsoft's Azure cloud to generate load on an application running in Azure.
Going beyond Microsoft
The basic tools for performance testing .NET applications remain the same -- record, play back, scale up load over time, instrument, graph, observe, tweak and repeat. Microsoft's own tools for this tend to be older and out of date, as the company expects performance testing to be done in the Azure cloud. That isn't right for everyone, so you may want to look at proprietary tools instead.
There are certainly plenty of proprietary load and performance testing tools. The key is to have a set of tools that cover all the bases above, most commonly by collecting instrumentation, using a tool to capture load and using another to simulate and replay it.
The biggest challenges when testing .NET applications are generally identifying the right bottleneck and getting the simulation itself right -- avoiding, for example, the artificial bottlenecks of a single overloaded laptop. A typical project for performance testing .NET applications will try several tools before finding the right one.
In today's Agile environments, with continuous maintenance, dedicated performance testing projects are becoming less common, and deploying every two weeks is becoming the norm. That creates new challenges, because a query that gets slightly more complex with every sprint could suddenly become too complex for the system. There are two common fixes for this.
First, some modern tools can run in the cloud automatically from the command line, and they fit well when combined with continuous integration; you would receive test results daily, hourly or even more often, instead of at the end of the project.
Second, companies increasingly see performance monitoring in production, or an incremental rollout combined with monitoring, as part of the performance testing story. That is a new wrinkle, but the basic building blocks for performance testing .NET applications are already in place. The challenge is to combine them effectively and use them correctly.