Using Stella for Web application testing

Although Stella remains in beta, it has proven to be a powerful Web application testing tool. In this technical tip, expert Mike Kelly walks testers through using Stella. Since documentation is scarce, this tip will provide you with valuable guidance on how to use this relatively new tool.

Mike Kelly, software tester
Editor's Note:
This article was written using Ruby v1.8.6 and Stella v0.8, leveraging Google search with data available in late March 2010.

"Fancy a test?" That's the playful tagline for Stella, a handy little web application testing tool developed by Solutious Inc, a small software company based in Montreal. Stella is a very lightweight Ruby test tool for functional and performance testing. Like JMeter, Stella doesn't simulate a browser; it generates HTTP requests and parses the responses. Currently Stella provides support for automatic parsing of HTML, XML, XHTML, YAML, and JSON. In this article we'll take a look at a couple of simple examples to help you evaluate if Stella is right for you.

Use cases and test plans, the basic building blocks of Stella
Since Stella installs via RubyGems, I won't spend any time talking about setup. Instead let's jump right in and write a test. The first thing we need to do in Stella is set up a test plan. Test plans are made up of use cases. A use case is simply your test scenario laid out programmatically.

For example, here's what a simple use case might look like if we wanted to run a Google search:

usecase do
  get "/search", "Search Results" do
    param :q => 'Stella Performance Testing'
    
    response 200 do
      quit "Found results" 
    end
    
    response 404 do 
      quit "No results"
    end
  end
end

Listing 1: Simple use case in Stella.

In the above code snippet, we search Google for "Stella Performance Testing" and we check the subsequent result. If we get a 200 HTTP response, results were returned from Google. If we get a 404 HTTP response, we know results were not returned. There are other responses we might check for (301, 400, 403, etc…), but you should get the basic idea from the code example above.

You'll notice that the basic structure of the use case is the "get" method. If we have multiple gets within a use case, it can be helpful to label them. You'll see in the listing that I labeled my search "Search Results." For the search we provide the search parameter (the "q" variable is specific to Google's implementation) and we also specify what we want to do for each response type.
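
To make that concrete, here's a rough sketch of a use case with two labeled gets, following the structure from listing 1 rather than any example from the Stella documentation. The paths, labels, and response handling are placeholders you'd adapt to your own application:

usecase "Search from the home page" do
  get "/", "Home Page" do
    response 200 do
      # nothing to check here; fall through to the next request
    end
  end

  get "/search", "Search Results" do
    param :q => 'Stella Performance Testing'

    response 200 do
      quit "Found results"
    end

    response 404 do
      quit "No results"
    end
  end
end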

If you'd like, you can add more detailed result checking and verification to your use case, parameterize test data, set cookies, etc… In listing 2 below, I show an example of verifying the results that come back from our search.

usecase do
  get "/search", "Search Results" do
    param :q => 'Stella Performance Testing'
    
    response 200 do
      found = false
      # doc is the parsed (Nokogiri) document for the HTML response
      doc.css('li.g').each do |result|
        # 'a.l' is the title link of each search result in Google's markup
        title = result.css('a.l').first
        if !title.nil? && title.content == "Stella -- Solutious, The Performance Company." then 
          found = true 
        end
      end
      if found then 
        quit "Found result" 
      else 
        quit "Did not find result"
      end
    end
    
    response 404 do 
      quit "No results"
    end
  end
end

Listing 2: Verifying the results of our Google search use case in Stella.

In the above code snippet, I simply loop through the search results and check to make sure that the Solutious web page is one of the results returned. Again, you could add as much or as little complexity here as you like. You can store values for use in future test calls, save off HTML if you find errors you'd like to investigate, etc… Because you're using a full-featured programming language, you have access to anything you can do in Ruby while you're testing.
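
For instance, here's one way you might save the page for later investigation when the expected result isn't found. This is only a sketch: it assumes the doc object from listing 2 is a parsed Nokogiri document and that a writable tmp directory exists next to your test script:

    response 200 do
      found = doc.css('li.g a.l').any? do |link|
        link.content == "Stella -- Solutious, The Performance Company."
      end

      unless found
        # Save the parsed HTML so the miss can be investigated later.
        # Assumes ./tmp exists; the file name is just an illustration.
        File.open("tmp/search_miss_#{Time.now.to_i}.html", "w") do |file|
          file.write(doc.to_html)
        end
      end

      quit(found ? "Found result" : "Did not find result")
    end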

To run a Stella functional test via the command line, you run the "verify" command. You can do this using the following syntax:

stella verify -p <<path to test script>>/<<test script>>.rb hostname:port

For my example, that looks like:

stella verify -p stellaTest1.rb http://www.l.google.com

The following listing shows the results if you run the code in listing 2:

 Google Search  (5bafe7)
  Use case #1  (e21e1d)
  GET    http://www.l.google.com:80/                     302
  GET    http://www.l.google.com:80/search               200
  QUIT   Found result

Listing 3: Results from running the Google search code from listing 2 in Stella.

Your test plan can contain any number of use cases. For example, I could create a Google search test plan that tests different features of search. In your test plan, you'd just list them out one after the other. When you do that, it can be helpful to name them. You can see an example of multiple named use cases in listing 4 below.

use case "Google Search - Basic" do
  #use case code...
end
 
use case "Google Search - Weather" do
  #use case code...
end
 
use case "Google Search - Time" do
  #use case code...
end
 
use case "Google Search - Books" do
  #use case code...
end
 
use case "Google Search - Blogs" do
  #use case code...
end
 
use case "Google Search - Local" do
  #use case code...
end 

Listing 4: Multiple use cases in a Stella test plan.

 Test plan  (93e5a6)
  Google Search - Basic  (45335c)
  Google Search - Weather  (45335c)
  Google Search - Time  (45335c)
  Google Search - Books  (45335c)
  Google Search - Blogs  (45335c)
  Google Search - Local  (45335c)

Listing 5: Results from running the multiple use case code from listing 4 in Stella.

This not only cleans up your output in your log files, it also creates the basic framework for performance testing with Stella.

Performance Testing with Stella
As with any load testing tool, Stella generates multiple virtual clients based on how you set up and run your test. Each client will run a use case based on the percent allocations you set in your test plan. You create your percent allocations by simply adding the percent when you define the use case, as shown in listing 6 below.

use case 50, "Google Search - Basic" do
  #use case code...
end
 
use case 10, "Google Search - Weather" do
  #use case code...
end
 
use case 10, "Google Search - Time" do
  #use case code...
end
 
use case 10, "Google Search - Books" do
  #use case code...
end
 
use case 10, "Google Search - Blogs" do
  #use case code...
end
 
use case 10, "Google Search - Local" do
  #use case code...
end 

Listing 6: Setting percent allocation by use case in a Stella test plan.

In the example above, you can see that 50% of our users will do a basic web search and the other web searches will each get 10% of the virtual user traffic. You can confirm you have things set up correctly by running the "preview" command in Stella. You can do this using the following syntax:

stella preview -p <<path to test script>>/<<test script>>.rb

For my example, that looks like:

stella preview -p stellaTest1.rb

The output for that preview is shown in listing 7 below.

 Test plan (efde9a)
  Google Search - Basic (54f9ac) 50%
  Google Search - Weather (f49bea) 10%
  Google Search - Time (f49bea) 10%
  Google Search - Books (f49bea) 10%
  Google Search - Blogs (f49bea) 10%
  Google Search - Local (f49bea) 10%


Listing 7: Running preview on my Stella test plan.

In the log listing above, you can see the different percentages broken out by use case. If I had more detail in my use cases you'd see that as well. Once you've confirmed your use cases are set up correctly, you're ready to run.

To run a Stella performance test via the command line, you run the "generate" command. You can do this using the following syntax:

stella generate -c <<number of users>> -p <<path to test script>>/<<test script>>.rb hostname:port

For my example, that looks like:

stella generate -c 10 -p stellaTest1.rb http://www.l.google.com

That "-c" switch represents the number of virtual clients to use. There are a number of "generate" command line options worth mentioning:

Switch   Parameter Description
-c       Maximum number of virtual clients
-a       Arrival rate (new clients per second)
-r       Number of times to repeat the test plan (per virtual client)
-d       Max duration to run the test
-W       Ignore wait times
-w       Wait time (in seconds) between client requests (ignored if a test plan is supplied)
-p       Path to the test plan
-g       Amount of time (in seconds) between timeline rotations
-h       Help (this listing)
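
Combining a few of those switches, a longer run might look something like the following. The numbers here are arbitrary examples; check stella generate -h on your install for the exact argument formats:

stella generate -c 25 -a 2 -r 3 -p stellaTest1.rb http://www.l.google.com

That run would ramp up to 25 virtual clients, adding two new clients per second, with each client repeating the test plan three times.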

When I run my performance test with 10 users, I receive the following (slightly modified for formatting purposes) output:

Logging to C:/Users/MichaelDKelly/Desktop/Stella/log/20100328-16-57-50-ebf29d

Runid: c86f22
Plan: Test plan (ebf29d)
Hosts: http://www.l.google.com
Clients: 10
Limit: 1 repetitions

Running...

Processing...

Test plan (ebf29d)

Google Search - Basic (df7b9d) 50%
GET /
response_time 0.155 <= 0.244s >= 0.319; 0.075(SD) 5(N)

Sub Total:
response_time 0.244s 0.075(SD)
response_content_size 1.10KB (avg:219.00B)

Total requests 5 (302: 5)
success 5
failed 0

Google Search - Weather (541f54) 10%
GET /
response_time 0.198 <= 0.244s >= 0.286; 0.037(SD) 5(N)

Sub Total:
response_time 0.244s 0.037(SD)
response_content_size 1.10KB (avg:219.00B)

Total requests 5 (302: 5)
success 5
failed 0

Google Search - Time (541f54) 10%
GET /
response_time 0.198 <= 0.244s >= 0.286; 0.037(SD) 5(N)

Sub Total:
response_time 0.244s 0.037(SD)
response_content_size 1.10KB (avg:219.00B)

Total requests 5 (302: 5)
success 5
failed 0

Google Search - Books (541f54) 10%
GET /
response_time 0.198 <= 0.244s >= 0.286; 0.037(SD) 5(N)

Sub Total:
response_time 0.244s 0.037(SD)
response_content_size 1.10KB (avg:219.00B)

Total requests 5 (302: 5)
success 5
failed 0

Google Search - Blogs (541f54) 10%
GET /
response_time 0.198 <= 0.244s >= 0.286; 0.037(SD) 5(N)

Sub Total:
response_time 0.244s 0.037(SD)
response_content_size 1.10KB (avg:219.00B)

Total requests 5 (302: 5)
success 5
failed 0

Google Search - Local (541f54) 10%
GET /
response_time 0.198 <= 0.244s >= 0.286; 0.037(SD) 5(N)

Sub Total:
response_time 0.244s 0.037(SD)
response_content_size 1.10KB (avg:219.00B)

Total requests 5 (302: 5)
success 5
failed 0

 

Total:
response_time 0.244s 0.056(SD)
response_content_size 2.19KB (avg:219.00B)

Total requests 10
success 10 (req/s: 18.38)
failed 0


Listing 8: Performance testing results for my Google search Stella test plan.

If you go to the directory where the log files were stored, you'll see three log files: summary, stats, and exceptions. In those log files you'll find this data, as well as other runtime data that you can parse and save off as needed.
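
If you want to pull those results into another report, a few lines of Ruby are enough to grab the most recent run. This is just a sketch: the directory layout and the "summary" file name are assumptions based on the output above, so adjust the glob patterns to match what you actually see on disk:

log_root = "log"                                    # e.g. log/20100328-16-57-50-ebf29d
latest   = Dir.glob(File.join(log_root, "*")).sort.last

if latest
  # Print every file whose name starts with "summary" in the newest run directory
  Dir.glob(File.join(latest, "summary*")).each do |file|
    puts "== #{file} =="
    puts File.read(file)
  end
else
  puts "No Stella log directories found under #{log_root}"
end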

Wrapping up

At this point, you've likely got enough to get started. Once you have some tests created, you can run them from the command line, add them to your current test runner(s), or add them to your continuous integration environment using either command line options or some simple Ruby code (depending on what you're using). These tests can be nice for testing deployed artifacts and capturing baseline results for a deployed build.
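
For example, a minimal Rake task like the one below would let a continuous integration job fail the build when a verify run fails. This is a sketch, not part of Stella itself; the task name and script path are placeholders, and it assumes the stella command exits non-zero when a use case fails:

# Rakefile (sketch): wrap the Stella command line so CI can run it
desc "Run Stella functional checks against the deployed build"
task :stella_verify do
  ok = system("stella verify -p stellaTest1.rb http://www.l.google.com")
  raise "Stella verify failed" unless ok
end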

When getting started with Stella, it's important to remember that it's still in beta. You need to set your expectations appropriately. While there is documentation, it's sometimes difficult to follow, and it can take some effort to figure out the Stella domain-specific language. Don't let that discourage you; I was able to get my first tests working in about an hour.

Given the current direction of the project, it doesn't look like Stella will ever replace my use of Watir or Selenium, but that's just fine with me. I like Stella for integration testing and lightweight performance testing. I think it complements the more traditional test automation done with tools like Watir and Selenium. If you fancy an integration test, check out Stella and see if it's right for you.


About the author: Michael (Mike) Kelly is currently an independent software development consultant and trainer. Mike also writes and speaks about topics in software testing. He is a regular contributor to SearchSoftwareQuality.com and a past president for the Association for Software Testing.
This was first published in May 2010
