A case study in the use of automation

Employers are looking for testers with automation skills, but what exactly does that mean? Agile expert Lisa Crispin, co-author of Agile Testing, offers real-life examples of using tools to automate test scripts. She explains how her team uses FitNesse, Watir and other test tools, with examples of automated scripts catching anomalies in the application that would otherwise have been overlooked.

In my tip, Automated test scripts: The Smartphones of testing, I describe the benefits of automation. This is all wonderful in theory, but exactly how is it done? What tools will allow you to reap the rewards of automation? There are a variety of tools and frameworks available, so it’s not necessary to use the same tools my team used. However, in this tip I’ll give you specific examples of the automation tools we use and how they have benefited the team, saving us hours of tedious testing and allowing us to spend time on the critical thinking tasks that are so important.

Our approach

Our team had no test automation in 2003. Over the years, we’ve invested in automated regression tests at every level. The programmers code via test-driven development, so we have a huge suite of automated JUnit tests. We use FitNesse as a framework for several large suites of tests at the API level, essentially bypassing the user interface. (There are several test frameworks available that provide the same functionality -- pick the one that works best for your team.) Each FitNesse test page sets up test inputs (either in memory or in a test schema), sends them to the production code, and compares the actual output with the expected results. At the GUI level, we have used two different tools: Canoo WebTest suites that cover the whole application with smoke tests over HTTP, and suites of Ruby scripts that use Watir and test::unit to drive the Internet Explorer browser. Again, many test tools are available; these are simply the ones my team used. Test suites at all levels run under our Hudson continuous integration server, giving us continual feedback throughout each day.
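
To make the GUI layer concrete, here is a minimal sketch of what one of those Watir-driven checks might look like. This is not our actual code; the server URL, page text and test name are illustrative assumptions:

require 'test/unit'
require 'watir'

# Minimal sketch of a browser-level smoke test in the style described above.
# Watir drives Internet Explorer; the URL and page text are hypothetical.
class TC_LoginSmoke < Test::Unit::TestCase
  def test_login_page_renders
    browser = Watir::IE.new
    browser.goto('http://sunshine/login')
    assert(browser.text.include?('Log In'), 'login page did not render')
    browser.close
  end
end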

Exploring through the GUI

My team’s software allows our client companies to set up a 401(k) retirement plan in which their employees can participate. An employer can establish a plan via a five-step process in our web application. They specify information about their company, select from a wide range of options such as whether or not they will do profit sharing or company match, choose the mutual funds in which participants can invest, and review legal documents. There are literally hundreds of permutations of the various options for a given 401(k) plan. To test them, we wrote sophisticated, flexible Ruby scripts using Watir that can test every combination of options. Another variable for our 401(k) plans is that a third party administrator or plan advisor can establish the plan on behalf of the employer, in which case there are even more selectable options.
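
Because the options multiply combinatorially, scripts like these need a way to enumerate combinations rather than hard-coding each case. Here is a rough sketch of the idea; the --profit_sharing and --company_match flags are my guesses at option names (--plan_type appears later in this tip), not our actual interface:

# Sketch: enumerate combinations of plan options and run the create-plan
# script once for each combination. Option names and values are illustrative.
profit_sharing = [true, false]
company_match  = [true, false]
plan_types     = ['standard', 'conv']

profit_sharing.product(company_match, plan_types).each do |ps, match, type|
  system("ruby tc_create_plan.rb -- --machine sunshine " \
         "--plan_type #{type} --profit_sharing #{ps} --company_match #{match}")
end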

Recently, we made a major change in our system model. Our system was designed with a one-to-one relationship between an employer account and the corresponding 401(k) plan. However, our customers need multiple accounts so that more than one person at the company can log in to enroll participants, submit contributions and perform other tasks. Providing the ability to have multiple accounts was the right solution, but it was a fundamental change in our design. Even with so many automated regression tests, we needed to do lots of exploratory testing to ensure there was no unexpected behavior.

I used the Ruby/Watir scripts to set up scenarios from which I could do some exploring. I can pass in variable values when I run the scripts. For example, if I want to create a basic plan on the server called “sunshine,” activate it, and leave the page up in the browser so I can do my exploratory testing, I can type in:

tc_create_plan.rb -- --machine sunshine --plan_name SponsorEstablished --keep --activate

This creates a plan with the default set of options and leaves the browser open on the plan activation landing page. This saved me many, many keystrokes. I can now log in to the plan and verify that it looks as I expect: for example, that it has the correct employer contact information in the documents.
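
Under the hood, a script like this simply reads its flags from the command line; the bare "--" in the invocation tells the Test::Unit runner to pass everything after it through to the script itself. Here is a rough sketch of how the option handling might look -- the flag names come from the invocation above, but the implementation is my assumption, not our actual code:

require 'getoptlong'

# Hypothetical option handling for tc_create_plan.rb. The flag names match
# the invocation shown above; defaults and structure are illustrative.
opts = GetoptLong.new(
  ['--machine',   GetoptLong::REQUIRED_ARGUMENT],
  ['--plan_name', GetoptLong::REQUIRED_ARGUMENT],
  ['--keep',      GetoptLong::NO_ARGUMENT],
  ['--activate',  GetoptLong::NO_ARGUMENT]
)

settings = { :keep => false, :activate => false }
opts.each do |opt, arg|
  case opt
  when '--machine'   then settings[:machine]   = arg
  when '--plan_name' then settings[:plan_name] = arg
  when '--keep'      then settings[:keep]      = true
  when '--activate'  then settings[:activate]  = true
  end
end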


After checking that the plan didn’t have any unusual problems, I wanted to enroll an employee in that plan, so I ran a script to do that. Had I desired, I could have enrolled hundreds of employees with a quick script. Next, I wanted to make sure the plan sponsor could submit payroll contributions with no problems, so I typed:

tc_create_payroll.rb -- --machine sunshine --plan_name SponsorEstablished --keep

Everything checked out with a plain vanilla plan. The next thing I did was to establish a plan as a third-party administrator, again by running a script. I again enrolled an employee and submitted the payroll contributions for this new plan. I repeated the whole process, this time with the plan established by a plan advisor. Then I established a “conversion” plan -- one that is rolling its funds over from another 401(k) plan provider -- with the same script and a different variable value (--plan_type conv), and repeated the other scenarios with that type of plan.
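
Because each scenario is just a script invocation with different flags, chaining them is easy. A hypothetical wrapper for the cycle above might look like this (only tc_create_plan.rb and tc_create_payroll.rb appear in this tip; the enrollment script name and the --established_by flag are my inventions):

# Hypothetical driver: create, enroll into and pay into a plan for each
# establishment path described above.
%w(Sponsor Tpa Advisor).each do |role|
  plan = "#{role}Established"
  system("ruby tc_create_plan.rb -- --machine sunshine --plan_name #{plan} " \
         "--established_by #{role.downcase} --activate")
  system("ruby tc_enroll_employee.rb -- --machine sunshine --plan_name #{plan}")
  system("ruby tc_create_payroll.rb -- --machine sunshine --plan_name #{plan}")
end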

If I had been doing purely manual testing, I’d probably have given up after the plain vanilla plan. Or if I had kept at it, by the time I got to the conversion plan I’d have been so bored I wouldn’t have been paying much attention and might have missed an actual problem. As it happened, this time I did not find any problems, and I felt confident that the major change in our system design had not caused any unexpected issues.

Regression tests don’t find bugs -- they find regression failures. To find bugs, we need exploratory testing, keen observation and critical thinking. On another occasion, I was testing a fix for an issue where a valid combination of the “Auto-Enrollment” option with a plan start date in the next year was giving an error message during plan establishment. I ran the “tc_create_plan.rb” script as I described above, but passed in the options to make the plan have the “Auto-Enrollment” option and scheduled it to start on 12/31 of the current year. I got an unexpected error:

[Screenshot: the unexpected error message about the Safe Harbor contribution type]

This looked like a bug. The error message says the user can’t elect a Safe Harbor contribution type, but the user did not select one. I showed this error to our Product Owner so he could find out whether it was a valid error whose message text needed changing, or whether the validation should allow the “Auto-Enrollment” option for that date as long as Safe Harbor is not selected.

I doubt I’d even have run across this issue if I’d been doing all the testing manually. I was trying one of many boundary conditions. I’d have run out of time and/or energy before I could try all the possible scenarios. But since I could create different edge and boundary combinations so quickly and easily, I ran the script several more times to make sure this was the only issue.
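
For illustration, one of those boundary runs might have looked something like this, where --auto_enroll and --start_date are my guesses at the script’s actual option names:

tc_create_plan.rb -- --machine sunshine --plan_name AutoEnrollBoundary --auto_enroll --start_date 12/31 --keep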

Exploring behind the GUI

The main purpose of our FitNesse tests is to drive development with customer-facing tests (an approach also known as Acceptance Test-Driven Development). Testers and programmers collaborate to match up test cases with the test fixtures that automate them, so we share an understanding of desired system behavior. However, we’ve also found that FitNesse tests provide an easy way to crank through lots of different data scenarios to test algorithms, as well as to test operations on data in the database. We don’t keep every test we do as part of our regression suites, but using FitNesse tests to create and verify many different scenarios saves us time and tedium.

Here’s an example. 401(k) plan participants can borrow money from their retirement account under certain circumstances. They have to repay the money to their account with interest. If they don’t make the appropriate payments, their loan is considered a taxable distribution. The logic to determine the loan amortization and the status of the loan based on payments is ridiculously complex. Testing this via the GUI would take too long, but we can test many scenarios via FitNesse tests, such as this one:

[Screenshot: a FitNesse test page checking loan payments]

We can easily plug in different values for loan amount, interest rate, frequency, term, origination date, payment dates and amounts, and verify the interest and principal for each payment. We can also check the loan balance and payment state. The above example is simple, but we can test several years’ worth of loan payments with one FitNesse test page -- much easier than trying to play out that scenario through the actual user interface.
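
To give a concrete picture, here is a sketch of what such a FitNesse test table might look like. The fixture name and column headings are hypothetical, but the numbers work out: $430.33 is the monthly payment that amortizes a $5,000 loan at 6% annual interest over 12 months, so the first month’s interest is $25.00 and the rest of the payment reduces principal:

|Loan Payment Check|
|loan amount|annual rate|term months|origination date|payment date|payment amount|interest?|principal?|balance?|
|5000.00|6.0|12|01/15/2010|02/15/2010|430.33|25.00|405.33|4594.67|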

Get started

If you already have some automated tests, experiment with using them to assist with exploratory testing. If you’ve been considering automating tests but aren’t sure whether it’s worth the investment, think about how automated regression tests can do double duty as a painless way to set up scenarios for further exploratory testing. Budget some time soon for a small automation experiment. Not only might it help you deliver higher-quality software -- you’ll enjoy the testing.

This was last published in February 2011
