“Don’t expect the impossible and don’t reach for too much too soon,” is one piece of advice Dorothy Graham and Mark Fewster give to IT leaders about an automation implementation strategy. The authors of Experiences with Test Automation explain more about the ROI of test automation, the surprises they found in the 28 test automation case studies, and the advice they have for IT leaders.
Read Test automation: Exploring automation case studies in Agile development for the first part of this discussion.
SSQ: Of the types of test automation, are there certain types that you have found to provide a better ROI than others?
Dorothy Graham and Mark Fewster: Good question. Not sure how useful a comparison of ROI would be. Many of our case studies did not measure ROI, and those that did often measured it in different ways. Good ROI could come from the automation of performance testing in which huge numbers of concurrent users are simulated for economically high-risk commercial applications, but that’s not the focus of our book. One of the interesting things reported by a number of the case studies was that the faster the feedback to developers is, the more useful the automation can be, so that would point to TDD automation.
Many of the stories in our book show that good results can be achieved without formally calculating ROI, but in Michael Snyman’s anecdote, the team achieved 900% ROI on automated regression tests – that’s rather impressive, we think! They did this by defining a formal automation process, integrated with the whole software development lifecycle.
SSQ: For test professionals who are looking to gain experience with test automation, what would be your biggest piece of advice? Should they start by learning a particular tool? Should they simply learn more about programming or scripting languages? Or should they learn a discipline such as performance testing?
Graham/Fewster: Always remember that automation is a means to an end, not an end in itself. Automation supports testing; if it isn’t supporting valuable testing, it isn’t worth doing.
For those who are new to test execution automation, it can be helpful to start by looking at a few open source tools, just to become familiar with what a tool can and can’t do. Begin by automating a few tests that are worth running repeatedly, and experiment with different versions of the software (to assess the maintenance cost of the tests). Change to a different tool to find out how to design tests for portability (and low maintenance). Experiment, but don’t expect your initial experiments to be your best work – continue to learn and improve.
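To make the “start small” advice concrete, here is a minimal sketch of the kind of repeatedly useful automated test a newcomer might begin with, using Python’s built-in unittest module. The function under test, format_price, is a hypothetical stand-in for real application code, not anything from the book.

```python
import unittest

# Hypothetical function under test -- a stand-in for real application code.
def format_price(cents):
    """Format an integer number of cents as a dollar string, e.g. 1999 -> '$19.99'."""
    if cents < 0:
        raise ValueError("price cannot be negative")
    return f"${cents // 100}.{cents % 100:02d}"

class FormatPriceTests(unittest.TestCase):
    """A small regression suite: cheap to rerun on every new version of the software."""

    def test_typical_price(self):
        self.assertEqual(format_price(1999), "$19.99")

    def test_whole_dollars(self):
        self.assertEqual(format_price(500), "$5.00")

    def test_negative_rejected(self):
        with self.assertRaises(ValueError):
            format_price(-1)

# Run with: python -m unittest <this_file>
```

Because the suite is cheap to rerun, its value grows with every new build it checks – which is exactly the “repeatedly useful” property worth looking for when choosing what to automate first.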
Be aware of who will be writing the tests, and experiment with scripting techniques and testware architecture to enable testers to write and run automated tests efficiently.
SSQ: In reading the 28 different case studies, what was your biggest surprise?
Graham/Fewster: There were a few surprises: the variety of applications where automation is being used; the persistence of automation believers in the face of opposition (often from their own managers); the importance of having a good relationship with developers; the ways that many people had used our previous book to good effect; but also how many times our good advice had been ignored and yet success followed! It turns out there are many ways to be successful, even if some of the steps taken are the wrong ones.
SSQ: It was interesting that, though automation is often associated with Agile environments, many of the case studies came from teams using traditional or Waterfall SDLC practices. Do you think that automation is equally effective regardless of methodology? Why or why not?
Graham/Fewster: Yes, good automation is effective regardless of development methodology or lifecycle. The methodology used affects the outcome of the project as a whole; test automation supports testing within any methodology. Automation is useful for any type of development, but it is essential in Agile development.
SSQ: While it looks like the majority of case studies reflected success stories, there still were a few that were not successful. What do you think is the most important thing IT leaders can do to have success in their automation implementation strategy?
Graham/Fewster: IT leaders (as opposed to test managers or QA managers) often have rather unrealistic ideas about what automation is, what it can do, and what it can’t do. It is important that they not expect the impossible or reach for too much too soon. Always consider the longer term. Keep a wary eye on the basics, such as maintenance issues and testware architecture, from the start. Continually seek to improve (to reduce build costs and increase benefits). We hope that this book will enable IT leaders to see what can be achieved through their understanding and support over a realistic timeframe.
SSQ: What is your advice to managers to gain the most ROI with test automation?
Graham/Fewster: Consider whether or not you need to measure ROI for automation. If so, first become aware of the costs of manual testing. Investigate the potential benefits of automation, remembering that some of the significant benefits may not be easy to quantify and put into an ROI calculation. Be realistic, take small steps, celebrate successes.
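The comparison the authors describe – manual testing costs versus the cost of building and maintaining automation – reduces to simple arithmetic. The sketch below shows one common way to frame it; all figures are invented for illustration (they happen to produce 900%, echoing the figure mentioned earlier, but they are not taken from any case study in the book).

```python
# Illustrative only: all figures below are invented, not from the book's case studies.
# ROI compares the savings automation delivers against what it cost to build and maintain.

def automation_roi(manual_cost_per_cycle, automated_cost_per_cycle,
                   cycles, build_cost, maintenance_cost):
    """Return ROI as a fraction: net benefit divided by total investment."""
    savings = (manual_cost_per_cycle - automated_cost_per_cycle) * cycles
    investment = build_cost + maintenance_cost
    return (savings - investment) / investment

# Example: a manual regression run costs $5,000; the automated run $500;
# 30 test cycles; $10,000 to build the suite; $3,500 to maintain it.
roi = automation_roi(5000, 500, 30, 10000, 3500)
print(f"ROI: {roi:.0%}")  # prints: ROI: 900%
```

Note how the result hinges on the number of cycles: with only a few test cycles the same suite would show a negative ROI, which is one reason the authors stress considering the longer term – and why some real benefits (faster feedback, better coverage) resist being squeezed into a formula like this at all.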
SSQ: Any recommendations on tool sets?
Graham/Fewster: No. We are often asked, “What is the best tool?” but this is like asking, “What is the best car?” The best car for someone with four children and three dogs would be quite different from the best first car for a new driver, or from the best car for someone whose main concern is fuel efficiency. There is no “best tool” either, but a number of tools will likely be suitable for a given organization. We both prize our independence from any tool affiliation so that we can concentrate on the principles of automation, whatever tool you are using.