Perhaps you go to a department or company all-staff meeting. The VP of engineering, or perhaps the CEO, talks about...
the new imperative for the company. The next two years will be the most challenging the company has ever experienced; the need for new features and products is more insatiable than ever before. Then, toward the end, sandwiched between two demos, he says, "And the way we are going to get there is through a test automation strategy."
You turn to your boss and mouth: "What? How? Really?"
Congratulations, you've got a new requirement, a test automation strategy that might not be very well thought out.
Where does this stuff come from?
Today's generation of development managers and VPs of engineering are likely to be former programmers. What programmers do is take business processes and automate them through code. When that kind of leader sees the problem of testing, they are likely to see a simple, straightforward business problem. If that's true, then a test automation strategy automates the problem away. It's the kind of thing people believe in without evidence.
That is exactly how most test automation projects start.
Instead of measurable requirements -- how much the test process should shrink, how much more often tests will run, how long a run should take and so on -- the test automation strategy is given a simple title, like "Automate Regression Testing," along with a due date for a proof of concept, some budget and perhaps staff.
In the worst of cases, the project is created because the testing staff is behind, and that same staff is assigned to work on the project in its spare time. This test automation strategy will actually create more testing work, at least until the team catches up.
Avoid the bad proof of concept
It is incredibly tempting to see the problem of automation as a testing problem. Testers, who likely have very weak programming backgrounds, look for codeless or near-codeless automation. These are typically record/playback tools and are generally commercial. The testers download a free 30- or 60-day trial, then use that tool to make a proof of concept.
A real proof of concept should include the entire test cycle -- creating a build, installing it on a fresh server, setting up data, running the automated checks, validating and reporting the results. Under pressure, many testers will skip everything except the running of the test, including skipping the actual checks that the software returns the right values. These compromises are made with the best of intentions, with expressions like "We're just getting started" or "We can figure that out later."
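The full cycle described above can be thought of as a pipeline of stages, where skipping any one of them leaves the proof of concept incomplete. A minimal sketch in Python (every step function here is a hypothetical stand-in for the real activity):

```python
# Sketch of a full-cycle proof of concept: each stage is a step,
# and a run only passes if every stage passes.
# All step functions are hypothetical placeholders.

def create_build():      return True   # e.g., invoke the build server
def provision_server():  return True   # e.g., spin up a fresh test server
def load_test_data():    return True   # e.g., seed the database
def run_checks():        return True   # e.g., drive the automated checks
def validate_results():  return True   # e.g., compare actual vs. expected values
def report_results():    return True   # e.g., publish a summary

PIPELINE = [create_build, provision_server, load_test_data,
            run_checks, validate_results, report_results]

def run_cycle(steps):
    """Run every stage in order; stop and name the first failure."""
    for step in steps:
        if not step():
            return f"FAILED at {step.__name__}"
    return "PASSED"

print(run_cycle(PIPELINE))  # → PASSED
```

A demo that only exercises `run_checks` and skips the other five stages is exactly the compromised proof of concept described above.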
The result is a test automation strategy demo for executives that looks impressive. With time on the trial running out, management purchases the software. Six months to a year later, the software is either unused ("We'll get to it next sprint; we're really busy on new feature testing now"), or perhaps a new hire or entire team is using the software -- but the actual effort to regression test the software has not gone down significantly.
Instead of treating automation like a test project, consider treating it like a software project. While the GUI layer is the most obvious for testers, it might not be the place to start.
Instead, start at the bottleneck for the delivery group.
Find the bottleneck first
Imagine sitting in front of a delivery team with a stopwatch, tracking the progress of a feature. The stopwatch tracks each activity on the way, from analysis to code, test and deploy. At the end of a sprint, you have, for each activity, how long it took on average, how many times it was performed and the total time. Sorting by total time reveals the bottleneck of the delivery process.
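The stopwatch exercise reduces to a simple aggregation. A sketch with invented numbers (the activities and minutes below are illustrative, not data from any real team):

```python
from collections import defaultdict

# Hypothetical stopwatch log: (activity, minutes) per occurrence in a sprint.
events = [
    ("analysis", 60), ("code", 240), ("waiting", 480),
    ("test", 120), ("waiting", 600), ("deploy", 30),
    ("code", 180), ("test", 90), ("waiting", 720),
]

totals = defaultdict(int)
counts = defaultdict(int)
for activity, minutes in events:
    totals[activity] += minutes
    counts[activity] += 1

# Sort by total time; the top entry is the bottleneck.
for activity, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    avg = total / counts[activity]
    print(f"{activity:10s} total={total:5d} min  runs={counts[activity]}  avg={avg:.0f} min")
```

With these made-up numbers, waiting dominates at 1,800 minutes, which mirrors the pattern described below: the biggest item is often not testing or programming at all.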
In many cases, the bottleneck is not test execution. In fact, if we made test execution free, with results at the push of a button, there would still be debugging to find what the problems are, documenting the bugs, arguing about whether they are bugs and whether they should be fixed, fixing and so on. While test tooling can enable more frequent releases, in many cases, GUI-only test tooling might add a 4% to 5% increase in throughput of the delivery process, at a cost many times that.
In most cases, the biggest item on the list is not testing or programming, but rather waiting. Stories sit and wait because of multitasking, because the build is slow, because the staging server is only updated daily and so on.
The multitasking problem is a process problem, but in many cases, technology and automation can remove the delays.
Consider a more expansive view of test automation
Paul Grizzaffi, a principal automation architect at Magenic, points out that good tooling is rarely about taking the exact manual process that ran before and pasting it into a tool. His fall 2017 STPCon keynote, "With Great Judgement Comes Great Responsibility," pushes back on the desire for a test automation strategy at all costs and instead suggests a more pragmatic approach.
Going back to the bottleneck, it might make sense to automate data setup, teardown, getting a current build or building out the test environment.
Another aspect to look at in a test automation strategy is when bugs are found. Good test automation helps find bugs earlier in the development process, with less effort to file, fix and verify. One place to do that is with the continuous integration (CI) system.
Deeper CI, earlier and more often
It's been nearly two decades since Joel Spolsky suggested the daily build as an excellent practice. It certainly was, but a fix made at 9 a.m. won't be testable until the next morning. The simplest form of test automation may be to get the build to run hourly or even each time a code change is committed. If unit tests (also known as programmer tests or microtests) exist and actually find bugs, then hooking those tests into CI and getting programmers notified when their change causes a failing test can mean bug fixes done earlier with no need to file, report or argue.
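The unit tests described above are small, fast checks of a single function that CI can run on every commit. A sketch of the kind of test involved, using Python's standard `unittest` module (`discount` is a hypothetical function under test, not from any real codebase):

```python
import unittest

# A hypothetical function under test: apply a percentage discount.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

# The kind of small, fast programmer test CI can run on every commit.
class DiscountTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(80.00, 25), 60.00)

    def test_zero_discount(self):
        self.assertEqual(discount(19.99, 0), 19.99)

    def test_bad_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(10.00, 150)

# In CI, this runs via `python -m unittest` on every push; a red build
# notifies the author immediately, with no bug report to file or argue over.
```

The payoff is the feedback loop: the person who broke the build finds out minutes after the commit, while the change is still fresh in mind.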
CI can do more than run unit tests; it can also create a virtual machine with the software running on it, inject test data and possibly run some build verification tests. If it is common for a build to pass but for login or other core functions to break, it may be better to put this in CI and find the problem as soon as possible.
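A build verification test of this kind can be as simple as hitting the core endpoints of the freshly deployed build and failing fast if any of them is broken. A self-contained sketch, where a stub HTTP server stands in for the deployed application (the paths and stub are illustrative):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stub standing in for the freshly deployed application under test.
class StubApp(BaseHTTPRequestHandler):
    def do_GET(self):
        status = 200 if self.path in ("/health", "/login") else 404
        self.send_response(status)
        self.end_headers()
        self.wfile.write(b"ok" if status == 200 else b"missing")

    def log_message(self, *args):   # keep CI output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubApp)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

def smoke_test(paths):
    """Return the core paths that failed to respond with HTTP 200."""
    failures = []
    for path in paths:
        try:
            with urllib.request.urlopen(base + path, timeout=5) as resp:
                if resp.status != 200:
                    failures.append(path)
        except OSError:
            failures.append(path)
    return failures

failed = smoke_test(["/health", "/login"])
server.shutdown()
print("build verification:", "PASSED" if not failed else f"FAILED {failed}")
```

Run against a real build right after deployment, a check like this catches the "build passes but login is broken" case within minutes instead of a day later.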
For CI to create a virtual server, the process needs to be scriptable in code. That also means programmers can create a test server themselves and link it in the documentation, or the story, when a feature is ready for test, eliminating delays and handoffs between programmers and testers.
By now, you've realized that the typical test automation strategy and the automation that actually adds value to the test process are often two different things.
The biggest problem might be explaining that to other people.
What to do when things go wrong
So, the vice president of engineering said the future is automation. Meanwhile, the team is testing on a single test system, data is entered manually, the build is daily, every single new feature is kicked back several times to the programmers for fixes and each candidate build has a dozen new bugs in it -- but the future is test automation.
Doing an analysis of time spent by activity and presenting that information can help. Suggesting a test automation strategy be seen as a software development activity, instead of something testers need to do in their spare time, can also help. Once the programmers are involved, it may become clear that the path to maturity is in unit tests, in CI and in infrastructure more than driving the GUI with tools.
Or perhaps not. The key is to do your own analysis. Don't allow it to be handed to you. Don't accept an assignment that can't be successful. Instead, reframe the assignment. Sometimes, the best proofs of concept are the honest ones. They can be more like failures of concept.
As long as you learn and improve, you still win.