
The Agile testing process -- why do I keep repeating myself?

It's just not Agile to repeat a test process over and over again. But it's a common problem. Expert Amy Reichert explains why the Agile test process might need a serious re-think.

I've come to notice, over the many years I've worked on Agile development teams, that I end up duplicating test efforts two, three or even four times. In other words, in the Agile testing process I'm using, I test the same thing multiple times in a series of similar, related, yet different server environments. These aren't different platforms or versions -- they are simply different servers that accommodate continuous deployment and integration. The final test server may resemble a production server, but resemble is as close as it gets; production servers are rarely tested, even with automated tests. The reason I've been given for all this repeat testing is that the accumulation of "junk" test data affects performance and/or application functionality.

Shouldn't we test for that? The worst I've experienced is testing across three servers in roughly three days or less. Yet the failures didn't occur on any of those three servers -- they often showed up immediately in production. I fail to understand why a duplicate production server setup, with matching data, is not possible for QA testing. It would eliminate the need to repeat tests unnecessarily across servers as code is continuously deployed and integrated. I really shouldn't need to test more than twice -- once in QA, and once in production or an exact production representative.

Is it Agile to repeat test execution multiple times? If I repeatedly test on servers that are not production, is it worth the duplication of work, time, stress and energy? I'd say no to both questions.

Reducing work task duplication

I truly believe production is testable in a safe, effective manner. We could apply the Agile testing process to the actual production server, or at least to an exact, data-scrubbed replica of it. In this manner, QA testers test once in QA and then once in the production replica. Why not reduce QA testing to a single pass? Then, in the production replica, use development scripts to verify that the expected code is in place and intact after the final deploy.
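As a sketch of what those verification scripts could look like -- the URLs, endpoints and expected version string below are hypothetical placeholders, not part of any real deployment -- a post-deploy check might do something like this:

```python
# Hypothetical post-deploy verification sketch. The URLs, expected version
# string and endpoints are placeholders, not part of any real deployment.
import sys
import urllib.request

EXPECTED_VERSION = "2016.8.1"                              # assumed release tag
VERSION_URL = "https://prod-replica.example.com/version"   # hypothetical endpoint
HEALTH_URL = "https://prod-replica.example.com/health"     # hypothetical endpoint

def fetch(url):
    """Return the response body for a simple GET, or exit with a failure."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8").strip()
    except OSError as err:
        sys.exit(f"FAIL: could not reach {url}: {err}")

def main():
    # 1. Is the deployed build the one that was certified in QA?
    version = fetch(VERSION_URL)
    if version != EXPECTED_VERSION:
        sys.exit(f"FAIL: expected version {EXPECTED_VERSION}, found {version}")

    # 2. Does the application report itself healthy after the deploy?
    if fetch(HEALTH_URL).lower() != "ok":
        sys.exit("FAIL: health check did not return OK")

    print("PASS: deployed code is in place and intact")

if __name__ == "__main__":
    main()
```

A check like this runs in seconds after the final deploy, which is the point: it confirms the certified build landed intact without asking testers to re-run a full manual pass.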

The Agile testing process shouldn't include duplicate work. Duplication reduces productivity, overall effectiveness and quality. People resent repeating work constantly, and their enthusiasm declines over time. Quality doesn't improve when people repeat tasks in a hurry or ineffectively.

Agile teams and development organizations need focused, diligent people. Reduce duplicated work tasks to improve quality and effectiveness. Test on real systems, or exact replicas of actual production, to improve both code quality and test results.

Shampoo, rinse and repeat is only a good way to make you use and then buy more product. It should not be part of the Agile testing process.

Next Steps

Think you know all about the testing profession? Think again

Should testers learn to code? Maybe

When your first job is as a tester…it might be tricky


Join the conversation


How many times are you forced to test the same code? Does it make you crazy?
No, not at all. If there were any changes, then it's not the same code. Otherwise, employ automation for low-risk areas.
Not terribly often. The only time this occurs is when we are working on an application that other teams are also working on and merging code into. When everyone has their individual pieces working, and we have a build that is a deployment candidate, all teams go back and do integration testing on that build in a UAT environment.
This is a little bit weird. Typically you have local, test (QA) and production environments. Once you test in QA and certify, it should be ready for deployment. The fact that you need to keep testing in multiple environments to certify tells me that the developers and architects are not doing their job - they are making QA pay for their lack of confidence in their code.
Although it is ideal to test only once in QA and then repeat that test in production, in most cases it's next to impossible to practice such a method. The main reason is the complexity of Agile practice. In an Agile environment, code is changing all the time based on customer feedback and stakeholder requests. For example, I have worked on projects where a feature implemented in one sprint was completely changed two sprints down the road. Such changes may or may not introduce regressions. Hence, something we tested last month has now changed, which may have produced side effects and regressions in other parts of the code/software/product. Therefore, we have to repeat the same test cases to ensure nothing is broken. Although repeating the same test cases is not pleasant, there's a solution for it: automation! That's where we can replace the man-hours with robots. Testers can continue testing new features while the automation test suite checks for regressions by repeating the same test cycle every time a new build is deployed to the QA environment. I believe this practice is highly efficient and effective. Any comments?
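A minimal sketch of the kind of automated regression cycle this comment describes -- the QA URL, endpoints and test names below are illustrative assumptions, not anyone's actual suite -- might be a small script that CI runs against the QA environment after every deploy:

```python
# regression_smoke.py -- hypothetical regression checks that CI could run
# against the QA environment after each deploy. The QA_BASE_URL variable
# and the endpoints below are assumptions, purely for illustration.
import os
import urllib.request

QA_BASE_URL = os.environ.get("QA_BASE_URL", "https://qa.example.com")

def get(path):
    """GET a path on the QA server and return (status code, body text)."""
    with urllib.request.urlopen(QA_BASE_URL + path, timeout=10) as resp:
        return resp.status, resp.read().decode("utf-8")

def test_login_page_still_served():
    # A feature shipped sprints ago should still render after the new build.
    status, body = get("/login")
    assert status == 200
    assert "password" in body.lower()

def test_search_api_still_returns_json():
    status, body = get("/api/search?q=smoke")
    assert status == 200
    assert body.lstrip().startswith("{")

# A CI job would run `pytest regression_smoke.py` on every build promoted to
# QA, so testers stay focused on new features while the robot re-runs the
# repetitive cycle.
```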

ataheri - *every* "regression" can be viewed as a failure to properly test before the code is committed to the repository. Granted, sometimes this failure is the result of ROI considerations, but the vast majority [in my experience dealing with well over 100 client teams directly, plus other cases] are simply holes in the developer testing.


I am a firm supporter of testing code multiple times!

However, I agree that many (most?) do it in a manner that introduces cost without providing value [thus lowering ROI]. Fortunately there are specific techniques that are highly effective.

As far as the "QA" (which is really QC, but I digress) vs. "Near Production", a key differentiator is the dataset that is used. Bringing true production data into a testing environment has risks, and the level is (at least partially) proportional to the number of people who have access. Using synthetic data at all of the testing stages up to pre-production, and then keeping access to pre-prod limited to the same (hopefully very small) group who have direct access to live production data can be a major risk mitigation technique.
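To make the synthetic-data idea concrete, here is a minimal sketch -- the field names, values and output file are invented for illustration and read nothing from production -- of generating obviously fake records that pre-production environments could load instead of live customer data:

```python
# Hypothetical synthetic-data generator for test and pre-production stages.
# Field names, values and record counts are illustrative only; no production
# data is read or copied.
import csv
import random
import string

FIRST_NAMES = ["Alice", "Bob", "Carmen", "Dev", "Erin"]
LAST_NAMES = ["Smith", "Nguyen", "Garcia", "Okafor", "Larsen"]

def fake_email(first, last):
    """Build a plainly synthetic address so test data can't be mistaken for real."""
    return f"{first.lower()}.{last.lower()}@test.invalid"

def fake_account_id():
    """Prefix synthetic IDs so they are easy to spot and purge later."""
    return "TST-" + "".join(random.choices(string.digits, k=8))

def generate(n, path):
    """Write n synthetic customer rows to a CSV the test environment can load."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["account_id", "first_name", "last_name", "email"])
        for _ in range(n):
            first = random.choice(FIRST_NAMES)
            last = random.choice(LAST_NAMES)
            writer.writerow([fake_account_id(), first, last, fake_email(first, last)])

if __name__ == "__main__":
    generate(1000, "synthetic_customers.csv")
```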
