In Agile development, team members work together to test stories and ensure high quality. However, when the stories
from different teams are assembled, there is often a lack of clarity around who is responsible for testing how well they integrate. Processes around integration testing can be a point of confusion for Agile teams. Continuous integration (CI), the practice of integrating code frequently and verifying each build with automated tests, can help, but it will not solve all of your integration testing needs. Agile expert Janet Gregory discusses the challenges of integration testing and explains the practice of continuous integration.
The challenge with integration testing
At the Agile 2013 conference in August, Janet Gregory, co-author with Lisa Crispin of the popular book Agile Testing: A Practical Guide for Agile Testers and Teams, hosted an interactive session that gave participants an opportunity to discuss challenges they were encountering.
One such challenge had to do with integration testing when multiple teams are developing code. Though there is a "Scrum of Scrums" meeting to discuss dependencies, no stories or tasks are assigned, so it's unclear who is responsible for the integration testing that must happen to ensure the code deployed by different teams works well together.
Gregory answers this way:
This is a common problem when there is more than one team. Lisa [Crispin] and I call this issue 'forgetting the big picture.' I wrote a blog post on this a while ago. There are a couple of things that I suggest to try:
1. Write acceptance tests at the feature level (i.e., a feature is some business capability that makes sense to release; a feature has many stories). Ideally, this feature is contained within one team, but not always. If it is across teams, then either one or both teams take responsibility for making sure the feature is 'done.' I also recommend defining 'feature done,' as well as 'story done.'
2. Sometimes it is necessary to have an integration team -- not my favorite, but I've seen it successfully work. For example, at the end of every iteration or sprint, the project teams release their integrated, potentially shippable product to the integration team. This group usually has a more substantial test environment and can test the post-development activities like browser compatibility or interoperability or load, performance, etc.
3. To make either of the first two items work, someone has to be part of the 'Scrum of Scrums' or full product-release team to be aware of the dependencies.
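Gregory's first suggestion, a feature-level acceptance test, can be sketched in code. The example below is purely illustrative: the `CartService`, `PaymentService`, and `checkout` names are hypothetical stand-ins for stories owned by two different teams, and the test exercises the feature only once both pieces are integrated.

```python
# Hypothetical sketch: a feature-level acceptance test for a "checkout"
# feature whose stories span two teams. All names are illustrative.

class CartService:
    """Owned by the cart team (one story)."""
    def __init__(self):
        self._items = []

    def add_item(self, name, price_cents):
        self._items.append((name, price_cents))

    def total_cents(self):
        return sum(price for _, price in self._items)

class PaymentService:
    """Owned by the payments team (another story)."""
    def charge(self, amount_cents):
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        return {"status": "approved", "amount_cents": amount_cents}

def checkout(cart, payments):
    """The feature: both teams' stories working together."""
    return payments.charge(cart.total_cents())

def test_checkout_feature_is_done():
    # 'Feature done': a customer can pay for everything in the cart,
    # not merely 'story done' for each team's piece in isolation.
    cart = CartService()
    cart.add_item("book", 1500)
    cart.add_item("pen", 300)
    receipt = checkout(cart, PaymentService())
    assert receipt["status"] == "approved"
    assert receipt["amount_cents"] == 1800

test_checkout_feature_is_done()
```

A test like this gives a concrete definition of "feature done" that either (or both) of the responsible teams can own.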
What about continuous integration?
Martin Fowler, one of the Agile Manifesto signatories, describes the core practice of CI as follows:
Continuous integration is a software development practice where members of a team integrate their work frequently; usually each person integrates at least daily -- leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.
Sometimes teams feel that once they have implemented a CI environment, "integration testing" is essentially covered and there is no need to worry about it anymore. While CI certainly provides a means for regression testing with every build, it does not necessarily exercise all integration points or provide a thorough integration test, unless the automated tests have been written with thorough integration testing in mind.
In a series of articles, Howard Deiner explains CI step by step. In the first of these tips, "Continuous integration: Quality from the start with automated regression," Deiner explains that CI is a way to integrate automated regression tests into your build. These are typically automated unit tests that ensure the development code still operates as intended. If one of these tests suddenly fails, it might be because a different team implemented some dependent code that violated an agreed-upon requirement. In this way, CI does provide some level of testing and validation of code written by different teams.
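The kind of regression test Deiner describes can be sketched simply. In this hypothetical example, a unit test pins an agreed-upon requirement; if another team later changes the dependent function's behavior, the test fails the next CI build. The function name and the rule it enforces are assumptions for illustration.

```python
# Hypothetical sketch: a regression test that pins an agreed-upon
# requirement in the CI build. normalize_sku and its rule are illustrative.

def normalize_sku(raw):
    """Agreed contract between teams: SKUs are upper-case and trimmed."""
    return raw.strip().upper()

def test_sku_contract():
    # If a dependent team changes normalize_sku in a way that violates
    # the agreement, this test fails and the CI build goes red.
    assert normalize_sku("  ab-123 ") == "AB-123"

test_sku_contract()
```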
CI is certainly a positive step toward finding integration issues early. However, it is only as good as the automated tests being executed. The mere fact that automated regression tests run with every build does not assure us of the quality of those tests, or that they will catch every integration error.
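The gap between unit-level and integration-level coverage can be made concrete. In the hypothetical sketch below, each team's code passes its own unit tests, but only a test across the seam verifies that the two pieces agree on units; all names and the cents-vs-dollars scenario are illustrative assumptions.

```python
# Hypothetical sketch: unit tests pass in isolation, but only an
# integration test exercises the seam between two teams' code.

def price_in_cents(item):
    """Team A's code: prices are reported in cents."""
    catalog = {"book": 1500}
    return catalog[item]

def format_invoice(amount_cents):
    """Team B's code: formats an amount in cents as dollars."""
    return f"${amount_cents / 100:.2f}"

# Unit tests: each team's code is correct in isolation.
assert price_in_cents("book") == 1500
assert format_invoice(1500) == "$15.00"

# Integration test: the pieces agree on units when combined. Had Team B
# expected dollars rather than cents, only this test would have failed.
assert format_invoice(price_in_cents("book")) == "$15.00"
```

This is why automated suites in a CI build need tests written deliberately across integration points, not just within each team's code.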
By combining Gregory's advice to create feature-level integration tests with the guidance from Fowler and Deiner, and by automating those tests as part of the CI build, we can have the best of both worlds. CI provides a way to test the code continually; teams need to ensure that those automated tests are of high quality and will catch errors that occur beyond isolated unit tests.