With continuous integration, is all the regression testing done as part of the build? Do you also recommend some manual or exploratory regression testing with each build?
The purpose of continuous integration is to provide quick feedback. At the unit level, we should learn within a few minutes whether a check-in broke existing code. At the API and GUI levels, this feedback should still be quick – within an hour at most. This ensures that the developer who checked in the problematic code can identify and fix the problem right away, and keep the build “green.”
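That tiered feedback loop can be sketched as a staged pipeline: run the fast unit suite first and fail the build immediately if it breaks, so slower API and GUI suites only run when the cheap check passes. The stage names and `pytest` commands below are illustrative assumptions, not the team's actual setup; the runner is injectable so the flow can be exercised without a real test suite.

```python
import subprocess

# Hypothetical staged CI build: fail fast on the quickest suite so the
# developer who checked in gets feedback within minutes, not an hour.
# The directory layout and commands are assumptions for illustration.
STAGES = [
    ("unit", ["pytest", "tests/unit"]),  # should finish in minutes
    ("api",  ["pytest", "tests/api"]),   # slower service-level checks
    ("gui",  ["pytest", "tests/gui"]),   # slowest; still under an hour
]

def run_pipeline(runner=subprocess.run):
    """Run stages in order; stop at the first failure ("red" build)."""
    for name, cmd in STAGES:
        result = runner(cmd)
        if result.returncode != 0:
            print(f"build RED: {name} stage failed")
            return False
    print("build GREEN")
    return True
```

Ordering the stages by speed is the point: a broken unit test never waits behind a long GUI run before anyone hears about it.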
Agile teams work in tiny increments, which means each developer checks in many times per day. This includes checking in test code. On my team, we have ten, twenty, thirty, or sometimes more check-ins, and thus that many builds, in one day. You can’t deploy that many builds in one day, but you can make sure that each one compiles, links, and passes all the automated regression tests. We use the check-in comments to decide which builds we want to deploy and test manually.
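Selecting deploy candidates from check-in comments can be as simple as scanning for an agreed-upon tag. The `[deploy]` convention below is an assumption for illustration; any team-specific marker in the comment would work the same way.

```python
def deploy_candidates(checkins):
    """Return build ids whose check-in comment flags the build for
    manual deployment and exploratory testing.

    `checkins` is a list of (build_id, comment) pairs; the "[deploy]"
    tag is a hypothetical team convention, not a fixed rule.
    """
    return [build_id for build_id, comment in checkins
            if "[deploy]" in comment.lower()]

builds = [
    (101, "refactor parser, no behavior change"),
    (102, "[deploy] story 42 ready for exploratory testing"),
]
print(deploy_candidates(builds))  # → [102]
```

The build job still runs for every check-in; the tag only marks which of those green builds is worth a tester's manual attention.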
Each user story requires several testing activities. One is automating tests to drive coding, some of which will become part of the automated regression suite once they’re passing. Another is manual exploratory testing. The automated regression tests run on each build, but you have to choose which builds you want to deploy to do manual exploratory testing.
Whenever an automated regression test at any level (unit, API, GUI) fails, getting the build job “green” again is the team’s highest priority, aside from a severe production problem. Normally, the developer who checked in the code that broke a test investigates and checks in a fix as soon as possible. However, the problem may actually be a test that needs updating, or something unrelated to tests or code – for example, the build machine ran out of disk space. In every case, someone needs to take responsibility for getting the build passing again. Keeping the regression tests passing all day, every day, matters because we need that continual, short feedback loop. Just as important is the additional time it frees for exploratory testing, since we don’t need to do manual regression checks. Depending on the results of exploratory testing, we may automate more regression tests, or write new user stories for future development.
For a comprehensive resource on continuous integration, see Continuous integration: Achieving speed and quality in release management.
This was first published in September 2011