
Continuous integration made simple: Five lessons you won't want to miss

It seems easy to get code to compile and test automatically: you just hook a build server up to version control.

Yet over the longer term, it turns out that many companies struggle to use continuous integration effectively. In this tip, I'll share a few of the mistakes that trip those teams up and how to avoid them.

Lesson #1: Have a strategy for managing the build

It seems obvious, but continuously integrating means that, every hour or so, you'll get a new build. If developers keep checking new code into the branch being built, that code will be picked up by the build machine. Without discipline, your newest build can pick up new errors and changes that invalidate even the most professional testing.

There are a few ways you could manage the build process: either have the capability to mark and 'promote' a candidate build, then perform testing on that build; or else branch the code and, at a certain point, insist that changes targeted for the release go onto that branch while new development continues on the main line. For example, one company I worked with had a 'master' branch; as the project approached release, we would create a project-name branch and check only fixes targeted for that release into the project branch.

I recommend both strategies. The second may add a bit of overhead, as the project branch will have to be merged back to master occasionally, but modern version control tools like git can take the pain out of merges.
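
To make the 'promote' strategy concrete, here is a minimal sketch in Python, assuming candidate builds are identified by a commit SHA and marked with annotated git tags. The script, tag scheme and promote() function are my own illustration, not a tool this article prescribes:

    # Hypothetical promotion sketch: tag the commit behind a green build so
    # that later testing targets a fixed, known revision.
    import subprocess

    def promote(build_sha: str, label: str) -> None:
        """Mark build_sha as a promoted candidate with an annotated git tag."""
        tag = f"candidate/{label}"
        subprocess.run(
            ["git", "tag", "-a", tag, build_sha, "-m", f"Promoted candidate {label}"],
            check=True,
        )
        # Share the tag so the whole team tests the same revision.
        subprocess.run(["git", "push", "origin", tag], check=True)

    if __name__ == "__main__":
        promote("1a2b3c4", "rc1")  # example values only

Once a candidate is tagged, testers check out the tag rather than the tip of the branch, so new check-ins can't invalidate their work mid-cycle.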

Lesson #2: Stamp out false errors

The integration part of continuous integration is more than a compile step; it implies a series of automated checks that exercise the components in isolation (unit tests), the components with each other (integration tests) and, perhaps, some sort of customer-understandable, high-level behavior (acceptance tests).

The higher-level the test, the more often it will fail. Some tests, especially GUI tests, may be intermittent, failing now and then for reasons that have nothing to do with the code under test.

On one project, I found our team was using a certain language about tests; you'd hear things like, "Don't worry about the search-by-tag tests, it's just that flaky indexer feature." When that happened, the value of the tests had gone negative. Not only were the failures wasting our time, but we were ignoring the results anyway. This created an even greater risk: that we would ignore future failures that turned out to be real.

When people start talking about ignoring failures or commenting out failing tests -- and they can't figure out why the tests are failing or how to make them pass -- there's a problem. Stop the process and fix the issue. Not just for one run, not just for today: find the root cause and fix it, prevent it next time, or throw the test away.
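
One way to act on that rule without quietly deleting a test is to quarantine it explicitly while the root cause is investigated. Here is a sketch using pytest; the quarantine marker name and the test below are my own invention, not from the project described above:

    import pytest

    # Register the marker in pytest.ini to avoid warnings:
    #   [pytest]
    #   markers =
    #       quarantine: known-flaky test, excluded from the gating build
    @pytest.mark.quarantine  # flaky indexer; tracked until the root cause is fixed
    def test_search_by_tag():
        ...  # body elided; the real check exercises the search-by-tag feature

The gating build runs pytest -m "not quarantine", a separate job runs only the quarantined tests, and every quarantined test either gets fixed or thrown away; it never lingers as background noise.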

Lesson #3: Mind the build/deploy time

Continuous integration builds start out fast. Over time, the repository grows, the build gets more complex, developers add dependencies and third-party tools, and the automated checks take longer and longer. Within a year, a build that took five minutes can grow to an hour. For a large project, the build plus checks can run several hours -- one team I know of had complex GUI tests that took over twenty-four hours to run.

With tests that long, if something goes wrong, and you make a fix, it will take at least a whole business day, if not two, to find out if the tests passed.

Now imagine a high-pressure business environment ... and it takes three to four days for a build. This is not going to end well.

Most likely, the team will start to ignore failures, if not comment out the tests entirely.

To fix this, watch your build time carefully. If some tests are long and slow-running, pull them out into an overnight end-to-end run, or look for ways to run tests in parallel. Personally, I haven't found a great deal of value in having automated GUI checks run as part of the build, unless those checks are very fast verifications that succeed every time (see Lesson #2).
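
As one hedged illustration of that split, pytest markers can separate the slow end-to-end checks from the per-commit run; the slow marker and the tests below are invented for the example:

    import pytest

    def test_tax_calculation():
        # Fast unit-level check: stays in the per-commit build.
        assert round(100.0 * 1.08, 2) == 108.0

    @pytest.mark.slow  # excluded from the commit build; runs in the nightly job
    def test_full_checkout_flow():
        ...  # long end-to-end scenario elided

The commit build runs pytest -m "not slow", the nightly job runs everything, and a plugin such as pytest-xdist (pytest -n auto) can run tests in parallel to claw back more time.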

Lesson #4: Exploratory testing after acceptance tests pass

It seems logical that passing "acceptance tests" means the code is ready for acceptance, or ready to be deployed.

Unless the application is simple, clean and straightforward, it's more likely that passing acceptance tests means the code is ready for acceptance by the testers. That is to say, the moment the acceptance tests pass is when exploratory testers can shine, finding the bugs that only a human can find.

Automated checks can be helpful and wonderful ... as part of a balanced breakfast. Or, in a pinch, if your change is minor, you might take a little risk and "just run the checks and call it good." If you want to rely on automated checks to make sure the software is good, you'll want other safeguards in place, like the ability to slowly roll code out to progressively larger user groups over time, and to roll a change back on demand.
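
To make that safeguard concrete, here is a minimal sketch of a deterministic, percentage-based rollout; the function, feature name and bucketing scheme are illustrative assumptions, not a specific product's API:

    import hashlib

    def in_rollout(user_id: str, feature: str, percent: int) -> bool:
        """Bucket a user into 0-99 deterministically; the user sees the new
        code if the bucket falls below percent. Raising percent widens the
        rollout; setting it to 0 rolls the change back on demand."""
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
        bucket = int.from_bytes(digest[:2], "big") % 100  # stable 0-99 bucket
        return bucket < percent

    # Example: expose the new checkout page to 5% of users first.
    print(in_rollout("user-42", "new-checkout", 5))

Because the bucketing is deterministic, a given user stays in or out of the rollout as the percentage grows, which keeps each user's experience consistent.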

Lesson #5: Make expectations explicit, especially for distributed teams

It seems obvious to have a single code repository and CI system for distributed teams -- but is everyone playing the same game? Eric Landes, a solution architect with Agile Thought, pointed out some problems with such a setup.

He said:

At a prior company, we outsourced a project and agreed that unit tests were required. Our CI process would run the unit tests to make sure they all passed, and collect some code coverage metrics. After the first couple of sprints, we discovered that the remote team had a different understanding of what unit tests are. To them, unit tests were what our group called 'integration tests.' We then agreed on the following definition for unit tests (which I assume is more or less standard): unit tests are isolated, do not run against data stores, but test business logic at the developer level. If all tests do not pass, do not check in code. The CI process will run only those types of unit tests; if they fail, then the build is broken.

Eric's integration tests might fail when nothing was wrong with the code at all, simply because the database happened to be down. That kind of false error signal sends the local team off debugging a non-existent problem, or waiting twelve hours for the remote team to do so.
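
A short sketch makes the distinction the teams eventually agreed on concrete; the discount function, the integration marker and the elided database check are hypothetical examples, not Eric's actual code:

    import pytest

    def apply_discount(total: float, percent: float) -> float:
        return round(total * (1 - percent / 100), 2)

    # Unit test: isolated, touches no data store, gates every check-in.
    def test_apply_discount():
        assert apply_discount(100.0, 10) == 90.0

    # Integration test: needs a live database, so it can fail when the code
    # is fine but the database is down -- run it in a separate, non-gating job.
    @pytest.mark.integration
    def test_order_total_against_database():
        ...  # connect to the real database and check computed totals (elided)

With that split, the gating build runs pytest -m "not integration", and a database outage can no longer break the build.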

Again, there's no problem in having a distributed CI setup; only in having one where the different teams have a different understanding of the commit and build rules.

Conclusions

The real challenge for continuous integration isn't getting the system set up, or even getting the initial business processes defined. No, the challenge of continuous integration is keeping the system running as it grows over time into a giant blob of dependencies.

Yes, we've been dealing with that for decades with "the daily build." With continuous integration, the challenge is bigger, and it's all the time.

To keep things running, you'll want to make sure the build is repeatable, fast, and as simple as possible, while ensuring that the automated checks hit the sweet spot of valuable, minimal, and fast.

Alan Perlis, one of the designers of ALGOL, once wrote:

"Fools ignore complexity. Pragmatists suffer it. Some can avoid it. Geniuses remove it."

For a comprehensive resource on continuous integration, see Continuous integration: Achieving speed and quality in release management.

This was first published in September 2011
