The motivation behind continuous integration in embedded software development

This tip, the first of a two-part series, speaks to the role that continuous integration plays to help you create better releases and reduce the workload in embedded software development.

Motivation

Why is continuous integration (CI) such a popular topic? There are a number of vendors happy to interest you in their wares in this area. But is this something that you should invest in, or is it something in "the things you own end up owning you" (Tyler Durden, Fight Club, 1999, film distributed by 20th Century Fox) arena, because the cost of caring for and feeding the CI server outweighs the benefits?

The answer is that a CI server is a wonderfully useful tool, and one with the best payback potential of any Agile practice around. Agile expert Scott Ambler runs surveys every year on Agile process and practice adoption rates (for example, see http://www.ambysoft.com/surveys/practices2009.html). His surveys consistently show CI as the most effective Agile practice and the least difficult to learn.

Why is CI such a good practice for software development in general, and embedded development in particular? It’s easy to say that quality will be enhanced, but it’s important to understand the three basic reasons that we get to that conclusion. CI provides:

  • Quicker and better development feedback cycles
  • Reduced risk of defects showing up in final deliverables
  • Easier recognition of where the defects are

Why a continuous integration server is better than any human doing the same tasks

Humans are a necessary part of developing software. Software development is painfully hard. It takes concentration and creativity. That’s why machines can’t be used to write code. Embedded software can get really hard once we add the constraints of the hardware that most embedded systems run on into the mix. And creating tests for software is just as hard. Good testing practice requires the same sort of concentration and creativity, especially if we want to write tests that are repeatable and help us quickly qualify a set of work as “done.”

But running tests isn’t the same sort of thing. Running tests is repetitive and boring: we set up the environment, run the test, observe whether we reached the correct result, and repeat for each test in the suite. Using humans to do these tasks is not just wasteful, it’s actually risky. Why? Because humans tend to make mistakes in observations. Or forget to follow each and every step. Or interrupt themselves to help answer a question for someone and then forget exactly where they were. Oh, and humans get tired and cranky if you work them 24 hours a day. But computers are just the opposite. Once they are programmed to run a test suite, they can do it time after time, 24 hours a day, always doing it right, and without complaints that they need a bathroom break.

So, let’s offload our boring and repetitive work to those machines of burden that we call computers. Now all we have to decide is what they should do to help us stay in the zone where we can concentrate and be creative on the things that we can do well.   

Let’s first agree on what a CI server really is. Many people have a rudimentary CI server, which gets triggered by a check-in to the code repository. The workflow is:

  • Before check-in, merge any conflicts that the developer code has with the repository code.
  • Commit the code for check-in.
  • The commit is then seen by the CI server, which checks out the code from the repository, builds the code, and reports any build errors back to the development team, usually through email.

This use of CI, which I will characterize as a continuous build server, has a really great benefit: frequent check-ins help keep the conflict-merging process easy, because the changes are less radical when merge conflicts are resolved after every committed change than when we bunch up the changes for a huge conflict merge later. Easier conflict merging takes less time and results in less wasteful rework to get the system back to working. And we, as humans, don’t have to remember to do each and every little step along the way.

Nothing wrong with that! But we can do so much more with little additional effort.

Let’s start with unit tests. For embedded software, we should be writing our code in a simulation environment for the target device using test-driven development (TDD) practices. TDD requires us to write a white box unit test that fails before we write our code and passes after we write it correctly. By using a unit testing framework (such as Unity, CxxTest, CppUnit, and others), we can easily tie these tests together and run them against our simulated device. The results of the tests are then available for automated parsing to determine which tests are passing, which are failing, and where the failures are occurring. By ensuring that the unit tests run quickly (through the use of test doubles to isolate the system under test from depended-upon components), we can create a suite that runs after every successful compilation of the code. That way, when we check in our code, not only do we know that the code compiles, but we also ensure that any changes made to the code do not adversely affect any previously implemented functionality. This freedom from fear that the CI server gives you raises quality far more than any shallow assurance from a build server that your code still compiles.

And we can do even better than that. Assume for a moment that we adopt a rule that every defect we find in the field and then fix has an associated black box test written for it. We place this test into our regression suite of integration tests, which we automate using the CI server. Now, assume that we schedule the regression suite to run in the middle of every night. Voilà! We guarantee that every bug we ever swatted down stays fixed, or at least we know that we somehow unfixed it yesterday. It really beats having QA continuously trying to run their regression suite, which they don’t have time to do anyway. And manual running of any test suite is repetitive, error prone, and boring. Remember, those sorts of tasks are error prone for humans, but perfect for computers.

Conclusion

Continuous integration servers are cheaper, faster, and just plain better than humans for activities related to the repetitive tasks associated with software development. In the second half of this tip, we will discuss how CI lowers your risk profile of having defects in the final product and enables faster defect resolution.

This was first published in May 2011
