Let’s build something! Whether it’s on a Parallax BASIC Stamp or a Samsung Orion, the problems are just about the same. You have to build some hardware and develop software all along the software stack, sometimes twiddling control registers, sometimes implementing difficult algorithms, and the job is always hard. There must be a better way! When we look at the problem with our Agile development glasses on, we see that this better way may not be as easy as writing a C# application for our office golf club, but the task can become less frustrating and more productive.
The classic embedded development cycle
- Write code for one or more features of the target system.
- Use the target system manufacturer’s starter kit to compile code, place into system memory, and enable a debug environment.
- Set breakpoints, and run the system under test to debug the code that was written.
- Bang head on desk many times while iterating over the above three steps until code starts to work.
- Load all target system code onto the device and try running everything.
- Bang head on desk many more times as things that used to work stop working.
- Produce device on a massive scale and hope that nothing goes wrong.
The ideal embedded development cycle
What we’d really like from an embedded systems development environment is to:
- Make the development of pure algorithm code easy.
- Make the development of code that glues the algorithms with the hardware being developed tolerable.
- Make the process of incrementally and iteratively adding features as painless as possible, especially in a multi-developer environment.
- Ensure that code which used to work stays working.
- Let us sleep easy at night knowing that we are releasing defect-free embedded software.
The role of embedded software development tools in achieving the ideal state
There are many tools with which we can “extend our grip” in embedded systems development work. These tools fall into two basic categories: simulation and emulation environments. While some may claim that the differences between simulation and emulation are mere semantics, the distinction is very important to articulate so we can understand where each environment adds value in the development process.
Simulation environments are pure software solutions that create a complete software model of the entire embedded system. Some simulation software idealizes only the microprocessor used, whereas more advanced software can mimic peripherals that exist outside of the microprocessor, too.
Simulators are particularly useful because, being pure software, they can be executed in an automated fashion and can therefore be part of an automated continuous integration environment. This allows us to work with the version control system and enable a “check-in and verify” cycle, where developers work on small chunks of functionality, writing both white-box unit tests and black-box integration tests, and can verify that their changes not only “do the right thing,” but “do the thing right.” And all of that happens when the trigger set off by your version control system commit sets the continuous integration server whirring.
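As a concrete sketch of the kind of white-box unit test that thrives in this cycle (all names here are illustrative, not from any particular toolchain): a pure-algorithm routine such as a moving-average filter can be compiled and tested on the developer’s workstation or in a simulator, with nothing beyond the standard library, and the CI server can run the same test binary on every commit.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pure-algorithm code: a running average over the last
 * few samples -- exactly the kind of logic a host build or simulator
 * can exercise with no target hardware present. */
#define WINDOW 4

uint16_t moving_average(const uint16_t *samples, int count)
{
    uint32_t sum = 0;
    int n = count < WINDOW ? count : WINDOW;
    for (int i = 0; i < n; i++)
        sum += samples[count - n + i];      /* the last n samples */
    return (uint16_t)(sum / (n ? n : 1));   /* guard the empty case */
}

/* A white-box unit test the CI server can run on every commit. */
void test_moving_average(void)
{
    uint16_t s[] = {10, 20, 30, 40, 50};
    assert(moving_average(s, 1) == 10);     /* single sample */
    assert(moving_average(s, 5) == 35);     /* (20+30+40+50)/4 */
}
```

Because the test has no hardware dependency, it runs in milliseconds, which is what makes the “on every commit” cadence practical.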
What simulators do not do as well is test the asynchronous and non-deterministic aspects of the device. For example, while some simulators actually allow you to specify a waveform to present to a particular pin of the system under test, the process is both “clunky” in nature and can never replace the real-time events that are handled by the hardware under real-world conditions. This is a role that emulators fill far better.
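To illustrate what “specifying a waveform” amounts to in practice (this is a host-side sketch of the idea, not any real simulator’s API): the waveform is just a table of logic levels that a stubbed pin-read function replays one sample per call, which lets you test edge-detection logic deterministically but says nothing about real-time behavior.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Scripted pin stimulus: a canned "waveform" replayed by a stub that
 * stands in for the real GPIO read. Names are hypothetical. */
static const bool waveform[] = {0, 0, 1, 1, 1, 0};
static size_t tick;

bool read_pin(void)
{
    bool level = waveform[tick % (sizeof waveform / sizeof waveform[0])];
    tick++;
    return level;
}

/* Code under test: count rising edges across n samples of the pin. */
int count_rising_edges(int n)
{
    int edges = 0;
    bool prev = read_pin();
    for (int i = 1; i < n; i++) {
        bool cur = read_pin();
        if (cur && !prev)
            edges++;
        prev = cur;
    }
    return edges;
}
```

The “clunky” part is plain to see: every stimulus must be scripted sample by sample, and nothing here models jitter, glitches, or true concurrency.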
An emulator, for purposes of the distinction that I am trying to make, involves additional special hardware that stands in for the hardware being worked on, but with added capabilities. These added capabilities usually include a complete in-circuit facility to set breakpoints, read and write values, and do so directly on the actual I/O circuits that the ultimate system being developed will use. Because of the hardware dependency, emulators are usually not amenable to automation orchestrated by the continuous integration server. But they get the developer much closer to the actual metal when trying to figure out a stubborn problem that simulation is not finding, because they can be programmed to break on conditions, use checkpoints for replay, and so on, against the values that the actual hardware is seeing.
Emulators have their limitations as well. Since we are introducing a new piece of hardware to the development mix, their presence will always have some form of interference in circuit timings, and can lead one to think that things are perfect, only to find later that their presence has masked things such as hidden race conditions due to the side effects that the hardware debugging environment has provided.
Perhaps the best expression of the distinction between simulators and emulators that I’ve ever read came from Peter Hans van den Muijzenberg in an October 2005 posting at http://ed-thelen.org/comp-hist/emulation.html, where he states, “If you want to convince people that watching television gives you stomach-aches, you can simulate this by holding your chest/abdomen and moan. You can emulate it by eating a kilo of unripe apples.” Emulations work from the outside in. They work at the input and output boundaries of the circuits. Simulations, on the other hand, are a very convincing illusion of the circuit functionality at some level. If we had a deep enough understanding of all of the physics at an atomic level, and had infinitely powerful hardware to run the simulation on, there would be no need for emulators. But, alas, we are stuck with much higher-level simulations that don’t reproduce the nuances of the analogue and digital circuits that we work with, and are therefore constrained in how convincing the simulation is.
- Regression testing is a time-consuming, mind-numbing, and error-prone task for humans to perform. You should build regression test suites while developing features in an Agile “thin slices of vertical functionality” fashion. These regression suites should contain fast running unit tests (white-box tests) developed in a “Test First” fashion which should be run through simulators that are part of the automated continuous integration (CI) server. The CI server should then build the code and run the regression suite on every code repository commit. You should also build a comprehensive set of integration tests (black-box tests) during the feature development cycle, including any defect remediation work that was done. These too should be automated, but may require a lot more time to run, which may result in a “nightly build and comprehensive test” schedule for the CI server. Computers are good at the repetitive aspects of the chore and never complain about not getting enough sleep.
- All pure code algorithms can be much better coded and tested in just about any simulator for the processor they will run on. Test doubles should be provided for depended upon systems that are outside of the system under test. It is usually wasteful to spend time routinely running this type of code on an emulator.
- Code that is not easily exercised and validated in the simulator environment will require emulator hardware to effectively debug. Once you understand the problem, try to capture the conditions that lead to a bad outcome in a simulator test case that becomes part of regression test suite. When your code fixes cause the simulator test case to pass, go back and make sure that the code runs on emulator-augmented hardware as well. Rinse and repeat as necessary.
- Your final tests will always be on production prototypes of real hardware. If you did your job well, there should be no time-consuming surprises of unwelcome behavior. You and I both know that it is always nobler to suffer the slings and arrows of outrageous fortune than to take arms against a sea of troubles.
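The test-double advice above can be sketched in C (all names are illustrative, not from any particular HAL): production code reaches the depended-upon peripheral through a function pointer, so a host or simulator test can substitute a canned reading for the real sensor driver.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The seam: production code reads temperature through this pointer
 * type, so tests can inject a double. Hypothetical names throughout. */
typedef int16_t (*read_temp_fn)(void);

/* Code under test: trip an over-temperature flag above a threshold. */
bool overtemp(read_temp_fn read_temp, int16_t limit)
{
    return read_temp() > limit;
}

/* Test double standing in for the real sensor driver. */
static int16_t fake_reading;
static int16_t fake_read_temp(void) { return fake_reading; }
```

Link-time substitution or a compile-time `#ifdef` would serve the same purpose; the function-pointer seam is simply the easiest to show in a few lines.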
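As an example of capturing an emulator-found condition in a simulator regression test (a hypothetical case): suppose in-circuit debugging reveals that elapsed-time code misbehaves when a free-running 32-bit tick counter wraps past 0xFFFFFFFF. Once the cause is understood, the wrap condition is trivial to pin down in a host-runnable test that joins the regression suite.

```c
#include <assert.h>
#include <stdint.h>

/* Elapsed ticks between two readings of a free-running 32-bit counter.
 * Unsigned subtraction is well-defined modulo 2^32, so this gives the
 * right answer even across the counter wrap -- the subtle condition
 * that, in this hypothetical case, only surfaced on real hardware. */
uint32_t ticks_elapsed(uint32_t earlier, uint32_t later)
{
    return later - earlier;
}
```

A regression test then exercises both the ordinary case and the wrap case, so the fix can never silently regress: `ticks_elapsed(0xFFFFFFF0u, 0x10u)` must yield `0x20`.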
This was first published in May 2011