Is the Agile methodology rigorous enough for the unique challenges of embedded software development? Custom software
developer Atomic Object, based in Grand Rapids, Mich., has successfully used Agile development methodology and test-driven development (TDD) for both embedded and non-embedded software projects. Vice President Mike Karlesky, who holds degrees in electrical engineering and computer science -- a confluence suited for embedded development -- explains the challenges of developing and testing embedded software, and why Agile is a good fit.
Why is Agile methodology a good fit for the development of embedded software?
Mike Karlesky: It's especially relevant to embedded, because there are so many things that can go wrong -- so many moving parts, so many people. Agile is all about change, with a focus on responsibility and quality. In an environment with that much change and that many breaking points, Agile is even more effective and important in the embedded world.
One of our clients is an auto supplier in the smart rearview mirror world. They just achieved SPICE (software process improvement and capability determination) level 3 certification for a project. We played a part by using the Agile methods and testing methods. The reviewers were quite impressed with how rigorous the process that came out of Agile was as it applied to SPICE-level certification.
With Agile, there's a great deal of emphasis on requirements tracking and traceability, and on verifying that code corresponds to specified behavior. In other contexts that tends to be a manual, review-based, infrequent process. There's also an emphasis on testing and automating that which is automatable -- which is just about everything -- and on exercising the code by way of testing, relating it back to the requirements, and automating the collection of those results.
Does the development of embedded software have any unique challenges?
Karlesky: Certainly when bringing together hardware and software you have the question of which piece is reliable and behaving as expected. If one thing changes or there is an unknown problem, it can go unnoticed for a long time and blow up later into a disastrous recall, for example, or a real time sink. Compared with desktop software, where you can punch out a patch, if something with significantly broken behavior is pushed off the factory line -- say 10,000 smart mirrors go out the door -- a patch is not a simple thing. It means a potential recall or even certification or legal issues. Incrementally adding functionality and then system and unit testing gives us quite a bit in the sense of reliability as it goes through the lifecycle. If you've got 99% test coverage on the logic of the system and a particular behavior shows up, you should be able to nail down pretty quickly whether it's a hardware or software thing. And with continuous integration, hopefully you find that quickly rather than days or weeks later.
Talk about Atomic's decision to practice test-driven development.
Karlesky: Our decisions are based on how we make this testable. The only thing that changed with embedded development is that where test frameworks haven't existed, we've created them or worked with others to create them. We created test rigs that build, test and load onto target hardware and collect the results out of a serial port. It takes a bit of work to create an effective automated build environment, but the principles, mindset and results are all similar to any other context we've used TDD in. For embedded software specifically, so much tends to be a black box -- there are a few inputs and status indicators, and most everything is communicated on a bus. So much is hidden that you can't poke at it, so it's difficult for a tester to click on things in an exploratory fashion. Having insight into the business and decision logic eases worries about hidden, lurking dangers; having a cohesive test-driven approach makes a lot of the mystery go away. Having all of that unit tested outside the context of a tiny black box instills a lot of confidence and gives us a lot of metrics to work with.
Can you talk about some of the work you've done for X-Rite, which engineers and manufactures complex color measurement technologies?
Karlesky: Our relationship with them started with their desktop software. They were open to experimenting with Agile and TDD on the embedded side, and asked us to join a pilot project development team. We went in as coaches working side by side with team members. The project was a traditional black box with a significant GUI component, all built from scratch. We had the embedded software challenge plus GUI testing challenges. The same Agile techniques and principles applied; we just found new ways to do them. We created a simulator for the GUI, so we could test and inspect the software rather than flash firmware onto the device. In the service of a good testing philosophy, we had high metrics and code coverage. The project has continued easily without us; they make the changes and update the tests, and it's smooth to maintain, which is what should happen with well-tested products.
So with Agile, do you feel you build software quality into your application development lifecycle?
Karlesky: I'd go one step further than quality and say sanity. There are so many moving parts and potential places for liability that knowing the knowns, and partitioning off the unknowns, is invaluable in developing with quality and some sense of sanity and predictability and measurability. If things are going well and we're responsible in our process, there aren't marathon sessions six weeks past when stuff is supposed to be delivered -- that's a recipe for introducing as many problems as you may have solved.