When properly implemented, unit testing helps development groups deliver better applications faster, which in turn gives the organization a competitive advantage. Yet, surprisingly few organizations have even tried to implement unit testing, and just a fraction of those have actually succeeded in standardizing it organization-wide. Why? Typically it's because software developers and management are fed misinformation about what unit testing really involves and what's needed to make it a sustainable process.
Here are five of the top myths that prevent software development teams from reaping the rewards of this powerful software verification method:
Myth #1: We're already doing unit testing
Different people mean different things when they say "unit testing," but most industry experts agree that unit testing involves testing the units of code that make up the application foundation. In other words, it's done at the API level. Some teams claim to be doing unit testing, but they are actually doing something different: system test or what is sometimes called "dev test." Others develop some API-level unit tests but have not committed to making unit testing an integral part of their development process.
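To make the distinction concrete, here is a minimal sketch of an API-level unit test in Python's standard `unittest` framework. The function under test, `apply_discount`, is a hypothetical example invented for illustration; the point is that each test exercises one unit directly through its API, with no UI, database, or running system involved.

```python
import unittest

# Hypothetical unit under test: a small API-level function,
# not the full application (illustrative example only).
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; rejects invalid inputs."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid price or discount")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test calls the unit directly at the API level.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

A system or "dev" test, by contrast, would exercise `apply_discount` only indirectly, through the assembled application, which is exactly what makes failures harder to localize.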
Unless the process of developing and maintaining API-level unit tests is ingrained into the team's development process, unit testing efforts will eventually decay as schedule and budget pressures emerge, policies and projects evolve, and employees come and go.
The few organizations that enjoy long-term success are those that make unit testing part of their daily workflow. That's why it's important to implement unit testing in a way that leverages automation to make testing as thorough, painless and efficient as possible, while also addressing the workflow details essential to a sustainable, scalable quality process -- such as routing each reported issue directly to the responsible developer and letting managers update the test configurations on hundreds of developer desktops with one click.
Myth #2: Automation isn't really helpful
Many developers think that unless they personally write a unit test, it's not at all valuable. That's simply not true. Test tools are getting better and better as a result of improved test generation heuristics and algorithms. With even a basic level of automation, you can almost instantly create thousands of tests that you never would have created otherwise. That's a win right out of the box.
In addition to giving you tests and possibly finding defects "for free," automation enables you to concentrate on the most important, complex and comprehensive tests: the tests that actually require your specific expertise.
The high level of automation provided by currently available products relieves the team from a significant amount of work that would otherwise be quite difficult and time-consuming. Without this assistance, unit testing would consume substantially more team resources, and it would be a lot easier to dismiss unit testing as a great idea in theory but not something that's really feasible in practice.
Myth #3: We just need to purchase and install a good unit testing tool
I've seen too many teams purchase a unit testing tool thinking that this was the panacea for accomplishing a goal or satisfying a mandate. They tried to use the tool out of the box, without customizing it or embedding it into their workflow. Not surprisingly, they didn't end up achieving the desired results. They thought that unit testing was failing them, but they were actually failing to perform meaningful unit testing -- they were just paying it lip service.
A unit testing tool is not a silver bullet. Rather, it's a starting point. Developers need more than a tool; they also need the appropriate discipline, training, supporting infrastructure and workflow. If you really want the tool to become part of your process, you need to make a conscious commitment to try the tool, determine how it will work in your specific environment, and then ensure that your recommended "usage blueprint" is both understood and applied. There are a lot of tools out there, but unless you buy one that your team is really going to use -- one that's deployable, extensible and scales to the organization -- then you'll end up with "shelfware."
Myth #4: We got 75% coverage automatically -- we're done
Some people think that if an automated tool generates tests that achieve 75% coverage, they can tell their boss that they performed unit testing. This is absolutely false. Generated tests are a great start on the process of unit testing, but coverage alone doesn't mean you've unit tested. You're not done -- you've only just begun. You still need to verify the specific requirements of the software you're building.
Automated test generation is helpful, but you need to map the tests directly to requirements. To do this, examine and understand the tests the tool generated as a starting point, then extend them so they verify what the software is actually required to do.
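The difference can be sketched as follows. This is a hypothetical example (the `shipping_cost` function and the pricing rule are invented for illustration): a generated test typically just records whatever the code currently returns, while a requirement-driven test encodes the rule the code is supposed to implement.

```python
import unittest

# Hypothetical unit: shipping cost per a made-up business rule --
# "orders at or under 1 kg ship for a flat $5; each extra kg adds $2."
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

class GeneratedTest(unittest.TestCase):
    # What a tool might generate: it asserts the observed behavior,
    # which only proves the code does what the code does.
    def test_shipping_cost_recorded(self):
        self.assertEqual(shipping_cost(3.0), 9.0)

class RequirementTest(unittest.TestCase):
    # Extended by a developer to verify the stated requirement,
    # including its boundary and error cases.
    def test_flat_rate_up_to_one_kg(self):
        self.assertEqual(shipping_cost(0.5), 5.0)
        self.assertEqual(shipping_cost(1.0), 5.0)

    def test_surcharge_above_one_kg(self):
        self.assertEqual(shipping_cost(2.0), 7.0)

    def test_rejects_non_positive_weight(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

If the pricing rule were implemented incorrectly, the generated test would still pass (it asserts the buggy output), while the requirement-driven tests would fail -- which is exactly the added value you contribute on top of the tool.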
Most automated testing tools provide a palette of tools to help you extend the automatically generated tests. For instance, Parasoft Jtest provides an object repository, stubs, test case parameterization and Tracer (which enables you to generate functional test cases by simply running use case scenarios on the working application).
Myth #5: It's not worth the effort
Anyone considering unit testing should first realize that it's not going to be easy. But that doesn't mean that it's not worth the effort.
Unit testing does have a barrier to entry: The team needs to learn what unit testing is, how to unit test, what to unit test, and how to use the tools that facilitate it. If the group really isn't passionate about it, doesn't understand it, or doesn't have the time to get started with it, it probably won't be driven enough to take the plunge. The group might know it needs to be done, but it continues working around the problem rather than investing the time and effort in moving forward. This really boils down to understanding the value of unit testing, committing to quality, and accepting the additional time it will add to the project schedule.
So what makes unit testing worth all this effort? The great benefit of unit testing is that the earlier you catch problems, the fewer compound errors you end up with. By compound errors, I mean errors that don't immediately break anything, but get buried deeper and deeper in the API and eventually become part of a combination that produces a visible failure. When this happens, it's very difficult to diagnose the source of the problem. And by the time you do, many new layers of code typically depend on the API as it behaves with the defect in place.
If you did proper unit testing, making sure that the requirements in the code were enforced by unit tests, you would have found the problem much earlier and more efficiently. If you find the problem early, the defect will never be checked into the code base. That means you won't have to find and fix it later -- when it's exponentially more difficult, costly and time-consuming to do so.
From what I've seen, developers who have really adopted a sustainable unit testing process not only end up writing better code, but also become more productive. Why? Because they're not constantly chasing after bugs and rewriting the same code over and over again. Any organization that can truly embrace unit testing and make it standard practice for all development projects will significantly increase quality, reduce defects found both later in the development cycle and in the field, and consequently gain a huge competitive advantage.
About the author: Andrew Chessin is technical leader at Cisco Systems Inc.