I'm going to assume that, by debugging, you mean stepping through code to find the source of a bug. This can definitely be an expensive activity, and there are ways to keep it from happening -- but the root of the solution is preventing defects in the first place. Not many engineers (testers especially, but engineers in general) think of themselves as defect prevention experts, yet this is where I expect myself and my engineering teams to be. Our job as a unified team is to think about the patterns, practices and processes which reduce the defect introduction rate -- because a defect not written is GUARANTEED to be a defect not shipped!
A reactive approach to reducing your debugging time is to ensure you have sufficient logging enabled. Some tools I've worked with have only been debuggable through print-statement debugging or execution-time logging. More robust logging provides visibility in the form of logs, sometimes showing you what you need without requiring a debugging session at all. A couple of things to consider: make your logging levels switchable at run time, so you can move from error-only logging up through assertions to deep diagnostic logging without redeploying. Set your default logging level to the least verbose setting -- this adheres to Microsoft's "secure in deployment" recommendation. Deeper logging often reveals information assets such as user names and other object-level values, so defaulting to the least verbose level reduces the risk of accidental disclosure in production. And make sure your logging is in an easy-to-read format.
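As a sketch of what run-time switchable logging can look like, here is a minimal example using Python's standard logging module. The logger name and the `configure_logging` helper are illustrative, not from the original answer:

```python
import logging

logger = logging.getLogger("myapp")

def configure_logging(level_name: str = "ERROR") -> None:
    """Default to the least verbose level; flip it at run time as needed."""
    handler = logging.StreamHandler()
    # An easy-to-read format: timestamp, severity, logger name, message.
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger.addHandler(handler)
    logger.setLevel(getattr(logging, level_name))

configure_logging("ERROR")       # production default: errors only
logger.setLevel(logging.DEBUG)   # switched at run time to investigate a defect
logger.debug("order id=%s state=%s", 42, "pending")
```

In a real system the level switch would typically be driven by a config file, an environment variable, or an admin endpoint rather than a direct call, so the deployed default stays least-verbose.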
One proactive way to look at the solution is, rather than looking for shortcuts around debugging, to look for the shortcuts which are producing defects. Are your developers writing solid unit tests? You mention test-driven development (TDD), which can be a good step toward reducing defects, depending on how seriously the development team takes it. Basically, any activity which focuses on defect detection and defect prevention will, if done right, result in fewer defects and less debugging. Any time the engineering team gets lazy and takes shortcuts, defects will result. It's the same as a machinist who gets lax on the job and turns out parts which are just slightly off. Engineering takes discipline and consistency!
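To make the unit-testing point concrete, here is a tiny, hypothetical example: the function `discount_price` and its test class are invented for illustration, but they show the shape of a "solid" unit test that pins down both the happy path and the error path before a defect can slip through:

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount; reject nonsensical percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_rejects_bad_percent(self):
        # Defensive-coding behavior is part of the contract, so test it too.
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

# Run the tests programmatically rather than via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscountPrice)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Under TDD these tests would exist before the function body, so a regression shows up as a failing test in seconds instead of a debugging session later.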
The next thing to consider is configuration management. Today's J2EE and .NET environments are complex and evolve rapidly, and one of the key sources of defects in these environments is poor change management -- a piece of code might function in the development sandbox but fail to even boot (or, worse yet, show infrequent, random, unexpected behavior) in the test bed. This is often caused by poor configuration management. Some configuration management issues need to be fixed at the development level -- for instance, environment-specific values hard-coded in source should be moved into configuration settings. In other cases, automated configuration scripts can be beneficial: by automating your configuration, you guarantee the same settings are applied in every environment. Manage your configuration scripts just like source code -- keep them in a repository, check them in when finished, and comment them. Automated configuration reduces variables -- it reduces both the defect rate and the "surface area" for debugging, should a defect be discovered.
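A minimal sketch of moving hard-coded environment values into configuration, assuming environment variables as the mechanism (the `DB_HOST`, `DB_PORT`, and `LOG_LEVEL` names are illustrative):

```python
import os

def load_settings() -> dict:
    """Read per-environment settings instead of hard-coding them in source.

    The same code then runs unchanged in the sandbox, the test bed, and
    production; only the environment differs.
    """
    return {
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "db_port": int(os.environ.get("DB_PORT", "5432")),
        "log_level": os.environ.get("LOG_LEVEL", "ERROR"),
    }

settings = load_settings()
```

The defaults here stand in for a checked-in, commented configuration file; the point is that promoting code between environments changes configuration data, never source.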
Also, consider your data. With data, one of two things tends to cause defects: a development environment without production-like data, which lets defects slip through development only to be caught later (hopefully in test, not production); or a pile of generated data which triggers false defects because it was poorly modeled. I generally like to see small databases freshly created at the unit and functional test level, and production-influenced databases (if not the production database itself) at the system and customer acceptance test level. Those small, clean databases reduce the likelihood of spurious defects and allow development to focus on delivering customer stories. Often, however, a clean database in the development sandbox does not accurately reflect what's out in production. Modeling and deploying a production-like database at the dev level takes investigation, but it lets you capture the most common data structures without keeping a production database around (and without the security issues potentially associated with having production data in your development environments).
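The "small database freshly created at the unit test level" idea can be sketched with an in-memory SQLite database; the schema and seed rows below are invented for illustration:

```python
import sqlite3

def fresh_test_db() -> sqlite3.Connection:
    """Build a tiny, clean database from scratch for each test run.

    Because it is created fresh every time, no stale state from a previous
    run can produce a false defect.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany(
        "INSERT INTO customers (name) VALUES (?)",
        [("Alice",), ("Bob",)],  # small, well-understood seed data
    )
    conn.commit()
    return conn

db = fresh_test_db()
count = db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
```

At the system and acceptance levels you would swap this fixture for a production-influenced dataset, keeping the creation script itself under version control like any other source.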
Finally, what does your code look like? In agile and waterfall alike, your code is your heartbeat; you live and die by it. If your code is "spaghetti code" (weaving and winding all over the place) and difficult to read, not only will it likely have a higher defect rate per KLOC, but it'll also take forever to debug. If developers aren't commenting code sufficiently, even the cleanest code can be a challenge for someone to debug. As engineers, we need to consider our legacy -- and that legacy is printed in our code. Good teams will take time to comment code clearly. Good teams will also refactor when code trends toward unreadable. I have seen testers post and stand by defects related to code complexity, code readability and code commenting. And I support them as they do this -- in most cases, it makes more sense to take a little extra time and write readable, maintainable code during implementation than to breeze through the code and pick up the pieces later on. The refactor is easier when the code is fresh in the mind -- if you have to return to code you authored nine months ago, well, your ability to read it will be challenged!
So consider your debugging in terms of reactive approaches (increased logging) as well as proactive defect-prevention. The steps you take now to cut down your defect rate will reduce the time you spend debugging and will produce more elegant code.