Quality Time


Avoid software defects by testing code

To prevent software defects, test the code. But before you do, question every assumption behind the software. No question is too trivial to raise.

As a columnist for SoftwareQuality.com, I spend a lot of time thinking about software defects.

But software defects were the last thing on my mind when the FedEx guy delivered my brand new, just-released FitBit Charge to the door.

I had waited patiently for this activity tracker, a replacement for the FitBit Force, which was recalled when some wearers developed a skin allergy to the wristband. And now, device in hand, I was just a simple setup process away from measuring and meeting my 10,000 steps-a-day goal.

But two minutes into the installation and setup, a software defect brought the process to a halt and offered me no way out. So instead of hitting the trail with the tracker on my wrist, I spent the afternoon wondering what the cryptic error messages "invalid height" and "invalid weight" meant. And I questioned whether the software development team behind the FitBit Charge had done any testing.

After a while, I figured out the only way to complete the installation: enter your height in centimeters and weight in kilograms. Even though a drop-down box let me toggle between feet, inches and centimeters -- and between pounds and kilograms -- the software did not support the U.S. system of measurement. That, apparently, was the reason behind the "invalid height" and "invalid weight" error messages.

So there it was: A software defect significant enough to derail the setup and render the activity tracker unusable. To its credit, FitBit was quick to issue a software update that fixed the metric problem -- and within a day or two my dashboard began charting my progress in miles, not kilometers.

Other than a terse "We are aware of this issue" response to my email outlining my experience, I didn't receive any explanation for the software defect. But the experience got me thinking about all the ways this particular defect -- and software defects in general -- could have been prevented if the software team behind the FitBit Charge had tested every assumption behind the software under development.

Here is my take on what went wrong.

First, the team failed to specify a crucial requirement at the outset of the project: The software must support both the U.S. and metric systems of measurement. I don't know how FitBit's software development process works, but I presume no one at the table -- not the business stakeholders, the developers or the testers -- noted this omission. Catching this oversight required neither line-of-business nor technical expertise. It simply required common sense -- and a mindset that questioned all the assumptions embedded within the project.

Second, as work on the project progressed, it appears that developers and testers accepted the requirements they received at face value, and did not question the lack of support for the U.S. system of measurement. Didn't the developers wonder about this before they wrote a line of code -- or wrote the tests that would prove that particular piece of code was working? Did software testers explore what happened when they entered height and weight data in their respective fields? If they had, it wouldn't have taken long to uncover the software defect. Mistakes happen -- but it's far better to catch them before a line of code is written, and certainly before the software is released to production.
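To make the point concrete, here is a minimal sketch -- entirely hypothetical, not FitBit's actual code -- of what unit-aware input validation looks like, along with the kind of simple tests that would have caught the defect. The function names, accepted ranges and conversion factors are my own assumptions for illustration.

```python
# Hypothetical sketch of unit-aware validation. Normalize every entry to
# metric first, then apply one set of range checks -- so U.S. and metric
# inputs both pass through the same logic. Names and ranges are invented.

def to_centimeters(value, unit):
    """Convert a height entry to centimeters ('cm' or 'in')."""
    if unit == "cm":
        return value
    if unit == "in":
        return value * 2.54
    raise ValueError("unsupported height unit: %s" % unit)

def to_kilograms(value, unit):
    """Convert a weight entry to kilograms ('kg' or 'lb')."""
    if unit == "kg":
        return value
    if unit == "lb":
        return value * 0.45359237
    raise ValueError("unsupported weight unit: %s" % unit)

def validate_height(value, unit):
    """Accept heights within a plausible human range (50-250 cm)."""
    return 50 <= to_centimeters(value, unit) <= 250

def validate_weight(value, unit):
    """Accept weights within a plausible human range (20-300 kg)."""
    return 20 <= to_kilograms(value, unit) <= 300

# The checks a U.S. user's first two minutes would have exercised:
assert validate_height(70, "in")      # 5'10" -> 177.8 cm, valid
assert validate_weight(150, "lb")     # 150 lb -> ~68 kg, valid
assert validate_height(178, "cm")     # metric path still works
assert not validate_height(500, "cm") # out-of-range input rejected
```

A metric-only validator that skips the normalization step rejects perfectly reasonable U.S. entries -- which is one plausible way to end up with a bare "invalid height" message. A test suite that feeds each supported unit through every input field would surface the gap immediately.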

Third, it appears that the team failed to conduct user acceptance testing. If they had, they would have seen U.S. users run into trouble a minute or two into the process, when they attempted to enter height and weight data.

As software pros, it's easy to think that the job is to define requirements, write code or test code -- and usually, some of each. But the most important job of all is testing the assumptions that have been made about the software before us.

Send me a note and I'll get back to you -- just as soon as I've reached my 10,000 steps-a-day goal.

This was last published in December 2014


Join the conversation



As a career-long tester, I can say this happens far more often than people want to admit, and yes, it happens at places that have decided they are too cool for testing beyond the automated variety. Automation is a nice option to cover well-trod paths, but it's not very good at questioning fresh assumptions, and it's really not good at making judgment calls. Your example leads me to believe the software was developed -- and, if tested at all, tested -- in a metric environment and deemed good enough. Yeah, that's a big whoops, but if there was no actual requirement saying to test imperial measures, it's not going to enter the mind of someone who doesn't need to consider them. Context is king, and in this case, surroundings inform what people consider at the outset.
Thanks for a great article. I am on hold right now with the FitBit customer support line to deal with this very issue. Glad I have an idea of what is going wrong, so I'll have all of the information when I reach an actual person (been on hold for 13 minutes now).
Good article. I wonder if FitBit buys into “there’s no such thing as bad publicity.” They may be priding themselves on their possibly Agile rapid response to the change. Since the device had “a drop-down box [that] let me toggle between feet, inches and centimeters -- and between pounds and kilograms,” I’d say the pounds-feet feature was well beyond an assumption and should have been caught by even white box tests. As an aside, maybe the apparently-across-the-pond developers were just getting back at us for the many frustrations foreigners experience with US code that commonly isn’t even aware that folks in the rest of the world do some things differently, let alone providing for it (even incorrectly).
I experienced a similar problem with my HTC smartphone camera. During a standard system update, they completely removed the "dual capture" function and replaced it with the "split capture" function (the phone has three cameras). When they changed this function, they also broke some of the connections to editing features and settings, which resulted in horribly poor-quality photos. They released it two weeks before Thanksgiving, and it was not fixed until after Christmas -- prime time for picture-taking for me. I ended up using other cameras and devices that are bulkier but produced better-quality photos. As a customer, I was furious that they removed my favorite setting, "dual capture," because my friends and I loved taking group shots with me (the photographer) in a small inset photo from the selfie camera, so we all felt included in the shot. With some additional research, the store clerk told me the new version will have a feature to crop and drop the photo, but that sounds like a complicated process to me, when previously all I had to do was select my setting and click for a dual-capture shot. They made something simple and easy to use difficult and clumsy. To date, they still have not released the promised editing features.
As an integration analyst and scrum master, I immediately recognized their poor planning and lack of regression testing. Since they made the change in the same version of my smartphone, I think it was a major faux pas from a marketing perspective. I was looking to purchase another HTC camera, similar to the GoPro brand, but after this fiasco, I do not trust them to keep their act together for cohesiveness and integrity.

I think this is a classic example of trying to reach a release deadline by sacrificing quality and customer satisfaction. On my team, if we cannot reach a deadline with group consensus for approval for the acceptance criteria as well as testing (unit testing and regression), we do not release it until it is completed to satisfaction and DONE.
Of course testing the code is crucial, particularly user acceptance testing, where a lot of previously missed defects can be caught before implementation. I am from Europe and I live in the US, so your example of the U.S. and metric systems sounds strange to me, because this would have been the first thing on my mind if I had to build the test cases -- at a minimum, I would have mentioned it in review sessions. In today's global economy it is especially important. I find that, these days, a lot of the required standard testing activities are being skipped or done too quickly in the name of release.
In today's global economy it is unacceptable that companies do not automatically include the metric system in their apps. I am from Europe and I live in the US, so both systems are always on my mind -- no doubt I would have mentioned the missed requirement in a review session, which, by the way, means it would have been caught long before that, at the documentation phase.
Wow, a units-of-measurement error... This happens a lot more often than you'd expect. Man, does it show energy expended in joules rather than calories, I wonder?