I really enjoy doing small projects where I get to put hands on a keyboard and execute tests. Ideally, when I’m doing this, I’m either doing exploratory testing or performance testing. There’s a challenge that comes with this, however. For these small projects to work well for both me and my clients, I need to be really good at estimating my time. If someone asks whether I can test their application for certain risks, I need to be fairly certain of how much testing I’ll need to do to shed light on those risks and how much it’s going to cost them. Often when I provide these types of estimates, I do the following:
- provide the usual disclaimers about what I can and can’t do as a tester (there’s no such thing as complete coverage; just because I test an area of the application doesn’t mean I’ve found all the issues; this is what I mean when I talk about test coverage; etc.)
- provide a scope summary of what I think they’re asking me to do
- provide a range for how long I think it will take (for example, 5 to 10 hours)
- provide a due date for when I commit to having it done
- provide a rate for what it will cost (either hourly, fixed bid, or other – but I’d never price my work by the defect, sorry uTest)
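The effort and cost parts of an estimate like this are simple arithmetic. Here is a minimal sketch for the hourly-rate case; all of the names, numbers, and dates in it are hypothetical examples, not the author’s actual rates or format:

```python
def estimate_cost(low_hours, high_hours, hourly_rate):
    """Return the (low, high) cost range for an hourly-rate engagement."""
    return (low_hours * hourly_rate, high_hours * hourly_rate)

def summarize(scope, low_hours, high_hours, hourly_rate, due_date):
    """Format the key parts of the estimate: scope, effort range, cost range, due date."""
    low_cost, high_cost = estimate_cost(low_hours, high_hours, hourly_rate)
    return (f"Scope: {scope}\n"
            f"Effort: {low_hours}-{high_hours} hours\n"
            f"Cost: ${low_cost:,}-${high_cost:,}\n"
            f"Due: {due_date}")

# Hypothetical engagement: a 5-10 hour exploratory pass at $100/hour.
print(summarize("Exploratory testing of the checkout flow", 5, 10, 100, "end of next week"))
```

The point of the range is honesty: the low end is what you expect if the first pass turns up little, and the high end covers the follow-up you’ll want if it doesn’t.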
If I’m working with a team I’ve worked with in the past, all of this fits in a couple of paragraphs in an email. That’s because they won’t need the disclaimers. If I’m working with someone new, it might be a one- or two-page statement of work.
Here’s why I think this is important…
I see a lot of testers working on sprint teams struggle with estimating their testing for stories. In my mind, it’s no different than the process I just outlined above. Assuming you don’t know in advance which stories will be delivered with other stories, each story is its own little project waiting to happen. Based on the risks involved with that story, you outline your testing scope, provide an estimate (low/high) of what it will take to do your first pass of testing, and provide an idea of costs. (Costs might not be dollars; they might be trade-offs.)
However, it’s my experience that testers can often get overwhelmed by the scope of a sprint. They feel they need to “test everything” – even if they know that’s not possible – so they don’t bother to estimate the work. Or they forget to draw a distinction between testing the individual story and testing the overall release the story will potentially go to production with. This can cause confusion around scope and timing.
In my mind, part of the value of breaking the work up into stories is that you get little chunks of (hopefully) independently testable code. This allows me to more easily estimate my testing work, because I can focus on just those features and risks involved with the individual story. Testing for the overall release would be something I might estimate separately. With that testing, instead of looking at issues with the individual stories, I’d be looking at issues around integration and other overall quality criteria like performance, consistency, security, etc.