Test case prioritization and planning require a balancing act between application quality, customer experience and business needs. Agile test planning and test prioritization are not new processes; they rely on basic organizational skills and apply to both feature and regression testing.
The first step in test case prioritization and planning is deciding which tests to write. For example, if you're testing a new feature, there are various testing methods to employ. Yes, we need to test the acceptance criteria, but we also need to test more than that. I've never received complete acceptance criteria -- sometimes not even a complete sentence, let alone anything near a complete end-user story or workflow. Start with the "happy path" tests that cover just the given acceptance criteria. Next, before investing time in writing additional tests, do some exploratory testing and take notes on what you've done. Find out where the code or workflow breaks down. You'll discover several scenarios the acceptance criteria don't cover. Exploring the application is an excellent way to understand the feature and how it affects the application as a whole. Find the "lost" scenarios, and your end users will thank you -- albeit probably not personally.
Next, write out the test cases you need based on your exploratory testing notes. You now have a more complete test suite thanks to test case prioritization. The next test method to employ is boundary testing. Many application developers skip defining limits and character restrictions unless specifically instructed to do so -- and sometimes even when the acceptance criteria include those specifications. Use exploratory testing techniques again to find fields where you can exceed the character limit and trigger an error, or use negative numeric values and a mix of alpha and numeric characters to exceed reasonable value settings. For example, if I'm testing a healthcare application and can set a medication dosage to 9999.999 tablets, that value is extreme. Unless the pills are candy, no reasonable person would take that dose. Report it as a bug and see what response you get.

Proceed to the next field and repeat. If a field should accept only alpha characters, see how many you can cram in there, and make sure to save -- it's the save action that fails if the database table isn't coded to accept a large value. Then there are special characters, negative numbers and decimal variations. If a field is numeric -- up to three digits -- make sure to test around the boundary. For example, how does the application respond to 012, 000, -111, 999.999 or 1,000? Does it truncate, round or freeze? Make sure to save and attempt to process invalid values to check for error conditions in data storage.
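To see how these boundary checks translate into automated test cases, here is a minimal sketch. The `validate_dosage` function and its one-to-999 range are hypothetical stand-ins for whatever validation the application under test performs; the point is to exercise the edges of the range and the invalid formats mentioned above.

```python
# Hypothetical validator for a numeric dosage field limited to three digits.
# The function name and the 1-999 range are illustrative assumptions,
# not taken from any real application.

def validate_dosage(raw: str) -> bool:
    """Accept only whole-number dosages from 1 to 999."""
    if not raw.isdigit():          # rejects "-111", "999.999", "1,000", alpha
        return False
    if raw != str(int(raw)):       # rejects leading zeros such as "012", "000"
        return False
    return 1 <= int(raw) <= 999

# Boundary and invalid values from the article, plus the range edges themselves
cases = {
    "1": True, "999": True,            # in-range edges
    "0": False, "1000": False,         # just outside the range
    "012": False, "000": False,        # leading zeros
    "-111": False, "999.999": False,   # negative value, decimal value
    "1,000": False, "9999.999": False, # thousands separator, extreme value
}

for value, expected in cases.items():
    assert validate_dosage(value) == expected, f"unexpected result for {value}"
```

The table of cases doubles as documentation of which boundaries were considered, which helps later when you prioritize which of these tests must run in every regression cycle.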
Take the "happy path" acceptance criteria tests and try them literally backwards. For each step, ask: Can the user do it in reverse? Pick the options in the "wrong" order, reverse the workflow, and save or cancel without error. Hit the back button in the browser and see what happens. What happens when you click refresh? How does the application respond to users entering values with shortcut keys? Shortcut keys are popular with end users and often forgotten by software developers and testers.
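A reversed-workflow check can also be automated. The sketch below assumes a simple ordered workflow (the class and step names are invented for illustration) and verifies two things: the happy path succeeds in order, and an out-of-order step is rejected cleanly rather than corrupting state.

```python
# Minimal sketch of a workflow that enforces step order. The class and
# step names are hypothetical, not taken from any real application.

class OrderWorkflow:
    STEPS = ["select_item", "enter_quantity", "confirm", "save"]

    def __init__(self):
        self.completed = []

    def run(self, step: str) -> bool:
        """Allow a step only if it is the next one in the sequence."""
        done = len(self.completed)
        if done >= len(self.STEPS) or step != self.STEPS[done]:
            return False          # reject out-of-order steps; state unchanged
        self.completed.append(step)
        return True

# Happy path: steps in order all succeed
wf = OrderWorkflow()
assert all(wf.run(s) for s in OrderWorkflow.STEPS)

# Backwards path: attempting the final step first should fail cleanly,
# leaving the workflow state untouched
wf = OrderWorkflow()
assert wf.run("save") is False
assert wf.completed == []
```

The same idea applies to browser-level checks -- back, refresh, shortcut keys -- though those typically need a UI automation tool rather than a unit-style test like this one.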
Finally, exercise your test case prioritization and planning skills and write up the test cases you actually need to re-execute in the future. It may be that not all test cases are truly necessary to run again -- or they may all be necessary. As a QA professional, you can determine which tests are needed and, of course, discuss with your team if you have reservations. Once you have the tests written, prioritize them. It doesn't matter which tool you use; you can prioritize test cases in nearly any application. List them in priority order, or set a field to mark each test as critical, high, medium or low priority. Now you have a prioritized test case list to use when planning future regression testing.
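Because the priority field works in nearly any tool, even a plain script can turn it into a regression plan. This sketch (the case names and priorities are made up for illustration) sorts tagged test cases so the plan can be cut off at whatever priority level time allows.

```python
# Tool-agnostic sketch: test cases tagged critical/high/medium/low,
# sorted into a regression plan. All names and priorities are illustrative.

PRIORITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

test_cases = [
    {"name": "shortcut-key entry",      "priority": "medium"},
    {"name": "dosage boundary values",  "priority": "critical"},
    {"name": "browser back/refresh",    "priority": "high"},
    {"name": "cosmetic label check",    "priority": "low"},
]

regression_plan = sorted(test_cases, key=lambda tc: PRIORITY_ORDER[tc["priority"]])
print([tc["name"] for tc in regression_plan])
# Critical first: ['dosage boundary values', 'browser back/refresh',
#                  'shortcut-key entry', 'cosmetic label check']
```

When the regression window shrinks, you run the list top-down and stop where time runs out -- the critical cases are guaranteed to execute first.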
This is the easy, breezy and effective way to get better test case prioritization and planning.