At Agile 2013, academic researchers unveiled new ways to use software testing for healthcare regulatory compliance and systematically map test trends. Professor Laurie Williams, the lead presenter, was given the IEEE Software Agile 2013 Best Research Award for research that can inspire significant positive change in software development. The annual Agile Alliance conference was in Nashville, Tenn., last week.
[Government agencies should] create a regulation that requires the people who write new regulations to write them in terms of acceptance tests.
Scott Ambler, Agile evangelist and consultant
Williams won the Institute of Electrical and Electronics Engineers (IEEE) award for her paper on application development approaches for complying with healthcare regulations. At North Carolina State University (NCSU), Williams is researching how healthcare regulations are like software requirements and could be managed within the behavior-driven development (BDD) framework.
Tackling healthcare regulation compliance
At Agile 2013, Williams described her team's project: creating a single testing scenario for each compliance regulation using straightforward adapters. The researchers examined seven distinct regulations, converting each into requirements statements and then into test scenarios. Ideally, most software development stacks that use acceptance test automation software could adopt this approach. The practice could become a standard for all healthcare software development. Indeed, it could be implemented -- almost at the code level -- without significantly hampering developers' freedom to work with whatever technologies they choose.
Williams reported that her NCSU team has yet to clear one hurdle in the project. The team was unable to write a viable testing scenario for one of the seven regulations. The regulation concerned exceptions for emergency cases but was not specific enough about what does and does not constitute an emergency.
Vague requirements are notoriously difficult to build and test, Williams said. However, legal regulations are often vague in order to skirt contentious political issues that would otherwise stymie the legislative process. There's a chance that this is one area where good coding practice might lead to improvements elsewhere.
The problem has to be tackled by those who write regulations, Williams said. During her session's Q&A, Agile evangelist and consultant Scott Ambler agreed. If government agencies were to "create a regulation that requires the people who write new regulations to write them in terms of acceptance tests," he said, "then there would be a huge [positive] impact on productivity."
Exploring the meanings of software testing
Agile veteran wins Agile 2013 IEEE Research Award
Laurie Williams, the winner of the IEEE Software Agile 2013 Best Research Award, is no stranger to the Agile development community. In 2001, she started the first XP/Agile event, XP Universe, which grew to become the annual Agile Alliance conference, this year's being Agile 2013. She is lead author of Pair Programming Illuminated and co-editor of Extreme Programming Perspectives. In 2011, she received the Association for Computing Machinery's Distinguished Scientist award. She is currently a professor in the computer science department of the College of Engineering at North Carolina State University.
What does the word testing mean to those who are doing it? It depends on which type of developer or project manager you ask, said Theodore Hellman, a Ph.D. student and instructor at the University of Calgary, in his Agile 2013 presentation. Hellman and his team sought to answer the question by researching the titles, abstracts and keywords from papers published across three similar Agile conferences: Agile 2012, XP2012, and XP/Agile Universe.
Hellman found that in Agile, "testing" tends to mean (in order of most likely to least) test-driven development, automated testing, acceptance testing, unit testing, regression testing, refactoring, specifications, Web testing, BDD and GUI testing.
The researchers were surprised that mutation testing and other higher-level types of testing used in scientific and educational settings did not make the list. Hellman concluded that academic researchers are "not cross-pollinating with the broader testing community to the extent that we could."
Hellman's team discovered that some keywords in conference presentations remain constant, while others spike and then fade away. One broader trend was an increase in the number of academic papers published for Agile conferences and a decrease in those for XP conferences. The team concluded that the difference arises because Agile conferences tend to have more people-focused content, while XP conferences focus more on technical aspects; people-focused papers are also probably easier to write. Another conclusion is that many researchers "publish once and then leave the discussion," Hellman explained.
There's also a great need for follow-up research reports. "We're not getting the whole story around those testing efforts," Hellman said. There's not enough value gained from getting one snapshot of what one team tried at one point in time. "We don't see how that turned out, and we don't see what other ways they tried later," he said. "One thing the academic community can do to make ourselves more valuable is to try and get those people back to write more research."