Earlier this month, the New York Times ran an article on a report criticizing the F.D.A. on device testing. The article indicates that one of the leading causes of poor testing is manufacturers claiming that new devices are similar to devices already on the market.
The article also points out that the F.D.A. has failed to update its rules for Class III devices for a while now. As near as I can tell, the software (the part I care about) is in those Class III devices. For those not up on their F.D.A. history, the article has a great tidbit that I found very interesting.
Created in 1976, the F.D.A.’s process for approving devices divides the products into three classes and three levels of scrutiny. Tongue depressors, reading glasses, forceps and similar products are called Class I devices and are largely exempt from agency reviews. Mercury thermometers are Class II devices, and most get quick reviews. Class III devices include pacemakers and replacement heart valves.
Congress initially allowed many of the Class III products to receive perfunctory reviews if they were determined to be nearly identical to devices already on the market in 1976 when the rules were changed. But the original legislation and a companion law enacted in 1990 instructed the agency to write rules that would set firm deadlines for when all Class III devices would have to undergo rigorous testing before being approved.
The agency laid out a plan in 1995 to write those rules but never followed through, the accountability office found. The result is that most Class III devices are still approved with minimal testing.
I only found the article because I happened to see a letter to the editor from Stephen J. Ubl, the president and chief executive of the Advanced Medical Technology Association. The letter caught my eye because in May I'll be facilitating the second Workshop on Regulated Software Testing. His comment about the "extensive review of specifications and performance-testing information" is exactly the type of stuff I want to see at the workshop.
Regulated device/software testing is a difficult thing to do. For those who want to focus on the testing, there's already a lot of process, and it can distract from actually doing the testing. For those who want to make sure the process is followed and that all the right testing is taking place, the focus is on the process and the evidence. Figuring out that balance is always hard, whether you're the F.D.A. or the company developing the product.