Right now, a search on ‘electronic voting’ on Google News returns 748 results for the past month. The headlines include phrases like “How to trust,” “Technology is not fullproof” [sic], “Electronic voting machines are fallible,” and the ever-present “Electronic voting machines also caused widespread problems in Florida.” There are many legitimate concerns around electronic voting technology.
At this year’s Conference of the Association for Software Testing (CAST), July 13-16 in Colorado Springs, AST eVoting Special Interest Group (SIG) members Geordie Keitt and Jim Nilius will take a look at the highly visible testing processes around eVoting systems and outline their concerns about how that testing is done today.
“Our talk is about the reasons why an electronic voting system can undergo a rigorous, expensive, careful series of tests and achieve certification, and still be a terrible quality product. The laboratory certification system is not capable of ensuring quality products where the complexity of the system is as poorly represented by its standards as eVoting systems are. We have come up with an interesting model of the certification testing context, which helps us see when the rigors of cert lab testing are appropriate and adaptive and when they are not. We will be asking the conference attendees for help in honing our arguments in preparation for publication. We will be presenting them to National Institute of Standards and Technology (NIST) to suggest changes to the accreditation guidelines for certification labs.”
Geordie Keitt works for ProChain Solutions doing software testing, a career he began in 1995. He’s tested UNIX APIs, MQ Series apps, Windows apps, multimedia training courses, Web marketplaces, a webmart builder IDE, the websites that run bandwidth auctions for the FCC, and now critical chain project scheduling software. Geordie aspires to be a respected and respectful practitioner and teacher of the craft of rapid, exploratory software testing. He is the lead of the AST’s eVoting SIG.
Jim Nilius has over 23 years of experience in software testing, test management and test architecture. His most recent role was as Program Manager and Technical Director of SysTest Labs’ Voting System Test Laboratory, an ISO 17025 test and calibration lab accredited by the US Election Assistance Commission under NIST’s National Voluntary Laboratory Accreditation Program, as mandated by the Help America Vote Act of 2002. The lab performs testing of voting systems for federal certification against the 2005 Voluntary Voting System Guidelines. He is a member of AST’s eVoting SIG.
“Jim has a wealth of experience in this domain,” said Geordie Keitt, “and as the Chair of the AST eVoting SIG I wanted to draw out of him as much knowledge as possible and get it out into the open where we can all look at it and try to detect patterns and learn lessons from it.” Keitt and Nilius did an initial presentation of their work at the second Workshop on Regulated Software Testing (WREST). One of the outcomes from that workshop was a diagram of the challenges facing a sapient testing process in a regulated environment.
“We are bringing an immature argument before a group and asking for their help to toughen it. We are testing the founding principles of regulated testing, which predates software testing by a hundred years and has yet to recognize that new software systems are more complicated than mature ones, instead of the other way around. We need help to hone and tighten our argument for it to be effective.”
For more on the upcoming show, check out the CAST conference website. Also, check out the AST eVoting SIG website to get involved and see what else the group is working on. For more on the work of Geordie Keitt and Jim Nilius, take a look at their outcomes from the second Workshop on Regulated Software Testing.