
Testing Election Software Effectively
By John Washburn, VoteTrustUSA Voting Technology Task Force   
October 27, 2006

This voting system testing program was first posted here on February 2, 2006.

 

A Proposal for Effective Testing of Election Software

Last month a mock election in Leon County was run exactly as it should be, with all proper policies and procedures followed. Contrary to the claims of the vendor, the election results provided by the software administering the election were incorrect, and the manipulation was undetectable except through the most extraordinary of means.

This comes as a surprise only to those who have not been paying attention. For more than a decade and a half, citizen activists, investigative reporters, and computer scientists have been reporting on the inherent risks presented by electronic voting, whether through malice or mistake. (See "Decades of Concern" below.)

Every revelation of a security defect, demonstrated or speculated, has been met with one of four responses from vendors:

1) If there were such a problem it would have been discovered during the federal testing.
2) Well, that is the other vendor’s equipment. It does not apply to our equipment.
3) Well, that was a bug, but it is fixed in our latest product offering.
4) Well, that is a problem, but it could not occur under circumstances found in a real election where proper policies and procedures are followed.
What set the demonstration in Leon County apart was the fact that the test was specifically designed to meet and counter each of these responses. This attention to detail is described in a first-hand account of the third iteration of the test. Another distinctive feature of the Leon County test is the persistence of a lone election official. The much-publicized testing done on December 13, 2005 was actually the third time this test was performed. The prior two runs were in May and June of 2005, and the full report was distributed on July 4, 2005 to election officials across the country. In response to the July 4 report, Diebold repeated stock responses 1, 3, and 4 as late as an October 17th meeting of the Cuyahoga County Board of Elections (see page 135, line 4 through page 136, line 20 of the transcript). Diebold later admitted, on January 3, 2006, to the Secretary of the Commonwealth of Pennsylvania that the responses given during the October 17th meeting were indeed unfounded.

It is time to recognize that the vendor-funded testing efforts performed under the auspices of the National Association of State Election Directors (NASED) have produced software testing results as reliable as the research performed by the Tobacco Institute on the effects of smoking. It is time to consider a proper framework for certification. Ten years which could have been spent testing election systems effectively have been wasted because of the current framework.

How has this time been wasted? The current framework for testing election systems is compartmentalized, improperly proprietary, and most of all ineffective.

Testing results are compartmentalized because each state or county board of elections has worked largely in isolation. Some states, such as California, Ohio, Maryland, and Pennsylvania, have hired consultants to test election systems from a variety of vendors. In every case, such investigation has revealed deep, election-altering defects in the software regardless of vendor. However, such reports have either not been published in a timely fashion (as in the case of Ohio) or been published only on the website of the Secretary of State. The vendor thus becomes the only conduit through which election administrators in other states are informed of any testing results. Not surprisingly, only positive test results have had legs; all negative testing results are conveniently omitted.

Testing results are improperly considered proprietary because trade secret protections do not apply to the testing results of election software. A "trade secret" is defined by the Uniform Trade Secrets Act as follows:
"Trade secret" means information, including a formula, pattern, compilation, program device, method, technique, or process, that:
(i) derives independent economic value, actual or potential, from not being generally known to, and not being readily ascertainable by proper means by, other persons who can obtain economic value from its disclosure or use, and
(ii) is the subject of efforts that are reasonable under the circumstances to maintain its secrecy.

This definition is incorporated into the statutes of most, if not all, states. (For example, the above definition can be found in Wisconsin state statute at Wis. Stat. §134.90.) Testing results for election machinery fail to meet the definition's second requirement.

The system of Government long established by the American People, and to which the American People have become accustomed over the course of more than two centuries, provides one and only one mechanism by which the consent of the governed is transferred to the Government so that the Government may derive its just powers from that consent. This single mechanism is the ballot box. The Consent of the Governed is a self-evident requirement of all Governments, not just the Government of the American People. Because of this fundamental primacy, no process which may introduce doubt, confusion, distrust, or opacity into this vital transfer of consent from the Governed to a Government can be acceptable or reasonable.

No aspect of how public officials, using public monies, administer public elections on behalf of the voting public to elect candidates for public office should be private, nor can any aspect thereof reasonably be kept secret from the public. This includes all aspects of any election system used to aid in the administration of a public election. Hiding information about whether a system meets the requirements of merchantability or fitness for use is not acceptable, since both the voting public and election officials need that information to determine whether an election system conforms to state election law. If precise information regarding the merchantability or fitness for use of an election system is unavailable to the public, there can be no confidence on the part of the people in the transmission of consent from "The Governed" to the Government.

It is not reasonable to keep the testing results of election systems secret under any circumstances; therefore, there is no trade secret property interest in testing results for election systems.

Finally, testing to date has been demonstrably ineffective. The defect demonstrated in Leon County has been present in all precinct scanners and touch screens from Diebold Election Systems, Inc. (DESI, formerly Global Election Management Systems) for more than 10 years, and it was not discovered during any of the 13 rounds of testing documented by NASED. If the defect in the DESI/GEMS systems has been missed repeatedly and over time, it is reasonable to ask what other defects in these and other systems have also gone undetected by this ineffectual testing.

A Better Framework

A new testing framework which is public, distributed, and effective is required. Because of the years already lost, any new framework must also be very efficient. Eric Lazarus of DecisionSmith proposed just such a framework at the November 2005 Voting System Testing Summit. Many details of how a consortium of states would administer and conduct the testing of election systems were left open due to the time constraints of the Summit. These details need consideration immediately, so I propose the following:

 

A consortium of states should:

1) Buy vendors' systems,
2) Make such systems available for testing,
3) Record all testing results in full video and sound,
4) Accept and archive written reports of all testing results: positive, negative and null, and
5) Distribute all testing results to any who ask for them.
Under the proposed framework, the consortium would provide two capital assets. The first would be an inventory of complete election systems. For 70 systems (the 50 current systems plus 40% for growth) at an average acquisition cost of $25,000, this would represent a capital expenditure of $1.75 million. The second capital asset would be a recording studio, modeled on a usability test lab, which would be used to record system testing. At a minimum, such a testing studio would consist of a single room in which:
1) a camera looks from the monitor toward the user,
2) a camera looks directly down at the keyboard,
3) a camera looks directly down at the mouse,
4) a camera looks at any other equipment,
5) sufficient other cameras record the general activity in the room,
6) the room is wired for sound,
7) all persons in the room are wired for sound as well, and
8) to the greatest extent practicable, all equipment is instrumented to create as complete an electronic log of events as possible.

 

Such detailed recordings provide an objective and complete record of what was or was not done during the testing. The cost of constructing such a room is on the order of $1 million, for a total capital outlay of nearly $3 million.
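As a check on these figures, here is a minimal sketch of the arithmetic in Python; every input is a number stated in the proposal above, and nothing else is assumed:

```python
# Sketch of the proposed capital-outlay arithmetic. All inputs are the
# figures stated in the proposal above.
current_systems = 50        # the 50 current systems
growth_factor = 0.40        # plus 40% headroom for growth
unit_cost = 25_000          # average acquisition cost per system, in dollars
studio_cost = 1_000_000     # estimated cost to build the recording studio

inventory_size = round(current_systems * (1 + growth_factor))  # 70 systems
inventory_cost = inventory_size * unit_cost                    # $1,750,000
total_outlay = inventory_cost + studio_cost                    # $2,750,000

print(f"Systems to acquire: {inventory_size}")
print(f"Inventory cost:     ${inventory_cost:,}")
print(f"Total capital:      ${total_outlay:,}")
```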

Any group granted access to an election system would be required to create a detailed written report and a recording in the room described above. The consortium would then transcribe, catalog, and archive all test results and, for a nominal fee, distribute them to any who request the information.
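To make the cataloging requirement concrete, here is a minimal sketch of the kind of record the consortium might archive for each session; the field names and schema are hypothetical illustrations, not a prescribed design:

```python
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    POSITIVE = "positive"  # the test found the defect or behavior it sought
    NEGATIVE = "negative"  # the test ran to completion and found nothing
    NULL = "null"          # the test could not be executed as designed

@dataclass
class TestRecord:
    """One archived test session: the written report plus its studio recording."""
    system_id: str       # which election system in the inventory was tested
    tester: str          # the person or group granted access
    session_date: str    # date of the recorded session
    outcome: Outcome     # positive, negative, or null -- all three are archived
    report_path: str     # the transcribed and cataloged written report
    recording_paths: list = field(default_factory=list)  # video, audio, event logs

    def is_distributable(self) -> bool:
        # Under the proposal, every result goes to any requester for a
        # nominal fee; nothing is withheld as proprietary.
        return True
```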

Who gets to do what testing on which system when? 

 

The Help America Vote Act (HAVA) deadline of the first federal election after January 1, 2006 leaves little time to find the defects left undiscovered by the current Tobacco Institute model. Even if the deadline is postponed to the November 2006 General Election, as proposed in Rep. Fitzpatrick's bill, there is much testing to do and little time in which to execute it. Time is needed to grant access to election equipment, to assemble the necessary talent, to execute proposed tests, and to disseminate results. One option would be for the consortium to appoint a director or committee to review the various testing proposals and determine which merit testing. I disagree with this allocation model.

Human societies have to date devised only three methods to allocate scarce resources. They are:

1) Power. The person with the most guns or political clout gets the resource. This is also known as rationing.
2) Time. Stand in line and wait in a queue; the first in line is the first served.
3) Money. The highest bidder is granted access to the resource.
Allocation by political clout is problematic because effective testing by politically inconvenient groups or testing which may yield politically inconvenient results will not be pursued. I argue that it is just such politically inconvenient testing which most needs to be executed in the limited time remaining. Allocation by queue with prioritization is allocation by political clout under another name. However, allocation by queue without prioritization runs the real risk of allowing ineffective or inefficient tests to be executed before more effective or more efficient tests. The goal of executing a software test is to gather information which currently does not exist. The amount of novel information a software test gathers in a single execution is a measure of the effectiveness and efficiency of the software test. In the limited time remaining, software testing must be both effective (learn something new) and efficient (learn a lot).
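As an illustration only, here is one way the "novel information per execution" criterion could be made operational; the scoring model, the sample proposals, and their numbers are my own simplifying assumptions, not part of any actual proposal:

```python
# Hypothetical ranking of proposed tests by expected novel information per
# session. The proposals and numbers below are illustrative assumptions only.
proposals = [
    # (test name, expected new findings, sessions required)
    ("Replay of memory-card manipulation", 3.0, 1),
    ("Repeat of standard logic-and-accuracy test", 0.2, 2),
    ("Tampering with modem transmission of results", 1.5, 1),
]

def score(proposal):
    name, expected_new_findings, sessions = proposal
    return expected_new_findings / sessions  # novel information per session

# Effective tests (expected to learn something new) and efficient tests
# (expected to learn a lot per session) sort to the top.
for name, *_ in sorted(proposals, key=score, reverse=True):
    print(name)
```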

Auctioning access to the equipment has several benefits. First, the consortium can recoup some or all of its capital outlay for the equipment and recording studio. Second, since the payment for access is non-refundable, those persons or groups who most believe their testing will succeed are the likeliest to bid the highest; since the failure to produce the desired results must be recorded and reported, those confident of success will bid higher. This effectively prioritizes the testing in a way that favors tests which are both effective and efficient. Third, auctioning is unbiased against testing by politically disfavored groups and against tests designed to discover politically inconvenient facts, because auctioning is blind to respectability, perceived lack thereof, or current anonymity.

Auctioning is not without its problems. A vendor could "buy" all the access to its own equipment and thereby prevent effective and efficient testing of its machinery. The system is also biased against testers of limited means. Since a vendor has unlimited access to all versions of its own equipment, it is no hardship to bar vendors from renting equipment from the consortium.
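A minimal sketch of the allocation rule just described, assuming a single-round sealed-bid auction (the format and all names below are illustrative assumptions): the system's vendor is barred, and the highest remaining bidders win the available sessions.

```python
def allocate_sessions(bids, sessions_available, vendor):
    """Allocate scarce testing sessions on one system by sealed-bid auction.

    bids: list of (bidder, amount) pairs; amounts are non-refundable.
    sessions_available: how many sessions the consortium can schedule.
    vendor: the system's own vendor, barred from bidding so it cannot
            buy up all access to its own equipment.
    """
    eligible = [(bidder, amount) for bidder, amount in bids if bidder != vendor]
    # Highest bidders first: those most confident their test will succeed
    # are, by hypothesis, willing to risk the most non-refundable money.
    eligible.sort(key=lambda bid: bid[1], reverse=True)
    return eligible[:sessions_available]

# Example: three groups and the vendor bid on two available sessions.
winners = allocate_sessions(
    bids=[("Vendor X", 50_000), ("University lab", 12_000),
          ("Citizen group", 9_000), ("Consultant", 4_000)],
    sessions_available=2,
    vendor="Vendor X",
)
print(winners)  # [('University lab', 12000), ('Citizen group', 9000)]
```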

The current process of testing and certifying election systems has proven ineffective and needs to be dramatically improved. Proposals like the one discussed in this article should be taken into consideration. Democracy demands that action be taken quickly to restore confidence in the election process. Time and tide wait for no election system, and with the democratic underpinnings of the Republic at stake, ignorance is not bliss.


Decades of Concern: A Wildly Incomplete List

Communications of the ACM Risks Forum, 1986

VoteScam: The Stealing of America, by James and Kenneth Collier, published in 1992, documented automated fraud from 1980 to 1989.

The writings of Rebecca Mercuri, Ph.D., collected at notablesoftware, 1995-present.

Beverly Harris of BlackBoxVoting.org, 2000-present.

Roxanne Jekot of CountTheVote.org, 2001-present.

The Johns Hopkins University Information Security Institute Technical Report TR-2003-19, by Kohno, Rubin, Stubblefield, and Wallach, July 23, 2003.

Research by the Secretary of State of Ohio, some of which is found here:
http://www.sos.state.oh.us/sos/hava/compuware112103.pdf
http://www.sos.state.oh.us/sos/hava/ess110405.pdf

Research by two Secretaries of State of California, some of which is found here:
http://www.ss.ca.gov/elections/090904_2_consultant_redact.pdf
http://www.ss.ca.gov/elections/ks_dre_papers/attachmentaa.doc

A Deeper Look: Rebutting Shamos on e-Voting, by Ronald E. Crane, J.D., B.S.C.S., Arthur M. Keller, Ph.D., Edward Cherlin, and David Mertz, Ph.D., December 2005.