Electronic Vote Counting: Where Did We Go Wrong?
By Howard Stanislevic, VoteTrustUSA E-Voter Education Project
November 21, 2006
Recent discussions with other researchers in the election auditing field led me to a paper published over 30 years ago that shows just how "far" we've come in ensuring the integrity of our elections.
In 1975, while working for the National Bureau of Standards (NBS), Roy G. Saltman authored a paper entitled "Effective Use of Computing Technology in Vote-Tallying." In Appendix B of that seminal work, "Mathematical Considerations and Implications in the Selection of Recount Quantities," Saltman described a methodology for determining sample sizes to be used in independent audits of electronic vote counting systems, including optical scanners, punch card readers, and touchscreen or pushbutton direct recording electronic (DRE) voting machines with voter-verified paper audit trails.
Among other things, Saltman reported that regulations covering recounts in different states varied, as they do today. Some typical recount regulations were:
(1) a manual recount could be demanded by any candidate willing to pay for it;
(2) a full manual recount was automatic if the candidates' totals differed by a very small percentage of the vote; and
(3) a fixed percentage of precincts was manually recounted regardless of the apparent vote separation (margin) between the candidates.
Saltman also pointed out that the then-current law in the state of California, which called for a fixed audit of only 1% of the state's precincts, was inadequate, stating that "recount percentages should increase as the opposing vote totals approach equality." In other words, narrower reported margins of victory require larger audits. Yet the 1% law is still on the books today and, until recently, did not even include absentee ballots!
Federal legislation proposing a 2% audit has also been introduced, and some states have other fixed percentages in their auditing laws and regulations. None of these audits may be adequate to confirm the outcome of all races on the ballot, and some of them may actually be larger than necessary. The idea that they are better than nothing, which seems to be their only justification, is not particularly reassuring when the integrity of our democracy is at stake. Such laws should specify a probability of miscount detection or confidence level -- not a fixed auditing percentage. For example, a confidence level of 99% means there would be a 99% probability of detecting any miscount large enough to alter the outcome of a race.
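To see how little assurance a fixed percentage can provide, here is a minimal Python sketch of the detection probability of a uniform random precinct audit, computed from the hypergeometric distribution. The jurisdiction size and number of corrupted precincts below are hypothetical, chosen only for illustration:

```python
from math import comb

def detection_probability(total_precincts, corrupt_precincts, audited_precincts):
    """Chance that a uniform random audit of `audited_precincts` precincts
    lands on at least one of `corrupt_precincts` bad ones (hypergeometric)."""
    clean = total_precincts - corrupt_precincts
    p_miss = comb(clean, audited_precincts) / comb(total_precincts, audited_precincts)
    return 1.0 - p_miss

# Hypothetical jurisdiction: 10,000 precincts, a fixed 1% audit (100 precincts),
# and an outcome-changing miscount confined to just 5 precincts.
print(round(detection_probability(10_000, 5, 100), 3))  # prints 0.049
```

In this scenario a fixed 1% audit catches the miscount only about 5% of the time; a law specifying a 99% confidence level would instead force the sample to grow until the detection probability reaches 0.99.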
Saltman was not only concerned with the mathematics of the auditing problem in 1975. He went on to say, "The selection of some precincts for recounting should be granted to candidates." This is a far cry from the "sore loser" status accorded candidates who question election results today despite the lack of meaningful audits (even where possible) and the absence of observable independent verification processes inherent in many electronic vote counting systems. According to Saltman, "Candidates' supporters and precinct workers are those persons most likely to have the keenest sense that a possible discrepancy exists." Few would disagree, but there still has to be something to audit to prove that such discrepancies exist; a hunch is not much to go on.
Consistent with the above, Saltman also introduced a notion called the "maximum level of undetectability by observation." I called this same parameter "within-precinct miscount" in a recently published paper, "Random Auditing of E-Voting Systems: How Much is Enough?" But like some others working on the auditing problem today, I had no idea that Saltman had come up with essentially the same concept so long ago. What it means in plain English is that there is a limit to the amount of vote shifting that could take place in a precinct without arousing the suspicion of the folks on the ground. This actually limits the necessary size of the sample to be used for the audit without compromising its effectiveness, as long as anomalous precincts can be recounted in addition to those selected by the random audit, as suggested above. For example, an assumption that no more than 20% of the total votes in one machine or precinct could be switched from one candidate to another without anyone noticing (or another value based on historical election data) could be used in determining the sample size of the audit.
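The way such a bound limits the required sample can be sketched as follows. This is a simplified model under stated assumptions -- equal-size precincts, a hypothetical 400-precinct county with a 1% reported margin, and the 20% bound mentioned above -- not Saltman's exact derivation:

```python
from math import ceil, comb

def min_corrupt_precincts(n_precincts, margin_fraction, wpm):
    """Fewest equal-size precincts that could overturn the reported margin,
    if at most a `wpm` fraction of each precinct's votes can be switched
    unnoticed (switching one vote moves the margin by two)."""
    return ceil(n_precincts * margin_fraction / (2 * wpm))

def audit_size_for_confidence(n_precincts, corrupt, confidence=0.99):
    """Smallest uniform random sample that hits at least one of `corrupt`
    bad precincts with probability >= `confidence` (hypergeometric)."""
    for n in range(1, n_precincts + 1):
        p_miss = comb(n_precincts - corrupt, n) / comb(n_precincts, n)
        if 1.0 - p_miss >= confidence:
            return n
    return n_precincts

# Hypothetical county: 400 equal-size precincts, a 1% reported margin,
# and the 20% within-precinct miscount bound discussed above.
b = min_corrupt_precincts(400, 0.01, 0.20)  # 10 precincts could flip the race
n = audit_size_for_confidence(400, b)
print(b, n)
```

Note how the bound does its work: the smaller the assumed within-precinct miscount, the more precincts an attacker would have to corrupt, and the smaller the random sample needed to catch at least one of them with the same confidence.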
One thing that was not thoroughly explored back in 1975 is that varying precinct sizes could conceal miscounts in such a way as to render them undetectable by an audit that implicitly assumed all precincts had about the same number of votes. One way of addressing this, as explained in my paper, is to adjust the sample size based on ranking the precincts' vote counts and determining the absolute minimum number of corrupt precincts that could change the outcome. Saltman briefly mentioned a more complex method involving proportional sampling.
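A worst-case version of this ranking approach can be sketched in a few lines: assume the corrupt precincts are the largest ones, and count how few are needed to erase the margin. The precinct sizes and margin below are hypothetical:

```python
def min_corrupt_precincts_varsize(precinct_votes, margin_votes, wpm):
    """Worst case with unequal precincts: rank precincts by vote count and
    assume the largest are corrupted first, counting how few are needed."""
    shifted = 0
    for count, votes in enumerate(sorted(precinct_votes, reverse=True), start=1):
        shifted += 2 * wpm * votes  # max margin shift available in this precinct
        if shifted >= margin_votes:
            return count
    return None  # margin too large to overturn even by corrupting every precinct

# Hypothetical jurisdiction where a few big precincts dominate:
sizes = [5000] * 10 + [500] * 90
print(min_corrupt_precincts_varsize(sizes, 4000, 0.20))  # prints 2
```

Here just two large precincts suffice to flip a 4,000-vote margin, whereas an equal-size model over the same 100 precincts would predict that about eleven were needed -- which is why the audit sample must be sized against the ranked worst case.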
Clearly, precinct size needs to be taken into account somehow, but even this was not overlooked by Saltman 31 years ago. Those responsible for our election systems have certainly had ample opportunity over the last three decades to decide on the best way to audit them, so why has it become necessary for election integrity activists to reinvent the wheel and rediscover such solutions in archival documents?
Over three decades have passed since the electronic vote count auditing problem was essentially solved by Roy Saltman. During this period, the NBS has been renamed the National Institute of Standards and Technology (NIST); three versions of national voting system standards have been produced, none of which has an independent auditing or verification requirement; Congress has allocated billions of dollars to the States to buy voting systems of poor reliability and, in many cases, no auditability; and electronic vote counting systems have become almost ubiquitous -- yet we are still auditing them improperly, if at all. Some are attempting to make new laws using the same flawed methodology of fixed-percentage auditing shown by Saltman to be inadequate back in 1975. And others seem to prefer to do absolutely nothing.
So after all these years, how did we get it so wrong? Only those responsible can answer this question, but it would probably be better to just get about the business of fixing this problem so we don't have to have this discussion again 30 years from now.