Academic journal article The William and Mary Bill of Rights Journal

Voting System Risk Assessment Via Computational Complexity Analysis



Any voting system must be designed to resist a variety of failures, ranging from inadvertent misconfiguration to intentional tampering. The difficulty in analyzing these issues, particularly across widely divergent technologies, is that apples-to-apples comparisons are hard to make. This paper considers the use of a standard technique from the analysis of algorithms, namely complexity analysis with its "big-O" notation, which provides a high-level abstraction that allows for direct comparisons across voting systems. We avoid the need to make unreliable estimates of the probability that a system might be hacked, or of the cost of bribing key players in the election process to assist in an attack. Instead, we consider attacks from the perspective of how they scale with the size of an election. We distinguish attacks by whether they require effort proportional to the number of voters, effort proportional to the number of poll workers, or a constant amount of effort in order to influence every vote in a county. Attacks requiring proportionately less effort are correspondingly more powerful and thus require more attention to countermeasures and mitigation strategies. We perform this analysis on a variety of voting systems in their full procedural context, including optically scanned paper ballots, electronic voting systems both with and without paper trails, Internet-based voting schemes, and future cryptographic techniques.
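The abstract's core idea, ranking attacks by how the effort needed to influence every vote scales with election size, can be sketched in a few lines of code. The attack names and effort functions below are illustrative assumptions for exposition, not examples taken from the paper itself:

```python
# A minimal sketch of the paper's comparison idea: classify attacks by how the
# effort to influence every vote in a county scales with election size.
# The specific attacks and their effort functions are hypothetical.

def effort_per_voter(num_voters, num_poll_workers):
    """O(voters): e.g., bribing or coercing voters one at a time."""
    return num_voters

def effort_per_poll_worker(num_voters, num_poll_workers):
    """O(poll workers): e.g., tampering with equipment precinct by precinct."""
    return num_poll_workers

def effort_constant(num_voters, num_poll_workers):
    """O(1): e.g., a single software compromise that reaches every machine."""
    return 1

ATTACKS = {
    "retail vote buying": effort_per_voter,
    "per-precinct tampering": effort_per_poll_worker,
    "viral software compromise": effort_constant,
}

def rank_attacks(num_voters, num_poll_workers):
    """Order attacks from least to most effort; less effort = more dangerous,
    and thus more deserving of countermeasures."""
    return sorted(ATTACKS, key=lambda name: ATTACKS[name](num_voters, num_poll_workers))

# For a county with 500,000 voters and 1,000 poll workers, the constant-effort
# attack dominates the ranking regardless of the county's exact size.
print(rank_attacks(500_000, 1_000))
```

Note that the ranking is stable across county sizes as long as voters outnumber poll workers, which is exactly why the asymptotic view sidesteps unreliable per-attack probability or cost estimates.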


The United States Elections Assistance Commission (EAC) recently solicited submissions for how it might assess the risks of voting systems.1 According to its solicitation:

The first phase will create reference models to be used in the assessment. This includes developing election process models to describe the operational context in which voting systems are used. It also entails developing voting systems models by generic technology type. This is needed because the types of threats encountered and their potential impacts vary by technology.2

The EAC asked the public to suggest how it might develop these models, with submissions due in April 2008.3 While these submissions have not yet been made available to the public, we will discuss some prior work on this topic and then propose our own solution to this problem.

Clearly, we need an objective, quantifiable method for comparing voting systems.4 Election officials who might purchase one system over another need to be able to concisely understand the relative insecurities of one product versus another.5 Security analysts, testing authorities, and regulators need common ground for both setting a lower bound on acceptable security and for explaining how much better a system is than whatever the minimum standard requires.6

Qualitative analyses are, for better or worse, the standard method used to make such arguments.7 To pick a well-known example, the U.S. military was investigating the possibility of allowing its soldiers to vote, from overseas locations, via the Internet. It convened a panel of experts to conduct a security review. Several of the experts wrote a "minority report" expressing their concerns with the project,8 leading to its cancellation and replacement with a fax-based system.9 Alvarez and Hall criticize this outcome, stating, "In the end, a small but vocal segment of the scientific community opposed the use of scientific experimentation in voting systems and technologies."10 While the SERVE report's authors were concerned about the fundamental unsuitability of standard consumer platforms (e.g., Microsoft Windows XP plus Internet Explorer), given the risks of viruses, worms, or other forms of malware that could easily be engineered to compromise an election,11 Alvarez and Hall felt that

the central argument in this critique was overly general, ignored the reality of UOCAVA voting,12 and ignored what would have been a broad array of project, procedural, and architectural details of the SERVE registration and voting system, which in all likelihood would have minimized or mitigated their concerns had the system been used in the planned trial. …
