Technology, Security and Privacy: The Fear of Frankenstein, the Mythology of Privacy and the Lessons of King Ludd
Taipale, K. A., Yale Journal of Law & Technology
I. PRELUDE
II. INTRODUCTION
III. SOME ASSUMPTIONS
IV. FRANKEN-TECH: THE FEAR OF TECHNOLOGY
V. THE PRIVACY NORM PROSELYTIZERS: A FETISH FOR SECRECY
VI. PRIVACY INTERESTS AT STAKE
   A. THE CHILLING EFFECT
   B. THE SLIPPERY SLOPE
   C. ABUSE AND MISUSE
   D. JOSEPH K. AND THE SEPARATION OF SELF
VII. THE TECHNOLOGIES
   A. TECHNOLOGIES OF IDENTIFICATION
      1. IDENTIFICATION SYSTEMS AND SECURITY
      2. PRIVACY CONCERNS
   B. TECHNOLOGIES OF DATA AGGREGATION AND ANALYSIS
      1. DATA AGGREGATION, DATA ANALYSIS, AND SECURITY
      2. PRIVACY CONCERNS
   C. TECHNOLOGIES OF COLLECTION
      1. SENSE-ENHANCING TECHNOLOGIES AND SECURITY
      2. PRIVACY CONCERNS
VIII. THE PRIVACY DIVIDE
   A. CONTROLLING THE PRIVACY DIVIDE: THE PRIVACY APPLIANCE AS METAPHOR
   B. ANONYMIZATION OF DATA
      1. ANONYMIZATION AND SECURITY
      2. DEVELOPMENT IMPERATIVES
   C. PSEUDONYMITY
      1. PSEUDONYMITY AND SECURITY
      2. DEVELOPMENT IMPERATIVE
IX. TOWARDS A CALCULUS OF REASONABLENESS
   A. DUE PROCESS
      1. PREDICATE
      2. PRACTICAL ALTERNATIVES
      3. SEVERITY AND CONSEQUENCES OF INTRUSION
      4. ERROR CORRECTION
   B. PRIVACY AND SECURITY INFORMATION NEEDS
      1. SCOPE OF ACCESS
      2. SENSITIVITY OF DATA
      3. METHOD OF QUERY
      4. SUMMARY: SCOPE, METHOD AND SENSITIVITY
   C. THREAT ENVIRONMENT AND REASONABLENESS
X. CONCLUSION
   A. BUILDING IN TECHNICAL CONSTRAINTS
   B. OVERRIDING PRINCIPLES
   C. IN SUM
XI. FINALE
This article suggests that the current public debate that pits security and privacy as dichotomous rivals to be traded one for another in a zero-sum game is based on a general misunderstanding and apprehension of technology on the one hand and a mythology of privacy that conflates secrecy with autonomy on the other. Further, political strategies premised on outlawing particular technologies or techniques or seeking to constrain technology through laws alone are second-best--and ultimately futile--strategies that will result in little security and brittle privacy protection.
This article argues that civil liberties can best be protected by employing value sensitive technology development strategies in conjunction with policy implementations, not by opposing technological developments or seeking to control the use of particular technologies or techniques after the fact through law alone. Value sensitive development strategies that take privacy concerns into account during design and development can build in technical features that can enable existing legal control mechanisms and related due process procedures for the protection of civil liberties to function.
This article examines how identification, data aggregation and data analysis (including data mining), and collection technologies intersect with security and privacy interests and suggests certain technical features and strategies premised on separating knowledge of behavior from knowledge of identity based on the anonymization of data (for data sharing, matching and analysis technologies) and the pseudonymization of identity (for identification and collection technologies). Technical requirements to support such strategies include rule-based processing, selective revelation, and strong credential and audit.
At the turn of the century technological development was occurring at a rate that dizzied the mind. These technological developments were bringing a better standard of living to all, yet the gap between the rich and poor was becoming more pronounced. The government, fearful of foreigners, enacted repressive laws and the intellectual elite suggested that the government was too powerful and that charges of treason were too easily leveled. (1)
It was during this period--the beginning of the nineteenth century--that Mary Wollstonecraft Shelley wrote her novel Frankenstein (2) and the Luddite movement was born. (3) It is claimed that Frankenstein and the monster capture "the complex duality of the Romantic soul, the dark as well as the bright side, the violent as well as the benevolent impulses, the destructive as well as the creative urges." (4) So too with advanced information technology and the duality of our concerns with security and privacy.
The current public debate that pits security and privacy as dichotomous rivals to be traded one for another in a zero-sum game is based on a general misunderstanding and apprehension of technology on the one hand and a mythology of privacy that conflates secrecy with autonomy on the other. Further, political strategies premised on outlawing particular technologies or techniques or seeking to constrain technology through laws alone are as doomed to failure as Ned Ludd's (5) swing of the sledgehammer--and will result in little security and brittle privacy protection.
Security and privacy are not a balancing act but rather dual obligations of a liberal democracy (6) that present a wicked problem for policy makers. Wicked problems are well known in public policy (7) and are generally problems with no correct solution. Wicked problems reveal additional complexity with each attempt at resolution and have infinite potential outcomes and no stopping rule--that is, the process ends when you run out of resources, not when you reach the correct solution. (8) There is no fulcrum point--as is implicit in the balance metaphor--at which the correct amount of security and privacy can be achieved. Wicked problems occur in a social context, and the wickedness of the problem reflects the diversity of interests among the stakeholders. (9) Resolving wicked problems requires an informed debate in which the nature of the problem is understood in the context of those interests, the technologies at hand for resolution, and the existing resource constraints. (10)
In a technologically mediated information society, civil liberties can only be protected by employing value sensitive technology development strategies in conjunction with policy implementations, not by opposing technological developments or seeking to control the use of particular technologies or techniques after the fact through law alone. (11) Value sensitive development strategies that take privacy concerns into account during design and development (12) can build in technical features that enable existing legal control mechanisms for the protection of civil liberties and due process to function. (13)
Code is not law, but code can bound what law, norms and market forces can achieve. (14) Technology itself is neither the problem nor the solution, rather it presents certain opportunities and potentials that enable or constrain public policy choice. Technical features alone cannot eliminate privacy concerns, but by incorporating such features into technological systems familiar privacy protecting due process mechanisms (or their analogues) are enabled. (15)
This article examines how identification, data aggregation and analysis (including data mining), and collection technologies currently being considered for use in the context of domestic security intersect with security and privacy interests and suggests certain technical features and strategies that can help ameliorate these concerns. This article proposes that technical development strategies premised on separating knowledge of behavior from knowledge of identity based on the anonymization of data (for data sharing, matching and analysis technologies) and the pseudonymization of identity or authorization (for identification and collection technologies) can help protect individual autonomy while still meeting security needs. Technical requirements to support such strategies include rule-based processing, selective revelation, and strong credential and audit. (16)
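To make the proposed separation of knowledge of behavior from knowledge of identity concrete, the following is a minimal sketch, not drawn from the article itself, of one way pseudonymization with selective revelation might be implemented: identities are replaced with keyed-hash pseudonyms so that records can still be matched and analyzed across datasets, while re-identification requires a separately held key (modeling a step that could be gated behind legal authorization). The key-escrow arrangement, record format, and all names here are illustrative assumptions.

```python
import hashlib
import hmac

# Illustrative only: in a real system this key would be escrowed with an
# oversight authority, not embedded in analyst-accessible code.
ESCROW_KEY = b"held-by-oversight-authority"

def pseudonymize(identity: str) -> str:
    """Replace a real identity with a stable pseudonym (keyed HMAC).
    Without the escrowed key, the pseudonym cannot be reversed."""
    return hmac.new(ESCROW_KEY, identity.encode(), hashlib.sha256).hexdigest()[:16]

def share_record(record: dict) -> dict:
    """Strip identity before sharing; behavior fields pass through."""
    shared = dict(record)
    shared["id"] = pseudonymize(shared.pop("name"))
    return shared

# Two agencies share records; matching on pseudonyms links behavior
# across datasets without revealing who the actor is.
a = share_record({"name": "Alice Example", "flight": "IAD-LHR"})
b = share_record({"name": "Alice Example", "purchase": "one-way ticket"})
assert a["id"] == b["id"]   # same actor, linkable behavior
assert "name" not in a      # identity withheld from analysts
```

In this sketch, "selective revelation" corresponds to an authorized party holding the escrowed key re-computing pseudonyms against a candidate identity list; the analyst's view never contains identity at all.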
III. SOME ASSUMPTIONS
This article focuses on the intersection of technology and domestic and national security in the context of the current 'war on terrorism' (17) but the analysis presented herein is equally applicable to law enforcement more generally--subject, however, to certain caveats. In particular, to the extent that there is a relationship between law enforcement applications and privacy concerns, the lesser the crime targeted the greater the hurdle for any new technology or wider use that implicates those concerns. (18)
It is beyond the scope of this article to attempt to delineate precisely where the line between preemptive and reactive strategies should be drawn by delimiting particular types of crimes that meet particular criteria, or by specifying which government organs or agencies should be permitted particular uses. Rather, this article is primarily concerned with the overarching issues involved in employing advanced information technologies to help identify and find actors who are hidden among the general population and who have the potential for creating harms of such magnitudes that a consensus of society requires that government adopt a preventative rather than reactive approach. (19)
The events of 9/11 have put to rest any doubts that we face a formidable threat from certain organized but generally state-less forces that are intent on inflicting serious damage on US interests, including the killing of large numbers of innocent civilians. (20) Regardless of one's view of the particular political strategy being used in response, the current threat is real and, among other things, we need to enlist technology, and reform organizational structures, to help counter this threat. (21) To date we have not taken sufficient advantage of information technology to help secure the nation against these kinds of threats. (22) However, technology cannot provide security by itself, and we also need to adopt new organizational structures and procedures to take advantage of opportunities that information technology can make available. (23)
At the same time, however, we must recognize that the use of these technologies and procedures can be intrusive on certain privacy interests that help protect individual freedom and political autonomy, and are core to our political liberties. (24) These interests must also be protected. It has become cliche, yet remains axiomatic, that every compromise we make to civil liberties in the 'war on terrorism' is itself a victory for those who would like to destroy our way of life. (25) Terrorism itself is a complex problem. Eliminating the current terrorist threat involves a mix of three essential strategies. (26)
First, we must eliminate political preconditions to terrorism. We must solve unresolved conflict throughout the world, end lack of economic and political opportunity, and generally make the world safe for democratic processes and civil society. It is beyond the scope of this article to discuss these issues fully. (27) Second, we must harden targets. (28) Of course, target hardening generally only influences an adversary's target-selection process--when preferred targets are hardened terrorists will seek softer targets like any rational enemy. More importantly, we cannot harden all potential targets--not even all high-value targets. (29) Thus, discussing locking cockpit doors is not an "alternative strategy" to employing information technology as some have implied, (30) rather physical defense is a discrete strategy that needs to be considered on its own merits. Which brings us to the third strategy--that is, we must identify terrorists and preempt terrorist acts. (31) To do this requires in part the better use of information and the better use of advanced information technology to share relevant information and to help sort relevant from irrelevant information. (32)
This article concerns itself with the use of advanced information technologies in support of this third strategy. Thus, this article assumes that there is some category of malicious actor--terrorist, if you will--for which there exists a political consensus for proactive investigative strategies intended to prevent future acts of terrorism. The one conclusion that seems clear from the report of the Congressional Joint Committee looking into 9/11 is "that terrorism cannot be treated as a reactive law enforcement issue, in which we wait until after the bad guys pull the trigger before we stop them." (33) But, reconciling the need for preventative strategies with traditional notions of due process and individual freedom is a complex task.
It is also important to recognize that technology alone cannot provide security; at best--and even then only if used within appropriately designed security systems--it can help better allocate scarce security resources towards more effective uses. (34) There is no technological silver bullet that will provide absolute security nor is there any technical solution that will absolutely protect privacy. (35) Technology alone is not a solution to either problem; but neither are simple laws prohibiting the use of specific technologies or particular techniques the answer in themselves. (36) Instead, some complex system--a social construction (37)--combining organizational structures, rules and procedures, and technologies must be developed (or must evolve) together to ensure that we achieve better security while protecting privacy and civil liberties. (38)
This article examines the conflict between security and privacy in the context of advanced digital information systems and their related technical characteristics in order to achieve some better understanding of potential solutions--organizational, procedural and technical--to achieving security while protecting privacy.
IV. FRANKEN-TECH: THE FEAR OF TECHNOLOGY
Cass Sunstein, among others, has written much about the notion that people act apparently irrationally with regard to certain risk trade-offs. (39) For example, during the recent DC sniper episode, citizens of one state would drive to another to get gas rather than use a local gas station for fear of the sniper--thus exposing themselves to greater statistical risk of death from a traffic fatality than from an actual sniper attack. So too, people who fear flying and prefer to drive may actually expose themselves to a much greater risk of injury or death on the highway. (40)
Sunstein identifies three noteworthy points about how fear impacts risk analysis. (41) The first is that without actual knowledge of a particular risk, people rely on the availability heuristic, through which they assess a risk by reference to whether a readily available example of the outcome can be recalled, (42) that is, people exhibit a greater fear of a risk the more they are reminded of actual or similar outcomes. The second is that people generally show a disproportionate fear of risks that are either unfamiliar or appear hard to control, (43) that is, people exhibit a greater fear of a risk from an unfamiliar or novel source even if its probability is slight. And, the third is that people are prone to what Sunstein calls probability neglect--that is, in the face of risks with high emotional content, emotion plays a significant role in obscuring 'rational' choice. (44)
These three impacts are also observed in policy choice. Sunstein has documented many instances in which media attention to a particular environmental issue, for example, Love Canal, Alar, or asbestos in schools, has resulted in 'irrational' policy choices not grounded in objective assessments of relative risk. (45) In these instances the media focus essentially determines the emotional state of the polity. (46) Further, the media attention itself is often manipulated by what Sunstein calls "availability entrepreneurs" who take advantage of a particular event to publicize (and thus elevate) a relatively unlikely risk in order to further their own particular agenda. (47)
Thus, the public debate on policy issues--particularly on complex issues or novel problems with unknown consequences--is often dominated by these information entrepreneurs, including activists and the media itself, who attempt to engender information cascades to further their own particular agenda. (48) "An [information] cascade is a self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception increasing plausibility through its rising availability in public discourse." (49) The result is often that relatively minor risks can be overblown causing a high level of social anxiety, the expenditure or misallocation of significant resources, and the imposition of costly regulation in situations where other risks, of greater magnitude, are ignored. (50)
This same phenomenon skews the public debate on technology, security and privacy. The availability of information privacy horror stories (in particular, the prevalence of identity theft, spam and hacker stories in the media), (51) and the general mistrust in government agencies to handle personal information appropriately, (52) combined with a general apprehension about technology (53) and how it works, (54) and the natural anxiety relating to disclosure of personal, particularly intimate, information--all spurred on by the privacy lobby (55)--has created a public anxiety about electronic privacy (56) out of proportion to the actual privacy risks and has obscured discussion of the very real threats posed by either failing to provide security or by misallocating security resources. (57)
Anecdotal support for the notion that an unreasonable fear, based on unfamiliarity with the technology, underlies the public debate on privacy can be found by drawing an analogy with early concerns about the use of credit cards online. While people do not think twice now about using their credit cards online, there was much consternation in the late 1990s, when even the long-term success of online commerce was questioned based on the unwillingness of consumers to use credit cards online--a fear wholly out of proportion to the actual risk, and one that never entered consumers' minds when they handed a card to a minimum-wage busboy or threw a credit card receipt in a public trash receptacle. Some would argue that the overblown concern for electronic privacy may be the 'risk of the moment,' based in part on a lack of awareness or understanding of the nature and consequences of current technology developments (58) and the novelty of the threat.
While some might argue that the government has used the fear of terrorism (the actual threat to any particular individual from terrorism, even in 2001, was a relatively low probability risk) to push policies without adequate public debate, (59) so too, others could argue that the "privacy lobby" has used fear of electronic privacy intrusion--wholly disproportionate to its actual injury or risk to civil liberty--to oppose technological developments and further their own agenda. (60)
V. THE PRIVACY NORM PROSELYTIZERS: A FETISH FOR SECRECY
A significant problem in determining policy in this area is that privacy means different things to different people. (61) It is beyond the scope of this article to definitively define privacy or reconcile competing views. (62) However, much of the public debate about the use of technology seems to take place within an unexamined mythology of privacy--a mythology that conflates privacy with absolute secrecy on the one hand and the maintenance of absolute secrecy with liberty on the other. But, this deified notion of privacy based on absolute secrecy--that is, keeping others from knowing what we are doing by emphasizing concealment (63)--confounds two simpler ideas: knowing what someone does (behavior) and knowing who someone is (identity). Further, it is based on a presumed privacy entitlement for electronic data that exceeds that afforded paper-based records or real-world experience. (64)
This perception of a privacy entitlement arose not by accident or necessity, but from the intentional action of what Steven Hetcher calls norm proselytizers (what I refer to as the privacy lobby and Sunstein might call availability entrepreneurs) who have an interest in promoting online privacy. (65) Nevertheless, it is not my intention to minimize the privacy interests at stake here. (66) Quite the contrary, I argue that we should insist on value sensitive development strategies that build in technical constraints; that we subject the development and use of these technologies to strict authorization, oversight, and judicial review; and that we use advanced technical means to "watch the watchers" to prevent abuse or misuse.
However, we face one of two inevitable futures--one in which technologies are developed with privacy protecting values and functions built into the design or one in which we rely solely on legal mechanisms and sanctions to control the use of technologies that have been developed without regard to such protections. (67) In my view, it is the fetish for absolute secrecy promulgated by the privacy lobby that precludes or delays the development of appropriate technologies to improve security while also protecting civil liberties, and leaves us with little security and brittle privacy protection.
Thus, for example, I have previously argued that last year's defunding by Congress of DARPA's Information Awareness Office (IAO) and its Terrorism Information Awareness (TIA) program and related projects (68) was a Pyrrhic victory for civil liberties, as that program provided a focused opportunity around which to publicly debate the rules and procedures for the future use of these technologies and, most importantly, to oversee the development of the technical features required to support any agreed-upon implementation or oversight policies to protect privacy. (69)
In any case, privacy (particularly any legal or moral claim for the protection of privacy) should be based on the need to protect individual political and personal autonomy, not simply as a characteristic of data for its own sake. (70) Thus, a fetish for absolute secrecy of innocuous data (or voluntarily produced data) that results in alternative intrusions or harms--say a physical search at the airport (or physical harm from lack of security)--is suspect and should be questioned. (71)
Additionally, the brittle nature of privacy protection based solely on law needs to be considered. (72) If technologies are developed without privacy protecting features built in but outlawed for law enforcement or domestic security purposes, those laws can be changed in the future in response to a new terrorist attack, and the then existing technologies will not be capable of supporting implementation policies that provide any privacy protection at all. (73)
Post hoc analyses of the events of 9/11 have revealed that much relevant information existed but intelligence agencies and law enforcement were unable to "connect the dots." (74) It would be an unusual polity that now demanded accountability from its representatives for being unable to connect the dots from existing datasets to prevent terrorist acts (75) yet denied them the available tools to do so, particularly if there were to be another catastrophic event.
Thus, simple opposition to government research projects or outlawing the use of particular technologies or techniques seems a second-best--and ultimately futile--strategy; one that leaves us dependent on classified programs or proprietary commercial interests to develop security technologies (76) and laws alone to protect privacy. A more effective strategy for the protection of privacy and civil liberties while improving security is to build in technical features that support those values in the first place.
The early Luddites resisted the introduction of technology by smashing frames, and they were imprisoned or shipped off to Australia, accomplishing little; later movements in their name adapted to the introduction of new technologies by forming organizational structures (the precursors to the modern labor union) and procedures (collective bargaining) to control the terms under which new technology was to be developed and deployed. Perhaps there is a lesson for privacy advocates to be learned from King Ludd. (77)
VI. PRIVACY INTERESTS AT STAKE
There can be no doubt that vital privacy interests are at stake. We must preserve the general culture of freedom in America (78) and do everything in our power to maintain, improve and protect it. (79) Individual freedom is the basis on which our country was founded and its incorporated values stand at the core of our Constitution and Bill of Rights. (80) Thus, we must stand ever vigilant against potential dangers to our civil liberties. (81)
Nevertheless, rights incur responsibilities. (82) Security and liberty are dual obligations and we cannot slight one for the other. (83) It should be remembered that the Fourth Amendment implicitly recognizes this duality because--in the words of Amitai Etzioni--the "prohibition on unreasonable searches is not accorded more weight than the permission to conduct reasonable searches." (84) In past crises, particularly when they have threatened national security, many have been willing to sacrifice civil liberties in the short-term in order to meet the particular emergency or challenge. (85) In many cases, we as a nation later came to regret those actions as having gone too far. (86)
In meeting the current challenge of international terrorism we are confronted with two additional complexities. First, the 'war on terrorism' may be one with no definable end, thus we need to develop doctrine and procedures that serve to protect important values from the outset and that can be maintained indefinitely. We cannot sustain "emergency procedures" for any length of time. (87) Second, we face a threat from actors who as part of their strategic and tactical doctrine move among the general population and take advantage of our open society to mask their own organization and activities. (88) The task therefore is not to defend against outsiders but to identify and investigate potentially malicious actors from within the population without undermining or compromising the freedom and autonomy of the vast majority of innocent people.
Therefore, neither demonizing a minority nor engendering suspicion of everyone is a viable or acceptable outcome--but neither is undermining legitimate security needs by deifying absolute secrecy as the only means of protecting individual autonomy. The particular privacy concerns most implicated by employing advanced information technologies for proactive law enforcement activities are primarily three: first, the chilling effect that information access and data sharing by government might have on innocent behavior; second, the slippery slope that may result when powerful tools are used for increasingly pettier needs until finally we find ourselves smothered under a veil of constant surveillance; and, third, the potential for abuse or misuse.
A. THE CHILLING EFFECT
The chilling effect primarily involves the concern that potential lawful behavior, particularly constitutionally protected activity, would be inhibited due to the potential for a kind of post hoc surveillance (often referred to as "dataveillance") that is said by many to result from the increased sharing of information among currently discrete sources. (89)
"Potential knowledge is present power," and awareness that government may analyze activity is likely to alter behavior, "people act differently if they know their conduct could be observed." (90) The risk is that protected rights of expression, protest, association, and political participation may be affected by encouraging "conformity with a perceived norm, discouraging political dissent, or otherwise altering participation in political life." (91)
Maintaining individual privacy, however, is not synonymous with being able to commit or plan terrorist acts in secret without being discovered. Thus, chilling-effects-based arguments against technologies or procedures that can potentially protect against catastrophic terrorist acts must show a real privacy impact on legitimate and innocent activity, not just exhibit a fetish for absolute secrecy premised on vague references to potentially inhibited acts.
The Supreme Court requires that chilling-effects-based challenges present more than allegations of a subjective chill; it requires that such challenges show both actual harm and a significant effect on protected activities not outweighed by a legitimate government interest. Thus, in Laird v. Tatum, (92) the Court wrote:
In none of these cases, however, did the chilling effect arise merely from the individual's knowledge that a governmental agency was engaged in certain activities or from the individual's concomitant fear that, armed with the fruits of those activities, the agency might in the future take some other and additional action detrimental to that individual. (93)
Although the Court went on to note that it was not opining on the "propriety or desirability, from a policy standpoint, of the challenged activities" but merely its adjudicability, (94) it nevertheless seems appropriate for the policy debate likewise to require articulation or identification of some specific harm not outweighed by the compelling government interest. A vague claim of enforced conformity ought not in itself, ipso facto, win the argument.
Further, the mere existence of a chilling effect is not alone sufficient to hold governmental action unconstitutional:
[T]he existence of a "chilling effect," ... has never been considered a sufficient basis, in and of itself, for prohibiting state action. Where [the state action] does not directly abridge free speech, but--while regulating a subject within the State's power--tends to have the incident effect of inhibiting First Amendment rights, it is well settled that the [state action] can be upheld if the effect on speech is minor in relation to the need for control of the conduct and the lack of alternative means for doing so. (95)
Thus, chilling-effects arguments against the use of technology should require determining confidence intervals--that is, the acceptable error rate--for a particular application in a particular use (i.e., its reasonableness). In the context of information processing for preemptive law enforcement, the confidence interval is the net result of false positives and false negatives, each adjusted for its related consequence and resource consumption. (96) To analogize to the Court's analysis, the adjusted false positive rate equates to the potential for actual harm to the individual and the adjusted false negative rate equates to the government interest, in this case, security.
Thus, determining confidence intervals for policy purposes can be viewed as a function of these two competing relationships--the number of false positives (innocents identified) adjusted by the severity of the consequences to the individual on the one hand and the number of the false negatives (terrorists not identified) adjusted by the consequences to security on the other. (97) If the consequences of a false positive are relatively low, for example, a bag search at the airport, and the consequences of a false negative are high, for example, the plane crashes into the Pentagon, the acceptable confidence interval for policy purposes should be adjusted (either technically or by procedures) to bias towards false positives and reduce false negatives. If, on the other hand, the consequences to the individual from a false positive are severe, for example incarceration, and the consequences of false negatives are slight, for example, a parking ticket scoff-law slips through, then the confidence interval should be adjusted (either technically or by policy) to reduce false positives at the risk of increasing false negatives.
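The adjusted error calculus described above can be expressed as a simple expected-cost comparison. The sketch below is illustrative only: the error rates and consequence weights are invented numbers, since in practice those values embody contested policy judgments rather than measurable engineering constants.

```python
# Illustrative sketch of the adjusted false-positive / false-negative
# calculus. All numeric values are invented for illustration.

def expected_cost(fp_rate: float, fn_rate: float,
                  fp_consequence: float, fn_consequence: float) -> float:
    """Net cost of a screening policy: each error rate weighted by the
    severity of its consequence."""
    return fp_rate * fp_consequence + fn_rate * fn_consequence

# Airport bag search: a false positive is cheap (a search), a false
# negative is catastrophic, so a policy biased toward false positives
# costs less overall than the reverse bias.
bias_to_fp = expected_cost(fp_rate=0.10, fn_rate=0.001,
                           fp_consequence=1, fn_consequence=10_000)
bias_to_fn = expected_cost(fp_rate=0.001, fn_rate=0.10,
                           fp_consequence=1, fn_consequence=10_000)
assert bias_to_fp < bias_to_fn

# Incarceration on a match for a petty offense inverts the weights, and
# the same arithmetic now favors reducing false positives instead.
assert expected_cost(0.10, 0.001, 10_000, 1) > expected_cost(0.001, 0.10, 10_000, 1)
```

The point of the arithmetic is only that the acceptable bias flips when the consequence weights flip, which is the article's argument for adjusting confidence intervals (technically or by procedure) to the particular use.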
This is not to suggest that there is some perfect correlation to be calculated among relative risks (which cannot be precisely quantified), but rather that, when it comes to setting policy, the appropriate controls for a particular use will depend on the totality of the circumstances at the point and time of use--including (as discussed below) the scope and method of inquiry, the sensitivity of the data, and the particular security interest or threat, as well as the nature of the privacy intrusion--and cannot be rigidly prescribed or even anticipated. Thus, an ideal system design would incorporate flexibility in both its policy and technical controls to allow for changes in circumstances at the point of use, and its reasonableness would be judged on its use in such circumstances.
B. THE SLIPPERY SLOPE
The slippery slope argument (98) is that measures that might be adopted now for perfectly legitimate national security concerns might eventually be used in the ordinary course of law enforcement to investigate and apprehend lesser lawbreakers, resulting in extraordinary procedures developed to counter a specific threat becoming the norm--in this case leading to a permanent and complete surveillance society (a world in which Michael Froomkin notes "it should be possible to achieve perfect law enforcement" (99)).
This fear is particularly relevant when one recognizes that there will always be an insatiable need for more security (100) and there will always exist a bureaucratic imperative for additional control. (101) There is also the practical consequence of making tools available--they will be used. Law enforcement professionals seeking to accomplish their mission can be expected to take advantage of every tool or opportunity available for each and every task for which they are responsible. (102) When these three factors--the need for more security, the imperial bureaucratic drive, and the practical availability of tools--are combined, the threat of the slippery slope is real and potentially significant.
Structural implementation options can help ameliorate these concerns. For example, the data analysis (intelligence) function could be operationally separated from the law enforcement function as the Markle Taskforce has suggested. (103) The Gilmore Commission has recommended that the Terrorist Threat Integration Center be spun off as an independent agency to coordinate the domestic intelligence function, (104) and I (and others) have argued that a separate agency with a narrow charter to process intelligence for domestic security, no independent law enforcement powers, and subject to strict oversight should be considered. (105) While these organizational structures do not eliminate concern they can help. Further, technical architectures to counter the slippery slope also exist. A distributed architecture with local responsibility and accountability for data and access, together with strong credential and audit functions to track usage, (106) can provide protection from a centralized expansion of power or use. (107)
C. ABUSE AND MISUSE
Information systems are also open to abuse or misuse. There are many examples of such misuse--from IRS agents looking up their neighbors' tax returns (108) to law enforcement officials sharing information with criminal suspects. (109) Even examples of institutionalized abuse, such as the FBI's COINTELPRO, are recent enough to evoke concern. (110) For purposes of policy and technical design, however, the substance of the concerns need not be resolved--that is, we do not need to debate whether, for example, the government or its employees should be trusted to do what is right and not to abuse citizens. Instead, organizational structures, procedures, and technical features that function together to limit the potential for abuse can (and should) be designed and implemented to address these concerns.
Often neglected in this part of the debate is acknowledgment that the same characteristics of these technologies that give rise to some of the privacy concerns in the first place--the existence of "electronic footprints" in dataspace--also provide opportunities for resolution or mitigation--that is, these systems can be turned on themselves to "watch the watchers." Immutable logging together with strong credentialing and audit can provide a significant deterrent to abuse, making "abuse difficult to achieve and easy to uncover" by providing secure access control and tamper-resistant evidence of where data goes and who has had access to it. (111)
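One well-known way to make a log "immutable" in the tamper-evident sense described above is to chain each entry's cryptographic hash over the previous entry, so that any after-the-fact alteration breaks verification. The following is a minimal sketch of that idea; the class, field names, and sample entries are hypothetical, not drawn from any actual system discussed in the article.

```python
# Minimal sketch of a hash-chained, tamper-evident audit log ("watching the
# watchers"). All names and sample data are illustrative assumptions.
import hashlib
import json
import time

class AuditLog:
    def __init__(self):
        self.entries = []              # list of (entry_dict, digest) pairs
        self._last_hash = "0" * 64     # genesis value for the chain

    def record(self, user, action, record_id):
        """Append an access event, chaining it to the previous entry's hash."""
        entry = {
            "user": user, "action": action, "record_id": record_id,
            "time": time.time(), "prev": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = digest
        self.entries.append((entry, digest))

    def verify(self):
        """Recompute the chain; any edited or reordered entry breaks it."""
        prev = "0" * 64
        for entry, digest in self.entries:
            if entry["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if recomputed != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.record("agent_17", "query", "case/0042")
log.record("agent_17", "export", "case/0042")
assert log.verify()

# Tampering with an earlier entry is detectable on review:
log.entries[0][0]["user"] = "someone_else"
assert not log.verify()
```

In a real deployment the chain head would be periodically anchored with an external oversight body, which connects directly to the policy question, raised below, of who holds and reviews the log files.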
Additionally, real-time automated monitoring of system usage and post-usage analysis and review, together with oversight of system logs, can provide significant checks on both abuse and misuse. (112) Organizational structures to ensure such results should also be devised as part of systems implementations. Thus, for example, determining whether log files are to be kept locally (and, if so, under whose authority, for example, by the technical systems administrators, or the agency's inspector general, general counsel, or privacy officers, etc.) or externally by oversight bodies is not just a technical question but also one with substantive policy implications.
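The automated monitoring described above can be as simple as flagging, for human post-usage review, any credentialed user whose query volume departs sharply from the norm. The sketch below uses a median-based rule; the analyst names, records, and the multiplier of five are all invented for illustration.

```python
# Hypothetical sketch of automated usage monitoring: flag users whose access
# volume far exceeds the typical volume, for post-usage review. Sample data
# and the flagging rule are illustrative assumptions only.
from collections import Counter
from statistics import median

access_log = [("analyst_a", "r1"), ("analyst_a", "r2"),
              ("analyst_b", "r1"), ("analyst_b", "r3"), ("analyst_b", "r4")]
# One analyst with unusually heavy usage:
access_log += [("analyst_c", f"r{i}") for i in range(40)]

def flag_outliers(log, multiplier=5):
    """Return users whose access count exceeds multiplier x the median count."""
    counts = Counter(user for user, _ in log)
    typical = median(counts.values())
    return sorted(u for u, c in counts.items() if c > multiplier * typical)

print(flag_outliers(access_log))    # prints: ['analyst_c']
```

A flag here is only a trigger for review under the oversight structures just discussed, not a conclusion of wrongdoing, which mirrors the article's point that technical and procedural controls must operate together.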
D. JOSEPH K. AND THE SEPARATION OF SELF
It may well be that existing metaphors (113) and doctrines based on outdated notions of defining the relationship between an individual and their 'personal' information based on place or expectation are inadequate to address compelling new challenges brought by emerging technology to civil liberties. (114) Dan Solove (115) …
Publication information: Article title: Technology, Security and Privacy: The Fear of Frankenstein, the Mythology of Privacy and the Lessons of King Ludd. Contributors: Taipale, K. A. - Author. Journal title: Yale Journal of Law & Technology. Volume: 7. Publication date: Annual 2004. Page number: 123+. © 2006 Yale Journal of Law & Technology. COPYRIGHT 2004 Gale Group.