During the past decade, the problems involving information privacy--the ascendance of Big Data and fusion centers, the tsunami of data security breaches, the rise of Web 2.0, the growth of behavioral marketing, and the proliferation of tracking technologies--have become thornier. Policymakers have proposed and passed significant new regulation in the United States and abroad, yet the basic approach to protecting privacy has remained largely unchanged since the 1970s. Under the current approach, the law provides people with a set of rights to enable them to make decisions about how to manage their data. These rights consist primarily of rights to notice, access, and consent regarding the collection, use, and disclosure of personal data. The goal of this bundle of rights is to provide people with control over their personal data, and through this control people can decide for themselves how to weigh the costs and benefits of the collection, use, or disclosure of their information. I will refer to this approach to privacy regulation as "privacy self-management."
Privacy self-management takes refuge in consent. It attempts to be neutral about substance--whether certain forms of collecting, using, or disclosing personal data are good or bad--and instead focuses on whether people consent to various privacy practices. Consent legitimizes nearly any form of collection, use, or disclosure of personal data.
Although privacy self-management is certainly a laudable and necessary component of any regulatory regime, I contend that it is being tasked with doing work beyond its capabilities. Privacy self-management does not provide people with meaningful control over their data. First, empirical and social science research demonstrates that there are severe cognitive problems that undermine privacy self-management. These cognitive problems impair individuals' ability to make informed, rational choices about the costs and benefits of consenting to the collection, use, and disclosure of their personal data.
Second, and more troubling, even well-informed and rational individuals cannot appropriately self-manage their privacy due to several structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework.
In addition, privacy self-management addresses privacy in a series of isolated transactions guided by particular individuals. Privacy costs and benefits, however, are more appropriately assessed cumulatively and holistically--not merely at the individual level. As several Articles in this Symposium demonstrate, privacy has an enormous social impact. Professor Neil Richards argues that privacy safeguards intellectual pursuits, and that there is a larger social value in ensuring robust and uninhibited reading, speaking, and exploration of ideas. (1) Professor Julie Cohen argues that innovation depends upon privacy, which is increasingly under threat as Big Data mines information about individuals and as media-content providers track people's consumption of ideas through technology. (2) Moreover, in a number of cases, as Professor Lior Strahilevitz contends, privacy protection has distributive effects: it benefits some people while harming others. (3) Privacy thus does more than just protect individuals. It fosters a certain kind of society, since people's decisions about their own privacy affect society, not just themselves. Because individual decisions to consent to data collection, use, or disclosure might not collectively yield the most desirable social outcome, privacy self-management often fails to address these larger social values. …