Let's pause for a moment and look back over these three chapters. There is a pattern to the problems they present—a way of understanding how all three problems are the same.
In one sense, each has asked: How much control should we allow over information, and by whom should this control be exercised? There is a battle between code that protects intellectual property and fair use; there is a battle between code that might make a market for privacy and the right to report facts about individuals regardless of that market; there is a battle between code that enables perfect filtering and architectures that ensure some messiness about who gets what. Each case calls for a balance between control and no control.
My vote in each context may seem to vary. With respect to intellectual property, I argue against code that tracks reading and in favor of code that guarantees a large space for an intellectual commons. In the context of privacy, I argue in favor of code that enables individual choice—both to encrypt and to express preferences about what personal data is collected by others. Code would enable that choice; law could inspire that code. In the context of free speech, however, I argue against code that would perfectly filter speech—it is too dangerous, I claim, to allow perfect choice there. Better choice, of course, is better, so code that would empower better systems of reputation is good, and code that would widen the legitimate range of broadcasting is also good.
The aim in all three contexts is to work against centralized structures of choice. In the context of filtering, however, the aim is to work against structures that are too individualized as well.
You may ask whether these choices are consistent. I think they are, but it's not important that you agree. You may believe that a different balance makes sense—more control for intellectual property or filtering perhaps, and less for privacy. My real interest is in conveying the necessity of such balancing and of the values implicit