The story so far has focused on regulation — both the changing regulability of behavior in cyberspace (it is increasing) and the distinctive way in which behavior in cyberspace will be regulated (through code).
In this Part, I apply the analysis drawn so far to three areas of social and political life that will be affected by these changes — intellectual property, privacy, and free speech.
In each of these areas, I will identify values that are relevant. I will then ask how those values translate to life online. In some cases, the values carry over quite directly, but, in others, they produce what I called in Chapter 2 a “latent ambiguity.” That ambiguity forces us to choose between two very different conceptions of the value at stake. My aim is not to make that choice, but instead simply to throw at least two options into relief.
I have another objective in each chapter as well. In my view, the most important lesson about law in cyberspace is the need for law to account for the regulatory effect of code. Just as the wise regulator accounts for the way the market interacts with legal regulation, so too the wise regulator must account for the ways in which technology interacts with legal regulation. That interaction is often counterintuitive. But unless a regulator takes this interactive effect into account, the regulation — whether to control behavior or to protect certain liberties — will fail.
To know what values are relevant, however, we need a method for carrying values into a new context, and I begin this part with an account of that method. The values I will describe are part of our tradition, and they need to be interpreted and made real in this context. One approach the law has developed for recognizing and respecting these values is the interpretive practice I call “translation.” A translator practices a fidelity to earlier commitments to value. Latent ambiguities are those instances where fidelity runs out. We have nothing to be faithful to, because the choices we now face are choices that our forebears never faced.[1]
At the height of a previous war on drugs — Prohibition, in the late 1920s — the federal government began using a technique of police work that startled many but proved quite effective: wiretapping.[1] Life had just begun to move onto the wires, and, in an effort to take advantage of the evidence that this new medium might yield, the government began to tap phones without warrants.
Because law enforcement officials themselves were conflicted about the ethics of wiretapping, taps were used sparingly. Nonetheless, for threats perceived to be extremely grave, the technique was deployed. Illegal alcohol, the obsession of the age, was just such a threat.
The most famous of these taps led to the 1928 Supreme Court case Olmstead v. United States. The government was investigating one of the largest illegal liquor import, distribution, and sales organizations in the nation. As part of the investigation, the government began to tap the telephones used by dealers and their agents. These were private phones, but the taps were always secured without trespassing on the property of the targets[2]. Instead, the taps were placed on the wires in places where the government had rightful access to the phone lines.
Using these taps, the government recorded many hours of conversations (775 typewritten pages, according to Justice Louis Brandeis)[3], and it used these recordings to convict the defendants in the case. The defendants challenged the use of these recordings, claiming that the government had violated the Constitution in securing them. The Fourth Amendment protects “persons, houses, papers, and effects, against unreasonable searches and seizures,” and this wiretapping, the defendants argued, was a violation of their right to be protected from unreasonable searches.
Under then-existing law, it was plain that to enter the apartments of alleged bootlegger Roy Olmstead and his associates and search them (at least while they were gone), the government investigators would have needed a warrant, that is, they would have needed the approval of a judge or magistrate before invading the defendants’ privacy. This is what the Fourth Amendment had come to mean — that certain places (persons, houses, papers, and effects) were protected by presumptively requiring a warrant before they could be invaded[4]. Here there had been no warrant, and hence, as the defendants argued, the search had been illegal. The evidence had to be excluded.
We might pause to ask why. If we read the text of the Fourth Amendment carefully, it is hard to see just where a warrant is required:
(a) The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and (b) no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
The Fourth Amendment is really two commands. (I’ve added “a” and “b” to help make the point.) The first says that a certain right (“the right of the People to be secure”) shall not be violated; the second limits the conditions under which a warrant shall be issued. But the text of the amendment does not state a relationship between the first part and the second part. And it certainly does not say that a search is unreasonable if it is not supported by a warrant. So why the “warrant requirement”[5]?
To make sense of the amendment, we must go back to its framing. At that time, the legal protection against the invasion of privacy was trespass law. If someone entered your property and rifled through your stuff, that person violated your common law rights against trespass. You could sue that person for trespass, whether he was a police officer or private citizen. The threat of such suits gave the police an incentive not to invade your privacy[6].
Even without a warrant, however, a trespassing police officer might have a number of defenses. These boil down to whether the search was “reasonable.” But there were two important facts about this reasonableness. First, the determination of reasonableness was made by a jury. Neighbors and peers of the officer judged whether his behavior had been proper. Second, in some cases reasonableness was found as a matter of law — that is, the judge would instruct the jury to find that the search had been reasonable. (For example, when the officer found contraband on the property of the defendant, whether there was sufficient suspicion before the search or not, the search was reasonable.)[7]
This regime created obvious risks for an officer before he searched someone’s property. If he searched and found nothing, or if a jury thought later that his search had not been reasonable, then he paid for his illegal behavior by being held personally liable for the rights he had violated.
But the regime also offered insurance against this liability — the warrant. If the officer secured a warrant from a judge before he made his search, the warrant immunized him against trespass liability. If he then found no contraband or his search turned out to be unreasonable, he still had a defense to a suit.
Creating incentives was one aim of the original system. The law gave an officer an incentive to obtain a warrant before he searched; if he was uncertain, or wanted to avoid all risk of liability, he could first check his judgment by asking a judge. But if the officer was sure, or wanted to hazard the gamble, then not getting a warrant did not make the search automatically unreasonable. He was at risk of increased liability, but his liability was all that was at stake.
The weak link in this system was the judge. If judges were too lax, then warrants would be too easy to get[8], and weak judges were a concern for the framers. Under British rule judges had been appointed by the Crown, and by the time of the Revolution, the Crown was the enemy. Having seen much abuse of the power to issue warrants, the framers were not keen to give judges control in determining whether the government’s searches were reasonable.
In particular (as I described in Chapter 2), the framers had in mind some famous cases in which judges and the executive had issued “general warrants” giving government officers the power to search generally for objects of contraband[9]. In modern terms, these were “fishing expeditions.” Because the officers had warrants, they could not be sued; because the judges were largely immune from suit, they could not be sued. Because no one could be sued, there was a temptation for abuse. The framers wanted to avoid just such judge-made abuse. If there was to be immunity, it would come from a jury, or from a successful search.
This is the origin of clause (b) of the Fourth Amendment. The framers required that judges, when issuing warrants, name particularly “the place to be searched, and the persons or things to be seized”, so that judges would not be able to issue warrants of general power. The immunity of the warrant would be limited to particular people and places, and only when probable cause existed to issue the warrant.
This constitutional regime was designed to balance the people’s interests in privacy against the legitimate need for the government to search. The officer had an incentive to get a warrant (to avoid the risk of personal liability); the judge had a rule that restricted the conditions under which he could issue a warrant; and together these structures limited official invasions of privacy to cases that presented a strong reason to invade.
That much is background. But notice what follows.
The original regime presupposed a great deal. Most obviously, it presupposed a common-law system of trespass law — it was the threat of legal liability from trespass law that created the incentives for officers to seek warrants in the first place. This presupposition placed property at the core of the Constitution’s original protections.
Equally important, the regime presupposed much about the technology of the time. The Fourth Amendment focuses on trespass because that was the primary mode of searching at the time. If it had been possible simply to view the contents of a house without going inside, the restrictions of the Fourth Amendment would have made little sense. But the protections of the amendment did make sense as a way to draw the balance between government’s power to search and the people’s right to privacy given the regime of trespass law and privacy-invading technologies that prevailed at the end of the eighteenth century.
Presuppositions — what is taken for granted or considered undebatable — change[10]. How do we respond when such presuppositions change? How do we read a text written against a background of certain presuppositions when those presuppositions no longer apply?
For Americans, or for any nation with a constitution some two hundred years old, this is the central problem for constitutional interpretation. What if state governments, for example, were simply to abolish rights against trespass? Would the amendment be read any differently[11]? What if technologies for searching were to change so dramatically that no one would ever need to enter another’s property to know what is kept there? Should the amendment then be read differently?
The history of the Supreme Court’s treatment of such questions lacks a perfectly clear pattern, but we can identify two distinct strategies competing for the Court’s attention. One strategy is focused on what the framers or founders would have done — the strategy of one-step originalism. The second strategy aims at finding a current reading of the original Constitution that preserves its original meaning in the present context — a strategy that I call translation.
Both strategies are present in the Olmstead wiretapping case. When the government tapped the phones of the defendants without any warrant, the Court had to decide whether the use of this kind of evidence was permissible or consistent with the principles of the Fourth Amendment. The defendants said: The government must get a warrant to tap phones. The government said: The Fourth Amendment simply does not apply.
The government’s argument was quite simple. The amendment presupposed that the government would be trespassing to search, and it regulated the conditions under which officers could trespass. But wiretapping is an invasion of privacy without a trespass: the government could tap the defendants’ phones without ever entering their property, so the amendment did not apply. It simply did not reach invasions accomplished without a trespass.
The Supreme Court agreed. In an opinion written by Chief Justice (and former President) William Howard Taft, the Court followed the government.
The amendment does not forbid what was done here. There was no searching. There was no seizure. The evidence was secured only by the use of the sense of hearing and that only. The language of the amendment cannot be extended and expanded to include telephone wires reaching to the whole world from the defendant’s house or office[12].
This conclusion was received with surprise and shock. Already much of life had moved to the wires. People were beginning to understand what it meant to have intimate contact “online”; they counted on the telephone system to protect their intimate secrets. Indeed, telephone companies, having strongly fought the authority that the government claimed, pledged not to assist the government except as required by law[13]. This resistance notwithstanding, the Court concluded that the Constitution did not interfere with invasions of this sort. It would not have done so when the Constitution was written; it did not do so at the time when the case was decided.
But the dissent written by Justice Brandeis (there were also dissents by Justices Holmes, Stone, and Butler) took a different view. As with Taft’s opinion, the focus was fidelity. But Brandeis conceived of that fidelity quite differently.
Brandeis acknowledged that the Fourth Amendment, as originally written, applied only to trespass[14]. But it did so, he argued, because when it was written trespass was the technology for invading privacy. That was the framers’ presupposition, but that presupposition had now changed. Given this change, Brandeis argued, it was the Court’s responsibility to read the amendment in a way that preserved its meaning, changed circumstances notwithstanding. The aim must be to translate the original protections into a context in which the technology for invading privacy had changed[15]. This would be done, Brandeis argued, by applying the Fourth Amendment’s protection to invasions that were not themselves trespasses.
These two opinions mark two different modes of constitutional interpretation. Taft finds fidelity by simply repeating what the framers did; Brandeis finds fidelity by finding the current equivalent to what the framers did. If we followed Taft, Brandeis argued, we would defeat the protections for privacy that the framers originally set; if we followed Brandeis, Taft implied, we would be adding something to the Constitution that the framers had not written.
Partisans on both sides claimed that the opinion of the other would have “changed” the meaning of the Constitution. But whose opinion, the Court’s or Justice Brandeis’s, would really “change” the meaning of the Fourth Amendment?
To answer this question, we must first ask: Change relative to what? What is the baseline against which this change is a change? Certainly Brandeis would have agreed that in 1791 any finding by the Court that the amendment reached beyond trespass would have been improper. But when something presupposed by the original amendment has changed, is it clear that the Court’s proper response is to act as if nothing has changed at all?
Brandeis’s method accounted for the changed presupposition. He offered a reading that changed the scope of the amendment in order to maintain the amendment’s protection of privacy. Taft, on the other hand, offered a reading that maintained the scope of the amendment but changed its protection of privacy. Each reading kept something constant; each also changed something. The question is: Which reading preserved what fidelity demands should be preserved?
We might better see the point through a somewhat stylized re-creation. Imagine that we could quantify privacy; we could thus describe the change in the quantity of privacy that any change in technology might bring. (Robert Post has given an absolutely persuasive argument about why privacy is not quantifiable, but my purposes here are simply illustrative[16].) Imagine that in 1791 protecting against physical trespass protected 90 percent of personal privacy. The government could still stand on the street and listen through open windows, but the invasion presented by that threat was small, all things considered. For the most part, a regime that protected against trespass also protected privacy.
When telephones came along, however, this protection changed. A lot of private information was put out across the phone lines. Now, if tapping was not trespass, much less of private life was protected from government snooping. Rather than 90 percent being protected by the amendment, only 50 percent was protected.
Brandeis wanted to read the amendment so that it protected the 90 percent it originally protected — even though doing so required that it protect against more than simple trespass. He wanted to read it differently, we could say, so that it protected the same.
This form of argument is common in our constitutional history, and it is central to the best in our constitutional tradition[17]. It is an argument that responds to changed circumstances by proposing a reading that neutralizes those changes and preserves an original meaning. It is an argument invoked by justices on both the right and the left[18], and it is a way to keep life in a constitutional provision — to make certain that changes in the world do not change the meaning of the Constitution’s text. It is an argument, we can say, that aims at translating the protections that the Fourth Amendment gave in 1791 into the same set of protections at any time later in our history. It acknowledges that to do this the Court may have to read the amendment differently, but it is not reading the amendment differently to improve the amendment or to add to its protections. It is reading the amendment differently to accommodate the changes in protection that have resulted from changes in technology. It is translation to preserve meaning.
If there is a justice who deserves cyberspace’s praise, if there is a Supreme Court opinion that should be the model for cyber activists in the future, if there is a first chapter in the fight to protect cyberspace, it is this justice, this opinion, and this case. Brandeis gave us a model for reading the Constitution to preserve its meaning, and its values, across time and context. It is a method that recognizes what has changed and accommodates that change to preserve something of what the framers originally gave us. It is a method that translates the Constitution’s meaning across fundamentally different contexts — whether they are as temporally distant as we are from the framers or as distant as cyberspace is from real space.
But it was Taft’s opinion that became law and his narrow view of the Fourth Amendment that prevailed. It took forty years for the Supreme Court to embrace Brandeis’s picture of the Fourth Amendment — forty years before Olmstead was overruled. The case overruling it was Katz v. United States[19].
Charles Katz was suspected of transmitting gambling information to clients in other states by telephone. Federal agents recorded his half of several of his telephone calls by attaching an eavesdropping device to the outside of a public phone booth where he made his calls. Katz was convicted on the basis of this evidence, and the court of appeals upheld the conviction on the basis of Olmstead.
Harvard Law School Professor Laurence Tribe was involved in the case at the beginning of his legal career:
As a law clerk to Supreme Court Justice Potter Stewart, I found myself working on a case involving the government’s electronic surveillance of a suspected criminal in the form of a tiny device attached to the outside of a public telephone booth. Because the invasion of the suspect’s privacy was accomplished without physical trespass into a “constitutionally protected area”, the Federal Government argued, relying upon Olmstead, that there had been no “search” or “seizure” and therefore the Fourth Amendment “right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures” simply did not apply.
At first, there were only four votes to overrule Olmstead and to hold the Fourth Amendment applicable to wiretapping and electronic eavesdropping. I’m proud to say that, as a 26-year-old kid, I had at least a little bit to do with changing that number from four to seven — and with the argument, formally adopted by a seven-Justice majority in December 1967, that the Fourth Amendment “protects people, not places” (389 U.S. at 351). In that decision, Katz v. United States, the Supreme Court finally repudiated Olmstead and the many decisions that had relied upon it, reasoning that, given the role of electronic telecommunications in modern life, the First Amendment purposes of protecting free speech as well as the Fourth Amendment purposes of protecting privacy require treating as a “search” any invasion of a person’s confidential telephone communications, with or without physical trespass[20].
The Court in Katz followed Brandeis rather than Taft. It sought a reading of the Fourth Amendment that made sense of the amendment in a changed context. In the framers’ context of 1791, protecting against trespass to property was an effective way to protect against trespass to privacy, but in the Katz context of the 1960s it was not. In the 1960s much of intimate life was conducted in places where property rules did not reach (in the “ether”, for example, of the AT&T telephone network). And so a regime that made privacy hang on property did not protect privacy to the same degree that the framers had intended. Justice Stewart in Katz sought to remedy that by linking the Fourth Amendment to a more direct protection of privacy.
The link was the idea of “a reasonable expectation of privacy.” The core value, Stewart wrote, was the protection of “people, not places.[21]” Hence, the core technique should be to protect people where they have a reasonable expectation of privacy. Where this is the case, the government cannot invade that space without satisfying the requirements of the Fourth Amendment.
There is much to admire in Stewart’s opinion, at least to the extent that he is willing to fashion tools for preserving the Constitution’s meaning in changed circumstances — or again, to the extent that he attempts to translate the protections of the Fourth Amendment into a modern context. There is also much to question[22]. But we can put those questions aside for the moment and focus on one feature of the problem that is fairly uncontentious.
While lines will be hard to draw, it is at least fairly clear that the framers made a conscious choice to protect privacy. This was not an issue off the table of their original debate or a question they did not notice. And this is not the “right to privacy” that conservatives complain about in the context of the right to abortion. This is the right to be free from state intrusion into the “sanctity” of a private home. State-enforced threats to individual privacy were at the center of the movement that led to the republic. Brandeis and Stewart simply aimed to effect that choice in contexts where the earlier structure had grown ineffectual.
Translations like these are fairly straightforward. The original values chosen are fairly clear; the way in which contexts undermine the original application is easily grasped; and the readings that would restore the original values are fairly obvious. Of course, such cases often require a certain interpretive courage — a willingness to preserve interpretive fidelity by changing an interpretive practice. But at least the direction is clear, even if the means are a bit unseemly[23].
These are the easy cases. They are even easier when we are not trying to carry values from some distant past into the future but instead are simply carrying values from one context into another. When we know what values we want to preserve, we need only be creative about how to preserve them.
Cyberspace will present many such easy cases. When courts confront them, they should follow the example of Brandeis: They should translate, and they should push the Supreme Court to do likewise. Where circumstances have changed to nullify the protections of some original right, the Court should adopt a reading of the Constitution that restores that right.
But some cases will not be so easy. Sometimes translation will not be an option, and sometimes the values that translation would track are values we no longer want to preserve. Sometimes we cannot tell which values translation would select. This was the problem in Chapter 2 with the worm, which made the point about latent ambiguities. Changing contexts sometimes reveals an ambiguity latent in the original context. We must then choose between two different values, either of which could be said to be consistent with the original value. Since either way could be said to be right, we cannot say that the original context (whether now or two hundred years ago) decided the case.
Professor Tribe describes an example in a founding article in the law of cyberspace, “The Constitution in Cyberspace.[24]” Tribe sketches a method of reading the Constitution in cyberspace that aims to make the Constitution “technologically neutral.” The objective is to adopt readings (or perhaps even an amendment) that make it plain that changes in technology are not to change the Constitution’s meaning. We must always adopt readings of the Constitution that preserve its original values. When dealing with cyberspace, judges are to be translators: Different technologies are the different languages, and the aim is to find a reading of the Constitution that preserves its meaning from one world’s technology to another[25].
This is fidelity as translation. This kind of translation speaks as if it is just carrying over something that has already been said. It hides the creativity in its act; it feigns a certain polite or respectful deference. This way of reading the Constitution insists that the important political decisions have already been made and all that is required is a kind of technical adjustment. It aims to keep the piano in tune as it is moved from one concert hall to another.
But Tribe then offers an example that may make this method seem empty. The question is about the meaning of the confrontation clause of the Sixth Amendment — the defendant’s right in a criminal trial “to be confronted with the witnesses against him.” How, Tribe asks, should we read this clause today?
At the time of the founding, he argues, the technology of confrontation was simple — confrontation was two-way. If a witness confronted the accused, the accused, of necessity, confronted the witness. This was a necessity given to us by the technology of the time. But today it is possible for confrontation to be one-way — the witness confronts the accused, but the accused need not confront the witness. The question then is whether the confrontation clause requires one-way or two-way confrontation[26].
Let us grant that Tribe’s descriptions of the available technologies are correct and that the framers embraced the only confrontation clause that their technology permitted. The real question comes in step two. Now that technology allows two possibilities — one-way or two-way confrontation — which does the Constitution require?
The Court’s answer in its 1990 decision in Maryland v. Craig was clear: The Constitution requires only one-way confrontation. A confrontation clause regime that permits only one-way confrontation, at least when there are strong interests in not requiring two, is a fair translation of the original clause[27].
As a matter of political choice, I certainly like this answer. But I do not see its source. It seems to me that this is a question the framers did not decide, and a question that if presented to them might well have divided them. Given the technology of 1791, they did not have to decide between one-way and two-way confrontation; given the conflict of values at stake, it is not obvious how they would have decided it. Thus, to speak as if there were an answer here that the framers gave us is a bit misleading. The framers gave no answer here, and, in my view, no answer can be drawn from what they said.
Like the worm in Chapter 2, the confrontation clause presents a latent ambiguity[28]. Constitutional law in cyberspace will reveal many more such latent ambiguities. And these ambiguities offer us a choice: How will we go on?
Choices are not terrible. It is not a disaster if we must make a decision — as long as we are capable of it. But here is the nub of the problem as I see it. As I argue in more detail in Part IV, given the current attitudes of our courts, and our legal culture generally, constitutional choices are costly. We are bad at making them; we are not likely to get better at it soon.
When there is no answer about how to proceed — when the translation leaves open a question — we have two sorts of responses in constitutional practice. One response is passive: The court simply lets the legislature decide. This is the response that Justice Scalia presses in the context of the Fourteenth Amendment. On matters that, to the framers, were “undebatable”, the Constitution does not speak[29]. In this case, only the legislature can engage and press questions of constitutional value and thus say what the Constitution will continue to mean.
The second response is more active: The court finds a way to articulate constitutional values that were not present at the founding. The courts help spur a conversation about these fundamental values — or at least add their voice to this conversation — to focus a debate that may ultimately be resolved elsewhere. The first response is a way of doing nothing; the second is a way of exciting a dialogue about constitutional values as a means to confronting and resolving new questions[30].
My fear about cyberspace is that we will respond in the first way — that the courts, the institutions most responsible for articulating constitutional values, will stand back while issues of constitutional import are legislatively determined. My sense is that they will step back because they feel (as the balance of this book argues) that these are new questions that cyberspace has raised. Their newness will make them feel political, and when a question feels political, courts step away from resolving it.
I fear this not because I fear legislatures, but because in our day constitutional discourse at the level of the legislature is a very thin sort of discourse. The philosopher Bernard Williams has argued that because the Supreme Court has taken so central a role in the articulation of constitutional values, legislatures no longer do[31]. Whether Williams is correct or not, this much is clear: The constitutional discourse of our present Congress is far below the level at which it must be to address the questions about constitutional values that will be raised by cyberspace.
How we could reach beyond this thinness of discourse is unclear. Constitutional thought has been the domain of lawyers and judges for too long. We have been trapped by a mode of reasoning that pretends that all the important questions have already been answered, that our job now is simply to translate them for modern times. As a result, we do not quite know how to proceed when we think the answers are not already there. As nations across the world struggle to express and embrace constitutional values, we, with the oldest written constitutional tradition, have lost the practice of embracing, articulating, and deciding on constitutional values.
I return to this problem in Chapter 15. For now, my point is simply descriptive. Translation is one way to deal with the choices that cyberspace presents. It is one way of finding equivalence across contexts. But in the applications that follow, I press the question: Is the past enough? Are there choices the framers did not address? Are they choices that we must make[32]?
Harold Reeves is among the best research assistants I have had. (But alas, the law has now lost him — he’s become a priest!) Early in his second year at the University of Chicago Law School, he came to me with an idea he had for a student “comment” — an article that would be published in the law review[1]. The topic was trespass law in cyberspace — whether and how the law should protect owners of space in cyberspace from the kinds of intrusions that trespass law protects against in real space. His initial idea was simple: There should be no trespass law in cyberspace[2]. The law should grant “owners” of space in cyberspace no legal protection against invasion; they should be forced to fend for themselves.
Reeves’s idea was a bit nutty, and in the end, I think, wrong[3]. But it contained an insight that was quite brilliant, and that should be central to thinking about law in cyberspace.
The idea — much more briefly and much less elegantly than Reeves has put it — is this: The question that law should ask is, What means would bring about the most efficient set of protections for property interests in cyberspace? Two sorts of protections are possible. One is the traditional protection of law — the law defines a space where others should not enter and punishes people who enter nonetheless. The other protection is a fence, a technological device (a bit of code) that (among other things) blocks the unwanted from entering. In real space, of course, we have both — law, in the form of trespass law, and fences that supplement that law. Both cost money, and the return from each is not necessarily the same. From a social perspective, we would want the mix that provides optimal protection at the lowest cost. (In economics-speak, we would want a mix such that the marginal cost of an additional unit of protection is equivalent to the marginal benefit.)
The implication of this idea in real space is that it sometimes makes sense to shift the burden of protection to citizens rather than to the state. If, for example, a farmer wants to store some valuable seed on a remote part of his farm, it is better for him to bear the cost of fencing in the seed than to require the police to patrol the area more consistently or to increase the punishment for those they catch. The question is always one of balance between the costs and benefits of private protection and state protection.
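Stated a bit more formally, the mix question is a simple optimization. The notation below is my own illustrative gloss, not Reeves’s: L stands for units of legal protection, F for units of private fencing (code), c_L and c_F for their costs, and B(L, F) for the benefit of the protection they jointly provide.

```latex
% Stylized statement of the optimal mix of protections (illustrative only).
% L = legal protection, F = private fencing (code); c_L, c_F are their costs;
% B(L, F) is the benefit of the protection they jointly provide.
\[
  \max_{L,\;F} \quad B(L, F) \;-\; c_L(L) \;-\; c_F(F)
\]
% At an interior optimum, each modality is used just to the point where its
% marginal benefit equals its marginal cost:
\[
  \frac{\partial B}{\partial L} = c_L'(L),
  \qquad
  \frac{\partial B}{\partial F} = c_F'(F).
\]
% If law is relatively expensive at the margin (as Reeves argues it is in
% cyberspace), the optimal mix shifts toward private fences.
```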
Reeves’s insight about cyberspace follows the same line. The optimal protection for spaces in cyberspace is a mix between public law and private fences. The question to ask in determining the mix is which protection, on the margin, costs less. Reeves argues that the costs of law in this context are extremely high — in part because of the costs of enforcement, but also because it is hard for the law to distinguish between legitimate and illegitimate uses of cyberspaces. There are many “agents” that might “use” the space of cyberspace: web spiders, which gather data for web search engines; browsers, who are searching across the Net for stuff to see; hackers (of the good sort), who are testing the locks of spaces to see that they are locked; and hackers (of the bad sort), who are breaking and entering to steal. It is hard, ex ante, for the law to know which agent is using the space legitimately and which is not. Legitimacy depends on the intention of the person granting access.
So that led Reeves to his idea: Since the intent of the “owner” is so crucial here, and since the fences of cyberspace can be made to reflect that intent cheaply, it is best to put all the incentive on the owner to define access as he wishes. The right to browse should be the norm, and the burden to lock doors should be placed on the owner[4].
Now put Reeves’s argument aside, and think for a second about something that will seem completely different but is very much the same idea. Think about “theft” and the protections that we have against it.
• I have a stack of firewood behind my house. No one steals it. If I left my bike out overnight, it would be gone.
• A friend told me that, in a favorite beach town, the city used to find it impossible to plant flowers — they would immediately be picked. But now, he proudly reports, after a long “community spirit” campaign, the flowers are no longer picked.
• There are special laws about the theft of automobiles, planes, and boats. There are no special laws about the theft of skyscrapers. Cars, planes, and boats need protection. Skyscrapers pretty much take care of themselves.
Many things protect property against theft — differently. The market protects my firewood (it is cheaper to buy your own than it is to haul mine away); the market is a special threat to my bike (which if taken is easily sold). Norms sometimes protect flowers in a park; sometimes they do not. Nature sometimes conspires with thieves (cars, planes, and boats) and sometimes against them (skyscrapers).
These protections are not fixed. I could lock my bike and thereby use real-space code to make it harder to steal. There could be a shortage of firewood; demand would increase, making it harder to protect. Public campaigns about civic beauty might stop flower theft; selecting a distinctive flower might do the same. Sophisticated locks might make stolen cars useless; sophisticated bank fraud might make skyscrapers vulnerable. The point is not that protections are given, or unchangeable, but that they are multiplied and their modalities different.
Property is protected by the sum of the different protections that law, norms, the market, and real-space code yield. This is the implication of the argument made in Chapter 7. From the point of view of the state, we need law only when the other three modalities leave property vulnerable. From the point of view of the citizen, real-space code (such as locks) is needed when laws and norms alone do not protect enough. Understanding how property is protected means understanding how these different protections work together.
Reeves’s idea and these reflections on firewood and skyscrapers point to the different ways that law might protect “property” and suggest the range of kinds of property that law might try to protect. They also invite a question that has been asked by Justice Stephen Breyer and many others: Should law protect some kinds of property — in particular, intellectual property — at all[5]?
Among the kinds of property law might protect, my focus in this chapter will be on the property protected by copyright[6]. Of all the different types of property, this type is said to be the most vulnerable to the changes that cyberspace will bring. Many believe that intellectual property cannot be protected in cyberspace. And in the terms that I’ve sketched, we can begin to see why one might think this, but we will soon see that this thought must be wrong.
Roughly put, copyright gives a copyright holder certain exclusive rights over the work, including, most famously, the exclusive right to copy the work. I have a copyright in this book. That means, among other rights, and subject to some important exceptions, you cannot copy this book without my permission. The right is protected to the extent that laws (and norms) support it, and it is threatened to the extent that technology makes it easy to copy. Strengthen the law while holding technology constant, and the right is stronger. Proliferate copying technology while holding the law constant, and the right is weaker.
In this sense, copyright has always been at war with technology. Before the printing press, there was not much need to protect an author’s interest in his creative work. Copying was so expensive that nature itself protected that interest. But as the cost of copying decreased, and the spread of technologies for copying increased, the threat to the author’s control increased. As each generation has delivered a technology better than the last, the ability of the copyright holder to protect her intellectual property has been weakened.
Until recently, the law’s response to these changes has been measured and gradual. When technologies to record and reproduce sound emerged at the turn of the last century, composers were threatened by them. The law responded by giving composers a new, but limited, right to profit from recordings. When radio began broadcasting music, the composers were held to be entitled to compensation for the public performance of their work, but performers were not compensated for the “performance” of their recordings. Congress decided not to remedy that problem. When cable television started rebroadcasting television broadcasts, the copyright holders in the original broadcasts complained their work was being exploited without compensation. Congress responded by granting the copyright holders a new, but limited, right to profit from the rebroadcasts. When the VCR made it simple to record copyrighted content from off the air, copyright holders cried “piracy.” Congress decided not to respond to that complaint. Sometimes the change in technology inspired Congress to create new rights, and sometimes not. But throughout this history, new technologies have been embraced as they have enabled the spread of culture.
During the same period, norms about copyrighted content also evolved. But the single, defining feature of these norms can perhaps be summarized like this: that a consumer could do with the copyrighted content that he legally owned anything he wanted to do, without ever triggering the law of copyright. This norm was true almost by definition until 1909, since before then, the law didn’t regulate “copies.” Any use the consumer made of copyrighted content was therefore highly unlikely to trigger any of the exclusive rights of copyright. After 1909, though the law technically regulated “copies”, the technologies to make copies were broadly available. There was a struggle about Xerox machines, which forced a bit of reform[7], but the first real conflict that copyright law had with consumers happened when cassette tapes made it easy to copy recorded music. Some of that copying was for the purpose of making a “mixed tape”, and some was simply for the purpose of avoiding the need to buy the original recording. After many years of debate, Congress decided not to legislate a ban on home taping. Instead, in the Audio Home Recording Act, Congress signaled fairly clear exemptions from copyright for such consumer activity. These changes reinforced the norm among consumers that they were legally free to do whatever they wanted with copyrighted work. Given the technologies most consumers had access to, the stuff they wanted to do either did not trigger copyright (e.g., resell their books to a used bookstore), or if it did, the law was modified to protect it (e.g., cassette tapes).
Against the background of these gradual changes in the law, along with the practical norm that, in the main, the law didn’t reach consumers, the changes of digital technology were a considerable shock. First, from the perspective of technology, digital technologies, unlike their analog predecessors, enabled perfect copies of an original work. The return from copying was therefore greater. Second, also from the perspective of technology, the digital technology of the Internet enabled content to be freely (and effectively anonymously) distributed across the Internet. The availability of copies was therefore greater. Third, from the perspective of norms, consumers who had internalized the norm that they could do with “their content” whatever they wanted used these new digital tools to make “their content” available widely on the Internet. Companies such as Napster helped fuel this behavior, but the practice existed both before and after Napster. Fourth, from the perspective of law, because the base technology of the Internet didn’t reveal anything about the nature of the content being shared, or about who was doing the sharing, there was little the law could do to stop this massive “sharing” of content. And fifth, from the perspective of copyright holders, digital technologies and the Internet were the perfect storm for their business model: If they made money by controlling the distribution of “copies” of copyrighted content, you could well understand why they viewed the Internet as a grave threat.
Very quickly, and quite early on, the content industry responded to this threat. Its first line of defense was a more aggressive regime of regulation, because, the predictions of cyberspace mavens notwithstanding, not everyone was willing to concede that copyright law was dead. Intellectual property lawyers and interest groups pushed early on to have law shore up the protections of intellectual property that cyberspace seemed certain to erase.
The initial response to this push was a White Paper produced by the Commerce Department in 1995. The paper outlined a series of modifications aimed, it said, at restoring “balance” in intellectual property law. Entitled “Intellectual Property and the National Information Infrastructure”, the report sought to restate existing intellectual property law in terms that anyone could understand, as well as to recommend changes in the law in response to the changes the Net would bring. But as scholars quickly pointed out, the first part was a bust[8]. The report no more “restated” existing law than Soviet historians “retold” stories of Stalin’s administration. The restatement had a tilt, very definitely in the direction of increased intellectual property protection, but it pretended that its tilt was the natural lay of the land.
For our purposes, however, it is the recommendations that were most significant. The government proposed four responses to the threat presented by cyberspace. In the terms of Chapter 7, these responses should be familiar.
The first response was traditional. The government proposed changes in the law of copyright to “clarify” the rights that it was to protect[9]. These changes were intended to better define the rights granted under intellectual property law and to further support these rights with clarified (and possibly greater) legal penalties for their violation.
The second response addressed norms, specifically copying norms. The report recommended increased educational efforts, both in schools and among the general public, about the nature of intellectual property and the importance of protecting it. In the terms of Chapter 7, this is the use of law to change norms so that norms will better support the protection of intellectual property. It is an indirect regulation of behavior by direct regulation of norms.
The third and fourth responses mixed technology and the market. The report called for legal support — through financial subsidies and special legal protection — of “copyright management schemes.” These “schemes” were simply technologies that would make it easier to control access to and use of copyrighted material. We will explore these “schemes” at some length later in this chapter, but I mention them now as another example of indirect regulation — using the market to subsidize the development of a certain software tool, and using law to regulate the properties of other software tools. Copyright management systems would be supported by government funding and by the threat of criminal sanctions for anyone deploying software to crack them[10].
Congress followed the recommendations of the 1995 White Paper in some respects. The most important was the enactment of the Digital Millennium Copyright Act in 1998. That statute implemented directly the recommendation that “technological protection measures” be protected by law. Code that someone implements to control either access to or use of a copyrighted work got special legal protection under the DMCA: Circumvention of that code, subject to a few important exceptions, constituted a violation of the law.
We will return to the DMCA later. The point just now, however, is to recognize something important about the presumption underlying the White Paper. The 1995 package of proposals was a scattershot of techniques — some changes in law, some support for changing norms, and lots of support for changing the code of cyberspace to make it better able to protect intellectual property. Perhaps nothing better than this could have been expected in 1995 — the law promised a balance of responses to deal with the shifting balance brought on by cyberspace.
Balance is attractive, and moderation seems right. But something is missing from this approach. The White Paper proceeds as if the problem of protecting intellectual property in cyberspace was just like the problem of protecting intellectual property in real space. It proceeds as if the four constraints would operate in the same proportions as in real space, as if nothing fundamental had changed.
But something fundamental has changed: the role that code plays in the protection of intellectual property. Code can, and increasingly will, displace law as the primary defense of intellectual property in cyberspace. Private fences, not public law.
The White Paper did not see this. Built into its scattershot of ideas is one that is crucial to its approach but fundamentally incorrect — the idea that the nature of cyberspace is anarchy. The White Paper promises to strengthen law in every area it can. But it approaches the question like a ship battening down for a storm: Whatever happens, the threat to copyright is real, damage will be done, and the best we can do is ride it out.
This is fundamentally wrong. We are not entering a time when copyright is more threatened than it is in real space. We are instead entering a time when copyright is more effectively protected than at any time since Gutenberg. The power to regulate access to and use of copyrighted material is about to be perfected. Whatever the mavens of the mid-1990s may have thought, cyberspace is about to give holders of copyrighted property the biggest gift of protection they have ever known.
In such an age, the real question for law is not, how can law aid in that protection? but rather, is the protection too great? The mavens were right when they predicted that cyberspace will teach us that everything we thought about copyright was wrong[11]. But the lesson in the future will be that copyright is protected far too well. The problem will center not on copy-right but on copy-duty — the duty of owners of protected property to make that property accessible.
That’s a big claim. To see it, however, and to see the consequences it entails, we need to consider three examples. The first is a vision of a researcher from Xerox PARC (appropriately enough), Mark Stefik, and his idea of “trusted systems.”[12] The second is an implication of a world dominated by trusted systems. The third is an unreckoned cost of the path we are now on to “protect intellectual property.” The examples will throw into relief the threat that these changes present for values that our tradition considers fundamental. They should force us to make a choice about those values, and about their place in our future.
It all depends on whether you really understand the idea of trusted systems. If you don’t understand them, then this whole approach to commerce and digital publishing is utterly unthinkable. If you do understand them, then it all follows easily.
In what we can call the first generation of digital technologies, content owners were unable to control who copied what. If you had a copy of a copyrighted photo rendered in a graphics file, you could make unlimited copies of that file with no effect on the original. When you made the one-hundredth copy, nothing would indicate that it was the one-hundredth copy rather than the first. And as we’ve described again and again, in the original code of the Internet, there was nothing to regulate how or to whom copyrighted content was distributed. The “copy” function, as the coders who built computers and networks developed it, simply copied — it did not copy subject to specified permissions.
This character to the function “copy” was not unique to cyberspace. We have seen a technology that presented the same problem, and I’ve already described how a solution was subsequently built into the technology[13]. Digital Audio Tape (DAT) technology was thought to be a threat to copyright owners. A number of solutions to this threat were proposed. Some people argued for higher penalties for illegal copying of tapes (direct regulation by law). Some, such as Richard Stallman, argued for a tax on blank tapes, with the proceeds compensating copyright holders (indirect regulation of the market by law). Some argued for better education to stop illegal copies of tapes (indirect regulation of norms by law). But some argued for a change in the code of DAT machines that would block unlimited perfect copying.
The tax and code regulators won. In late 1992, as a compromise between the technology and content industries, Congress passed the Audio Home Recording Act. The Act first imposed a tax on both recorders and blank DAT media, with the revenues to be used to compensate copyright holders for the expected copyright infringement enabled by the technology. But more interestingly, the Act required manufacturers of DAT technology to include a Serial Copy Management System, which would limit the ability of DAT technology to copy. That limit was effected through a code inserted in copies made using DAT technology. From an original, the technology would always permit a copy. But from a copy made on a DAT recorder, no further digital copy could be made. (An analog copy could be made, thus degrading the quality of the copy, but not a perfect digital copy.) The technology was thus designed to break the “copy” function under certain conditions, so as to indirectly protect copyright owners. The net effect of these two changes was to minimize any harm from the technology, as well as to limit the functionality of the technology where it would be expected that functionality would encourage the violation of copyright. (Many think the net effect of this regulation also killed DAT technology.)
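The serial-copy rule is simple enough to sketch in code. What follows is only an illustrative model of the one-generation limit described above; the real SCMS encodes copy status in bits carried with the recording itself, not in software objects like these, and every name here is hypothetical.

```python
# Illustrative model of the Serial Copy Management System (SCMS) rule.
# Hypothetical names throughout: the real system marks copy status in
# subcode bits on the tape, not in Python objects.

class Recording:
    def __init__(self, title, first_generation_copy=False):
        self.title = title
        # True if this recording was itself digitally copied from an original.
        self.first_generation_copy = first_generation_copy


def digital_copy(source):
    """Make a digital copy, enforcing the one-generation limit."""
    if source.first_generation_copy:
        # The recorder's code refuses a second-generation digital copy;
        # only a degraded analog copy would remain possible.
        raise PermissionError("SCMS: no digital copy of a copy")
    # Copying from an original is permitted, but the copy is marked so
    # that it cannot itself be digitally copied.
    return Recording(source.title, first_generation_copy=True)


original = Recording("Live concert")
first = digital_copy(original)   # allowed
try:
    digital_copy(first)          # blocked by the code itself, not by law
except PermissionError as err:
    print(err)
```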
Something like the same idea animated Stefik’s vision[14]. He was not keen to make the quality of copies decrease. Rather, his objective was to make it possible to track and control the copies of digital content that are made[15].
Think of the proposal like this. Today, when you buy a book, you may do any number of things with it. You can read it once or one hundred times. You can lend it to a friend. You can photocopy pages in it or scan it into your computer. You can burn it, use it as a paperweight, or sell it. You can store it on your shelf and never once open it.
Some of these things you can do because the law gives you the right to do them — you can sell the book, for example, because the copyright law explicitly limits the copyright owner’s right to control your use of the physical book after the “first sale.” Other things you can do because there is no effective way to stop you. A book seller might sell you the book at one price if you promise to read it once, and at a different price if you want to read it one hundred times, but there is no way for the seller to know whether you have obeyed the contract. In principle, the seller could sell a police officer with each book to follow you around and make sure you use the book as you promised, but the costs of this control would plainly exceed any benefit.
But what if each of these rights could be controlled, and each unbundled and sold separately? What if, that is, the software itself could regulate whether you read the book once or one hundred times; whether you could cut and paste from it or simply read it without copying; whether you could send it as an attached document to a friend or simply keep it on your machine; whether you could delete it or not; whether you could use it in another work, for another purpose, or not; or whether you could simply have it on your shelf or have it and use it as well?
Stefik describes a network that makes such unbundling of rights possible. He describes an architecture that would allow owners of copyrighted materials to sell access to those materials on the terms they want and would enforce those contracts.
The details of the system are not important here (it builds on the encryption architecture I described in Chapter 4)[16], but its general idea is easy enough to describe. As the Net is now, basic functions like copying and access are crudely regulated in an all-or-nothing fashion. You generally have the right to copy or not, to gain access or not.
But a more sophisticated system of rights could be built into the Net — not into a different Net, but on top of the existing Net. This system would function by discriminating in the intercourse it has with other systems. A system that controlled access in this more fine-grained way would grant access to its resources only to another system that controlled access in the same way. A hierarchy of systems would develop, and copyrighted material would be traded only among systems that properly controlled access.
In such a world, then, you could get access, say, to the New York Times and pay a different price depending on how much of it you read. The Times could determine how much you read, whether you could copy portions of the newspaper, whether you could save it on your hard disk, and so on. But if the code you used to access the Times site did not enable the control the Times demanded, then the Times would not let you onto its site at all. In short, systems would exchange information only with others that could be trusted, and the protocols of trust would be built into the architectures of the systems.
Stefik calls this “trusted systems”, and the name evokes a helpful analogy. Think of bonded couriers. Sometimes you want to mail a letter with something particularly valuable in it. You could simply give it to the post office, but the post office is not a terribly reliable system; it has relatively little control over its employees, and theft and loss are not uncommon. So instead of going to the post office, you could give your letter to a bonded courier. Bonded couriers are insured, and the insurance is a cost that constrains them to be reliable. That reliability, in turn, makes it possible for senders of valuable material to be assured about using their services. As Stefik writes:
with trusted systems, a substantial part of the enforcement of a digital contract is carried out by the trusted system. The consumer does not have the option of disregarding a digital contract by, for example, making unauthorized copies of a work. A trusted system refuses to exercise a right that is not sanctioned by the digital contract[17].
This is what a structure of trusted systems does for owners of intellectual property. It is a bonded courier that takes the thing of value and controls access to and use of it according to the orders given by the principal.
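To make the gating concrete, here is a rough sketch of the kind of discrimination such a system performs. The names and controls are invented for illustration; Stefik's actual architecture is far richer. A publisher's system releases content only to a reader system that declares it will enforce every control the publisher demands, and the content travels together with the license terms the reader has promised to enforce.

```python
# Toy sketch of "trusted system" gating; names and controls are invented.
# A publisher releases content only to reader systems that promise to
# enforce every control the publisher requires.

REQUIRED_CONTROLS = {"no_copy", "no_print", "expire_after_30_days"}

class ReaderSystem:
    def __init__(self, name, enforced_controls):
        self.name = name
        self.enforced_controls = set(enforced_controls)

    def can_enforce(self, required):
        return required <= self.enforced_controls   # subset test

def grant_access(reader, required=REQUIRED_CONTROLS):
    if reader.can_enforce(required):
        # The content travels with the license terms the reader enforces.
        return {"content": "<encrypted article>", "license": sorted(required)}
    return None   # untrusted systems receive nothing at all

compliant = ReaderSystem("compliant-reader",
                         {"no_copy", "no_print", "expire_after_30_days", "watermark"})
ordinary = ReaderSystem("plain-browser", {"no_print"})

print(grant_access(compliant))   # content plus its license terms
print(grant_access(ordinary))    # None: access is refused entirely
```

Notice that an untrusted system is not punished after the fact; it simply never receives the work.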
Imagine for a moment that such a structure emerged generally in cyberspace. How would we then think about copyright law?
An important point about copyright law is that, though designed in part to protect authors, the control it was designed to create was never to be perfect. As the Supreme Court noted, copyright “protection has never accorded the copyright owner complete control over all possible uses of his work.[18]” Thus, the law grants only particular exclusive rights, and those rights are subject to important limitations, such as “fair use”, limited terms, and the first sale doctrine. The law threatened to punish violators of copyright laws — and it was this threat that induced a fairly high proportion of people to comply — but the law was never designed to simply do the author’s bidding. It had public purposes as well as the author’s interest in mind.
Trusted systems provide authors with the same sort of protection. Because authors can restrict unauthorized use of their material, they can extract money in exchange for access. Trusted systems thus achieve what copyright law aims to achieve, but they can achieve this protection without the law doing the restricting. They permit a much more fine-grained control over access to and use of protected material than the law permits, and they can do so without the aid of the law.
What copyright seeks to do using the threat of law and the push of norms, trusted systems do through the code. Copyright orders others to respect the rights of the copyright holder before using his property; trusted systems give access only if rights are respected in the first place. The controls needed to regulate this access are built into the systems, and no users (except hackers) have a choice about whether to obey them. The code complements the law by codifying the rules, making them more efficient.
Trusted systems in this sense are a privatized alternative to copyright law. They need not be exclusive; there is no reason not to use both law and trusted systems. Nevertheless, the code is effectively doing the work that the law was designed to do. It implements the law’s protection, through code, far more effectively than the law did.
What could be wrong with this? We do not worry when people put double bolts on their doors to supplement the work of the neighborhood cop. We do not worry when they lock their cars and take their keys. It is not an offense to protect yourself rather than rely on the state. Indeed, in some contexts it is a virtue. Andrew Jackson’s mother, for example, told him, “Never tell a lie, nor take what is not your own, nor sue anybody for slander, assault and battery. Always settle them cases yourself.[19]” Self-sufficiency is strength and going to the law a sign of weakness.
There are two steps to answering this question. The first rehearses a familiar but forgotten point about the nature of “property”; the second makes a less familiar, but central, point about the nature of intellectual property. Together they suggest why perfect control is not the control that law has given owners of intellectual property. And together they suggest the potential problem that copyright law in cyberspace will create.
The realists in American legal history (circa 1890–1930) were scholars who (in part) emphasized the role of the state in what was called “private law.[20]” At the time they wrote, it was the “private” in private law that got all the emphasis. Forgotten was the “law”, as if “property” and “contract” existed independent of the state.
The realists’ aim was to undermine this view. Contract and property law, they argued, gave private parties power[21]. If you breach a contract with me, I can have the court order the sheriff to force you to pay; the contract gives me access to the state power of the sheriff. If your contract with your employer says that it may dismiss you for being late, then the police can be called in to eject you if you refuse to leave. If your lease forbids you to have cats, then the landlord can use the power of the courts to evict you if you do not get rid of the cats. These are all instances where contract and property, however grounded in private action, give a private person an entitlement to the state.
No doubt this power is justified in many cases; to call it “law” is not to call it unjust. The greatest prosperity in history has been created by a system in which private parties can order their lives freely through contract and property. But whether justified in the main or not, the realists argued that the contours of this “law” should be architected to benefit society[22].
This is not communism. It is not an attack on private property, and it is not to say that the state creates wealth (put your Ayn Rand away). These are claims about the relationship between private law and public law, and they should be uncontroversial.
Private law creates private rights to the extent that these private rights serve some collective good. If a private right is harmful to a collective good, then the state has no reason to create it. The state’s interests are general, not particular. It has a reason to create rights when those rights serve a common, rather than particular, end.
The institution of private property is an application of this point. The state has an interest in defining rights to private property because private property helps produce a general, and powerful, prosperity. It is a system for ordering economic relations that greatly benefits all members of society. No other system that we have yet devised better orders economic relations. No other system, some believe, could[23].
But even with ordinary property — your car, or your house — property rights are never absolute. There is no property that does not have to yield at some point to the interests of the state. Your land may be taken to build a highway, your car seized to carry an accident victim to the hospital, your driveway crossed by the postman, your house inspected by health inspectors. In countless ways, the system of property we call “private property” is a system that balances exclusive control by the individual against certain common state ends. When the state’s ends conflict with the individual’s control, it is the individual’s control that yields.
This balance, the realists argued, is a feature of all property. But it is an especially important feature of intellectual property. The balance of rights with intellectual property differs from the balance with ordinary real or personal property. “Information”, as Boyle puts it, “is different.[24]” And a very obvious feature of intellectual property shows why.
When property law gives me the exclusive right to use my house, there’s a very good reason for it. If you used my house while I did, I would have less to use. When the law gives me an exclusive right to my apple, that too makes sense. If you eat my apple, then I cannot. Your use of my property ordinarily interferes with my use of my property. Your consumption reduces mine.
The law has a good reason, then, to give me an exclusive right over my personal and real property. If it did not, I would have little reason to work to produce it. Or if I did work to produce it, I would then spend a great deal of my time trying to keep you away. It is better for everyone, the argument goes, if I have an exclusive right to my (rightly acquired) property, because then I have an incentive to produce it and not waste all my time trying to defend it[25].
Things are different with intellectual property. If you “take” my idea, I still have it. If I tell you an idea, you have not deprived me of it[26]. An unavoidable feature of intellectual property is that its consumption, as the economists like to put it, is “nonrivalrous.” Your consumption does not lessen mine. If I write a song, you can sing it without making it impossible for me to sing it. If I write a book, you can read a copy of it (please do) without disabling me from reading another copy of it. Ideas, at their core, can be shared with no reduction in the amount the “owner” can consume. This difference is fundamental, and it has been understood since the founding.
Jefferson put it better than I:
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density at any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property[27].
Technically, Jefferson presents two concepts: One is the possibility of excluding others from using or getting access to an idea, which he defines as “action of the thinking power . . . which an individual may exclusively possess as long as he keeps it to himself.” This is the question whether ideas are “excludable”; Jefferson affirms that an idea is “excludable” until “the moment it is divulged.”
The other concept is whether my use of a divulged idea lessens your use of the same idea. This is the question of whether divulged ideas are “rivalrous.[28]” Again, Jefferson suggests that, once they are divulged, ideas are not “rivalrous.” Jefferson believes that the act of divulging/sharing has made ideas both nonexcludable and nonrivalrous, and that there is little that man can do to change this fact[29].
In fact, shared ideas are both nonexcludable and nonrivalrous. I can exclude people from my secret ideas or writings — I can keep them secret, or build fences to keep people out. How easily, or how effectively, I can do so is a technical question. It depends on the architecture of protection that a given context provides. But given the proper technology, there is no doubt that I can keep people out. What I cannot do is to exclude people from my shared ideas or writings simply because they are not my secrets anymore.
My shared ideas are “nonrivalrous” goods, too. No technology (that we know of) will erase an idea from your head as it passes into my head. My knowing what you know does not lessen your knowing the same thing. That fact is a given in the world, and it makes intellectual property different. Unlike apples, and unlike houses, once shared, ideas are something I can take from you without diminishing what you have.
It does not follow, however, that there is no need for property rights over expressions or inventions[30]. Just because you can have what I have without lessening what I have does not mean that the state has no reason to create rights over ideas, or over the expression of ideas.
If a novelist cannot stop you from copying (rather than buying) her book, then she may have very little incentive to produce more books. She may have as much as she had before you took the work she produced, but if you take it without paying, she has no monetary incentive to produce more.
Now, of course, the incentives an author faces are quite complex, and it is not possible to make simple generalizations[31]. But generalizations do not have to be perfect to make a point: Even if some authors write for free, it is still the case that the law needs some intellectual property rights. If the law did not protect authorship at all, there would be fewer authors. The law has a reason to protect the rights of authors, at least insofar as doing so gives them an incentive to produce. With ordinary property, the law must both create an incentive to produce and protect the right of possession; with intellectual property, the law need only create the incentive to produce.
This is the difference between these two very different kinds of property, and this difference fundamentally affects the nature of intellectual property law. While we protect real and personal property to protect the owner from harm and give the owner an incentive, we protect intellectual property to ensure that we create a sufficient incentive to produce it. “Sufficient incentive”, however, is something less than “perfect control.” And in turn we can say that the ideal protections of intellectual property law are something less than the ideal protections for ordinary or real property.
This difference between the nature of intellectual property and ordinary property was recognized by our Constitution, which in article I, section 8, clause 8, gives Congress the power “to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”
Note the special structure of this clause. First, it sets forth the precise reason for the power — to promote the progress of science and useful arts. It is for those reasons, and those reasons only, that Congress may grant an exclusive right. And second, note the special temporality of this right: “for limited Times.” The Constitution does not allow Congress to grant authors and inventors permanent exclusive rights to their writings and discoveries, only limited rights. (Though apparently those limited times can be extended[32].) It does not give Congress the power to give them a perpetual “property” in their writings and discoveries, only an exclusive right over them for a limited time.
The Constitution’s protection for intellectual property then is fundamentally different from its protection of ordinary property. I’ve said that all property is granted subject to the limit of the public good. But even so, if the government decided to nationalize all property after a fifteen-year term of ownership, the Constitution would require it to compensate the owners. By contrast, if Congress set the copyright term at fifteen years, there would be no claim that the government pay compensation after the fifteen years were up. Intellectual property rights are a monopoly that the state gives to producers of intellectual property in exchange for their production of it. After a limited time, the product of their work becomes the public’s to use as it wants. This is Communism at the core of our Constitution’s protection of intellectual property. This “property” is not property in the ordinary sense of that term.
And this is true for reasons better than tradition as well. Economists have long understood that granting property rights over information is dangerous (to say the least)[33]. This is not because of leftist leanings among economists; it is because economists are consequentialists, and their objective in granting any property right is simply to facilitate production. But there is no way to know, in principle, whether increasing or decreasing the rights granted under intellectual property law will lead to an increase in the production of intellectual property. The reasons are complex, but the point is not: Increasing intellectual property’s protection is not guaranteed to “promote the progress of science and useful arts” — indeed, often doing so will stifle it.
The balance that intellectual property law traditionally strikes is between the protections granted the author and the public use or access granted everyone else. The aim is to give the author sufficient incentive to produce. Built into the law of intellectual property are limits on the power of the author to control use of the ideas she has created[34].
A classic example of these limits and of this public use dimension is the right of “fair use.” Fair use is the right to use copyrighted material, regardless of the wishes of the owner of that material. A copyright gives the owner certain rights; fair use is a limitation on those rights. It gives you the right to criticize this book, cut sections from it, and reproduce them in an article attacking me. In these ways and in others, you have the right to use this book independent of how I say it should be used.
Fair use does not necessarily work against the author’s interest — or more accurately, fair use does not necessarily work against the interests of authors as a class. When fair use protects the right of reviewers to criticize books without the permission of authors, then more critics criticize. And the more criticism there is, the better the information is about what books people should buy. The better the information is about what to buy, the more people will buy it. Authors as a whole benefit from the system of fair use, even if particular authors do not.
The law of copyright is filled with such rules. Another is the “first sale” doctrine. If you buy this book, you can sell it to someone else free of any constraint I might impose on you[35]. This doctrine differs from the tradition in, for example, Europe, where there are “moral rights” that give the creator power over subsequent use[36]. I’ve already mentioned another example — limited term. The creator cannot extend the term for which the law will provide protection (even if Congress can); that is fixed by the statute and runs out when the statute runs out.
Taken together, these rules give the creator significant — but not perfect — control over the use of what he produces. They give the public some access, but not complete access. They are balanced differently from the balance the law strikes for ordinary property — by design. They are constitutionally structured to help build an intellectual and cultural commons.
The law strikes this balance. It is not a balance that would exist in nature. Without the law, and before cyberspace, authors would have very little protection; with the law, they have significant, but not perfect, protection. The law gives authors something they otherwise would not have in exchange for limits on their rights, secured to benefit the intellectual commons as a whole.
So copyright law strikes a balance between control and access. What about that balance when code is the law? Should we expect that any of the limits will remain? Should we expect code to mirror the limits that the law imposes? Fair use? Limited term? Would private code build these “bugs” into its protections?
The point should be obvious: When intellectual property is protected by code, nothing requires that the same balance be struck. Nothing requires the owner to grant the right of fair use. She might allow individuals to browse for free, as a bookstore does, but she might not. Whether she grants this right depends on whether it profits her. Fair use becomes contingent upon private gain. More importantly, it becomes contingent upon the private gain of authors individually rather than authors as a class.
Thus, as privatized law, trusted systems regulate in the same domain that copyright law regulates. But unlike copyright law, they do not guarantee the same limits on copyright’s protection. Trusted systems give the producer maximum control over the uses of copyrighted work — admittedly at a cheaper cost, thus perhaps permitting many more authors to publish. But they give authors almost perfect control in an area in which the law did not. Code thus displaces the balance that copyright law strikes by displacing the limits the law imposes. As Daniel Benloliel puts it,
Decentralized content providers are . . . privatizing the enforcement authority with strict technological standards, under which individuals would be banned from access and use of particular digital content in a way that might override legitimate fair use[37].
So far my description simply sets law against code: the law of copyright either complemented by, or in conflict with, private code. You may not yet be convinced that we should consider this a conflict, because it has always been the case that you can exercise more control over a copyrighted work than the law gives you the right to exercise over the copyright. For example, if you own a painting that is in the public domain, there’s no requirement for you to let anyone see it. You could lock it in your bedroom and never let anyone see it. In a sense, you’ve thus deprived the world of the value of this painting being in the “public domain.” But no one has ever thought that this interaction between the law of trespass and copyright has created any important conflict. So why should anyone be troubled if copyright owners use code to lock up their content beyond the balance the law of copyright strikes?
If this is where you’re stuck, then let me add one more part to the story. As I mentioned above, the DMCA contains an anti-circumvention provision. That part of the law forbids the circumvention of some technical protection measures; it forbids the development of tools to circumvent technical protection as well. Most important, it forbids these circumventions regardless of the purpose of the circumvention. Thus, if the underlying use you would make of a copyrighted work — if you could get access to it — is a “fair use”, the DMCA still makes it an offense to circumvent technical protections to get access to it. Thus one part of the law of copyright grants “fair use”, while another part of the law removes at least some of that fair use liberty wherever the fair use has been foreclosed by technical means[38].
But so what, the skeptic will ask. What the law gives, the law can take away, can’t it?
No it can’t, and that’s the point. As the Supreme Court has indicated, copyright law is consistent with the First Amendment only because of certain important limitations built into the law. Removing those limitations would then raise important First Amendment questions. Thus, when the law acts with code to remove the law’s protection for fair use, this should raise an important question — at least for those concerned about maintaining the balance that copyright law strikes.
But maybe this conflict is just temporary. Couldn’t the code be changed to protect fair use?
The answer to that hopeful (and again, hopeful because my main point is about whether incentives to protect fair use exist) question is no, not directly. Fair use inherently requires a judgment about purpose, or intent. That judgment is beyond the ken of even the best computers. Indirectly, however, fair use could be protected. A system that allowed an individual to unlock the trusted system if he claimed the use was fair (perhaps marking the used work with a tag to make it possible to trace the use back to the user) could protect fair use. Or as Stefik describes, a system that granted users a “fair use license”, allowing them to unlock the content and use insurance backing the license to pay for any misuse, might also protect fair use[39]. But these alternatives again rely on structures beyond code. With the code itself, there is no way adequately to police fair use.
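To see what the first of these alternatives might look like, here is a minimal sketch, with invented names and structure: the system releases the work when the user asserts fair use, but stamps the released copy with a tag that could later trace a misuse back to the person who made the claim. The judgment about whether the use really is fair stays with people and courts; the code records only the claim.

```python
# Sketch of a "fair use unlock": the user asserts fair use, the system
# releases the work but tags the copy so misuse could be traced back.
# Function and field names are invented for illustration.

import hashlib
from datetime import datetime, timezone

def fair_use_unlock(work_id: str, user_id: str, stated_purpose: str) -> dict:
    # The code cannot judge whether the purpose is genuinely fair use;
    # it only records the claim and marks the released copy.
    tag_source = f"{work_id}|{user_id}|{stated_purpose}"
    trace_tag = hashlib.sha256(tag_source.encode()).hexdigest()[:16]
    return {
        "work": work_id,
        "unlocked_at": datetime.now(timezone.utc).isoformat(),
        "claimed_purpose": stated_purpose,
        "trace_tag": trace_tag,   # embedded in the unlocked copy
    }

print(fair_use_unlock("anthology-2006", "user-8841", "quotation in a book review"))
```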
Some will respond that I am late to the party: Copyright law is already being displaced, if not by code then by the private law of contract. Through the use of click-wrap, or shrink-wrap, licenses, authors are increasingly demanding that purchasers, or licensees, waive rights that copyright law gave them. If copyright law gives the right to reverse-engineer, then these contracts might extract a promise not to reverse-engineer. If copyright law gives the right to dispose of the book however the purchaser wants after the first sale, then a contract might require that the user waive that right. And if these terms in the contract attached to every copyright work are enforceable merely by being “attached” and “knowable”, then already we have the ability through contract law to rewrite the balance that copyright law creates.
I agree that this race to privatize copyright law through contract is already far along, fueled in particular by decisions such as Judge Frank Easterbrook’s in ProCD v. Zeidenberg. But contracts are not as bad as code. Contracts are a form of law. If a term of a contract is inconsistent with a value of copyright law, you can refuse to obey it and let the other side get a court to enforce it. In some cases, courts have expressly refused to follow a contract term precisely because it is inconsistent with a copyright law value[40]. The ultimate power of a contract depends upon the decision by a court to enforce the contract or not. Although courts today are relatively eager to find ways to enforce these contracts, there is at least hope that if the other side makes its case very clear, courts could shift direction again[41]. As Stefik writes, trusted systems “differ from an ordinary contract in critical ways.”
In an ordinary contract, compliance is not automatic; it is the responsibility of the agreeing parties. There may be provisions for monitoring and checking on compliance, but the actual responsibility for acting in accordance with the terms falls on the parties. In addition, enforcement of the contract is ultimately the province of the courts[42].
The same is not true of code. Whatever problems there are when contracts replace copyright law, the problems are worse when code displaces copyright law. Again — where do we challenge the code? When the software protects without relying in the end on the state, where can we challenge the nature of the protection? Where can we demand balance when the code takes it away?
I don’t mean to enter the extremely contentious debate about whether this change in control is good or appropriate. I’ve said too much about that elsewhere[43]. For our purposes here, the point is simply to recognize a significant change. Code now makes possible increasingly perfect control over how culture is spread. Regulations have “been fairly consistent . . . on the side of expanding the power of the owners to control the use of their products.[44]” And these regulations invite a demand for perfect control over how culture is spread.
The rise of contracts qualifying copyright law and the rise of code qualifying copyright law raise a question that the law of copyright has not had to answer before. We have never had to choose whether authors should be permitted perfectly to control the use of their intellectual property independent of the law, for such control was not possible. The balance struck by the law was the best that authors could get. But now, code gives authors a better deal. The question for legal policy is whether this better deal makes public sense.
Here we confront the first latent ambiguity within the law of copyright. There are those who would say that copyright law already decides this question — whether against code-based control, or for it. But in my view, this is a choice the law has yet to make. I have my own views about how the law should decide the question. But what technology has done is force us to see a choice that was not made before. See the choice, and then make it.
Put most directly: There has always been a set of uses of copyrighted work that was unregulated by the law of copyright. Even within the boundary of uses that were regulated by the law of copyright, “fair use” kept some uses free. The core question is why? Were these transactions left free because it was too costly to meter them? Or were these transactions left free because keeping them free was an important public value tied to copyright?
This is a question the law never had to resolve, though there is support for both views[45]. Now the technology forces us to resolve it. The question, then, is how.
A nice parallel to this problem exists in one part of constitutional law. The framers gave Congress the power to regulate interstate commerce and commerce that affects interstate commerce[46]. At the founding, that was a lot of commerce, but because of the inefficiencies of the market, not all of it. Thus, the states had a domain of commerce that they alone could regulate[47].
Over time, however, the scope of interstate commerce has changed so that much less commerce is now within the exclusive domain of the states. This change has produced two sorts of responses. One is to find other ways to give states domains of exclusive regulatory authority. The justification for this response is the claim that these changes in interstate commerce are destroying the framers’ vision about state power.
The other response is to concede the increasing scope of federal authority, but to deny that it is inconsistent with the framing balance[48]. Certainly, at the founding, some commerce was not interstate and did not affect interstate commerce. But that does not mean that the framers intended that there must always be such a space. They tied the scope of federal power to a moving target; if the target moves completely to the side of federal power, then that is what we should embrace[49].
In both contexts, the change is the same. We start in a place where balance is given to us by the mix of frictions within a particular regulatory domain: Fair use is a balance given to us because it is too expensive to meter all use; state power over commerce is given to us because not all commerce affects interstate commerce. When new technology disturbs the balance, we must decide whether the original intent was that there be a balance, or that the scope of one side of each balance should faithfully track the index to which it was originally tied. Both contexts, in short, present ambiguity.
Many observers (myself included) have strong feelings one way or the other. We believe this latent ambiguity is not an ambiguity at all. In the context of federal power, we believe either that the states were meant to keep a domain of exclusive authority[50] or that the federal government was to have whatever power affected interstate commerce[51]. In the context of fair use, we believe that either fair use is to be a minimum of public use, guaranteed regardless of the technology[52], or that it is just an efficient compromise in response to an inefficient technology, to be removed as soon as efficiency can be achieved.
But in both cases, this may make the problem too easy. The best answer in both contexts may be that the question was unresolved at the framing: Perhaps no one thought of the matter, and hence there is no answer to the question of what they would have intended if some central presupposition had changed. And if there was no original answer, we must decide the question by our own lights. As Stefik says of trusted systems — and, we might expect, of the implications of trusted systems — “It is a tool never imagined by the creators of copyright law, or by those who believe laws governing intellectual property cannot be enforced.[53]”
The loss of fair use is a consequence of the perfection of trusted systems. Whether you consider it a problem or not depends on your view of the value of fair use. If you consider it a public value that should exist regardless of the technological regime, then the emergence of this perfection should trouble you. From your perspective, there was a value latent in the imperfection of the old system that has now been erased.
But even if you do not think that the loss of fair use is a problem, trusted systems threaten other values latent in the imperfection of the real world. Consider a second.
I was a student at an English university for a number of years. In the college I attended, there was a “buttery” — a shop inside the college that basically sold alcohol. During the first week I was there I had to buy a large amount of Scotch (a series of unimaginative gifts, as I remember). About a week after I made these purchases, I received a summons from my tutor to come talk with him in his office. When I arrived, the tutor asked me about my purchases. This was, to his mind, an excessive amount of alcohol, and he wanted to know whether I had a good reason for buying it.
Needless to say, I was shocked at the question. Of course, technically, I had made a purchase at the college, and I had not hidden my name when I did so (indeed, I had charged it on my college account), so, formally, I had revealed my alcohol purchases to the college and its agents. Still, it shocked me that this information would be monitored by college authorities and then checked up on. I could see why they did it, and I could see the good that might come from it. It just never would have occurred to me that these data would be used in this way.
If this was an invasion, of course, it was a small one. Later it was easy for me to hide my binges simply by buying from a local store rather than the college buttery. (Though I later learned that the local store rented its space from the college, so who knows what deal they had struck?) And in any case, I was not being punished. The college was just concerned. But the example suggests a more general point: We reveal to the world a certain class of data about ourselves that we ordinarily expect the world not to use. What happens when they use it?
Trusted systems depend on such data — they depend on the ability to know how people use the property that is being protected. To set prices most efficiently, the system ideally should know as much about individuals and their reading habits as possible. It needs to know this data because it needs an efficient way to track use and so to charge for it[54].
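A caricature of that metering logic, with invented names and prices, shows why fine-grained pricing and fine-grained records of reading go together: every charge presupposes a logged event.

```python
# Caricature of usage metering in a trusted system; names and prices invented.
# Charging per use requires recording every use, and that record is the
# privacy problem discussed below.

PRICES = {"read_page": 0.01, "copy_excerpt": 0.10, "print_page": 0.25}
usage_log = []   # every metered event is retained here

def record_use(user_id: str, work_id: str, action: str) -> float:
    charge = PRICES[action]
    usage_log.append({"user": user_id, "work": work_id,
                      "action": action, "charge": charge})
    return charge

record_use("user-8841", "daily-paper-front-page", "read_page")
record_use("user-8841", "daily-paper-front-page", "copy_excerpt")
print(f"{sum(e['charge'] for e in usage_log):.2f} charged; "
      f"{len(usage_log)} events retained about this reader")
```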
But this tracking involves a certain invasion. We live now in a world where we think about what we read in just the way that I thought about what I bought as a student in England — we do not expect that anyone is keeping track. We would be shocked if we learned that the library was keeping tabs on the books that people checked out and then using this data in some monitoring way.
Such tracking, however, is just what trusted systems require. And so the question becomes: Should there be a right against this kind of monitoring? The question is parallel to the question of fair use. In a world where this monitoring could not effectively occur, there was, of course, no such right against it. But now that monitoring can occur, we must ask whether the latent right to read anonymously, given to us before by imperfections in technologies, should be a legally protected right.
Julie Cohen argues that it should, and we can see quite directly how her argument proceeds[55]. Whatever its source, it is a value in this world that we can explore ideas on our own. It is a value that we can read anonymously, without fear that others will know or watch or change their behavior based on what we read. This is an element of intellectual freedom; it is a part of what makes us as we are[56].
But this element is potentially erased by trusted systems. These systems need to monitor, and this monitoring destroys anonymity. We need to decide whether, and how, to preserve values from today in a context of trusted systems.
This could first be a question of translation: namely, how should changes in technology be accommodated to preserve values from an earlier context in a new context? It is the same question that Brandeis asked about wiretapping[57]. It is the question the Court answers in scores of contexts all the time. It is fundamentally a question about preserving values when contexts change.
In the context of both fair use and reading, Cohen has a consistent answer to this question of translation. She argues that there is a right to resist, or “hack”, trusted systems to the extent that they infringe on traditional fair use. (Others have called this the “Cohen Theorem.”) As for reading, she argues that copyright management schemes must protect a right to read anonymously — that if they monitor, they must be constructed so that they preserve anonymity. The strategy is the same: Cohen identifies a value yielded by an old architecture but now threatened by a new architecture, and then argues in favor of an affirmative right to protect the original value.
But here again we might view the question more ambiguously. I share Cohen’s view, but the argument on the other side is not silly. If it’s permissible to use technology to make copyrighted works available, why isn’t it permissible to gather data about who uses what works? That data gathering is not part of the copyright itself; it is a byproduct of the technology. And as our tradition has never had this technical capacity before, it is hard to say a choice was made about it in the past.
I’ve already described the limits copyright law places on itself. These limits, as I argued, reflect important values. They express the balance that copyright law aims to be.
But what is too often missed in this discussion of balance is any sense of perspective. We focus on the gradual shifts in the law but miss the profound sense in which the significance of the law has changed.
This change is produced by the unintended interaction between the architecture of digital technologies and the architecture of the law.
Copyright law at its core regulates “copies.” In the analog world, there were very few contexts in which one produced “copies.” As Jessica Litman described more than a decade ago,
At the turn of the century, U.S. copyright law was technical, inconsistent, and difficult to understand, but it didn’t apply to very many people or very many things. If one were an author or publisher of books, maps, charts, paintings, sculpture, photographs or sheet music, a playwright or producer of plays, or a printer, the copyright law bore on one’s business. Booksellers, piano-roll and phonograph record publishers, motion picture producers, musicians, scholars, members of Congress, and ordinary consumers could go about their business without ever encountering a copyright problem[58].
Thus there were many ways in which you could use creative work in the analog world without producing a copy.
Digital technology, at its core, makes copies. Copies are to digital life as breathing is to our physical life. There is no way to use any content in a digital context without that use producing a copy. When you read a book stored on your computer, you make a copy (at the very least, a copy in RAM as you page through the book). When you do anything with digital content, you technically produce a copy.
This technical fact about digital technologies, tied to the technical architecture of the law, produces a profound shift in the scope or reach of the law of copyright that too many simply miss: In the analog world, life was sans copyright law; in the digital world, life is subject to copyright law. Every single act triggers the law of copyright. Every single use is either subject to a license or illegal, unless deemed to be “fair use.” The emergence of digital technologies has thus radically increased the domain of copyright law — from regulating a tiny portion of human life, to regulating absolutely every bit of life on a computer.
Now if all you think about is protecting the distribution of professionally created culture, this might not concern you much. If you’re trying to stop “piracy”, then a regime that says every use requires permission is a regime that gives you a fairly broad range of tools for stamping out piracy.
But though you wouldn’t notice this listening to the debates surrounding copyright law just now, in fact, professionally created culture is not the only, or even, I suggest, the most important, part of culture. And indeed, from a historical perspective, top-down, professionally produced culture is but a tiny part of what makes any culture sing. The 20th century may have been an exception to this rule, but no Congress voted to make professional culture the only legal culture within our society.
Standing alongside professional culture is amateur culture — where amateur doesn’t mean inferior or without talent, but instead culture created by people who produce not for the money, but for the love of what they do. From this perspective, there is amateur culture everywhere — from your dinner table, where your father or sister tells jokes that take off from the latest political scandal or the latest Daily Show; from your basement, where your brother and his three best friends are causing permanent damage to their eardrums as they try to become the next Rolling Stones; from your neighbors who gather each Thursday and Sunday to sing in a church choir; from your neighborhood schools, where kids and teachers create art or music in the course of learning about our culture; from the kids at your neighborhood school, who tear their pants or wear their shirts in some odd way, all as a way to express and make culture.
This amateur culture has always been with us, even if it is to us today, as Dan Hunter and Greg Lastowska put it, “hidden[59]”. It is precisely how the imagination of kids develops[60]; it is how culture has always developed. As Siva Vaidhyanathan writes,
widespread democratic cultural production (peer-to-peer production, one might say) . . . merely echoes how cultural texts have flowed through and been revised by discursive communities everywhere for centuries. Texts often undergo a process similar to a game of “telephone”, through which a text is substantially — sometimes almost unintentionally — distorted through many small revisions. . . . Such radical textual revisions have occurred in other contexts and have helped build political critiques, if not movements. For instance, historian Lawrence Levine (1988) has documented how working-class players and audiences in nineteenth-century America adapted and revised the works of William Shakespeare to their local contexts, concerns and ideologies. And historian Eric Lott (1993) has shown how Uncle Tom’s Cabin was reworked by working-class white communities to aid the cause of racial dominance instead of the Christian liberationist message the book was intended to serve[61].
Importantly, too, this kind of cultural remix has historically been free of regulation. No one would think that as you tell a joke around your dinner table, or sing songs with your friends, or practice to become the next Rolling Stones, you need a lawyer standing next to you, clearing the rights to “use” the culture as you make your creative remix. The law of copyright, historically, has been focused on commercial life. It has left the noncommercial, or beyond commercial, creativity free of legal regulation.
All this has now changed, and digital technologies are responsible. First, and most important, digital technologies have radically expanded the scope of this amateur culture. Now the clever remix of some political event or the latest song by your favorite band is not just something you can share with your friends. Digital technologies have made it simple to capture and share this creativity with the world. The single most important difference between the Internet circa 1999 and the Internet circa today is the explosion of user-generated creativity — from blogs, to podcasts, to videocasts, to mashups, the Internet today is a space of extraordinary creativity.
Second, digital technologies have democratized creativity. Technology has given a wide range of potential creators the capacity to become real. “People are waking from their consumerist coma”, as one commentator describes it[62]. As DJ Danger Mouse put it at the Web 2.0 conference in 2004,
Mashing is so easy. It takes years to learn how to play the guitar and write your own songs. It takes a few weeks of practice with a turntable to make people dance and smile. It takes a few hours to crank out something good with some software. So with such a low barrier to entry, everyone jumps in and starts immediately being creative[63].
But third, and directly relevant to the story of this chapter, to the extent this creativity finds its expression on the Net, it is now subject to the regulation of copyright law. To the extent it uses others’ creativity, it needs the permission of others. To the extent it builds upon the creativity of others, it needs to be sure that that creativity can be built upon legally. A whole system of regulation has now been grafted upon an economy of creativity that until now has never known regulation. Amateur culture, or bottom up culture, or the culture that lives outside of commercial transactions — all of this is subject to regulation in a way that 30 years ago it was not.
A recent example of this conflict makes the point very concisely. There’s a genre of digital creativity called Anime Music Videos (AMVs). AMVs are remixes of anime cartoons and music. Kids spend hundreds, sometimes thousands of hours reediting the anime cartoons to match them perfectly to music. The result is, in a word, extraordinary. It is among the most creative uses of digital technology that I have seen.
While this genre of creativity is not small, it’s also not huge. Basically one site dominates activity around AMVs. That site has more than 500,000 members, and some 30,000 creators upload AMV content to the site.
In November 2005, one prominent record label, Wind-Up Records, informed this website that it wanted all Wind-Up Records artists removed from the site. That was some 3,000 videos, representing at least 250,000 hours of volunteer work by creators across the world — work that would have just one real effect: to promote the underlying artists’ work.
From the perspective of the law as it is, this is an easy case. What the kids are doing is making a derivative work of the anime; they are distributing full copies of the underlying music; and they are synchronizing the music to video — all without the permission of the copyright owners.
But from the perspective of culture, this should be a very hard case. The creativity demonstrated by this work is extraordinary. I can’t show you that creativity in a book, but the notes point you to an example that you can see[64]. It is noncommercial, amateur creative work — precisely the sort that has never been subject to the regulation of the law, but which now, because it lives in a digital context, is monitored and regulated by the law.
Here again, I have strong feelings about what the right answer should be. But we should recognize the latent ambiguity this conflict presents:
Because of the changes in digital technology, it is now possible for the law to regulate every single use of creative work in a digital environment. As life increasingly moves into a digital environment, this means that the law will regulate more and more of the use of culture.
Is this consistent with our values?
The answer again could be found first by trying to translate framing values into the current context. From that perspective, it would be extraordinarily difficult to imagine that the framing vision would have included the level of legal regulation that the current regime entails.
Again, that conclusion could be questioned by recognizing that the possibility of such extensive regulation didn’t exist, and so the choice about whether such extensive regulation should be allowed wasn’t made. That choice, when made, should recognize that while there is extensive and new regulation of amateur culture, that regulation creates new wealth for professional culture. There’s a choice to be made about which form of culture we should protect. That choice has not yet been made directly. It is one more choice we have yet to make.
These three examples reveal a common pattern — one that will reach far beyond copyright. At one time we enjoyed a certain kind of liberty. But that liberty was not directly chosen; it was a liberty resulting from the high costs of control[65]. That was the conclusion we drew about fair use — that when the cost of control was high, the space for fair use was great. So too with anonymous reading: We read anonymously in real space not so much because laws protect that right as because the cost of tracking what we read is so great. And it was the same with amateur culture: That flourished free of regulation because regulation could not easily reach it.
When costs of control fall, however, liberty is threatened. That threat requires a choice — do we allow the erosion of an earlier liberty, or do we erect other limits to re-create that original liberty?
The law of intellectual property is the first example of this general point. As the architecture of the Internet changes, it will allow for a greater protection of intellectual property than real-space architectures allowed; this greater protection will force a choice on us that we do not need to make in real space. Should the architecture allow perfect control over intellectual property, or should we build into the architecture an incompleteness that guarantees a certain aspect of public use or a certain space for individual freedom?
Ignoring these questions will not make them go away. Pretending that the framers answered them is no solution either. In this context (and this is just the first) we will need to make a judgment about which values the architecture will protect.
I’ve argued that cyberspace will open up three important choices in the context of intellectual property: whether to allow intellectual property in effect to become completely propertized (for that is what a perfect code regime for protecting intellectual property would do); whether to allow this regime to erase the anonymity latent in less efficient architectures of control; and whether to allow the expansion of intellectual property to drive out amateur culture. These choices were not made by our framers. They are for us to make now.
I have a view, in this context as in the following three, about how we should exercise that choice. But I am a lawyer. Lawyers are taught to point elsewhere — to the framers, to the United Nations charter, to an act of Congress — when arguing about how things ought to be. Having said that there is no such authority here, I feel as if I ought to be silent.
Cowardly, not silent, however, is how others might see it. They say that I should say what I think. So in each of these three applications (intellectual property, privacy, and free speech), I will offer my view about how these choices should be made. But I do this under some duress and encourage you to simply ignore what I believe. It will be short, and summary, and easy to discard. It is the balance of the book — and, most importantly, the claim that we have a choice to make — that I really want to stick.
Cohen, it seems to me, is plainly right about anonymity, and the Cohen Theorem is inspirational. However efficient the alternative may be, we should certainly architect cyberspaces to ensure anonymity — or more precisely, pseudonymity — first. If the code is going to monitor what I do, then at least it should not know that it is “I” that it is monitoring. I am less troubled if it knows that “14AH342BD7” read such and such; I am deeply troubled if that number is tied back to my name.
Cohen is right for a second reason as well: All of the good that comes from monitoring could be achieved while protecting privacy. It may take a bit more coding to build in routines for breaking traceability; it may take more planning to ensure that privacy is protected. But if those rules are embedded up front, the cost would not be terribly high. It is far cheaper to architect privacy protections now rather than retrofit for them later.
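What that bit more coding might look like, in the crudest possible sketch: the metering system records a keyed pseudonym rather than a name, and the key that could link the pseudonym back to a person is held apart from the usage records (or discarded). The keyed-hash approach is only one standard technique among several, and the names here are invented.

```python
# Crude sketch of pseudonymous metering: usage records carry a keyed hash of
# the identity rather than the identity itself. The linking key is held apart
# from the records (or discarded). Names are invented for illustration.

import hmac
import hashlib
import secrets

LINKING_KEY = secrets.token_bytes(32)   # stored separately from usage records

def pseudonym(user_name: str) -> str:
    digest = hmac.new(LINKING_KEY, user_name.encode(), hashlib.sha256)
    return digest.hexdigest()[:10].upper()

def record_use(user_name: str, work_id: str, action: str) -> dict:
    return {"who": pseudonym(user_name), "work": work_id, "action": action}

print(record_use("Jane Reader", "daily-paper-front-page", "read_page"))
# The record shows that someone read the page; only the holder of
# LINKING_KEY could tie that pseudonym back to a name.
```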
By “the Commons” I mean a resource that anyone within a relevant community can use without seeking the permission of anyone else. Such permission may not be required because the resource is not subject to any legal control (it is, in other words, in the public domain). Or it may not be required because permission to use the resource has already been granted. In either case, to use or to build upon this resource requires nothing more than access to the resource itself[66].
In this sense, the questions about the scope and reach of copyright law ask whether our future will protect the intellectual commons that it did in the past. Again, it did so in the past because the friction of control was too great. But now that that friction is gone, will we preserve or destroy the commons that used to exist?
My view is that it ought to be preserved.
We can architect cyberspace to preserve a commons or not. (Jefferson thought that nature had already done the architecting, but Jefferson wrote before there was code.) We should choose to architect it with a commons. Our past had a commons that could not be designed away; that commons gave our culture great value. What value the commons of the future could bring us is something we are just beginning to see. Intellectual property scholars saw it — long before cyberspace came along — and laid the groundwork for much of the argument we need to have now[67]. The greatest work in the law of cyberspace has been written in the field of intellectual property. In a wide range of contexts, these scholars have made a powerful case for the substantive value of an intellectual commons[68].
James Boyle puts the case most dramatically in his extraordinary book Shamans, Software, and Spleens[69]. Drawing together both cyberspace and noncyberspace questions, he spells out the challenge we face in an information society — particularly the political challenge[70]. Elsewhere he identifies our need for an “environmental movement” in information policy — a rhetoric that gets people to see the broad range of values put at risk by this movement to propertize all information. Boyle’s work has inspired many others to push a similar agenda of freedom[71].
That freedom would limit the law’s regulation over the use and reuse of culture. It would resist perfect control over use; it would free a wide range of reuse. It would build, through affirmative protections for freedom, the liberty that friction gave us before. It would do so because it believes in the values this freedom stands for, and it would demonstrate the value in that freedom by enabling the communities that freedom would itself enable.
But this freedom could be constructed either through changes in the law or voluntarily. That is, the law could be rebalanced to encourage the freedom thought important, or this property could be redeployed to effect the freedom thought important.
The second strategy was the technique of the Free Software Movement, described in Chapter 8. Using copyright law, Stallman deployed a software license that both preserved the four freedoms of free software, and also required that those modifying and distributing free software distribute the modifications freely. This license thus effects a software commons, since the software is available to all to use, and this software commons has become a critical raw material fueling the digital age.
More recently, Stallman’s idea has been copied by others seeking to rebuild a commons in cyberspace. The Wikipedia project, for example, has built — to the astonishment of most — an extraordinary online encyclopedia solely through the volunteer efforts of thousands, contributing essays and edits in a public wiki. The product of that work is now protected perpetually (yes, I know, only for a “limited time”, but don’t correct me about that little detail) through a copyright license that, like the GPL, requires any modification to be distributed freely as well. (More on Wikipedia in Chapter 12.)
And so too has Creative Commons used private law to build an effective public commons. Again, following Stallman, Creative Commons offers copyright holders a simple way to mark their creative work with the freedoms they intend it to carry. That mark is a license which reserves to the author some rights, while dedicating to the public rights that otherwise would have been held privately. As these licenses are nonexclusive and public, they too effectively build a commons of creative resources that anyone can build upon.
Though I have spent a great deal of my time helping to build the Creative Commons, I still believe private action alone is not enough. Yet there is value in learning something from what this private action produces, as its lesson may help policy makers recraft copyright law in the future.
The conclusion of Part 1 was that code could enable a more regulable cyberspace; the conclusion of Part 2 was that code would become an increasingly important regulator in that more regulable space. Both conclusions were central to the story of the previous chapter. Contrary to the early panic by copyright holders, the Internet will become a space where intellectual property can be more easily protected. As I’ve described, that protection will be effected through code.
Privacy is a surprisingly similar story. Indeed, as Jonathan Zittrain argued in an essay published in the Stanford Law Review[1], the problems of privacy and copyright are exactly the same. With both, there’s a bit of “our” data that “we’ve” lost control over. In the case of copyright, it is the data constituting a copy of our copyrighted work; in the case of privacy, it is the data representing some fact about us. In both cases, the Internet has produced this loss of control: with copyright, because the technology enables perfect and free copies of content; with privacy, as we’ll see in this chapter, because the technology enables perpetual and cheap monitoring of behavior. In both cases, the question policy makers should ask is what mix of law and technology might restore the proper level of control. That level must balance private and public interests: With copyright, the balance is as I described in the last chapter; with privacy, it is as we’ll explore in this chapter.
The big difference between copyright and privacy, however, is the political economy that seeks a solution to each problem. With copyright, the interests threatened are powerful and well organized; with privacy, the interests threatened are diffuse and disorganized. With copyright, the values on the other side of protection (the commons, or the public domain) are neither compelling nor well understood. With privacy, the values on the other side of protection (security, the war against terrorism) are compelling and well understood. The result of these differences, as any political theorist would then predict, is that over the past ten years, while we’ve seen a lot of legislative and technical changes to solve the problems facing copyright, we’ve seen very few that would solve the problems of privacy.
Yet as with copyright, we could restrike the balance protecting privacy. There are both changes in law and changes in technology that could produce a much more private (and secure) digital environment. Whether we will realize these changes depends upon recognizing both the dynamics of regulation in cyberspace and the importance of the value that privacy is.
We will think about three aspects of privacy, and how cyberspace has changed each of them. Two of these three will be the focus of this chapter, but I begin with the third to help orient the balance.
The traditional question of “privacy” was the limit the law placed upon the ability of others to penetrate your private space. What right does the government have to enter your home, or search your papers? What protection does the law of trespass provide against others beyond the government snooping into your private stuff? This is one meaning of Brandeis’s slogan, “the right to be let alone.[2]” From the perspective of the law, it is the set of legal restrictions on the power of others to invade a protected space.
Those legal restrictions were complemented by physical barriers. The law of trespass may well say it’s illegal to enter my house at night, but that doesn’t mean I won’t lock my doors or bolt my windows. Here again, the protection one enjoys is the sum of the protections provided by the four modalities of regulation. Law supplements the protections of technology, the protections built into norms, and the protections from the costliness of illegal penetration.
Digital technologies have changed these protections. The cost of parabolic microphone technology has dropped dramatically; that means it’s easier for me to listen to your conversation through your window. On the other hand, the cost of security technologies to monitor intrusion has also fallen dramatically. The net of these changes is difficult to reckon, but the core value is not rendered ambiguous by this difficulty. The expectation of privacy in what is reasonably understood to be “private” spaces remains unchallenged by new technologies. This sort of privacy doesn’t present a “latent ambiguity.”
A second kind of privacy will seem at first oxymoronic — privacy in public. What kind of protection is there against gathering data about me while I’m on a public street, or boarding an airplane?
The traditional answer was simple: None. By stepping into the public, you relinquished any rights to hide or control what others came to know about you. The facts that you transmitted about yourself were as “free as the air to common use.[3]” The law provided no legal protection against the use of data gathered in public contexts.
But as we’ve seen again and again, just because the law of privacy didn’t protect you, it doesn’t follow that you weren’t protected. Facts about you while you are in public, even if not legally protected, are effectively protected by the high cost of gathering or using those facts. Friction is thus privacy’s best friend.
To see the protection that this friction creates, however, we must distinguish between two dimensions along which privacy might be compromised.
There is a part of anyone’s life that is monitored, and there is a part that can be searched. The monitored is that part of one’s daily existence that others see or notice and can respond to, if response is appropriate. As I walk down the street, my behavior is monitored. If I walked down the street in a small village in western China, my behavior would be monitored quite extensively. This monitoring in both cases would be transitory. People would notice, for example, if I were walking with an elephant or in a dress, but if there were nothing special about my walk, if I simply blended into the crowd, then I might be noticed for the moment but forgotten soon after — more quickly in San Francisco, perhaps, than in China.
The searchable is the part of your life that leaves, or is, a record. Scribblings in your diary are a record of your thoughts. Stuff in your house is a record of what you possess. The recordings on your telephone answering machine are a record of who called and what they said. Your hard drive is you. These parts of your life are not ephemeral. They instead remain to be reviewed — at least if technology and the law permit.
These two dimensions can interact, depending upon the technology in each. My every action in a small village may be monitored by my neighbors. That monitoring produces a record — in their memories. But given the nature of the recording technology, it is fairly costly for the government to search that record. Police officers need to poll the neighbors; they need to triangulate on the inevitably incomplete accounts to figure out what parts are true, and what parts are not. That’s a familiar process, but it has its limits. It might be easy to poll the neighbors to learn information to help locate a lost person, but if the government asked questions about the political views of a neighbor, we might expect (hope?) there would be resistance to that. Thus, in principle, the data are there. In practice, they are costly to extract.
Digital technologies change this balance — radically. They not only make more behavior monitorable; they also make more behavior searchable. The same technologies that gather data now gather it in a way that makes it searchable. Thus, increasingly life becomes a village composed of parallel processors, accessible at any time to reconstruct events or track behavior.
Consider some familiar examples:
In Part I, I described the anonymity the Internet originally provided. But let’s be clear about something important: That relative anonymity of the “old days” is now effectively gone. Everywhere you go on the Internet, the fact that IP address xxx.xxx.xxx.xxx went there is recorded. Everywhere you go where you’ve allowed a cookie to be deposited, the fact that the machine carrying that cookie went there is recorded — as well as all the data associated with that cookie. They know you from your mouse droppings. And as businesses and advertisers work more closely together, the span of data that can be aggregated about you becomes endless.
Consider a hypothetical that is completely technically possible under the existing architectures of the Net. You go to a web page of a company you trust, and you give that company every bit of your private data — your name, address, social security number, favorite magazines and TV shows, etc. That company gives you a cookie. You then go to another site, one you don’t trust. You decide not to give that site any personal data. But there’s no way for you to know whether these companies are cooperating about the data they collect. It’s perfectly possible they synchronize the cookie data they create. And thus, there’s no technical reason why, once you’ve given your data once, it isn’t known by a wide range of sites that you visit.
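A toy sketch can make the hypothetical concrete. Assume, purely for illustration, that the two sites write to a common profile store keyed by a synchronized cookie ID; the site names and fields are invented, and real cookie synchronization happens through redirects and third-party ad servers rather than a shared dictionary.

```python
# Hypothetical shared backend that both sites can query, keyed by a synced cookie ID.
shared_profiles: dict = {}

def visit(site: str, cookie_id: str, data_given: dict) -> dict:
    profile = shared_profiles.setdefault(cookie_id, {})
    profile.update(data_given)                     # whatever you hand over is pooled
    profile.setdefault("seen_at", []).append(site)
    return dict(profile)                           # what this site now knows about you

# You trust site A and give it everything.
visit("trusted.example", "cookie-123", {"name": "Jane Doe", "ssn": "xxx-xx-xxxx"})

# You give site B nothing -- but with the cookie synced, it knows anyway.
print(visit("untrusted.example", "cookie-123", {}))
# {'name': 'Jane Doe', 'ssn': 'xxx-xx-xxxx',
#  'seen_at': ['trusted.example', 'untrusted.example']}
```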
In the section that follows, we’ll consider more extensively how we should think about privacy in any data I’ve affirmatively provided to others, such as my name, address, or social security number. But for the moment, just focus upon the identity data they’ve collected as I move around in “public.” Unless you’ve taken extraordinary steps — installing privacy software on your computer, or disabling cookies, etc. — there’s no reason you should expect that the fact that you visited certain sites, or ran certain searches, isn’t knowable by someone. It is. The layers of technology designed to identify “the customer” have produced endless layers of data that can be traced back to you.
In January 2006, Google surprised the government by doing what no other search company had done: It told the government “no.” The Justice Department had launched a study of pornography on the Net as a way to defend Congress’s latest regulation of pornography. It thus wanted data about how often, and in what form, people search for porn on the Internet. It asked Google to provide 1,000,000 random searches from its database over a specified period. Google — unlike Yahoo! and MSN — refused.
I suspect that when most first heard about this, they asked themselves an obvious question — Google keeps search requests? It does. Curiosity is monitored, producing a searchable database of the curious. As a way to figure out better how to do its job, Google — and every other search engine[4] — keeps a copy of every search it’s asked to make. More disturbingly, Google links that search to a specific IP address, and, if possible, to a Google user’s account. Thus, in the bowels of Google’s database, there is a list of all searches made by you when you were logged into your Gmail account, sitting, waiting for someone to ask to see it.
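The kind of record at issue is easy to picture. The sketch below assumes a hypothetical log format — timestamp, IP address, account, query — which is not Google’s actual schema; the point is only how trivially such a log answers the question “what has this person searched for?”

```python
from datetime import datetime

# Hypothetical log entries: each query stored with an IP address and,
# when the searcher is logged in, an account name.
search_log = [
    {"time": datetime(2006, 1, 10, 9, 3),   "ip": "203.0.113.7",
     "account": "jane.doe", "query": "mushrooms AND ragout"},
    {"time": datetime(2006, 1, 11, 22, 41), "ip": "203.0.113.7",
     "account": "jane.doe", "query": "symptoms of anxiety"},
]

def searches_by(log: list, account: str) -> list:
    """Everything one person has ever asked -- sitting, waiting to be requested."""
    return [entry for entry in log if entry["account"] == account]

for entry in searches_by(search_log, "jane.doe"):
    print(entry["time"], entry["query"])
```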
The government did ask. And in the normal course of things, the government’s request would be totally ordinary. It is unquestioned that the government gets to ask those with relevant evidence to provide it for an ongoing civil or criminal investigation (there are limits, but none really significant). Google has evidence; the government would ordinarily have the right to get it.
Moreover, the government in this case explicitly promised it would not use this evidence for anything more than evaluating patterns of consumption around porn. In particular, it promised it wouldn’t trace any particularly suspicious searches. It would ignore that evidence — which ordinarily it would be free to use for whatever purpose it chose — just so it could get access to aggregate data about searches for porn.
So what’s the problem this example illustrates?
Before search engines, no one had any records of curiosity; there was no list of questions asked. Now there is. People obsessively pepper search engines with questions about everything. The vast majority of these are totally benign (“mushrooms AND ragout”). Some of them show something less benign about the searcher (“erotic pictures AND children”). Now there’s a list of all these questions, with some providing evidence of at least criminal intent.
The government’s interest in that list will increase. At first, its demands will seem quite harmless — so what if it counts the number of times people ask Google to point them to erotic pictures? Then, when not so harmless, the demands will link to very harmful behavior — searches that suggest terrorism, or abuse. Who could argue against revealing that? Finally, when not so harmless, and when the crime is not so harmful, the demands will simply insist this is an efficient way to enforce the law. “If you don’t like the law, change it. But until you do, let us enforce it.” The progression is obvious, inevitable, and irresistible.
Electronic mail is a text-based message stored in digital form. It is like a transcribed telephone call. When sent from one person to another, e-mail is copied and transmitted from machine to machine; it sits on these different machines until removed either by routines — decisions by machines — or by people.
The content of many e-mail messages is like the content of an ordinary telephone call — unplanned, unthinking, the ordinary chatter of friends. But unlike a telephone call, this content is saved in a searchable form. Companies now invest millions in technologies that scan the conversations of employees that before were effectively private. Both in real time and in retrospect, the content of conversations can become known. On the theory that they “own the computer[5]”, employers increasingly snoop in the e-mail of employees, looking for stuff they deem improper[6].
In principle, such monitoring and searching are possible with telephone calls or letters. In practice, these communications are not monitored. To monitor telephones or regular mail requires time and money — that is, human intervention. And this cost means that most won’t do it. Here again, the costs of control yield a certain kind of freedom.
Controlling employees (or spouses) is one important new use of e-mail technologies. Another is the better delivery of advertising. Google is again the leader here with its new Gmail service. Gmail can advertise to you as you read your e-mail. But the advance is that the advertisement is triggered by the content of the e-mail. Imagine a television that shifted its advertising as it heard what you were talking about on the phone. The content of the e-mail — and perhaps the content of your inbox generally — helps determine what is shown to you.
To make this system work well, Google needs you to keep lots of data on its servers. Thus the only thing within Gmail that is difficult to do — and it is really really difficult — is to delete content from a Google Gmail account. Gmail lets you delete one screen at a time. But when you have 20,000 e-mails in your inbox, who has time? Would it be difficult for Gmail to enable a “delete all” function? Of course not. This is Google! Thus, through the clever use of architecture, Google assures more data is kept, and that data then becomes a resource for other purposes. If you ever get involved in a lawsuit, the first question of the lawyer from the other side should be — do you have a Gmail account? Because, if you do, your life sits open for review.
If e-mail becomes a permanent record, why not v-mail? Voice mail systems archive messages and record the communication attributes of the conversations. As technologies for voice recognition improve, so does the ability to search voice records. As voice mail systems shift to digital systems, archiving content on central servers rather than $50 devices connected to the phone at home, they become practical search resources. In principle, every night the government could scan all the stored voice recordings at every telephone company in the nation. This search would impose no burden on the user; it could be targeted on and limited to specific topics, and it could operate in the background without anyone ever knowing.
And why stop with recordings? According to one report, the NSA monitors over 650 million telephone conversations a day[7]. That monitoring is automatic. It used to be of foreigners only, but now apparently the system monitors an extraordinary range of communication, searching for that bit or clue that triggers investigative concern. The system produces something akin to a weather report as well as particularized indicators. There are, for example, measures of “chatter” that may signal a storm.
This monitoring, like each of the examples before, creates no burden for those using a telephone. Those using the phone don’t know something is listening on the other end. Instead, the system works quietly in the background, searching this monitored communication in real time.
In each of the examples so far, someone has chosen to use a technology, and that technology has made their privacy vulnerable. The change is produced as that technology evolves to make it simpler to monitor and search behavior.
But the same evolution is happening outside networks as well. Indeed, it is happening in the quintessentially public place — the streets, or in public venues. This monitoring is the product of the current generation of video technology. Originally, video cameras were a relatively benign form of monitoring. Because the product of their monitoring relied solely upon human interpretation, there were relatively few contexts in which it paid to have someone watch. And where no one was watching in real time, these technologies were used to trace bad behavior after it happened. Few seem upset when a convenience store video camera makes it possible to identify the criminal who has murdered the attendant.
Digital technology has changed the video, however. It is now a tool of intelligence, not just a tool to record. In London, as I’ve described, cameras are spread through the city to monitor which cars drive in the city. This is because nonresidents must pay a special tax to drive in “congestion zones.” The cameras record and interpret license plates, and then determine whether the right tax was paid for that car. The objective of the system was to minimize congestion in London. Its consequence is a database of every car that enters London, tied to a particular time and location.
But the more ambitious use of video surveillance is human face recognition. While the technology received some very bad press when first introduced in Tampa[8], the government continues to encourage companies to develop the capacity to identify who someone is while that someone is in a traditionally anonymous place. As one vendor advertises, “face recognition technology is the least intrusive and fastest biometric technology. . . . There is no intrusion or delay, and in most cases the subjects are entirely unaware of the process. They do not feel ‘under surveillance’ or that their privacy has been invaded[9]”.
These technologies aren’t yet reliable. But they continue to be funded by both private investors and the government. Indeed, the government runs evaluation tests biannually to rate the reliability of the technologies[10]. There must at least be someone who expects that someday it will be possible to use a camera to identify who is in a crowd, or who boarded a train.
Criminals leave evidence behind, both because they’re usually not terribly rational and because it’s extremely hard not to. And technology is only making it harder not to. With DNA technology, it becomes increasingly difficult for a criminal to avoid leaving his mark, and increasingly easy for law enforcement to identify with extremely high confidence whether X did Y.
Some nations have begun to capitalize on this new advantage. And again, Britain is in the lead[11]. Beginning in 1995, the British government started collecting DNA samples to include in a national registry. The program was initially promoted as a way to fight terrorism. But in a decade, its use has become much less discriminating.
In December 2005, while riding public transportation in London, I read the following on a public announcement poster:
Abuse, Assault, Arrest: Our staff are here to help you. Spitting on DLR staff is classified as an assault and is a criminal offence. Saliva Recovery Kits are now held on every train and will be used to identify offenders against the national DNA database.
And why not? Spitting may be harmless. But it is insulting. And if the tools exist to identify the perpetrator of the insult, why not use them?
In all these cases, technologies designed either without monitoring as their aim or with just limited monitoring as their capacity have now become expert technologies for monitoring. The aggregate of these technologies produces an extraordinary range of searchable data. And, more importantly, as these technologies mature, there will be essentially no way for anyone living within ordinary society to escape this monitoring. Monitoring to produce searchable data will become the default architecture for public space, as standard as street lights. From the simple ability to trace back to an individual, to the more troubling ability to know what that individual is doing or likes at any particular moment, the maturing data infrastructure produces a panopticon beyond anything Bentham ever imagined.
“Orwell” is the word you’re looking for. And while I believe that analogies to Orwell are just about always useless, let’s make one comparison here nonetheless. While the ends of the government in 1984 were certainly vastly more evil than anything our government would ever pursue, it is interesting to note just how inefficient, relative to the current range of technologies, Orwell’s technologies were. The central device was a “telescreen” that both broadcast content and monitored behavior on the other side. But the great virtue of the telescreen was that you knew what it, in principle, could see. Winston knew where to hide, because the perspective of the telescreen was transparent[12]. It was easy to know what it couldn’t see, and hence easy to know where to do the stuff you didn’t want it to see.
That’s not the world we live in today. You can’t know whether your search on the Internet is being monitored. You don’t know whether a camera is trying to identify who you are. Your telephone doesn’t make funny clicks as the NSA listens in. Your e-mail doesn’t report when some bot has searched it. The technologies of today have none of the integrity of the technologies of 1984. None are decent enough to let you know when your life is being recorded.
There’s a second difference as well. The great flaw in the design of 1984 was in imagining just how it was that behavior was being monitored. There were no computers in the story. The monitoring was done by gaggles of guards watching banks of televisions. But that monitoring produced no simple way for the guards to connect their intelligence. There was no search across the brains of the guards. Sure, a guard might notice that you’re talking to someone you shouldn’t be talking to or that you’ve entered a part of a city you shouldn’t be in. But there was no single guard who had a complete picture of the life of Winston.
Again, that “imperfection” can now be eliminated. We can monitor everything and search the product of that monitoring. Even Orwell couldn’t imagine that.
I’ve surveyed a range of technologies to identify a common form. In each, the individual acts in a context that is technically public. I don’t mean it should be treated by the law as “public” in the sense that privacy should not be protected there. I’m not addressing that question yet. I mean only that the individual is putting his words or image in a context that he doesn’t control. Walking down 5th Avenue is the clearest example. Sending a letter is another. In both cases, the individual has put himself in a stream of activity that he doesn’t control.
The question for us, then, is what limits there should be — in the name of “privacy” — on the ability to surveil these activities. But even that question puts the matter too broadly. By “surveil”, I don’t mean surveillance generally. I mean the very specific kind of surveillance the examples above evince. I mean what we could call “digital surveillance.”
“Digital surveillance” is the process by which some form of human activity is analyzed by a computer according to some specified rule. The rule might say “flag all e-mail talking about Al Qaeda.” Or it might say “flag all e-mail praising Governor Dean.” Again, at this point I’m not focused upon the normative or legal question of whether such surveillance should be allowed. At this point, we’re just working through definitions. In each of the cases above, the critical feature is that a computer is sorting data for follow-up review by some human. The sophistication of the search is a technical question, but there’s no doubt that its accuracy is improving substantially.
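A sketch may make the definition concrete. The rule below — hypothetical keywords, hypothetical messages — simply flags any e-mail matching a specified term and sets it aside for human review; that is all “digital surveillance,” in the sense used here, requires.

```python
def make_rule(*keywords):
    """Return a rule: flag any message mentioning one of the given terms."""
    terms = [k.lower() for k in keywords]
    return lambda text: any(term in text.lower() for term in terms)

# e.g., "flag all e-mail talking about Al Qaeda"
flag_rule = make_rule("al qaeda")

def sift(messages: list, rule) -> list:
    # The machine does the sorting; only the flagged residue reaches a human.
    return [m for m in messages if rule(m["body"])]

inbox = [
    {"sender": "mom@example.com",    "body": "Dinner on Sunday?"},
    {"sender": "friend@example.com", "body": "The report discusses Al Qaeda financing."},
]
for hit in sift(inbox, flag_rule):
    print("flagged for review:", hit["sender"])
```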
So should this form of monitoring be allowed?
I find that when I ask this question, framed precisely like this, there are two polar opposite reactions. On the one hand, friends of privacy say that there’s nothing new here. There’s no difference between the police reading your mail, and the police’s computer reading your e-mail. In both cases, a legitimate and reasonable expectation of privacy has been breached. In both cases, the law should protect against that breach.
On the other hand, friends of security insist there is a fundamental difference. As Judge Richard Posner wrote in the Washington Post, in an article defending the Bush Administration’s (extensive[13]) surveillance of domestic communications, “machine collection and processing of data cannot, as such, invade privacy.” Why? Because it is a machine that is processing the data. Machines don’t gossip. They don’t care about your affair with your co-worker. They don’t punish you for your political opinions. They’re just logic machines that act based upon conditions. Indeed, as Judge Posner argues, “this initial sifting, far from invading privacy (a computer is not a sentient being), keeps most private data from being read by any intelligence officer.” We’re better off having machines read our e-mail, Posner suggests, both because of the security gain, and because the alternative snoop — an intelligence officer — would be much more nosey.
But it would go too far to suggest there isn’t some cost to this system. If we lived in a world where our every communication was monitored (if?), that would certainly challenge the sense that we were “left alone.” We would be left alone in the sense a toddler is left in a playroom — with parents listening carefully from the next room. There would certainly be something distinctively different about the world of perpetual monitoring, and that difference must be reckoned in any account of whether this sort of surveillance should be allowed.
We should also account for the “best intentions” phenomenon. Systems of surveillance are instituted for one reason; they get used for another. Jeff Rosen has cataloged the abuses of the surveillance culture that Britain has become[14]: Video cameras used to leer at women or for sensational news stories. Or in the United States, the massive surveillance for the purpose of tracking “terrorists” was also used to track domestic environmental and antiwar groups[15].
But let’s frame the question in its most compelling form. Imagine a system of digital surveillance in which the algorithm was known and verifiable: We knew, that is, exactly what was being searched for; we trusted that’s all that was being searched for. That surveillance was broad and indiscriminate. But before anything could be done on the basis of the results from that surveillance, a court would have to act. So the machine would spit out bits of data implicating X in some targeted crime, and a court would decide whether that data sufficed either to justify an arrest or a more traditional search. And finally, to make the system as protective as we can, the only evidence that could be used from this surveillance would be evidence directed against the crimes being surveilled for. So for example, if you’re looking for terrorists, you don’t use the evidence to prosecute for tax evasion. I’m not saying what the targeted crimes are; all I’m saying is that we don’t use the traditional rule that allows all evidence gathered legally to be usable for any legal end.
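The constraints just described can themselves be expressed in code. The sketch below is only an illustration of the two limits — a public, verifiable list of targeted crimes, and a court standing between any match and any action — with invented data and none of a real system’s complexity.

```python
# The two limits sketched above, expressed as code. Everything here is
# invented: the target list, the records, and the "court" stand-in.
TARGETED_CRIMES = {"terrorism"}        # known and verifiable, by assumption

def machine_match(records: list) -> list:
    """Flag records only for crimes the system is permitted to look for."""
    return [r for r in records if r["suspected_crime"] in TARGETED_CRIMES]

def act_on(flag: dict, court_approves) -> str:
    # Even a targeted match does nothing until a judge says so; evidence of
    # untargeted crimes (tax evasion, say) never reaches this point at all.
    return "authorize search of " + flag["person"] if court_approves(flag) else "discard"

records = [
    {"person": "X", "suspected_crime": "terrorism"},
    {"person": "Y", "suspected_crime": "tax evasion"},   # invisible to the system
]
for flag in machine_match(records):
    print(act_on(flag, court_approves=lambda f: False))  # prints "discard"
```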
Would such a system violate the protections of the Fourth Amendment? Should it?
The answer to this question depends upon your conception of the value protected by the Fourth Amendment. As I described in Chapter 6, that amendment was targeted against indiscriminate searches and “general warrants” — that is, searches that were not particularized to any individual, and the immunity that was granted to those engaging in such searches. But those searches, like any search at that time, imposed burdens on the person being searched. If you viewed the value the Fourth Amendment protected as the protection from the unjustified burden of this indiscriminate search, then this digital surveillance would seem to raise no significant problems. As framed above, such searches produce no burden at all unless sufficient evidence is discovered to induce a court to authorize a search.
But it may be that we understand the Fourth Amendment to protect a kind of dignity. Even if a search does not burden anyone, or even if one doesn’t notice the search at all, this conception of privacy holds that the very idea of a search is an offense to dignity. That dignity interest is respected only if the state has a good reason to search before it searches. From this perspective, a search without justification harms your dignity whether it interferes with your life or not.
I saw these two conceptions of privacy play out against each other in a tragically common encounter in Washington, D.C. A friend and I had arranged a “police ride-along” — riding with District police during their ordinary patrol. The neighborhood we patrolled was among the poorest in the city, and around 11:00 p.m. a report came in that a car alarm had been tripped in a location close to ours. When we arrived near the scene, at least five police officers were attempting to hold three youths; three of the officers were holding the suspects flat against the wall, with their legs spread and their faces pressed against the brick.
These three were “suspects” — they were near a car alarm when it went off — and yet, from the looks of things, you would have thought they had been caught holding the Hope diamond.
And then an extraordinary disruption broke out. To the surprise of everyone, and to my terror (for this seemed a tinder box, and what I am about to describe seemed the match), one of the three youths, no older than seventeen, turned around in a fit of anger and started screaming at the cops. “Every time anything happens in this neighborhood, I get thrown against the wall, and a gun pushed against my head. I’ve never done anything illegal, but I’m constantly being pushed around by cops with guns.”
His friend then turned around and tried to calm him down. “Cool it, man, they’re just trying to do their job. It’ll be over in a minute, and everything will be cool.”
“I’m not going to cool it. Why the fuck do I have to live this way? I am not a criminal. I don’t deserve to be treated like this. Someday one of these guns is going to go off by accident — and then I’ll be a fucking statistic. What then?”
At this point the cops intervened, three of them flipping the indignant youth around against the wall, his face again flat against the brick. “This will be over in a minute. If you check out, you’ll be free to go. Just relax.”
In the voice of rage of the first youth was the outrage of dignity denied. Whether reasonable or not, whether minimally intrusive or not, there was something insulting about this experience — all the more insulting when repeated, one imagines, over and over again. As Justice Scalia has written, wondering whether the framers of the Constitution would have considered constitutional the police practice known as a “Terry stop” — stopping and frisking any individual whenever the police have a reasonable suspicion — “I frankly doubt . . . whether the fiercely proud men who adopted our Fourth Amendment would have allowed themselves to be subjected, on mere suspicion of being armed and dangerous, to such indignity[16]”.
And yet again, there is the argument of minimal intrusion. If privacy is a protection against unjustified and excessive disruption, then this was no invasion of privacy. As the second youth argued, the intrusion was minimal; it would pass quickly (as it did — five minutes later, after their identification checked out, we had left); and it was reasonably related to some legitimate end. Privacy here is simply the protection against unreasonable and burdensome intrusions, and this search, the second youth argued, was not so unreasonable and burdensome as to justify the fit of anger (which also risked a much greater danger).
From this perspective, the harm in digital surveillance is even harder to reckon. I’m certain there are those who feel an indignity at the very idea that records about them are being reviewed by computers. But most would recognize a very different dignity at stake here. Unlike those unfortunate kids against the wall, there is no real interference here at all. Very much as with those kids, if nothing is found, nothing will happen. So what is the indignity? How is it expressed?
A third conception of privacy is about neither preserving dignity nor minimizing intrusion. It is instead substantive — privacy as a way to constrain the power of the state to regulate. Here the work of William Stuntz is a guide[17]. Stuntz argues that the real purpose of the Fourth and Fifth Amendments is to make some types of regulation too difficult by making the evidence needed to prosecute such violations effectively impossible to gather.
This is a hard idea for us to imagine. In our world, the sources of evidence are many — credit card records, telephone records, video cameras at 7-Elevens — so it’s hard for us to imagine any crime that there wouldn’t be some evidence to prosecute. But put yourself back two hundred years when the only real evidence was testimony and things, and the rules of evidence forbade the defendant from testifying at all. Imagine in that context the state wanted to punish you for “sedition.” The only good evidence of sedition would be your writings or your own testimony about your thoughts. If those two sources were eliminated, then it would be practically impossible to prosecute sedition successfully.
As Stuntz argues, this is just what the Fourth and Fifth Amendments do. Combined, they make collecting the evidence for a crime like sedition impossible, thereby making it useless for the state to try to prosecute it. And not just sedition — as Stuntz argues, the effect of the Fourth, Fifth, and Sixth Amendments was to restrict the scope of regulation that was practically possible. As he writes: “Just as a law banning the use of contraceptives would tend to encourage bedroom searches, so also would a ban on bedroom searches tend to discourage laws prohibiting contraceptives[18]”.
But were not such searches already restricted by, for example, the First Amendment? Would not a law punishing seditious libel have been unconstitutional in any case? In fact, that was not at all clear at the founding; indeed, it was so unclear that in 1798 Congress passed the Alien and Sedition Acts, which in effect punished sedition quite directly[19]. Many thought these laws unconstitutional, but the Fourth and Fifth Amendments would have been effective limits on their enforcement, whether the substantive laws were constitutional or not.
In this conception, privacy is meant as a substantive limit on government’s power[20]. Understood this way, privacy does more than protect dignity or limit intrusion; privacy limits what government can do.
If this were the conception of privacy, then digital surveillance could well accommodate it. If there were certain crimes that it was inappropriate to prosecute, we could remove them from the search algorithm. It would be hard to identify what crimes constitutionally must be removed from the algorithm — the First Amendment clearly banishes sedition from the list already. Maybe the rule simply tracks constitutional limitation.
Now the key is to recognize that, in principle, these three distinct conceptions of privacy could yield different results depending on the case. A search, for example, might not be intrusive but might offend dignity. In that case, we would have to choose a conception of privacy that we believed best captured the Constitution’s protection.
At the time of the founding, however, these different conceptions of privacy would not, for the most part, have yielded different conclusions. Any search that reached beyond the substantive limits of the amendment, or beyond the limits of dignity, would also have been a disturbance. Half of the framers could have held the dignity conception and half the utility conception, but because every search would have involved a violation of both, all the framers could have endorsed the protections of the Fourth Amendment.
Today, however, that’s not true. Today these three conceptions could yield very different results. The utility conception could permit efficient searches that are forbidden by the dignity and substantive conceptions. The correct translation (as Brandeis employed the term in the Olmstead wiretapping case) depends on selecting the proper conception to translate.
In this sense, our original protections were the product of what Cass Sunstein calls an “incompletely theorized agreement[21]”. Given the technology of the time, there was no reason to work out which theory underlay the constitutional text; all three were consistent with existing technology. But as the technology has changed, the original context has been challenged. Now that technologies such as the worm can search without disturbing, there is a conflict about what the Fourth Amendment protects.
This conflict is the other side of Sunstein’s incompletely theorized agreement. We might say that in any incompletely theorized agreement ambiguities will be latent, and we can describe contexts where these latencies emerge. The latent ambiguities about the protection of privacy, for example, are being rendered patent by the evolution of technology. And this in turn forces us to choose.
Some will once again try to suggest that the choice has been made — by our Constitution, in our past. This is the rhetoric of much of our constitutional jurisprudence, but it is not very helpful here. I do not think the framers worked out what the amendment would protect in a world where perfectly noninvasive searches could be conducted. They did not establish a constitution to apply in all possible worlds; they established a constitution for their world. When their world differs from ours in a way that reveals a choice they did not have to make, then we need to make that choice.
The story I’ve told so far is about limits on government: What power should the government have to surveil our activities, at least when those activities are in public? That’s the special question raised by cyberspace: What limits on “digital surveillance” should there be? There are, of course, many other more traditional questions that are also important. But my focus was “digital surveillance.”
In this part, I consider a third privacy question that is closely related, but very distinct. This is the question of what presumptive controls we should have over the data that we reveal to others. The issue here is not primarily the control of the government. The question is thus beyond the ordinary reach of the Fourth Amendment. Instead, the target of this control is private actors who have either gathered data about me as they’ve observed me, or collected data from me.
Again, let’s take this from the perspective of real space first. If I hire a private detective to follow you around, I’ve not violated anyone’s rights. If I compile a list of places you’ve been, there’s nothing to stop me from selling that list. You might think this intrusive. You might think it outrageous that the law would allow this to happen. But again, the law traditionally didn’t worry much about this kind of invasion because the costs of such surveillance were so high. Celebrities and the famous may wish the rules were different, but for most of us, for most of our history, there was no need for the law to intervene.
The same point could be made about the data I turned over to businesses or others in the days before the Internet. There was nothing in the law to limit what these entities did with that data. They could sell it to mailing list companies or brokers; they could use it however they wanted. Again, the practical cost of doing things with such data was high, so there wasn’t that much done with this data. And, more importantly, the invasiveness of any such use of data was relatively low. Junk mail was the main product, and junk mail in physical space is not a significant burden.
But here, as with “digital surveillance”, things have changed dramatically. Just a couple stories will give us a taste of the change:
• In the beginning of 2006, the Chicago Sun-Times reported[22] that there were websites selling the records of telephone calls made from cell phones. A blog, AmericaBlog, demonstrated the fact by purchasing the cell phone records of General Wesley Clark. For around $120, the blog was able to prove what most would have thought impossible: that anyone with a credit card could find something so personal as the list (and frequency and duration) of people someone calls on a cell phone.
This conduct was so outrageous that no one really stood up to defend it. But the defense isn’t hard to construct. Wesley Clark “voluntarily” dialed the numbers on his cell phone. He thus voluntarily turned that data over to the cell phone company. Because the cell phone company could sell data, it made it easier for the company to keep prices low(er). Clark benefited from those lower prices. So what’s his complaint?
• A number of years ago I received a letter from AT&T. It was addressed to an old girlfriend, but the letter had not been forwarded. The address was my then-current apartment. AT&T wanted to offer her a new credit card. They were a bit late: She and I had broken up eight years before. Since then, she had moved to Texas, and I had moved to Chicago, to Washington, back to Chicago, on to New Haven, back to Chicago, and finally to Boston, where I had moved twice. My peripateticism, however, did not deter AT&T. With great faith in my constancy, it believed that a woman I had not even seen in many years was living with me in this apartment.
How did AT&T maintain such a belief? Well, floating about in cyberspace is lots of data about me. It has been collected from me ever since I began using credit cards, telephones, and who knows what else. The system continuously tries to update and refine this extraordinary data set — that is, it profiles who I am and, using that profile, determines how it will interact with me.
These are just the tip of the iceberg. Everything you do on the Net produces data. That data is, in aggregate, extremely valuable, more valuable to commerce than it is to the government. The government (in normal times) really cares only that you obey some select set of laws. But commerce is keen to figure out how you want to spend your money, and data does that. With massive amounts of data about what you do and what you say, it becomes increasingly possible to market to you in a direct and effective way. Google Gmail processes the data in your e-mail to see what it should try to sell. Amazon watches what you browse to see what special “Gold Box” offers it can make. There’s an endless list of entities that want to know more about you to better serve (at least) their interests. What limits, or restrictions, ought there to be on them?
We should begin with an obvious point that might help direct an answer. There’s a big difference between (1) collecting data about X to suss out a crime or a criminal, (2) collecting data about X that will be sold to Y simply to reveal facts about X (such as his cell phone calls), and (3) collecting data about X to better market to X. (1) and (2) make X worse off, though if we believe the crime is properly a crime, then with (1), X is not worse off relative to where he should be. (3) in principle could make X better off — it facilitates advertising that is better targeted and better designed to encourage voluntary transactions. I say “in principle” because even though it’s possible that the ads are better targeted, there are also more of them. On balance, X might be worse off with the flood of well-targeted offers than with a few less well-targeted offers. But despite that possibility, the motive of (3) is different from (1) and (2), and that might well affect how we should respond.
So let’s begin with the focus on (3): What is the harm from this sort of “invasion”? Arguments rage on both sides of this question.
The “no harm” side assumes that the balance of privacy is struck at the line where you reveal information about yourself to the public. Sure, information kept behind closed doors or written in a private diary should be protected by the law. But when you go out in public, when you make transactions there or send material there, you give up any right to privacy. Others now have the right to collect data about your public behavior and do with it what suits them.
Why is that idea not troubling to these theorists? The reasons are many:
• First, the harm is actually not very great. You get a discount card at your local grocery store; the store then collects data about what you buy. With that data, the store may market different goods to you or figure out how better to price its products; it may even decide that it should offer different mixes of discounts to better serve customers. These responses, the argument goes, are the likely ones, because the store’s business is only to sell groceries more efficiently.
• Second, it is an unfair burden to force others to ignore what you show them. If data about you are not usable by others, then it is as if you were requiring others to discard what you have deposited on their land. If you do not like others using information about you, do not put it in their hands.
• Third, these data actually do some good. I do not know why Nike thinks I am a good person to tell about their latest sneakers, and I do not know why Keds does not know to call. In both cases, I suspect the reason is bad data about me. I would love it if Nike knew enough to leave me alone. And if these data were better collected and sorted, it would.
• Finally, in general, companies don’t spend money collecting these data to actually learn anything about you. They want to learn about people like you. They want to know your type. In principle, they would be happy to know your type even if they could not then learn who you are. What the merchants want is a way to discriminate — only in the sense of being able to tell the difference between sorts of people.
The other side of this argument, however, also has a point. It begins, again, by noticing the values that were originally protected by the imperfection of monitoring technology. This imperfection helped preserve important substantive values; one such value is the benefit of innocence. At any given time, there are innocent facts about you that may appear, in a particular context or to a particular set, guilty. Peter Lewis, in a New York Times article called “Forget Big Brother”, puts the point well:
Surveillance cameras followed the attractive young blond woman through the lobby of the midtown Manhattan hotel, kept a glassy eye on her as she rode the elevator up to the 23rd floor and peered discreetly down the hall as she knocked at the door to my room. I have not seen the videotapes, but I can imagine the digital readout superimposed on the scenes, noting the exact time of the encounter. That would come in handy if someone were to question later why this woman, who is not my wife, was visiting my hotel room during a recent business trip. The cameras later saw us heading off to dinner and to the theater — a middle-aged, married man from Texas with his arm around a pretty East Village woman young enough to be his daughter.
“As a matter of fact”, Lewis writes, “she is my daughter[23]”.
One lesson of the story is the burden of these monitored facts. The burden is on you, the monitored, first to establish your innocence, and second to assure all who might see these ambiguous facts that you are innocent. Both processes, however, are imperfect; say what you want, doubts will remain. There are always some who will not believe your plea of innocence.
Modern monitoring only exacerbates this problem. Your life becomes an ever-increasing record; your actions are forever held in storage, open to being revealed at any time, and therefore at any time demanding a justification.
A second value follows directly from this modern capacity for archiving data. We all desire to live in separate communities, or among or within separate normative spaces. Privacy, or the ability to control data about yourself, supports this desire. It enables these multiple communities and disables the power of one dominant community to norm others into oblivion. Think, for example, about a gay man in an intolerant small town.
The point comes through most clearly when contrasted with an argument advanced by David Brin[24]. Brin argues against this concern with privacy — at least if privacy is defined as the need to block the production and distribution of data about others. He argues against it because he believes that such an end is impossible; the genie is out of the bottle. Better, he suggests, to find ways to ensure that this data-gathering ability is generally available. The solution to your spying on me is not to block your spying, but to let me spy on you — to hold you accountable, perhaps for spying, perhaps for whatever else you might be doing.
There are two replies to this argument. One asks: Why do we have to choose? Why can’t we both control spying and build in checks on the distribution of spying techniques?
The other reply is more fundamental. Brin assumes that this counter spying would be useful to hold others “accountable.” But according to whose norms? “Accountable” is a benign term only so long as we have confidence in the community doing the accounting. When we live in multiple communities, accountability becomes a way for one community to impose its view of propriety on another. Because we do not live in a single community, we do not live by a single set of values. And perfect accountability can only undermine this mix of values.
The imperfection in present monitoring enables this multiplication of normative communities. The ability to get along without perfect recording enables a diversity that perfect knowledge would erase.
A third value arises from a concern about profiling. If you search within Google for “mortgage,” advertising for mortgages appears on your computer screen. The same for sex and for cars. Advertising is linked to the search you submit. Data is collected, but not just about the search. Different sites collect just about every bit of personal information about you that they can[25]. And when you link from the Google search to a web page, the search you just performed is passed along to the next site.
Data collection is the dominant activity of commercial websites. Some 92 percent of them collect personal data from web users, which they then aggregate, sort, and use[26]. Oscar Gandy calls this the “panoptic sort” — a vast structure for collecting data and discriminating on the basis of that data — and it is this discrimination, he says, that ought to concern us[27].
But why should it concern us? Put aside an important class of problems — the misuse of the data — and focus instead on its ordinary use. As I said earlier, the main effect is simply to make the market work more smoothly: Interests and products are matched to people in a way that is better targeted and less intrusive than what we have today. Imagine a world where advertisers could tell which venues paid and which did not; where it was inefficient to advertise with billboards and on broadcasts; where most advertising was targeted and specific. Advertising would be more likely to go to those people for whom it would be useful information. Or so the argument goes. This is discrimination, no doubt, but not the discrimination of Jim Crow. It is the wonderful sort of discrimination that spares me Nike ads.
But beyond a perhaps fleeting concern about how such data affect the individual, profiling raises a more sustained collective concern about how it might affect a community.
That concern is manipulation. You might be skeptical about the power of television advertising to control people’s desires: Television is so obvious, the motives so clear. But what happens when the motive is not so obvious? When options just seem to appear right when you happen to want them? When the system seems to know what you want better and earlier than you do, how can you know where these desires really come from?
Whether this possibility is a realistic one, or whether it should be a concern, are hard and open questions. Steven Johnson argues quite effectively that in fact these agents of choice will facilitate a much greater range and diversity — even, in part, chaos — of choice[28]. But there’s another possibility as well — profiles will begin to normalize the population from which the norm is drawn. The observing will affect the observed. The system watches what you do; it fits you into a pattern; the pattern is then fed back to you in the form of options set by the pattern; the options reinforce the pattern; the cycle begins again.
A second concern is about equality. Profiling raises a question that was latent in the market until quite recently. For much of the nineteenth century in the United States, economic thought was animated by an ideal of equality. In the civil space, individuals were held to be equal. They could purchase and sell equally; they could approach others on equal terms. Facts about individuals might be known, and some of these facts might disqualify them from some economic transactions — your prior bankruptcy, for example, might inhibit your ability to make transactions in the future. But in the main, there were spaces of relative anonymity, and economic transactions could occur within them[29].
Over time this space of equality has been displaced by economic zonings that aim at segregation[30]. They are laws, that is, that promote distinctions based on social or economic criteria[31]. The most telling example is zoning itself. It was not until this century that local law was used to put people into segregated spaces[32]. At first, this law was racially based, but when racially based zoning was struck down, the techniques of zoning shifted[33].
It is interesting to recall just how contentious this use of law was[34]. To many, rich and poor alike, it was an affront to the American ideal of equality to make where you live depend on how much money you had. It always does, of course, when property is something you must buy. But zoning laws add the support of law to the segregation imposed by the market. The effect is to re-create in law, and therefore in society, distinctions among people.
There was a time when we would have defined our country as a place that aimed to erase these distinctions. The historian Gordon Wood describes this goal as an important element of the revolution that gave birth to the United States[35]. The enemy was social and legal hierarchy; the aim was a society of equality. The revolution was an attack on hierarchies of social rank and the special privileges they might obtain.
All social hierarchies require information before they can make discriminations of rank. Having enough information about people required, historically, fairly stable social orders. Making fine class distinctions — knowing, for instance, whether a well-dressed young man was the gentleman he claimed to be or only a dressed-up tradesman — required knowledge of local fashions, accents, customs, and manners. Only where there was relatively little mobility could these systems of hierarchy be imposed.
As mobility increased, then, these hierarchical systems were challenged. Beyond the extremes of the very rich and very poor, the ability to make subtle distinctions of rank disappeared as the mobility and fluidity of society made them too difficult to track.
Profiling changes all this. An efficient and effective system for monitoring makes it possible once again to make these subtle distinctions of rank. Collecting data cheaply and efficiently will take us back to the past. Think about frequent flyer miles. Everyone sees the obvious feature of frequent flyer miles — the free trips for people who fly frequently. This rebate program is quite harmless on its own. The more interesting part is the power it gives to airlines to discriminate in their services.
When a frequent flyer makes a reservation, the reservation carries with it a customer profile. This profile might include information about which seat she prefers or whether she likes vegetarian food. It also tells the reservation clerk how often this person flies. Some airlines would then discriminate on the basis of this information. The most obvious way is through seat location — frequent flyers get better seats. But such information might also affect how food is allocated on the flight — the frequent flyers with the most miles get first choice; those with the fewest may get no choice.
In the scheme of social justice, of course, this is small potatoes. But my point is more general. Frequent flyer systems permit the re-creation of systems of status. They supply information about individuals that organizations might value, and use, in dispensing services[36]. They make discrimination possible because they restore information that mobility destroyed. They are ways of defeating one benefit of anonymity — the benefit of equality.
Economists will argue that in many contexts this ability to discriminate — in effect, to offer goods at different prices to different people — is overall a benefit[37]. On average, people are better off if price discrimination occurs than if it does not. So we are better off, these economists might say, if we facilitate such discrimination when we can.
But these values are just one side of the equation. Weighed against them are the values of equality. For us they may seem remote, but we should not assume that because they are remote now they were always remote.
Take tipping: As benign (if annoying) as you might consider the practice of tipping, there was a time at the turn of the century when the very idea was an insult. It offended a free citizen’s dignity. As Viviana Zelizer describes it:
In the early 1900s, as tipping became increasingly popular, it provoked great moral and social controversy. In fact, there were nationwide efforts, some successful, by state legislatures to abolish tipping by turning it into a punishable misdemeanor. In countless newspaper editorials and magazine articles, in etiquette books, and even in court, tips were closely scrutinized with a mix of curiosity, amusement, and ambivalence — and often open hostility. When in 1907, the government officially sanctioned tipping by allowing commissioned officers and enlisted men of the United States Navy to include tips as an item in their travel expense vouchers, the decision was denounced as an illegitimate endorsement of graft. Periodically, there were calls to organize anti-tipping leagues[38].
There is a conception of equality that would be corrupted by the efficiency that profiling embraces. That conception is a value to be weighed against efficiency. Although I believe this value is relatively weak in American life, who am I to say? The important point is not about what is strong or weak, but about the tension or conflict that lay dormant until revealed by the emerging technology of profiling.
The pattern should be familiar by now, because we have seen the change elsewhere. Once again, the code changes, throwing into relief a conflict of values. Whereas before there was relative equality because the information that enabled discrimination was too costly to acquire, now it pays to discriminate. The difference — what makes it pay — is the emergence of a code. The code changes, the behavior changes, and a value latent in the prior regime is displaced.
We could react by hobbling the code, thus preserving this world. We could create constitutional or statutory restrictions that prevent a move to the new world. Or we could find ways to reconcile this emerging world with the values we think are fundamental.
I’ve identified two distinct threats to the values of privacy that the Internet will create. The first is the threat from “digital surveillance” — the growing capacity of the government (among others) to “spy” on your activities “in public.” From Internet access, to e-mail, to telephone calls, to walking on the street, digital technology is opening up the opportunity for increasingly perfect burdenless searches.
The second threat comes from the increasing aggregation of data by private (among other) entities. These data are gathered not so much to “spy” as to facilitate commerce. Some of that commerce exploits the source of the data (Wesley Clark’s cell phone numbers). Some of that commerce tries to facilitate commerce with the source of that data (targeted ads).
Against these two different risks, we can imagine four types of responses, each mapping one of the modalities that I described in Chapter 7:
• Law: Legal regulation could be crafted to respond to these threats. We’ll consider some of these later, but the general form should be clear enough. The law could direct the President not to surveil American citizens without reasonable suspicion, for example. (Whether the President follows the law is a separate question.) Or the law could ban the sale of data gathered from customers without express permission of the customers. In either case, the law threatens sanctions to change behavior directly. The aim of the law could either be to enhance the power of individuals to control data about them, or to disable such power (for example, by making certain privacy-related transactions illegal).
• Norms: Norms could be used to respond to these threats. Norms among commercial entities, for example, could help build trust around certain privacy protective practices.
• Markets: In ways that will become clearer below, the market could be used to protect the privacy of individuals.
• Architecture/Code: Technology could be used to protect privacy. Such technologies are often referred to as “Privacy Enhancing Technologies.” These are technologies designed to give the user more technical control over data associated with him or her.
As I’ve argued again and again, there is no single solution to policy problems on the Internet. Every solution requires a mix of at least two modalities. And in the balance of this chapter, my aim is to describe a mix for each of these two threats to privacy.
No doubt this mix will be controversial to some. But my aim is not so much to push any particular mix of settings on these modality dials, as it is to demonstrate a certain approach. I don’t insist on the particular solutions I propose, but I do insist that solutions in the context of cyberspace are the product of such a mix.
The government surveils as much as it can in its fight against whatever its current fight is about. When that surveillance is human — wiretapping, or the like — then traditional legal limits ought to apply. Those limits impose costs (and thus, through the market, limit the practice to the cases thought most significant); they assure at least some review. And, perhaps most importantly, they build within law enforcement a norm respecting procedure.
When that surveillance is digital, however, then it is my view that a different set of restrictions should apply. The law should sanction “digital surveillance” if, but only if, a number of conditions apply:
The purpose of the search enabled in the algorithm is described.
The function of the algorithm is reviewed.
That the purpose and the function match is certified.
No action — including a subsequent search — can be taken against any individual on the basis of the algorithm without judicial review.
With very limited exceptions, no action against any individual can be pursued for matters outside the purpose described. Thus, if you’re looking for evidence of drug dealing, you can’t use any evidence discovered for prosecuting credit card fraud.
That describes the legal restrictions applied against the government in order to enhance privacy. If these are satisfied, then in my view such digital surveillance should not conflict with the Fourth Amendment. In addition to these, there are privacy enhancing technologies (PETs) that should be broadly available to individuals as well. These technologies enable individuals to achieve anonymity in their transactions online. Many companies and activist groups help spread these technologies across the network.
Anonymity in this sense simply means non-traceability. Tools that enable this sort of non-traceability make it possible for an individual to send a message without the content of that message being traced to the sender. If the tools are implemented properly, there is no technical way to trace that message. That kind of anonymity is essential to certain kinds of communication.
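To make the mechanism concrete, here is a minimal sketch of the layered (“onion”) encryption that many anonymity tools rely on. It assumes three hypothetical relays, each holding its own key: the sender wraps the message in one layer per relay, and each relay can strip only its own layer. This is an illustration of the idea, not the protocol of any particular tool.

```python
# A toy illustration of layered ("onion") encryption, the idea behind many
# anonymity tools: the sender wraps a message in one layer per relay, and
# each relay can remove only its own layer. Relay names and keys are
# hypothetical; a real system would distribute keys very differently.
from cryptography.fernet import Fernet

relays = ["relay_a", "relay_b", "relay_c"]            # entry first, exit last
keys = {name: Fernet.generate_key() for name in relays}

def wrap(message, route):
    """Encrypt the message once per relay; the exit relay's layer is innermost."""
    wrapped = message
    for name in route:                                # exit first, entry last
        wrapped = Fernet(keys[name]).encrypt(wrapped)
    return wrapped

def relay_peel(name, blob):
    """Each relay removes exactly one layer; only the exit relay ever
    recovers the plaintext."""
    return Fernet(keys[name]).decrypt(blob)

onion = wrap(b"meet at noon", route=list(reversed(relays)))
for name in relays:                                   # message travels entry -> exit
    onion = relay_peel(name, onion)
print(onion)                                          # b'meet at noon'
```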
It is my view that, at least so long as political repression remains a central feature of too many world governments, free governments should recognize a protected legal right to these technologies. I acknowledge that view is controversial. A less extreme view would acknowledge the differences between the digital world and real world[39], and guarantee a right to pseudonymous communication but not anonymous communication. In this sense, a pseudonymous transaction doesn’t obviously or directly link to an individual without court intervention. But it contains an effective fingerprint that would allow the proper authority, under the proper circumstances, to trace the communication back to its originator.
In this regime, the important question is who is the authority, and what process is required to get access to the identification. In my view, the authority must be the government. The government must subject its demand for revealing the identity of an individual to judicial process. And the executive should never hold the technical capacity to make that link on its own.
Again, no one will like this balance. Friends of privacy will be furious with any endorsement of surveillance. But I share Judge Posner’s view that a sophisticated surveillance technology might actually increase effective privacy, if it decreases the instances in which humans intrude on other humans. Likewise, friends of security will be appalled at the idea that anyone would endorse technologies of anonymity. “Do you know how hard it is to crack a drug lord’s encrypted e-mail communication?” one asked me.
The answer is no, I don’t have a real sense. But I care less about enabling the war on drugs than I do about enabling democracies to flourish. Technologies that enable the latter will enable the former. Or to be less cowardly, technologies that enable Aung San Suu Kyi to continue to push for democracy in Burma will enable Al Qaeda to continue to wage its terrorist war against the United States. I acknowledge that. I accept that might lead others to a less extreme position. But I would urge the compromise in favor of surveillance to go no further than protected pseudonymity.
The problem of controlling the spread or misuse of data is more complex and ambiguous. There are uses of personal data that many would object to. But many is not all. There are some who are perfectly happy to reveal certain data to certain entities, and there are many more who would become happy if they could trust that their data was properly used.
Here again, the solution mixes modalities. But this time, we begin with the technology[40].
As I described extensively in Chapter 4, there is an emerging push to build an Identity Layer onto the Internet. In my view, we should view this Identity Layer as a PET (privacy enhancing technology): It would enable individuals to more effectively control the data about them that they reveal. It would also enable individuals to have a trustable pseudonymous identity that websites and others should be happy to accept. Thus, with this technology, if a site needs to know I am over 18, or an American citizen, or authorized to access a university library, the technology can certify this data without revealing anything else. Of all the changes to information practices that we could imagine, this would be the most significant in reducing the extent of redundant or unnecessary data flowing in the ether of the network.
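A minimal sketch may help show what such selective certification could look like. Assume a hypothetical issuer that certifies a single attribute (“over 18”) and a site that verifies only that attribute; the key handling is deliberately simplified (a shared secret standing in for real public-key credentials), so this is an illustration of the idea rather than the design of any actual identity system.

```python
# A toy sketch of selective disclosure: an issuer certifies one narrow claim
# ("over_18": true) so a site can verify it without seeing a birth date, a
# name, or anything else. Real identity systems use signed credentials; an
# HMAC shared between issuer and verifier stands in for that machinery here.
import hashlib
import hmac
import json

ISSUER_KEY = b"hypothetical-issuer-secret"            # stands in for a real signing key

def issue_claim(attribute, value):
    """The issuer certifies a single attribute and nothing more."""
    claim = {"attribute": attribute, "value": value}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def site_accepts(claim, required_attribute):
    """The site checks the signature and the one attribute it cares about."""
    payload = json.dumps(
        {"attribute": claim["attribute"], "value": claim["value"]},
        sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, claim["signature"])
            and claim["attribute"] == required_attribute
            and claim["value"] is True)

token = issue_claim("over_18", True)                   # the user presents only this
print(site_accepts(token, "over_18"))                  # True; no other data revealed
```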
A second PET to enable greater control over the use of data would be a protocol called the Platform for Privacy Preferences (or P3P for short)[41]. P3P would enable a machine-readable expression of the privacy preferences of an individual. It would enable an automatic way for an individual to recognize when a site does not comply with his privacy preferences. If you surf to a site that expresses its privacy policy using P3P, and its policy is inconsistent with your preferences, then depending upon the implementation, either the site or you are made aware of the problem created by this conflict. The technology thus could make clear a conflict in preferences. And recognizing that conflict is the first step to protecting preferences.
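A small sketch suggests what a machine-readable policy makes possible. The field names below are illustrative stand-ins, not the actual P3P vocabulary; the point is only that once both the site’s practices and the user’s preferences are expressed in a form software can read, conflicts can be flagged automatically rather than buried in legal prose.

```python
# A simplified illustration of P3P-style matching: the site publishes a
# machine-readable summary of its data practices, the browser holds the
# user's preferences, and software flags any conflict before data flows.
# The field names are illustrative stand-ins, not the real P3P schema.

site_policy = {
    "collects": {"email", "browsing_history", "purchase_history"},
    "shares_with_third_parties": True,
    "retention": "indefinite",
}

user_preferences = {
    "allow_collection": {"email", "purchase_history"},
    "allow_third_party_sharing": False,
    "max_retention": "one_year",
}

def conflicts(policy, prefs):
    """Return a readable list of ways the policy exceeds the user's preferences."""
    problems = []
    extra = policy["collects"] - prefs["allow_collection"]
    if extra:
        problems.append(f"collects data you have not allowed: {sorted(extra)}")
    if policy["shares_with_third_parties"] and not prefs["allow_third_party_sharing"]:
        problems.append("shares data with third parties")
    if policy["retention"] == "indefinite" and prefs["max_retention"] != "indefinite":
        problems.append("retains data longer than you allow")
    return problems

for problem in conflicts(site_policy, user_preferences):
    print("warning:", problem)
```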
The critical part of this strategy is to make these choices machine-readable. If you Google “privacy policy”, you’ll get close to 2.5 billion hits on the Web. And if you click through to the vast majority of them (not that you could do that in this lifetime), you will find that they are among the most incomprehensible legal texts around (and that’s saying a lot). These policies are the product of pre-Internet thinking about how to deal with a policy problem. The government was pushed to “solve” the problem of Internet privacy. Its solution was to require “privacy policies” be posted everywhere. But does anybody read these policies? And if they do, do they remember them from one site to another? Do you know the difference between Amazon’s policies and Google’s?
The mistake of the government was in not requiring that those policies also be understandable by a computer. Because if we had 2.5 billion sites with both a human readable and machine readable statement of privacy policies, then we would have the infrastructure necessary to encourage the development of this PET, P3P. But because the government could not think beyond its traditional manner of legislating — because it didn’t think to require changes in code as well as legal texts — we don’t have that infrastructure now. But, in my view, it is critical.
These technologies standing alone, however, do nothing to solve the problem of privacy on the Net. It is absolutely clear that to complement these technologies, we need legal regulation. But this regulation is of three very different sorts. The first kind is substantive — laws that set the boundaries of privacy protection. The second kind is procedural — laws that mandate fair procedures for dealing with privacy practices. And the third is enabling — laws that make enforceable agreements between individuals and corporations about how privacy is to be respected.
One kind of legislation is designed to limit individual freedom. Just as labor law bans certain labor contracts, or consumer law forbids certain credit arrangements, this kind of privacy law would restrict the freedom of individuals to give up certain aspects of their privacy. The motivation for this limitation could either be substantive or procedural — substantive in that it reflects a substantive judgment about choices individuals should not make, or procedural in that it reflects the view that systematically, when faced with this choice, individuals will choose in ways that they regret. In either case, the role of this type of privacy regulation is to block transactions deemed to weaken privacy within a community.
The most significant normative structure around privacy practices was framed more than thirty years ago by the HEW (Department of Health, Education, and Welfare) Advisory Committee on Automated Data Systems. This report set out five principles that were to define the “Code of Fair Information Practices”[42]. These principles require:
There must be no personal data record-keeping systems whose very existence is secret.
There must be a way for a person to find out what information about the person is in a record and how it is used.
There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
There must be a way for a person to correct or amend a record of identifiable information about the person.
Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.
These principles express important substantive values — for example, that data not be reused beyond an original consent, or that systems for gathering data be reliable — but they don’t interfere with an individual’s choice to release his or her own data for specified purposes. They are in this sense individual autonomy enhancing, and their spirit has guided the relatively thin and ad hoc range of privacy legislation that has been enacted both nationally and at the state level[43].
The real challenge for privacy, however, is how to enable a meaningful choice in the digital age. And in this respect, the technique of the American government so far — namely, to require text-based privacy policy statements — is a perfect example of how not to act. Cluttering the web with incomprehensible words will not empower consumers to make useful choices as they surf the Web. If anything, it drives consumers away from even attempting to understand what rights they give away as they move from site to site.
P3P would help in this respect, but only if (1) there were a strong push to spread the technology across all areas of the web and (2) the representations made within the P3P infrastructure were enforceable. Both elements require legal action to be effected.
In the first edition of this book, I offered a strategy that would, in my view, achieve both (1) and (2): namely, by protecting personal data through a property right. As with copyright, a privacy property right would create strong incentives in those who want to use that property to secure the appropriate consent. That consent could then be channeled (through legislation) through appropriate technologies. But without that consent, the user of the privacy property would be a privacy pirate. Indeed, many of the same tools that could protect copyright in this sense could also be used to protect privacy.
This solution also recognizes what I believe is an important feature of privacy — that people value privacy differently[44]. It also respects those different values. It may be extremely important to me not to have my telephone number easily available; you might not care at all. And as the law’s presumptive preference is to use a legal device that gives individuals the freedom to be different — meaning the freedom to have and have respected wildly different subjective values — that suggests the device we use here is property. A property system is designed precisely to permit differences in value to be respected by the law. If you won’t sell your Chevy Nova for anything less than $10,000, then the law will support you.
The opposite legal entitlement in the American legal tradition is called a “liability rule[45]”. A liability rule also protects an entitlement, but its protection is less individual. If you have a resource protected by a liability rule, then I can take that resource so long as I pay a state-determined price. That price may be more or less than you value it at. But the point is, I have the right to take that resource, regardless.
An example from copyright law might make the point more clearly. A derivative right is the right to build upon a copyrighted work. A traditional example is a translation, or a movie based on a book. The law of copyright gives the copyright owner a property right over that derivative right. Thus, if you want to make a movie out of John Grisham’s latest novel, you have to pay whatever Grisham says. If you don’t, and you make the movie, you’ve violated Grisham’s rights.
The same is not true with the derivative rights that composers have. If a songwriter authorizes someone to record his song, then anyone else has a right to record that song, so long as they follow certain procedures and pay a specified rate. Thus, while Grisham can choose to give only one filmmaker the right to make a film based on his novel, the Beatles must allow anyone to record a song a member of the Beatles composed, so long as that person pays. The derivative right for novels is thus protected by a property rule; the derivative right for recordings by a liability rule.
The law has all sorts of reasons for imposing a liability rule rather than a property rule. But the general principle is that we should use a property rule, at least where the “transaction costs” of negotiating are low, and where there is no contradicting public value[46]. And it is my view that, with a technology like P3P, we could lower transaction costs enough to make a property rule work. That property rule in turn would reinforce whatever diversity of views people had about their privacy — permitting some to choose to waive their rights and others to hold firm.
There was one more reason I pushed for a property right. In my view, the protection of privacy would be stronger if people conceived of the right as a property right. People need to take ownership of this right, and protect it, and propertizing is the traditional tool we use to identify and enable protection. If we could see one fraction of the passion defending privacy that we see defending copyright, we might make progress in protecting privacy.
But my proposal for a property right was resoundingly rejected by critics whose views I respect[47]. I don’t agree with the core of these criticisms. For the reasons powerfully marshaled by Neil Richards, I especially don’t agree with the claim that there would be a First Amendment problem with propertizing privacy[48]. In any case, William McGeveran suggested an alternative that reached essentially the same end that I sought, without raising any of the concerns that most animated the critics[49].
The alternative simply specifies that a representation made by a website through the P3P protocol be considered a binding offer, which, if accepted by someone using the website, becomes an enforceable contract[50]. That rule, tied to a requirement that privacy policies be expressed in a machine-readable form such as P3P, would both (1) spread P3P and (2) make P3P assertions effectively law. This would still be weaker than a property rule, for reasons I will leave to the notes[51]. And it may well encourage the shrink-wrap culture, which raises its own problems. But for my purposes here, this solution is a useful compromise.
To illustrate again the dynamic of cyberlaw: We use law (a requirement of policies expressed in a certain way, and a contract presumption about those expressions) to encourage a certain kind of technology (P3P), so that that technology enables individuals to better achieve in cyberspace what they want. It is LAW helping CODE to perfect privacy POLICY.
This is not to say, of course, that we have no protections for privacy. As we have seen throughout, there are other laws besides federal, and other regulators besides the law. At times these other regulators may protect privacy better than law does, but where they don’t, then in my view law is needed.
The reader who was dissatisfied with my argument in the last chapter is likely to begin asking pointed questions. “Didn’t you reject in the last chapter the very regime you are endorsing here? Didn’t you reject an architecture that would facilitate perfect sale of intellectual property? Isn’t that what you’ve created here?”
The charge is accurate enough. I have endorsed an architecture here that is essentially the same architecture I questioned for intellectual property. Both are regimes for trading information; both make information “like” “real” property. But with copyright, I argued against a fully privatized property regime; with privacy, I am arguing in favor of it. What gives?
The difference is in the underlying values that inform, or that should inform, information in each context. In the context of intellectual property, our bias should be for freedom. Who knows what “information wants[52]”; whatever it wants, we should read the bargain that the law strikes with holders of intellectual property as narrowly as we can. We should take a grudging attitude to property rights in intellectual property; we should support them only as much as necessary to build and support information regimes.
But (at least some kinds of) information about individuals should be treated differently. You do not strike a deal with the law about personal or private information. The law does not offer you a monopoly right in exchange for your publication of these facts. That is what is distinct about privacy: Individuals should be able to control information about themselves. We should be eager to help them protect that information by giving them the structures and the rights to do so. We value, or want, our peace. And thus, a regime that allows us such peace by giving us control over private information is a regime consonant with public values. It is a regime that public authorities should support.
There is a second, perhaps more helpful, way of making the same point. Intellectual property, once created, is non-diminishable. The more people who use it, the more society benefits. The bias in intellectual property is thus, properly, towards sharing and freedom. Privacy, on the other hand, is diminishable. The more people who are given license to tread on a person’s privacy, the less that privacy exists. In this way, privacy is more like real property than it is like intellectual property. No single person’s trespass may destroy it, but each incremental trespass diminishes its value by some amount.
This conclusion is subject to important qualifications, only two of which I will describe here.
The first is that nothing in my regime would give individuals final or complete control over the kinds of data they can sell, or the kinds of privacy they can buy. The P3P regime would in principle enable upstream control of privacy rights as well as individual control. If we lived, for example, in a regime that identified individuals based on jurisdiction, then transactions with the P3P regime could be limited based on the rules for particular jurisdictions.
Second, there is no reason such a regime would have to protect all kinds of private data, and nothing in the scheme so far tells us what should and should not be considered “private” information. There may be facts about yourself that you are not permitted to hide; more important, there may be claims about yourself that you are not permitted to make (“I am a lawyer” or “Call me, I’m a doctor”). You should not be permitted to engage in fraud or to do harm to others. This limitation is an analog to fair use in intellectual property — a limit to the space that privacy may protect.
I started this chapter by claiming that with privacy the cat is already out of the bag. We already have architectures that deny individuals control over what others know about them; the question is what we can do in response.
My response has been: Look to the code, Luke. We must build into the architecture a capacity to enable choice — not choice by humans but by machines. The architecture must enable machine-to-machine negotiations about privacy so that individuals can instruct their machines about the privacy they want to protect.
But how will we get there? How can this architecture be erected? Individuals may want cyberspace to protect their privacy, but what would push cyberspace to build in the necessary architectures?
Not the market. The power of commerce is not behind any such change. Here, the invisible hand would really be invisible. Collective action must be taken to bend the architectures toward this goal, and collective action is just what politics is for. Laissez-faire will not cut it.
The right to free speech is not the right to speak for free. It is not the right to free access to television, or the right that people will not hate you for what you have to say. Strictly speaking — legally speaking — the right to free speech in the United States means the right to be free from punishment by the government in retaliation for at least some (probably most) speech. You cannot be jailed for criticizing the President, though you can be jailed for threatening him; you cannot be fined for promoting segregation, though you will be shunned if you do. You cannot be stopped from speaking in a public place, though you can be stopped from speaking with an FM transmitter. Speech in the United States is protected — in a complex, and at times convoluted, way — but its constitutional protection is a protection against the government.
Nevertheless, a constitutional account of free speech that thought only of government would be radically incomplete. Two societies could have the same “First Amendment” — the same protections against government’s wrath — but if within one dissenters are tolerated while in the other they are shunned, the two societies would be very different free-speech societies. More than government constrains speech, and more than government protects it. A complete account of this — and any — right must consider the full range of burdens and protections.
Consider, for example, the “rights” of the disabled to protection against discrimination as each of the four modalities of Chapter 7 construct them. The law protects the disabled. Social norms don’t. The market provides goods to help the disabled, but they bear the full cost of that help. And until the law intervened, architecture did little to help the disabled integrate into society (think about stairs). The net of these four modalities describes the protection, or “rights”, that in any particular context the disabled have. Law might intervene to strengthen that protection — for example, by regulating architectures so they better integrate the disabled. But for any given “right”, we can use this mix of modalities to describe how well (or not) that “right” is protected.
In the terms of Chapter 7, then, these are modalities of both regulation and protection. That is, they can function both as constraints on behavior and as protections against other constraints. The following figure captures the point.
In the center is the object regulated — the pathetic dot from Chapter 7. Surrounding the individual now is a shield of protection, the net of law/norms/market/architecture that limits the constraints these modalities would otherwise place on the individual. I have not separated the four in the sphere of the shield because obviously there is no direct match between the modality of constraint and the modality of protection. When law as protector conflicts with law as constraint, constitutional law overrides ordinary law.
These modalities function together. Some might undercut others, meaning that the sum of protections might seem to be less significant than the parts. The “right” to promote the decriminalization of drugs in the present context of the war on drugs is an example. The law protects your right to advocate the decriminalization of drugs. The state cannot lock you up if, like George Soros, you start a campaign for the decriminalization of marijuana or if, like the Nobel Prize–winning economist Milton Friedman or the federal judge Richard Posner, you write articles suggesting it. If the First Amendment means anything, it means that the state cannot criminalize speech about law reform.
But that legal protection does not mean that I would suffer no consequences for promoting legalization of drugs. My hometown neighbors would be appalled at the idea, and some no doubt would shun me. Nor would the market necessarily support me. It is essentially impossible to buy time on television for a speech advocating such a reform. Television stations have the right to select their ads (within some limits); mine would most likely be deemed too controversial[1]. Stations also have the FCC — an active combatant in the war on drugs — looking over their shoulders. And even if I were permitted to advertise, I am not George Soros. I do not have millions to spend on such a campaign. I might manage a few off-hour spots on a local station, but I could not afford, for instance, a campaign on the networks during prime time.
Finally, architecture wouldn’t protect my speech very well either. In the United States at least, there are few places where you can stand before the public and address them about some matter of public import without most people thinking you a nut or a nuisance. There is no speakers’ corner in every city; most towns have no town meeting. “America offline”, in this sense, is very much like America Online — not designed to give individuals access to a wide audience to address public matters. Only professionals get to address Americans on public issues — politicians, scholars, celebrities, journalists, and activists, most of whom are confined to single issues. The rest of us have a choice — listen, or be dispatched to the gulag of social lunacy.
Thus, the effective protection for controversial speech is more conditional than a view of the law alone would suggest. Put differently, when more than law is reckoned, the right to be a dissenter is less protected than it could be.
Let’s take this example now to cyberspace. How is the “right” to promote the legalization of drugs in cyberspace protected? Here too, of course, the law protects my right of advocacy — at least in the United States. But it is quite possible that my speech would be illegal elsewhere and that perhaps I could be prosecuted for uttering such speech in cyberspace “in” another country. Speech promoting the Nazi Party, for example, is legal in the United States but not in Germany[2]. Uttering such speech in cyberspace may make one liable in German space as well.
The law therefore is an imperfect protection. Do norms help to protect speech? With the relative anonymity of cyberspace and its growing size, norms do not function well there. Even in cyberspaces where people know each other well, they are likely to be more tolerant of dissident views when they know (or believe, or hope) the dissident lives thousands of miles away.
The market also provides a major protection to speech in cyberspace — relative to real space, market constraints on speech in cyberspace are tiny. Recall how easily Jake Baker became a publisher, with a potential readership greater than the readership of all law books (like this one) published in the last decade. Look at the more than 50 million blogs that now enable millions to express their views on whatever they choose. The low cost of publishing means publishing is no longer a barrier to speaking. As Eben Moglen asks, “Will there be an unpublished poet in the 21st Century?”
But on top of this list of protectors of speech in cyberspace is (once again) architecture. Relative anonymity, decentralized distribution, multiple points of access, no necessary tie to geography, no simple system to identify content, tools of encryption[3] — all these features and consequences of the Internet protocol make it difficult to control speech in cyberspace. The architecture of cyberspace is the real protector of speech there; it is the real “First Amendment in cyberspace”, and this First Amendment is no local ordinance[4].
Just think about what this means. For over 60 years the United States has been the exporter of a certain political ideology, at its core a conception of free speech. Many have criticized this conception: Some found it too extreme, others not extreme enough. Repressive regimes — China, North Korea — rejected it directly; tolerant regimes — France, Hungary — complained of cultural decay; egalitarian regimes — the Scandinavian countries — puzzled over how we could think of ourselves as free when only the rich can speak and pornography is repressed.
This debate has gone on at the political level for a long time. And yet, as if under cover of night, we have now wired these nations with an architecture of communication that builds within their borders a far stronger First Amendment than our ideology ever advanced. Nations wake up to find that their telephone lines are tools of free expression, that e-mail carries news of their repression far beyond their borders, that images are no longer the monopoly of state-run television stations but can be transmitted from a simple modem. We have exported to the world, through the architecture of the Internet, a First Amendment more extreme in code than our own First Amendment in law.
This chapter is about the regulation of speech and the protection of speech in cyberspace — and therefore also in real space. My aim is to obsess about the relationship between architecture and the freedom it makes possible, and about the significance of law in the construction of that architecture. It is to get you to see how this freedom is built — the constitutional politics in the architectures of cyberspace.
I say “politics” because this building is not over. As I have argued (over and over again), there is no single architecture for cyberspace; there is no given or necessary structure to its design. The first-generation Internet might well have breached walls of control. But there is no reason to believe that architects of the second generation will do so, or not to expect a second generation to rebuild control. There is no reason to think, in other words, that this initial flash of freedom will not be short-lived. And there is certainly no justification for acting as if it will not.
We can already see the beginnings of this reconstruction. The architecture is being remade to re-regulate what real-space architecture before made regulable. Already the Net is changing from free to controlled space.
Some of these steps to re-regulate are inevitable; some shift back is unavoidable. Before the change is complete, however, we must understand the freedoms the Net now provides and determine which freedoms we mean to preserve.
And not just preserve. The architecture of the Internet, as it is right now, is perhaps the most important model of free speech since the founding. This model has implications far beyond e-mail and web pages. Two hundred years after the framers ratified the Constitution, the Net has taught us what the First Amendment means. If we take this meaning seriously, then the First Amendment will require a fairly radical restructuring of the architectures of speech off the Net as well[5].
But all of that is getting ahead of the story. In the balance of this chapter, I address four distinct questions about free speech in cyberspace. With each, I want to consider how “free speech” is regulated.
These stories do not all have the same constitutional significance. But they all illustrate the dynamic at the core of the argument of this book — how technology interacts with law to create policy.
Floyd Abrams is one of America’s leading First Amendment lawyers. In 1971 he was a young partner at the law firm of Cahill, Gordon[6]. Late in the evening of Monday, June 14, he received a call from James Goodale, in-house counsel for the New York Times. Goodale asked Abrams, together with Alexander Bickel, a Yale Law School professor, to defend the New York Times in a lawsuit that was to be filed the very next day.
The New York Times had just refused the government’s request that it cease all publication of what we now know as the “Pentagon Papers” and return the source documents to the Department of Defense[7]. These papers, mostly from the Pentagon’s “History of U.S. Decision Making Process on Vietnam Policy”, evaluated U.S. policy during the Vietnam War[8]. Their evaluation was very negative, and their conclusions were devastating. The papers made the government look extremely bad and made the war seem unwinnable.
The papers had been given to the New York Times by someone who did think the war was unwinnable; someone who had worked in the Pentagon and helped write the report; someone who was not anti-war at first but who, over time, had come to see the Vietnam War as the impossibility it was.
This someone was Daniel Ellsberg. Ellsberg smuggled one of the 15 copies of the papers from a safe at the RAND Corporation to an offsite photocopier. There, he and a colleague, Anthony Russo, photocopied the papers over a period of several weeks[9]. Ellsberg tried without success to make the papers public by having them read into the Congressional Record. He eventually contacted the New York Times reporter Neil Sheehan in the hope that the Times would publish them. Ellsberg knew that this was a criminal act, but for him the war itself was a criminal act; his aim was to let the American people see just what kind of a crime it was.
For two and a half months the Times editors pored over the papers, working to verify their authenticity and accuracy. After an extensive review, the editors determined that they were authentic and resolved to publish the first of a ten-part series of excerpts and stories on Sunday, June 13, 1971[10].
On Monday afternoon, one day after the first installment appeared, Attorney General John Mitchell sent a telegram to the New York Times stating:
I respectfully request that you publish no further information of this character and advise me that you have made arrangements for the return of these documents to the Department of Defense[11].
When the Times failed to comply, the government filed papers to enjoin the paper from continuing to publish stories and excerpts from the documents[12].
The government’s claims were simple: These papers contained government secrets; they were stolen from the possession of the government; to publish them would put many American soldiers at risk and embarrass the United States in the eyes of the world. This concern about embarrassment was more than mere vanity: Embarrassment, the government argued, would weaken our bargaining position in the efforts to negotiate a peace. Because of the harm that would come from further publication, the Court should step in to stop it.
The argument was not unprecedented. Past courts had stopped the publication of life-threatening texts, especially in the context of war. As the Supreme Court said in Near v. Minnesota, for example, “no one would question but that a government might prevent actual obstruction to its recruiting service or the publication of the sailing dates of transports or the number and location of troops[13]”.
Yet the question was not easily resolved. Standing against precedent was an increasingly clear command: If the First Amendment meant anything, it meant that the government generally cannot exercise the power of prior restraint[14]. “Prior restraint” is when the government gets a court to stop publication of some material, rather than punish the publisher later for what was illegally published. Such a power is thought to present much greater risks to a system of free speech.[15] Attorney General Mitchell was asking the Court to exercise this power of prior restraint.
The Court struggled with the question, but resolved it quickly. It struggled because the costs seemed so high[16], but when it resolved the question, it did so quite squarely against the government. In the Court’s reading, the Constitution gave the New York Times the right to publish without the threat of prior restraint.
The Pentagon Papers case is a First Amendment classic — a striking reminder of how powerful a constitution can be. But even classics get old. And in a speech Abrams gave around the time the first edition of this book was published, he asked a remarkable question: Is the case really important anymore? Or has technology rendered this protection of the First Amendment unnecessary?
Abrams’s question was motivated by an obvious point: For the government to succeed in a claim that a printing should be stopped, it must show “irreparable harm” — harm so significant and irreversible that the Court must intervene to prevent it[17]. But that showing depends on the publication not occurring — if the Pentagon Papers had already been published by the Chicago Tribune, the government could have claimed no compelling interest to stop its publication in the New York Times. When the cat is already out of the bag, preventing further publication does not return the cat to the bag.
This point is made clear in a case that came after New York Times — a case that could have been invented by a law professor. In the late 1970s, the Progressive commissioned an article by Howard Morland about the workings of an H-bomb. The Progressive first submitted the manuscript to the Department of Energy, and the government in turn brought an injunction to block its publication. The government’s claim was compelling: to give to the world the secrets of how to build a bomb would make it possible for any terrorist to annihilate any city. On March 26, 1979, Judge Robert Warren of the Western District of Wisconsin agreed and issued a temporary restraining order enjoining the Progressive from publishing the article[18].
Unlike the Pentagon Papers case, this case didn’t race to the Supreme Court. Instead, it stewed, no doubt in part because the district judge hearing the case understood the great risk this publication presented. The judge did stop the publication while he thought through the case. He thought for two and a half months. The publishers went to the Court of Appeals, and to the Supreme Court, asking each to hurry the thinking along. No court intervened.
Until Chuck Hansen, a computer programmer, ran a “Design Your Own H-Bomb” contest and circulated an eighteen-page letter in which he detailed his understanding of how an H-Bomb works. On September 16, 1979, the Press-Connection of Madison, Wisconsin, published the letter. The next day the government moved to withdraw its case, conceding that it was now moot. The compelling interest of the government ended once the secret was out[19].
Note what this sequence implies. There is a need for the constitutional protection that the Pentagon Papers case represents only because there is a real constraint on publishing. Publishing requires a publisher, and a publisher can be punished by the state. But if the essence or facts of the publication are published elsewhere first, then the need for constitutional protection disappears. Once the piece is published, there is no further legal justification for suppressing it.
So, Abrams asks, would the case be important today? Is the constitutional protection of the Pentagon Papers case still essential?
Surprisingly, Floyd Abrams suggests not[20]. Today there’s a way to ensure that the government never has a compelling interest in asking a court to suppress publication. If the New York Times wanted to publish the Pentagon Papers today, it could ensure that the papers had been previously published simply by leaking them to a USENET newsgroup, or one of a million blogs. More quickly than its own newspaper is distributed, the papers would then be published in millions of places across the world. The need for the constitutional protection would be erased, because the architecture of the system gives anyone the power to publish quickly and anonymously.
Thus the architecture of the Net, Abrams suggested, eliminates the need for the constitutional protection. Even better, Abrams went on, the Net protects against prior restraint just as the Constitution did — by ensuring that strong controls on information can no longer be achieved. The Net does what publication of the Pentagon Papers was designed to do — ensure that the truth does not remain hidden.
But there’s a second side to this story.
On July 17, 1996, TWA Flight 800 fell from the sky ten miles off the southern coast of Center Moriches, New York. Two hundred and thirty people were killed. Immediately after the accident the United States launched the (then) largest investigation of an airplane crash in the history of the National Transportation Safety Board (NTSB), spending $27 million to discover the cause of the crash, which eventually was determined to have been a mechanical failure[21].
This was not, however, the view of the Internet. From the beginning, stories circulated about “friendly fire” — missiles that were seen to hit the airplane. Dozens of eyewitnesses reported that they saw a streaking light shoot toward the plane just before it went down. There were stories about missile tests conducted by the Navy seventy miles from the crash site[22]. The Net claimed that there was a cover-up by the U.S. government to hide its involvement in one of the worst civil air disasters in American history.
The government denied these reports. Yet the more the government denied them, the more contrary “evidence” appeared on the Net[23]. And then, as a final straw in the story, there was a report, purportedly by a government insider, claiming that indeed there was a conspiracy — because evidence suggested that friendly fire had shot down TWA 800[24].
The former press secretary to President John F. Kennedy believed this report. In a speech in France, Pierre Salinger announced that his government was hiding the facts of the case, and that he had the proof.
I remember this event well. I was talking to a colleague just after I heard Salinger’s report. I recounted Salinger’s report to this colleague, a leading constitutional scholar from one of the top American law schools. We both were at a loss about what to believe. There were cross-cutting intuitions about credibility. Salinger was no nut, but the story was certainly loony.
Salinger, it turns out, had been caught by the Net. He had been tricked by the flip side of the point Floyd Abrams has made. In a world where everyone can publish, it is very hard to know what to believe. Publishers are also editors, and editors make decisions about what to publish — decisions that ordinarily are driven at least in part by the question, is it true? Statements cannot verify themselves. We cannot always tell, from a sentence reporting a fact about the world, whether that sentence is true[25]. So in addition to our own experience and knowledge of the world, we must rely on structures of reputation that build credibility. When something is published, we associate the claim with the publisher. If the New York Times says that aliens have kidnapped the President, it is viewed differently from a story with the identical words published in the National Enquirer.
When a new technology comes along, however, we are likely to lose our bearings. This is nothing new. It is said that the word phony comes from the birth of the telephone — the phony was the con artist who used the phone to trick people who were familiar with face-to-face communication only. We should expect the same uncertainty in cyberspace, and expect that it too, at first, will shake expectations of credibility.
Abrams’s argument then depends on a feature of the Net that we cannot take for granted. If there were credibility on the Net, the importance of the Pentagon Papers case would indeed be diminished. But if speech on the Net lacks credibility, the protections of the Constitution again become important.
“Credibility”, however, is not a quality that is legislated or coded. It comes from institutions of trust that help the reader separate reliable from unreliable sources. Flight 800 thus raises an important question: How can we reestablish credibility in this space so that it is not lost to the loons[26]?
In the first edition of this book, that question could only be answered hypothetically. But in the time since, we’ve begun to see an answer to this question emerge. And the word at the center of that answer is: Blog.
At this writing, there are more than 50 million weblogs on the Internet. There’s no single way to describe what these blogs are. They differ dramatically, and probably most of what gets written there is just crap. But it is wrong to judge a dynamic by a snapshot. And the structure of authority that this dynamic is building is something very new.
At their best, blogs are instances of amateur journalism — where “amateur”, again, means not second rate or inferior, but one who does what he does for the love of the work and not the money. These journalists write about the world — some from a political perspective, some from the point of view of a particular interest. But they all triangulate across a range of other writers to produce an argument, or a report, that adds something new. The ethic of this space is linking — of pointing, and commenting. And while this linking is not “fair and balanced”, it does produce a vigorous exchange of ideas.
These blogs are ranked. Services such as Technorati constantly count the blog space, watching who links to whom, and which blogs produce the greatest credibility. And these rankings contribute to an economy of ideas that builds a discipline around them. Bloggers get authority from the citation others give them; that authority attracts attention. It is a new reputation system, established not by editors or CEOs of media companies, but by an extraordinarily diverse range of contributors.
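A toy calculation hints at how such link-based rankings work, though real services weight links by the linker’s own authority, by freshness, and by much else; the blogs and links below are, of course, hypothetical.

```python
# A toy version of link-based authority of the sort blog ranking services
# compute: each blog's score is simply how many other blogs link to it.
# Real rankings are far more sophisticated; this only shows the basic idea.
from collections import Counter

links = {                                   # hypothetical blog -> blogs it links to
    "blog_a": ["blog_c", "blog_d"],
    "blog_b": ["blog_c"],
    "blog_c": ["blog_d"],
    "blog_d": [],
}

inbound = Counter(target for targets in links.values() for target in targets)
for blog, score in inbound.most_common():
    print(blog, score)                      # blog_c and blog_d rank highest
```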
And in the end, these amateur journalists have an effect. When TWA Flight 800 fell from the sky, the conspiracy theories were filtered through no structure of credibility. Today, there are more structures of credibility. So when Dan Rather produced a letter on CBS’s 60 Minutes purporting to establish a certain fraud by the President, it took the blogosphere 24 hours to establish that the media company’s evidence was faked. More incredibly, it took CBS almost two weeks to acknowledge what the blogs had established[27]. The collaborative work of the blogs uncovered the truth, and in the process embarrassed a very powerful media company. But in contrast to the behavior of that media company, the blogs demonstrated something important about how the Net had matured.
This collaboration comes with no guarantees, except the guarantee of a process. The most extraordinary collaborative process in the context of content is Wikipedia. Wikipedia is a free online encyclopedia, created solely by volunteers. Launched at the beginning of 2001, these (literally thousands of) volunteers have now created over 2 million articles. There are nine major language versions (not including the Klingon version), with about half of the total articles in English.
The aim of the Wikipedia is neutrality. The contributors edit, and reedit, to frame a piece neutrally. Sometimes that effort fails — particularly controversial topics can’t help but attract fierce conflict. But in the main, the work is an unbelievable success. With nothing more than the effort of volunteers, the most used, and perhaps the most useful encyclopedia ever written has been created through millions of uncoordinated instances of collaboration.
Wikipedia, however, can’t guarantee its results. It can’t guarantee that, at any particular moment, there won’t be errors in its entries. But of course, no one can make that guarantee. Indeed, in one study that randomly collected entries from Wikipedia and from Encyclopedia Britannica, there were just as many errors in Britannica as in Wikipedia[28].
But Wikipedia is open to a certain kind of risk that Britannica is not — maliciousness. In May 2005, the Wikipedia entry for John Seigenthaler Sr. was defaced by a prankster. Because not many people were monitoring the entry, it took four months before the error was noticed and corrected. Seigenthaler wasn’t happy about this. He, understandably, complained that it was the architecture of Wikipedia that was to blame.
Wikipedia’s architecture could be different. But the lesson here is not its failures. It is instead the extraordinary surprise of Wikipedia’s success. There is an unprecedented collaboration of people from around the world working to converge upon truth across a wide range of topics. That, in a sense, is what science does as well. It uses a different kind of “peer review” to police its results. That “peer review” is no guarantee either — South Koreans, for example, were quite convinced that one of their leading scientists, Hwang Woo-Suk, had discovered a technique to clone human stem cells. They believed it because peer-reviewed journals had reported it. But whether or not they were right to believe it, the journals were wrong. Hwang was a fraud; he hadn’t cloned stem cells, or anything else worth the attention of the world.
Blogs don’t coordinate any collaborative process to truth in the way Wikipedia does. In a sense, the votes for any particular position in the blogosphere are never tallied, while on Wikipedia they are tallied at every moment. But even untallied, readers of blogs learn to triangulate on the truth. Just as with witnesses at an accident (though better, since these witnesses have reputations), the reader constructs what must be true from a range of views. Cass Sunstein rightly worries that the norms among bloggers have not evolved enough to include internal diversity of citation[29]. That may well be true. But whatever the normal reading practice is for ordinary issues, the diversity of the blogosphere gives readers an extremely wide range of views to consider when any major issue — such as the one that stung Salinger — emerges. When tied to a maturing reputation system that constantly tempers influence, this diversity makes it easier to balance extreme views with the correction that many voices can build.
A credibility can thus emerge that, while not perfect, is at least differently encumbered. NBC News must worry about its bottom line, because its reporting increasingly responds to it. Blogs don’t have a bottom line. They are — in the main — amateurs. Reputation constrains both, and the competition between the two forms of journalism has increasingly improved each. We have a richer environment for free speech today than five years ago — a commercial press tempered by blogs regulated by a technology of reputation that guides the reader as much as the writer.
Errors will remain. Everyone has a favorite example — mine is the ridiculous story about Al Gore claiming to have “invented the Internet.” The story originated with a CNN interview on March 9, 1999. In that interview, in response to a question about what distinguished him from Bradley, Gore said the following:
During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth and environmental protection, improvements in our educational system[30].
As is clear from the context, Gore is stating not that he invented the technology of the Internet, but that he “took the initiative in moving forward a whole range of initiatives” that have been important to the country. But the story was retold as the claim that Gore “invented the Internet.” That’s how the Internet journalist Declan McCullagh repeated it two weeks later: “The vice president offered up a whopper of a tall tale in which he claimed to have invented the Internet.” That characterization — plainly false — stuck. In a 2003 study of the media’s handling of the story, Chip Heath and Jonathan Bendor conclude, “We show that the false version of Gore’s statement dominated the true one in mainstream political discourse by a wide margin. This is a clear failure in the marketplace of ideas, which we document in detail[31]”.
The only redeeming part of this story is that it’s simple to document the falsity — because of the Internet. Seth Finkelstein, a programmer and anti-censorware activist, has created a page on the Internet collecting the original interview and the subsequent reports about it[32]. His is the model of the very best the Internet could be. That virtue, however, didn’t carry too far beyond the Internet.
For all our talk about loving free speech, most of us, deep down, wouldn’t mind a bit of healthy speech regulation, at least in some contexts. Or at least, more of us would be eager for speech regulation today than would have been in 1996. This change is because of two categories of speech that have become the bane of existence to many on the Net: spam and porn.
By “spam” I mean unsolicited commercial e-mail sent in bulk. “Unsolicited”, in the sense that there’s no relationship between the sender and recipient; “commercial” in a sense that excludes political e-mail; “e-mail” in a sense not restricted to e-mail, but extending to every medium of interaction in cyberspace (including blogs); and “bulk” meaning many (you pick the number) missives sent at once.
By “porn”, I mean not obscenity and not child porn, but what the United States Supreme Court calls sexually explicit speech that is “harmful to minors[33]”. This is the category of legally permitted erotic speech — for adults, at least, not for kids. Obscenity and child porn are permitted to no one.
These two types of speech — porn and spam — are very different, but they are similar in the structure of regulation that each demands. Neither kind of speech should be banned by regulation: There are some who are happy to receive spam; there are some who are constitutionally entitled to access porn. But for both kinds of speech, there is a class of individuals who would like the power to block access to each: most of us with respect to spam; parents with respect to porn. This is a desire for a kind of “speech regulation.” The question is how, or whether, the law can support it.
I’m all for this form of speech regulation, properly architected. “But how”, anti-regulation sorts might ask, “can you so easily embrace the idea of regulation? Have you forgotten the important values of free speech? ”
But if the lovers of this form of speech regulation have been reading carefully, they have a quick answer to this charge of censorship. It is clear, upon reflection, that in the sense of Chapter 7, spam and porn have always been regulated in real space. The only question for cyberspace is whether the same effect of those real space regulations can be achieved in cyberspace.
Think first about spam in real space. In the sense of Chapter 7, spam, in real space, is regulated extensively. We can understand that regulation through the four modalities.
First, law: Regulations against fraud and misrepresentation constrain the games bulk mailers can play in real space. Contests are heavily regulated (just read the disclaimers on the Publishers’ Clearing House Sweepstakes).
Second, norms regulate bulk mail in real space. There’s a sense of what is appropriate to advertise for; advertisement outside that range is almost self-defeating.
Third, markets regulate bulk mail in real space. The cost of real space mail is high, meaning the returns must be significant before it pays to send bulk mail. That radically reduces the range of bulk mail that gets sent in real space.
And finally, architecture regulates bulk mail in real space. We get our mail just once a day, and it’s fairly simple to segregate bulk from real. It’s also simple to dump the bulk without ever even opening it. The burdens of real-space spam are thus not terribly great.
These factors together restrict the spread of spam in real space. There is less of it than the spammers would like, even if there is more than the rest of us like. These four constraints thus regulate how much bulk mail gets made and sent.
A similar story can be told about porn.
Pornography, in real space, is regulated extensively — again not obscenity and not child porn, but what the Supreme Court calls sexually explicit speech that is “harmful to minors.” Obscenity and child porn are regulated too, but their regulation is different: Obscenity and child porn are banned for all people in real space (in the United States); porn is banned only for children.
We can also understand porn’s regulation by considering the four modalities of regulation. All four are directed to a common end: to keep porn away from kids while (sometimes) ensuring adults’ access to it.
First, laws do this. Laws in many jurisdictions require that porn not be sold to kids[34]. Since at least 1968, when the Supreme Court decided Ginsberg v. New York[35], such regulation has been consistently upheld. States can require vendors of porn to sell it only to adults; they can also require vendors to check the ID of buyers.
But not only laws channel. Social norms do as well. Norms restrict the sale of porn generally — society for the most part sneers at consumers of porn, and this sneer undoubtedly inhibits its sale. Norms also support the policy of keeping porn away from kids. Porn dealers likely don’t like to think of themselves as people who corrupt. Selling porn to kids is universally seen as corrupting, and this is an important constraint on dealers, as on anyone else.
The market, too, keeps porn away from kids. Porn in real space costs money. Kids do not have much money. Because sellers discriminate on the basis of who can pay, they thus help to discourage children from buying porn.
But then regulations of law, market, and norms all presuppose another regulation that makes the first three possible: the regulation of real-space architecture. In real space it is hard for a kid to hide that he is a kid. He can try, but he is unlikely to succeed. Thus, because a kid cannot hide his age, and because porn is largely sold face to face, the architectures of real space make it relatively cheap for laws and norms to be effective.
This constellation of regulations in real space has the effect of controlling, to an important degree, the distribution of porn to kids. The regulation is not perfect — any child who really wants the stuff can get it — but regulation does not need to be perfect to be effective. It is enough that these regulations make porn generally unavailable.
Spam and porn are regulated differently in cyberspace. That is, these same four modalities constrain or enable spam and porn differently in cyberspace.
Let’s begin with porn this time. The first difference is the market. In real space porn costs money, but in cyberspace it need not — at least not much. If you want to distribute one million pictures of “the girl next door” in real space, it is not unreasonable to say that distribution will cost close to $1 million. In cyberspace distribution is practically free. So long as you have access to cyberspace and a scanner, you can scan a picture of “the girl next door” and then distribute the digital image across USENET to many more than one million people for just the cost of an Internet connection.
With the costs of production so low, a much greater supply of porn is produced for cyberspace than for real space. And indeed, a whole category of porn exists in cyberspace that doesn’t in real space — amateur porn, or porn produced for noncommercial purposes. That category of supply simply couldn’t survive in real space.
And then there is demand. Porn in cyberspace can be accessed — often and in many places — for free. Thousands of commercial sites make porn available for free, as a tease to draw in customers. Even more porn is distributed in noncommercial contexts, such as USENET, or free porn websites. Again, this low price translates into much greater demand.
Much of this supply and demand is for a market that, at least in the United States, is constitutionally protected. Adults have a constitutional right in the United States to access porn, in the sense that the government can do nothing that burdens (perhaps unreasonably burdens) access to porn. But there is another market for porn in the United States that is not constitutionally protected. Governments have the right in the United States to block access by kids to porn.
As we saw in the previous section, for that regulation to work, however, there needs to be a relatively simple way to know who is a kid. But as we’ve seen throughout this book, this is an architectural feature that cyberspace doesn’t have. It’s not that kids in cyberspace find it easy to hide that they are kids; in cyberspace, there is simply no fact of age to disclose in the first place. You enter without an identity and you identify only what you want — and even that can’t be authenticated with any real confidence. Thus, a kid in cyberspace need not disclose that he is a kid. And therefore he need not suffer the discriminations applied to a child in real space. No one needs to know that Jon is Jonny; therefore, the architecture does not produce the minimal information necessary to make regulation work.
The consequence is that regulations that seek selectively to block access to kids in cyberspace don’t work, and they don’t work for reasons that are very different from the reasons they might not work well in real space. In real space, no doubt, there are sellers who want to break the law or who are not typically motivated to obey it. But in cyberspace, even if the seller wants to obey the law, the law can’t be obeyed. The architecture of cyberspace doesn’t provide the tools to enable the law to be followed.
A similar story can be told about spam: Spam is an economic activity. People send it to make money. The frictions of real space significantly throttle that desire. The costs of sending spam in real space mean that only projects expecting a significant return get sent. As I said, even then, laws and norms add another layer of restriction. But the most significant constraint is cost.
But the efficiency of communication in cyberspace means that the cost of sending spam is radically lower, which radically increases the quantity of spam that it is rational to send. Even if only 0.01 percent of recipients respond, when the cost of sending the spam is close to zero you still make money: send ten million messages for next to nothing, and that tiny response rate still yields a thousand sales.
Thus, as with porn, a different architectural constraint means a radically different regulation of behavior. Both porn and spam are reasonably regulated in real space; in cyberspace, this difference in architecture means neither is effectively regulated at all.
And thus the question that began this section: Is there a way to “regulate” spam and porn to at least the same level of regulation that both face in real space?
Of all the possible speech regulations on the Net (putting copyright to one side for the moment), the United States Congress has been most eager to regulate porn. That eagerness, however, has not yet translated into success. Congress has passed two pieces of major legislation. The first was struck down completely. The second continues to be battered down in its struggle through the courts.
The first statute was the product of a scare. Just about the time the Net was coming into the popular consciousness, one particularly seedy aspect of the Net was among the first to come into view: porn on the Net. This concern became widespread in the United States early in 1995[36]. Its source was an extraordinary rise in the number of ordinary users of the Net, and therefore a rise in use by kids and an even more extraordinary rise in the availability of what many call porn on the Net. An extremely controversial (and deeply flawed) study published in the Georgetown Law Journal reported that the Net was awash in porn[37]. Time ran a cover story about its availability[38]. Senators and congressmen were bombarded with demands to do something to regulate “cybersmut.”
Congress responded in 1996 with the Communications Decency Act (CDA). A law of extraordinary stupidity, the CDA practically impaled itself on the First Amendment. The law made it a felony to transmit “indecent” material on the Net to a minor or to a place where a minor could observe it. But it gave speakers on the Net a defense — if they took good-faith, “reasonable, effective” steps to screen out children, then they could speak “indecently[39]”.
There were at least three problems with the CDA, any one of which should have doomed it to well-deserved extinction[40]. The first was the scope of the speech it addressed: “Indecency” is not a category of speech that Congress has the power to regulate (at least not outside the context of broadcasting)[41]. As I have already described, Congress can regulate speech that is “harmful to minors”, or Ginsberg speech, but that is very different from speech called “indecent.” Thus, the first strike against the statute was that it reached too far.
Strike two was vagueness. The form of the allowable defenses was clear: So long as there was an architecture for screening out kids, the speech would be permitted. But the architectures that existed at the time for screening out children were relatively crude, and in some cases quite expensive. It was unclear whether, to satisfy the statute, they had to be extremely effective or just reasonably effective given the state of the technology. If the former, then the defenses were no defense at all, because an extremely effective block was extremely expensive; the cost of a reasonably effective block would not have been so high.
Strike three was the government’s own doing. In arguing its case before the Supreme Court in 1997, the government did little either to narrow the scope of the speech being regulated or to expand the scope of the defenses. It stuck with the hopelessly vague, overbroad definition Congress had given it, and it displayed a poor understanding of how the technology might have provided a defense. As the Court considered the case, there seemed to be no way that an identification system could satisfy the statute without creating an undue burden on Internet speakers.
Congress responded quickly by passing a second statute aimed at protecting kids from porn. This was the Child Online Protection Act (COPA) of 1998[42]. This statute was better tailored to the constitutional requirements. It aimed at regulating speech that was harmful to minors. It allowed commercial websites to provide such speech so long as the website verified the viewer’s age. Yet in June 2004, the Supreme Court enjoined enforcement of the statute[43].
Both statutes respond to a legitimate and important concern. Parents certainly have the right to protect their kids from this form of speech, and it is perfectly understandable that Congress would want to help parents secure this protection.
But both statutes by Congress are unconstitutional — not, as some suggest, because there is no way that Congress could help parents. Instead both are unconstitutional because the particular way that Congress has tried to help parents puts more of a burden on legitimate speech (for adults that is) than is necessary.
In my view, however, there is a perfectly constitutional statute that Congress could pass that would have an important effect on protecting kids from porn.
To see what that statute looks like, we need to step back a bit from the CDA and COPA to identify what the legitimate objectives of this speech regulation would be.
Ginsberg[44] established that there is a class of speech that adults have a right to but that children do not. States can regulate that class to ensure that such speech is channeled to the proper user and blocked from the improper user.
Conceptually, for such a regulation to work, two questions must be answered:
Is the speaker uttering “regulable” speech — meaning speech “harmful to minors”?
Is the listener entitled to consume this speech — meaning is he a minor?
And with the answers to these questions, the logic of this regulation is:
IF
(speech == regulable)
AND
(listener == minor)
THEN
block access.
Now between the listener and the speaker, clearly the speaker is in a better position to answer question #1. The listener can’t know whether the speech is harmful to minors until the listener encounters the speech. If the listener is a minor, then it is too late. And between the listener and the speaker, clearly the listener is in a better position to answer question #2. On the Internet especially, it is extremely burdensome for the speaker to certify the age of the listener. It is the listener who knows his age most cheaply.
The CDA and COPA placed the burden of answering question #1 on the speaker, and #2 on both the speaker and the listener. A speaker had to determine whether his speech was regulable, and a speaker and a listener had to cooperate to verify the age of the listener. If the speaker didn’t, and the listener was a minor, then the speaker was guilty of a felony.
Real-space law also assigns the burden in exactly the same way. If you want to sell porn in New York, you both need to determine whether the content you’re selling is “harmful to minors”, and you need to determine whether the person you’re selling to is a minor. But real space is importantly different from cyberspace, at least in the high cost of answering question #2: In real space, the answer is almost automatic (again, it’s hard for a kid to hide that he’s a kid). And where the answer is not automatic, there’s a cheap system of identification (a driver’s license, for example). But in cyberspace, any mandatory system of identification constitutes a burden both for the speaker and the listener. Even under COPA, a speaker has to bear the burden of a credit card system, and the listener has to trust a pornographer with his credit card just to get access to constitutionally protected speech.
There’s another feature of the CDA/COPA laws that seems necessary but isn’t: They both place the burden of their regulation upon everyone, including those who have a constitutional right to listen. They require, that is, that everyone show an ID, even though only kids can constitutionally be blocked.
So compare then the burdens of the CDA/COPA to a different regulatory scheme: one that placed the burden of question #1 (whether the content is harmful to minors) on the speaker and placed the burden of question #2 (whether the listener is a minor) on the listener.
One version of this scheme is simple, obviously ineffective, and unfair to the speaker: a requirement that a website block access with a page that says “The content on this page is harmful to minors. Click here if you are a minor.” This scheme places the burden of age identification on the kid. But obviously, it would have zero effect in actually blocking a kid. And, less obviously, this scheme would be unfair to speakers. A speaker may well have content that constitutes material “harmful to minors”, but not everyone who offers such material should be labeled a pornographer. This transparent block is stigmatizing to some, and if a less burdensome system were possible, that stigma should render a regulation requiring it unconstitutional.
So what alternative scheme might actually work?
I’m going to demonstrate such a system with a particular example. Once you see the example, the general point will be easier to see as well.
Everyone knows the Apple Macintosh. It, like every modern operating system, now allows users to specify “accounts” on a particular machine. I’ve set one up for my son, Willem (he’s only three, but I want to be prepared). When I set up Willem’s account, I set it up with “parental controls.” That means I get to specify precisely what programs he gets to use, and what access he has to the Internet. The “parental controls” make it (effectively) impossible to change these specifications. You need the administrator’s password to do that, and if that’s kept secret, then the universe the kid gets to through the computer is the universe defined by the access the parent selects.
Imagine one of the programs I could select was a browser with a function we could call “kids-mode-browsing” (KMB). That browser would be programmed to watch for a particular mark on any web page. Let’s call that mark the “harmful to minors” mark, or <H2M>.
So, if the world of the World Wide Web were marked with <H2M> tags, then the KMB browser would simply refuse to display any page carrying that mark. A kid whose account was limited to the KMB browser would thus be blocked from the content the mark identifies.
How can we get (much of the) world of the Web to mark its harmful to minors content with <H2M>?
This is the role for government. Unlike the CDA or COPA, the regulation required to make this system work — to the extent it works, and more on that below — is simply that speakers mark their content. Speakers would not be required to block access; speakers would not be required to verify age. All the speaker would be required to do is to tag content deemed harmful to minors with the proper tag.
This tag, moreover, would not be a public marking that a website was a porn site. This proposal is not like the (idiotic, imho) proposals that we create a .sex or .xxx domain for the Internet. People shouldn’t have to relocate to a red-light district just to have adult material on their site. The <H2M> tag would instead be invisible to the ordinary user; it would matter only to a browser, like the KMB browser, that was looking for it.
Once the government enacts this law, browser manufacturers would have an incentive to build this (very simple) filtering technology into their browsers. Indeed, given the open-source Mozilla browser technology — to which anyone could add anything they wanted — the costs of building this modified browser are extremely low. And once the government enacts this law, and browser manufacturers build a browser that recognizes this tag, then parents would have a strong reason to adopt platforms that enable them to control where their kids go on the Internet.
Thus, in this solution, the LAW creates an incentive (through penalties for noncompliance) for sites with “harmful to minors” material to change their ARCHITECTURE (by adding <H2M> tags), which in turn creates a MARKET for browser manufacturers to build filtering into their code (the KMB browser), which parents can then use to enforce their own NORMS about what their kids see. The only burden the law itself imposes falls on the speaker, who must mark his content accurately.
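To make the mechanics concrete, here is a minimal sketch of the kind of check a KMB browser might perform. It assumes, purely for illustration, that the <H2M> mark is carried either as an HTTP response header or as an HTML meta tag; neither convention exists in any actual standard, and the names below are hypothetical.

import urllib.request

def is_marked_h2m(url: str) -> bool:
    # Return True if the page declares itself harmful to minors, using the
    # hypothetical "X-H2M" header or a <meta name="h2m"> tag as the mark.
    with urllib.request.urlopen(url) as resp:
        if resp.headers.get("X-H2M", "").lower() == "yes":
            return True
        body = resp.read(65536).decode("utf-8", errors="ignore").lower()
        return '<meta name="h2m" content="yes"' in body

def kids_mode_fetch(url: str) -> str:
    # Fetch a page only if it carries no <H2M> mark.
    if is_marked_h2m(url):
        return "BLOCKED: this page is marked harmful to minors."
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="ignore")

The point of the sketch is only that the filtering logic is trivial once the mark exists; the hard part, and the place for law, is creating the incentive to apply the mark in the first place.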
But isn’t that burden on the speaker unconstitutional? It’s hard to see why it would be, if it is constitutional in real space to tell a speaker he must filter kids from his content “harmful to minors.” No doubt there’s a burden. But the question isn’t whether there’s a burden. The constitutional question is whether there is a less burdensome way to achieve this important state interest.
But what about foreign sites? Americans can’t regulate what happens in Russia. Actually, that’s less true than you think. As we’ll see in the next chapter, there’s much that the U.S. government can do and does to effectively control what other countries do.
Still, you might worry that sites in other countries won’t obey American law because it’s not likely we’ll send in the Marines to take out a noncomplying website. That’s certainly true. But to the extent that a parent is concerned about this, as I already described, there is a market already to enable geographic filtering of content. The same browser that filters on <H2M> could also filter on the basis of geography, so a parent worried about foreign noncompliance could block content served from outside the jurisdictions that enforce the rule.
But won’t kids get around this restriction? Sure, of course some will. But the measure of success for legislation (as opposed to missile tracking software) is not 100 percent. The question the legislature asks is whether the law will make things better off[45]. To substantially block access to this content would make things substantially better, even if the block were not perfect.
But why not simply rely upon filters that parents and libraries install on their computers? Voluntary filters don’t require any new laws, and they therefore don’t require any state-sponsored censorship to achieve their ends.
It is this view that I want to work hardest to dislodge, because built within it are all the mistakes that a pre-cyberlaw understanding brings to the question of regulation in cyberspace.
First, consider the word “censorship.” What this regulation would do is give parents the opportunity to exercise an important choice. Enabling parents to do this has been deemed a compelling state interest. The kids who can’t get access to this content because their parents exercised this choice might call it “censorship”, but that isn’t a very useful application of the term. If there is a legitimate reason to block this form of access, that’s speech regulation. There’s no reason to call it names.
Second, consider the preference for “voluntary filters.” If voluntary filters were to achieve the very same end (blocking H2M speech and only H2M speech), I’d be all for them. But they don’t. As the ACLU quite powerfully described (shortly after winning the case that struck down the CDA partly on the grounds that private filters were a less restrictive means than government regulation):
The ashes of the CDA were barely smoldering when the White House called a summit meeting to encourage Internet users to self-rate their speech and to urge industry leaders to develop and deploy the tools for blocking “inappropriate speech.” The meeting was “voluntary”, of course: the White House claimed it wasn’t holding anyone’s feet to the fire. But the ACLU and others . . . were genuinely alarmed by the tenor of the White House summit and the unabashed enthusiasm for technological fixes that will make it easier to block or render invisible controversial speech. . . . It was not any one proposal or announcement that caused our alarm; rather, it was the failure to examine the longer-term implications for the Internet of rating and blocking schemes[46].
The ACLU’s concern is the obvious one: The filters that the market has created not only filter much more broadly than the legitimate interest the state has here — blocking H2M speech — but they also do so in a totally nontransparent way. The lists of blocked sites are secret, and there is no reliable process for challenging a site’s inclusion on them.
My point is not that we should ban filters, or that parents shouldn’t be allowed to block more than H2M speech. My point is that if we rely upon private action alone, more speech will be blocked than if the government acted wisely and efficiently.
And that frames my final criticism: As I’ve argued from the start, our focus should be on the liberty to speak, not just on the government’s role in restricting speech. Thus, between two “solutions” to a particular speech problem, one that involves the government and suppresses speech narrowly, and one that doesn’t involve the government but suppresses speech broadly, constitutional values should tilt us to favor the former. First Amendment values (even if not the First Amendment directly) should lead to favoring a speech regulation system that is thin and accountable, and in which the government’s action or inaction leads only to the suppression of speech the government has a legitimate interest in suppressing. Or, put differently, the fact that the government is involved should not necessarily disqualify a solution as a proper, rights-protective solution.
The private filters the market has produced so far are both expensive and over-inclusive. They block content that is beyond the state’s interest in regulating speech. They are effectively subsidized because there is no less restrictive alternative.
Publicly required filters (which are what the <H2M> tag would in effect establish) would be narrow: They would reach only the speech the state has a legitimate interest in blocking, and because the requirement comes from the state, it could be tested against the Constitution and cut back if it reached too far.
It has taken key civil rights organizations too long to recognize this private threat to free-speech values. The tradition of civil rights is focused directly on government action alone. I would be the last to say that there’s not great danger from government misbehavior. But there is also danger to free speech from private misbehavior. An obsessive refusal to even consider the one threat against the other does not serve the values promoted by the First Amendment.
But then what about public filtering technologies, like PICS? Wouldn’t PICS be a solution that avoided the “secret list problem” you identified?
PICS is an acronym for the World Wide Web Consortium’s Platform for Internet Content Selection. We have already seen a relative (actually, a child) of PICS in the chapter about privacy: P3P. Like P3P, PICS is a protocol for rating and filtering content on the Net. In the context of privacy, the content was made up of assertions about privacy practices, and the regime was designed to help individuals negotiate those practices.
With online speech the idea is much the same. PICS divides the problem of filtering into two parts — labeling (rating content) and then filtering (blocking content on the basis of the rating). The idea was that software authors would compete to write software that could filter according to the ratings; content providers and rating organizations would compete to rate content. Users would then pick their filtering software and rating system. If you wanted the ratings of the Christian Right, for example, you could select its rating system; if I wanted the ratings of the Atheist Left, I could select that. By picking our raters, we would pick the content we wanted the software to filter.
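The two-part structure can be shown schematically. The rating bureaus, categories, and data format below are invented for illustration; this is not actual PICS syntax, only the division of labor it establishes.

# Ratings published by two hypothetical rating services; filtering is a
# separate step that applies the user's chosen policy to those labels.
RATINGS = {
    "bureau-a": {"http://example.com/page": {"nudity": 3, "violence": 0}},
    "bureau-b": {"http://example.com/page": {"nudity": 1, "violence": 0}},
}

def blocked(url: str, rater: str, policy: dict) -> bool:
    # Apply the user's policy to the labels of the rating service she chose.
    labels = RATINGS.get(rater, {}).get(url)
    if labels is None:
        return False  # unrated content passes through in this sketch
    return any(labels.get(cat, 0) > limit for cat, limit in policy.items())

# A user who trusts bureau-a and tolerates nudity only up to level 2:
print(blocked("http://example.com/page", "bureau-a", {"nudity": 2}))  # True
print(blocked("http://example.com/page", "bureau-b", {"nudity": 2}))  # False

Nothing in this structure says where the policy gets applied (on the user's machine, at the ISP, or further upstream), which is precisely the point about vertical neutrality taken up below.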
This regime requires a few assumptions. First, software manufacturers would have to write the code necessary to filter the material. (This has already been done in some major browsers). Second, rating organizations would actively have to rate the Net. This, of course, would be no simple task; organizations have not risen to the challenge of billions of web pages. Third, organizations that rated the Net in a way that allowed for a simple translation from one rating system to another would have a competitive advantage over other raters. They could, for example, sell a rating system to the government of Taiwan and then easily develop a slightly different rating system for the “government” of IBM.
If all three assumptions held true, any number of ratings could be applied to the Net. As envisioned by its authors, PICS would be neutral among ratings and neutral among filters; the system would simply provide a language with which content on the Net could be rated, and with which decisions about how to use that rated material could be made from machine to machine[48].
Neutrality sounds like a good thing. It sounds like an idea that policymakers should embrace. Your speech is not my speech; we are both free to speak and listen as we want. We should establish regimes that protect that freedom, and PICS seems to be just such a regime.
But PICS contains more “neutrality” than we might like. PICS is not just horizontally neutral — allowing each individual to choose from a range of rating systems the one he or she wants; PICS is also vertically neutral — allowing the filter to be imposed at any level in the distributional chain. Most people who first endorsed the system imagined the PICS filter sitting on a user’s computer, filtering according to the desires of that individual. But nothing in the design of PICS prevents organizations that provide access to the Net from filtering content as well. Filtering can occur at any level in the distributional chain — the user, the company through which the user gains access, the ISP, or even the jurisdiction within which the user lives. Nothing in the design of PICS, that is, requires that such filters announce themselves. Filtering in an architecture like PICS can be invisible. Indeed, in some of its implementations invisibility is part of its design[49].
This should set off alarms for those keen to protect First Amendment values — even though the protocol is totally private. As a (perhaps) unintended consequence, the PICS regime not only enables nontransparent filtering but, by producing a market in filtering technology, engenders filters for much more than Ginsberg speech. That, of course, was the ACLU’s legitimate complaint against the original CDA. But here the market, whose tastes are the tastes of the community, facilitates the filtering. Built into the filter are the norms of a community, which are broader than the narrow filter of Ginsberg. The filtering system can expand as broadly as the users want, or as far upstream as sources want.
The H2M+KMB alternative is much narrower. It enables a kind of private zoning of speech. But there would be no incentive for speakers to block out listeners; the incentive of a speaker is to have more, not fewer, listeners. The only requirements to filter out listeners would be those that may constitutionally be imposed — Ginsberg speech requirements. Since they would be imposed by the state, these requirements could be tested against the Constitution, and if the state were found to have reached too far, it could be checked.
The difference between these two solutions, then, is in the generalizability of the regimes. The filtering regime would establish an architecture that could be used to filter any kind of speech, and the desires for filtering then could be expected to reach beyond a constitutional minimum; the zoning regime would establish an architecture for blocking that would not have this more general purpose.
Which regime should we prefer?
Notice the values implicit in each regime. Both are general solutions to particular problems. The filtering regime does not limit itself to Ginsberg speech; it can be used to rate, and filter, any Internet content. And the zoning regime, in principle, is not limited to zoning only for Ginsberg speech. The <H2M> tag could, after all, be joined by other tags marking other categories of speech, with browsers built to block those as well.
At least in principle. We should be asking, however, what incentives are there to extend the solution beyond the problem. And what resistance is there to such extensions?
Here we begin to see the important difference between the two regimes. When your access is blocked because of a certificate you are holding, you want to know why. When you are told you cannot enter a certain site, the claim to exclude is checked at least by the person being excluded. Sometimes the exclusion is justified, but when it is not, it can be challenged. Zoning, then, builds into itself a system for its own limitation. A site cannot block someone from the site without that individual knowing it[50].
Filtering is different. If you cannot see the content, you cannot know what is being blocked. Content could be filtered by a PICS filter somewhere upstream and you would not necessarily know this was happening. Nothing in the PICS design requires truth in blocking in the way that the zoning solution does. Thus, upstream filtering becomes easier, less transparent, and less costly with PICS.
This effect is even clearer if we take apart the components of the filtering process. Recall the two elements of filtering solutions — labeling content, and then blocking based on that labeling. We might well argue that the labeling is the more dangerous of the two elements. If content is labeled, then it is possible to monitor who gets what without even blocking access. That might well raise greater concerns than blocking, since blocking at least puts the user on notice.
These possibilities should trouble us only if we have reason to question the value of filtering generally, and upstream filtering in particular. I believe we do. But I must confess that my concern grows out of yet another latent ambiguity in our constitutional past.
There is undeniable value in filtering. We all filter out much more than we process, and in general it is better if we can select our filters rather than have others select them for us. If I read the New York Times rather than the Wall Street Journal, I am selecting a filter according to my understanding of the values of both newspapers. Obviously, in any particular case, there is nothing wrong with this.
But there is also a value in confronting the unfiltered. We individually may want to avoid issues of poverty or of inequality, and so we might prefer to tune those facts out of our universe. But it would be terrible from the standpoint of society if citizens could simply tune out problems that were not theirs, because those same citizens have to select leaders to manage these very problems[51].
In real space we do not have to worry about this problem too much because filtering is usually imperfect. However much I’d like to ignore homelessness, I cannot go to my bank without confronting homeless people on the street; however much I’d like to ignore inequality, I cannot drive to the airport without passing through neighborhoods that remind me of how unequal a nation the United States is. All sorts of issues I’d rather not think about force themselves on me. They demand my attention in real space, regardless of my filtering choices.
Of course, this is not true for everyone. The very rich can cut themselves off from what they do not want to see. Think of the butler on a 19th-century English estate, answering the door and sending away those he thinks should not trouble his master. Those people lived perfectly filtered lives. And so do some today.
But most of us do not. We must confront the problems of others and think about issues that affect our society. This exposure makes us better citizens[52]. We can better deliberate and vote on issues that affect others if we have some sense of the problems they face.
What happens, then, if the imperfections of filtering disappear? What happens if everyone can, in effect, have a butler? Would such a world be consistent with the values of the First Amendment?
Some believe that it would not be. Cass Sunstein, for example, has argued quite forcefully that the framers embraced what he calls a “Madisonian” conception of the First Amendment[53]. This Madisonian conception rejects the notion that the mix of speech we see should solely be a function of individual choice[54]. It insists, Sunstein claims, on ensuring that we are exposed to the range of issues we need to understand if we are to function as citizens. It therefore would reject any architecture that makes consumer choice trump. Choice is not a bad circumstance in the Madisonian scheme, but it is not the end of the matter. Ithiel de Sola Pool makes a very similar point:
What will it mean if audiences are increasingly fractionated into small groups with special interests? What will it mean if the agenda of national fads and concerns is no longer effectively set by a few mass media to which everyone is exposed? Such a trend raises for society the reverse problems from those posed by mass conformism. The cohesion and effective functioning of a democratic society depends upon some sort of public agora in which everyone participates and where all deal with a common agenda of problems, however much they may argue over the solutions[55].
On the other side are scholars such as Geoffrey Stone, who insists just as strongly that no such paternalistic ideal is found anywhere in the conception of free speech embraced by our framers[56]. The amendment, he says, is merely concerned with banning state control of private choice. Since enabling private choice is no problem under this regime, neither is perfect filtering.
This conflict among brilliant University of Chicago law professors reveals another latent ambiguity, and, as with other such ambiguity, I do not think we get far by appealing to Madison. To use Sunstein against Sunstein, the framers’ First Amendment was an incompletely theorized agreement, and it is better simply to confess that it did not cover the case of perfect filtering. The framers couldn’t imagine a PICS-enabled world; they certainly didn’t agree upon the scope of the First Amendment in such a world. If we are to support one regime over another, we must do so by asserting the values we want to embrace rather than claiming they have already been embraced.
So what values should we choose? In my view, we should not opt for perfect filtering[57]. We should not design for the most efficient system of censoring — or at least, we should not do this in a way that allows invisible upstream filtering. Nor should we opt for perfect filtering so long as the tendency worldwide is to overfilter speech. If there is speech the government has an interest in controlling, then let that control be obvious to the users. A political response is possible only when regulation is transparent.
Thus, my vote is for the regime that is least transformative of important public values. A zoning regime that enables children to self-identify is less transformative than a filtering regime that in effect requires all speech to be labeled. A zoning regime is not only less transformative but less enabling (of other regulation) — it requires the smallest change to the existing architecture of the Net and does not easily generalize to a far more significant regulation.
I would opt for a zoning regime even if it required a law and the filtering solution required only private choice. If the state is pushing for a change in the mix of law and architecture, I do not care that it is pushing with law in one context and with norms in the other. From my perspective, the question is the result, not the means — does the regime produced by these changes protect free speech values?
Others are obsessed with this distinction between law and private action. They view regulation by the state as universally suspect and regulation by private actors as beyond the scope of constitutional review. And, to their credit, most constitutional law is on their side.
But as I’ve hinted before, and defend more below, I do not think we should get caught up in the lines that lawyers draw. Our question should be the values we want cyberspace to protect. The lawyers will figure out how.
The annoying skeptic who keeps noting my “inconsistencies” will like to pester me again at this point. In the last chapter, I embraced an architecture for privacy that is in essence the architecture of PICS. P3P, like PICS, would enable machine-to-machine negotiation about content. The content of P3P is rules about privacy practices, and with PICS it is rules about content. But how, the skeptic asks, can I oppose one yet favor the other?
The answer is the same as before: The values of speech are different from the values of privacy; the control we want to vest over speech is less than the control we want to vest over privacy. For the same reasons that we disable some of the control over intellectual property, we should disable some of the control over speech. A little bit of messiness or friction in the context of speech is a value, not a cost.
But are these values different just because I say they are? No. They are only different if we say they are different. In real space we treat them as different. My core argument is that we choose how we want to treat them in cyberspace.
Spam is perhaps the most theorized problem on the Net. There are scores of books addressing how best to deal with the problem. Many of these are filled with ingenious technical ideas for ferreting out spam, from advanced Bayesian filter techniques to massive redesigns of the e-mail system.
But what is most astonishing to me as a lawyer (and depressing to me as the author of Code) is that practically all of these works ignore one important tool with which the problem of spam could be addressed: the law. It’s not that they weigh the value of the law relative to, for example, Bayesian filters or the latest in heuristic techniques, and conclude it is less valuable than these other techniques. It’s that they presume the value of the law is zero — as if spam were a kind of bird flu which lived its own life totally independently of what humans might want or think.
This is an extraordinary omission in what is, in effect, a regulatory strategy. As I have argued throughout this book, the key to good policy in cyberspace is a proper mix of modalities, not a single silver bullet. The idea that code alone could fix the problem of spam is silly — code can always be coded around, and, unless those who would circumvent it are given some other incentive not to, they will code around it. The law is a tool to change incentives, and it should be a tool used here as well.
Most think the law can’t play a role here because they think spammers will be better at evading the law than they are at evading spam filters. But this thinking ignores one important fact about spam. “Spam” is not a virus. Or at least, when talking about “spam”, I’m not talking about viruses. My target in this part is communication that aims at inducing a commercial transaction. Many of these transactions are ridiculous — drugs to stop aging, or instant weight loss pills. Some of these transactions are quite legitimate — special sales of overstocked products, or invitations to apply for credit cards. But all of these transactions aim in the end to get something from you: Money. And crucially, if they aim to get money from you, then there must be someone to whom you are giving your money. That someone should be the target of regulation.
So what should that regulation be?
The aim here, as with porn, should be to regulate to the end of assuring what we could call “consensual communication.” That is, the only purpose of the regulation should be to block nonconsensual communication, and enable consensual communication. I don’t believe that purpose is valid in every speech context. But in this context — private e-mail, or blogs, with limited bandwidth resources, with the costs of the speech borne by the listener — it is completely appropriate to regulate to enable individuals to block commercial communications that they don’t want to receive.
So how could that be done?
Today, the only modality that has any meaningful effect upon the supply of spam is code. Technologists have demonstrated extraordinary talent in devising techniques to block spam. These techniques are of two sorts — one which is triggered by the content of the message, and one which is triggered by the behavior of the sender.
The technique that is focused upon content is an array of filtering technologies designed to figure out what the meaning of the message is. As Jonathan Zdziarski describes, these techniques have improved dramatically. While early heuristic filtering techniques had error rates around 1 in 10, current Bayesian techniques promise up to 99.5% – 99.95% accuracy[58].
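The core of the Bayesian approach can be shown in a toy sketch: count word frequencies in known spam and known legitimate mail, then score a new message by combining those frequencies. Real filters, and the accuracy figures just cited, involve far more engineering (tokenization, large training corpora, tuned thresholds); this sketch shows only the idea.

import math
from collections import Counter

def train(spam_msgs, ham_msgs):
    # Count word frequencies in known spam and known legitimate ("ham") mail.
    spam_words, ham_words = Counter(), Counter()
    for m in spam_msgs:
        spam_words.update(m.lower().split())
    for m in ham_msgs:
        ham_words.update(m.lower().split())
    return spam_words, ham_words, len(spam_msgs), len(ham_msgs)

def spam_probability(msg, model):
    # Naive Bayes: combine per-word likelihoods with the class priors.
    spam_words, ham_words, n_spam, n_ham = model
    log_spam = math.log(n_spam / (n_spam + n_ham))
    log_ham = math.log(n_ham / (n_spam + n_ham))
    spam_total, ham_total = sum(spam_words.values()), sum(ham_words.values())
    for w in msg.lower().split():
        # Add-one smoothing so unseen words don't zero out either class.
        log_spam += math.log((spam_words[w] + 1) / (spam_total + 2))
        log_ham += math.log((ham_words[w] + 1) / (ham_total + 2))
    return 1 / (1 + math.exp(log_ham - log_spam))

model = train(["cheap pills buy now", "instant weight loss"],
              ["meeting agenda attached", "draft of the chapter"])
print(spam_probability("buy cheap pills now", model))      # close to 1
print(spam_probability("chapter draft attached", model))   # close to 0

The arms race described next follows directly: a spammer who can run the same scorer can keep rewording a message until its score drops below the recipient's threshold.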
But the single most important problem with these techniques is the arms race that they produce[59]. Spammers have access to the same filters that network administrators use to block spam — at least if the filters are heuristic[60]. They can therefore play with the message content until it can defeat the filter. That then requires filter writers to change the filters. Some do it well; some don’t. The consequence is that the filters are often over- and under-inclusive — blocking much more than they should or not blocking enough.
The second code-based technique for blocking spam focuses upon the e-mail practices of the sender — meaning not the person sending the e-mail, but the “server” that is forwarding the message to the recipient. A large number of network vigilantes — by which I mean people acting for the good in the world without legal regulation — have established lists of good and bad e-mail servers. These blacklists are compiled by examining the apparent rules the e-mail server uses in deciding whether to send e-mail. Those servers that don’t obey the vigilante’s rules end up on a blacklist, and people subscribing to these blacklists then block any e-mail from those servers.
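Mechanically, subscribing to such a list is simple. DNS-based blacklists are conventionally queried by reversing the octets of the sending server's IP address and looking the result up under the list's domain; the zone name in this sketch is illustrative, not a reference to any particular list.

import socket

def on_blacklist(sender_ip: str, zone: str = "dnsbl.example.org") -> bool:
    # Reverse the IP's octets and query it under the blacklist's zone.
    query = ".".join(reversed(sender_ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # any answer means the IP is listed
        return True
    except socket.gaierror:
        return False                  # no answer: not listed (or lookup failed)

# A subscribing mail server would refuse or flag mail from listed addresses:
if on_blacklist("192.0.2.7"):
    print("rejecting connection from listed server")

Notice what the sketch does not contain: any notion of appeal. Whether an address belongs on the list is decided entirely by the list's maintainer, which is the process problem described next.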
This system would be fantastic if there were agreement about how best to avoid “misuse” of servers. But there isn’t any such agreement. There are instead good faith differences among good people about how best to control spam[61]. These differences, however, get quashed by the power of the boycott. Indeed, in a network, a boycott is especially powerful. If 5 out of 100 recipients of your e-mail can’t receive it because of the rules your network administrator adopts for your e-mail server, you can be sure the server’s rules — however sensible — will be changed. And often, there’s no appeal of the decision to be included on a blacklist. Like the private filtering technologies for porn, there’s no likely legal remedy for wrongful inclusion on a blacklist. So many types of e-mail services can’t effectively function because they don’t obey the rules of the blacklists.
Now if either or both of these techniques were actually working to stop spam, I would accept them. I’m particularly troubled by the process-less blocking of blacklists, and I have personally suffered significant embarrassment and costs when e-mail that wasn’t spam was treated as spam. Yet these costs might be acceptable if the system in general worked.
But it doesn’t. The quantity of spam continues to increase. The Radicati Group “predicts that by 2007, 70% of all e-mail will be spam”[62]. And while there is evidence that the rate of growth in spam is slowing, there’s no good evidence the pollution of spam is abating[63]. The only federal legislative response, the CAN-SPAM Act, while preempting many innovative state solutions, is not having any significant effect[64].
Not only are these techniques not blocking spam, they are also blocking legitimate bulk e-mail that isn’t — at least from my perspective[65] — spam. The most important example is political e-mail. One great virtue of e-mail was that it would lower the costs of social and political communication. That in turn would widen the opportunity for political speech. But spam-blocking technologies have now emerged as a tax on these important forms of social speech. They have effectively removed a significant promise the Internet originally offered.
Thus, both because regulation through code alone has failed, and because it is actually doing harm to at least one important value that the network originally served, we should consider alternatives to code regulation alone. And, once again, the question is, what mix of modalities would best achieve the legitimate regulatory end?
Begin with the problem: Why is spam so difficult to manage? The simple reason is that it comes unlabeled. There’s no simple way to know that the e-mail you’ve received is spam without opening the e-mail.
That’s no accident. Spammers know that if you knew an e-mail was spam, you wouldn’t open it. So they do everything possible to make you think the e-mail you’re receiving is not spam.
Imagine for a moment that we could fix this problem. Imagine a law that required spam to be labeled, and imagine that law worked. I know this is extremely difficult to imagine, but bear with me for a moment. What would happen if every spam e-mail had to carry a specified label — something like ADV — in its subject line[66]?
Well, we know what would happen initially. Everyone (or most of us) would either tell our e-mail client or ask our e-mail service to block all e-mail with ADV in the subject line. It would be a glorious moment in e-mail history, a return to the days before spam.
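A sketch of the rule a recipient (or her e-mail service) might apply under such a labeling requirement appears below, including the kind of category-based exception discussed in a moment; the ADV and “ADV Travel” conventions are taken from the hypothetical, not from any existing statute.

def keep_message(subject: str, allowed_categories=frozenset()) -> bool:
    # Block labeled commercial mail, with an optional allowance for labeled
    # categories the recipient actually wants (e.g. "ADV Travel").
    subject = subject.strip()
    if not subject.upper().startswith("ADV"):
        return True                         # not labeled commercial mail
    category = subject[3:].strip().split(":")[0].strip()
    return category in allowed_categories   # opt back in by category

print(keep_message("Meeting tomorrow"))                        # True
print(keep_message("ADV: cheap pills"))                        # False
print(keep_message("ADV Travel: fares to Rome", {"Travel"}))   # True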
But the ultimate results of a regulation are not always its initial results. And it’s quite clear with this sort of regulation, initial results would be temporary. If there’s value in unsolicited missives to e-mail inboxes, then this initial block would be an incentive to find different ways into an inbox. And we can imagine any number of different ways:
Senders could get recipients to opt-into receiving such e-mail. The opt-in would change the e-mail from unsolicited to solicited. It would no longer be spam.
Senders could add other tags to the subject line. For example, if this spam were travel spam, the tags could be ADV Travel. Then recipients could modify their filter to block all ADV traffic except Travel e-mails.
Senders could begin to pay recipients for receiving e-mails. As some have proposed, the e-mail could come with an attachment worth a penny, or something more. Recipients could select to block all ADVs except those carrying cash.
The key to each of these modified results is that the recipient is now receiving commercial e-mail by choice, not by trick. This evolution from the initial regulation thus encourages more communication, but only by encouraging consensual communication. Nonconsensual communication — assuming again the regulation was obeyed — would be (largely) eliminated.
So in one page, I’ve solved the problem of spam — assuming, that is, that the labeling rule is obeyed. But that, of course, is an impossible assumption. What spammer would comply with this regulation, given the initial effect is to radically shrink his market?
To answer this question, begin by returning to the obvious point about spam, as opposed to viruses or other malware. Spammers are in the business to make money. Money-seekers turn out to be relatively easy creatures to regulate. If the target of regulation is in it for the money, then you can control his behavior by changing his incentives. If ignoring a regulation costs more than obeying it, then spammers (on balance) will obey it. Obeying it may mean changing spamming behavior, or it may mean getting a different job. Either way, change the economic incentives, and you change spamming behavior.
So how can you change the incentives of spammers through law? What reason is there to believe any spammer would pay attention to the law?
People ask that question because they realize quite reasonably that governments don’t spend much time prosecuting spammers. Governments have better things to do (or so they think). So even a law criminalizing spam is not likely to scare many spammers.
But what we need here is the kind of creativity in the adaptation of the law that coders evince when they build fantastically sophisticated filters for spam. If law as applied by the government is not likely to change the incentives of spammers, we should find law that is applied in a way that spammers would fear.
One such innovation would be a well-regulated bounty system. The law would require spam to be marked with a label. That’s the only requirement. But the penalty for not marking the spam with a label is either state prosecution, or prosecution through a bounty system. The FTC would set a number that it estimates would recruit a sufficient number of bounty hunters. Those bounty hunters would then be entitled to the bounty if they’re the first, or within the first five, to identify a responsible party associated with a noncomplying e-mail.
But how would a bounty hunter do that? Well, the first thing the bounty hunter would do is determine whether the regulation has been complied with. One part of that answer is simple; the other part, more complex. Whether a label is attached is simple. Whether the e-mail is commercial e-mail will turn upon a more complex judgment.
Once the bounty hunter is convinced the regulation has been breached, he or she must then identify a responsible party. And the key here is to follow an idea Senator John McCain introduced into the only spam legislation Congress has passed to date, the CAN-SPAM Act. That idea is to hold responsible either the person sending the e-mail, or the entity for which the spam is an advertisement.
In 99 percent of the cases, it will be almost impossible to identify the person sending the spam. The techniques used by spammers to hide that information are extremely sophisticated[67].
But the entity for which the spam is an advertisement is a different matter. Again, if the spam is going to work, there must be someone to whom I can give my money. If it is too difficult to give someone my money, then the spam won’t make the money it needs to pay for itself.
So how can I track the entity for which the spam is an advertisement?
Here the credit card market would enter to help. Imagine a credit card — call it the “bounty hunters’ credit card” — that, when verified, was always declined. But when that credit card was used, a special flag would be attached to the transaction, and the credit card holder would get a report about the entity that attempted the charge. The sole purpose of this card would be to ferret out and identify misbehavior. Credit card companies could charge something special for this card or charge for each use; they should certainly charge enough to make it worthwhile for them. But with these credit cards in hand, bounty hunters could produce usable records about to whom money was intended to be sent. And with that data, the bounty hunter could make his claim for the bounty.
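A toy sketch may help fix the idea. Nothing like this card exists; the card prefix, the merchant identifiers, and the reporting format below are all assumptions made purely for illustration.

```python
# A toy model of the hypothetical "bounty hunters' credit card": the card is
# always declined, but any attempted charge leaves a record identifying the
# entity behind the advertisement. All names and values are invented.

from datetime import datetime, timezone

flagged_attempts = []    # reports later handed to the cardholder (the bounty hunter)

def authorize(card_number: str, merchant_id: str, amount_cents: int) -> str:
    """Decline bounty cards while logging who tried to charge them."""
    if card_number.startswith("9999"):     # assumed prefix marking a bounty card
        flagged_attempts.append({
            "merchant": merchant_id,       # the party the spam was advertising
            "amount_cents": amount_cents,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        return "DECLINED"
    return "APPROVED"                      # ordinary cards proceed as usual

# A bounty hunter answers a spam offer with the card; the attempted charge
# yields a usable record of the advertiser, which supports the bounty claim.
authorize("9999-0001-0002-0003", "acme-diet-pills-llc", 4999)
print(flagged_attempts)
```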
But what’s to stop some malicious sort from setting someone else up? Let’s say I hate my competitor, Ajax Cleaners. So I hire a spammer to send out spam to everyone in California, promoting a special deal at Ajax Cleaners. I set up an account so that Ajax receives the money, and then I use my bounty credit card to nail Ajax. I show up at the FTC to collect my bounty; the FTC issues a substantial fine to Ajax. Ajax goes out of business.
This is a substantial concern with any bounty system. But it too can be dealt with through a careful reckoning of incentives. First, and obviously, the regulation should make such fraud punishable by death. (OK, not death, but a significant punishment.) And second, any person or company charged with a violation of this spam statute could assert, under oath, that it did not hire or direct any entity to send spam on its behalf. If such an assertion is made, then the company would not be liable for any penalty. But the assertion would carry a very substantial penalty if it is proven false — a penalty that would include forfeiture of both personal and corporate assets. A company signing such an oath once would likely be given the benefit of the doubt. But a company or individual signing such an oath more than once would be a target for investigation by the government. And by this stage, the exposure that the spammers would be facing would be enough to make spamming a business that no longer pays.
Here again, then, the solution is a mixed modality strategy. A LAW creates the incentive for a certain change in the CODE of spam (it now comes labeled). That law is enforced through a complex set of MARKET and NORM-based incentives — both the incentive to be a bounty hunter, which is itself financial and normative (people really think spammers are acting badly), and the incentive to produce bounty credit cards. If done right, the mix of these modalities would change the incentives spammers face. And, if done right, the change could be enough to drive most spammers into different businesses.
Of course there are limits to this strategy. It won’t work well with foreign sites. Nor with spammers who have ideological (or pathological) interests. But these spammers could then be the target of the code-based solutions that I described at the start. Once the vast majority of commercially rational spam is eliminated, the outside cases can be dealt with more directly.
This has been a long section, but it makes a couple of important points. The first is a point about perspective: to say whether a regulation “abridges the freedom of speech, or of the press,” we need a baseline for comparison. The regulations I describe in this section are designed to restore the effective regulation of real space. In that sense, in my view, they don’t “abridge” speech.
Second, these examples show how doing nothing can be worse for free-speech values than regulating speech. The consequence of no legal regulation to channel porn is an explosion of bad code regulation to deal with porn. The consequence of no effective legal regulation to deal with spam is an explosion of bad code that has broken e-mail. No law, in other words, sometimes produces bad code. Polk Wagner makes the same point: “law and software together define the regulatory condition. Less law does not necessarily mean more freedom.”[68] As code and law are both regulators (even if different sorts of regulators), we should be avoiding bad regulation of whatever sort.
Third, these examples evince the mixed modality strategy that regulating cyberspace always is. There is no silver bullet — whether East Coast code or West Coast code. There is instead a mix of techniques — modalities that must be balanced to achieve a particular regulatory end. That mix must reckon the interaction among regulators. The question, as Polk Wagner describes it, is for an equilibrium. But the law has an important role in tweaking that mix to assure the balance that advances a particular policy.
Here, by regulating smartly, we could avoid the destructive code-based regulation that would fill the regulatory gap. That would, in turn, advance free speech interests.
The third context in which to consider the special relevance of cyberspace to free speech follows directly from Chapter 10. As I describe there, the interaction between the architecture of copyright law and the architecture of digital networks produces an explosion of creativity within reach of copyright never contemplated by any legislature.
The elements in that change are simple. Copyright law regulates, at a minimum, “copies.” Digital networks function by making “copies”: There’s no way to use a work in a digital environment without making a copy. Thus, every single use of creative work in a digital environment triggers, in theory at least, copyright.
This is a radical change from life in real space. In real space, there are any number of ways to “use” a creative work without triggering the law of copyright. When you retell a joke to friends, the law of copyright is not invoked — no “copy” is made, and telling it to friends is not a public performance. When you loan a friend your book, the law of copyright is not triggered. When you read a book, the law of copyright would never take notice. Practically every single ordinary use of culture in real space is free of the regulation of copyright. Copyright targets abnormal uses — such as “publishing” or public performances.
The gap between normal and abnormal uses began to close as the technologies for “copying” were democratized. Xerox created the first blip; cassette tape recorders were close behind. But even these technologies were the exception, never the rule. They raised copyright questions, but they didn’t inject copyright into the center of ordinary life.
Digital technologies have. As more and more of ordinary life moves onto the Internet, more and more of ordinary life is subject to copyright. The functional equivalent of activities that were essentially unregulated in real space is now subject to copyright’s rule in cyberspace. Creative activity that never needed to grapple with copyright regulation must now, to be legal, clear a whole host of hurdles, some of which, because of the insanely inefficient property system that copyright is, are technically impossible. A significant portion of creative activity has now moved from a free culture to a permission culture. And the question for the values of free speech is whether that expanded regulation should be allowed to occur unchecked.
Again, I have my own (overly strong) views about the matter[69]. I continue to be astonished that a Court so keen to avoid “raising the costs of being a producer of sexual materials troubling to the majority”[70] is apparently oblivious to the way copyright law raises the costs of being a producer of creative and critical speech.
But for our purposes here, we should simply note once again a latent ambiguity in our constitutional tradition. As the Supreme Court has held, the First Amendment imposes important limitations on the scope of copyright. Among those are at least the requirements that copyright not regulate “ideas”, and that copyright be subject to “fair use.”
But these “traditional First Amendment safeguards” were developed in a context in which copyright was the exception, not the rule. We don’t yet have a tradition in which every single use of creative work is subject to copyright’s reach. Digital technologies have produced that world. But most of the rest of the world has not yet woken up to it.
So what should First Amendment values be in this world? One view is that the First Amendment should have no role in this world — beyond the minimal protections of the “idea/expression” distinction and the requirement of “fair use.” In this view, the scope of Congress’s regulation of creative activities is, subject to these minimal conditions, plenary. Any creative act reduced to a tangible form could be subject to the monopoly right of copyright. And as every creative act in digital context is reduced to a tangible form, this view means that everything in the digital world could be made subject to copyright.
The opposite view rejects this unlimited scope for copyright. While the monopoly right of copyright makes sense in certain commercial contexts, or more broadly, makes sense where it is necessary to “promote . . . progress,” there is no legitimate reason to saddle the vast majority of creative expression with the burdens of copyright law. That a kid making a video book report needs to clear permissions with the author of the book, or that friends making a mashup of a favorite artist can’t do so unless the label has granted them permission, extends the reach of copyright beyond any legitimate purpose.
But between these two views, it is plain that the Framers never made a choice. They were never confronted with the option that copyright could (efficiently) control every single use of a creative work. Any control possible in 1790 would have been radically too burdensome. And while I have my bets about how they would vote, given their strong antipathy to monopolies and the very restrictive IP clause they enacted, that’s nothing more than a bet. If there’s a choice to be made here, it is a choice they didn’t make. It is instead a choice that we must make: Whether the values of free speech restrict this radical increase in the scope of copyright’s regulation.
So far my arguments about architecture have been about architectures in cyberspace. In this final story, I blur the borders a bit. I want to use the architecture of cyberspace to show something important about the regulation of broadcasting.
The Federal Communications Commission regulates speech. If I wanted to broadcast a political speech on FM radio at a frequency of 98.6 MHz in San Francisco, the FCC would have me prosecuted[71]. To speak on 98.6 in San Francisco, I need a license, because to speak using these radio frequencies without a license is a crime. It is a crime despite the fact that the Constitution says, “Congress shall make no law . . . abridging the freedom of speech, or of the press.” What gives?
The answer rests on a deeply held assumption at the core of our jurisprudence governing broadcasting technologies: Only a fixed amount of “spectrum” is available for broadcasting, and the only way to facilitate broadcasting using that spectrum is to allocate slices of it to users, who are then the ones entitled to use their allocated spectrum within a particular geographical region. Without allocation, there would be chaos, the assumption goes. And chaos would kill broadcasting.
This view first came on the constitutional scene after Congress passed the Radio Act of 1927[72]. In 1926 Secretary of Commerce Herbert Hoover gave up the practice of controlling broadcasting after a number of circuit courts held that he did not have the power to do so. If he did not have the power, he said, then the invisible hand would have to govern. But Hoover was no real friend of the invisible hand. He predicted what would happen when he withdrew federal jurisdiction — chaos — and some suggest his aim was to help bring about just what he predicted. Stations would override other stations, he said; broadcasting would be a mess. When some confusion did arise, Hoover used this to justify new federal regulation[73].
Congress then rode to the rescue by authorizing the FCC to regulate spectrum in a massively invasive way. Only the licensed could speak; what they said would be controlled by their license; they had to speak in the public interest; they had to share their resource with their opponents. In short, Congress said, broadcasting had to be regulated in the same way the Soviet Union regulated wheat[74]. We had no choice. As Justice Felix Frankfurter said in upholding the regime, such sovietism was compelled by the “nature” of radio[75].
From the beginning, however, there have been skeptics of this view. Not skeptics about the idea that spectrum must be regulated, but about the manner by which it is regulated. Is it really necessary to have a central agency allocate what in effect are property rights? As these skeptics argued, the common law had done just fine before the federal government entered. It could also do fine if the government simply made spectrum a kind of tradable property right. Ronald Coase was most famous for pushing for a regime in which spectrum was auctioned rather than licensed[76]. And Coase’s idea caught on — fifty years later. In the United States, the FCC now auctions huge chunks of the broadcasting spectrum. Just this year, it is positioning itself to sell prime real estate spectrum — the part that used to broadcast UHF television.
Now under either scenario — either when the FCC allocates spectrum or when it allocates property rights to spectrum — there is a role for the government. That role is most extensive when the FCC allocates spectrum: Then the FCC must decide who should get what. When spectrum is property, the FCC need only enforce the boundaries that the property right establishes. It is, in a way, a less troubling form of government action than the government deciding who it likes best.
Both forms of government regulation, however, produce a “press” (at least the press that uses spectrum) that is very different from the “press” at the founding. In 1791, the “press” was not the New York Times or the Wall Street Journal. It was not composed of large organizations of private interests, with millions of readers associated with each organization. Rather, the press was much like the Internet today. The cost of a printing press was low, the readership was slight, the government subsidized its distribution, and anyone (within reason) could become a publisher. An extraordinary number did[77].
Spectrum licenses and spectrum property, however, produce a very different market. The cost of securing either becomes a barrier to entry. It would be like a rule requiring a “newspaper license” in order to publish a newspaper. If that license was expensive, then fewer could publish[78].
Of course, under our First Amendment it would be impossible to imagine the government licensing newspapers (at least if that license was expensive and targeted at the press). That’s because we all have a strong intuition that we want competition to determine which newspapers can operate, not artificial governmental barriers. And we all intuitively know that there’s no need for the government to “rationalize” the newspaper market. People are capable of choosing among competing newspapers without any help from the government.
So what if the same were true about spectrum? Most of us haven’t any clue about how what we call “spectrum” works. The weird sounds and unstable reception of our FM and AM radios make us think some kind of special magic happens between the station and receiver. Without that magic, radio waves would “interfere” with each other. Some special coordination is thought necessary to avoid such “collision” and the inevitable chaos that would result. Radio waves, in this view, are delicate invisible airplanes, which need careful air traffic controllers to make sure disaster doesn’t strike.
But what most of us think we know about radio is wrong. Radio waves aren’t butterflies. They don’t need the protection of the federal bureaucrats to do their work. And as technology that is totally familiar to everyone using the Internet demonstrates, there is in fact very little reason for either spectrum-licenses or spectrum-property. The invisible hand, here, can do all the work.
To get a clue about how, consider two contexts, at least one of which everyone is familiar with. No doubt, radio waves are different from sound waves. But for our purposes here, the following analogy works.
Imagine you’re at a party. There are 50 people in the room, and each of them is talking. Each is therefore producing sound waves. But though these many speakers produce different sound waves, we don’t have any trouble listening to the person speaking next to us. So long as no one starts shouting, we can manage to hear quite well. More generally, a party (at least early in the evening) is composed of smart speakers and listeners who coordinate their speaking so that most everyone in the room can communicate without any real trouble.
Radios could function similarly — if the receiver and transmitter were analogously intelligent. Rather than the dumb receivers that ordinary FM or AM radio relies upon, smart radios could figure out what to listen to and communicate with just as people at a party learn to focus on the conversation they’re having.
The best evidence of this is the second example I offer to dislodge the common understanding of how spectrum works. This example is called “WiFi.” WiFi is the popular name of a particular set of protocols that together enable computers to “share” bands of unlicensed spectrum. The most popular of these bands are in the 2.4 GHz and 5 GHz range. WiFi enables a large number of computers to use that spectrum to communicate.
Most of the readers of this book have no doubt come across WiFi technology. I see it every day I teach: a room full of students, each with a laptop, the vast majority on the Internet — doing who knows what. The protocols within each machine enable them all to “share” a narrow band of spectrum. There is no government or regulator that tells each machine when it can speak, any more than we need the government to make sure that people can communicate at cocktail parties.
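A toy simulation can make the intuition concrete. What follows is a cartoon of a “listen before talk” protocol with random backoff, loosely in the spirit of the WiFi protocols; it is not real 802.11, and every parameter is an assumption chosen only for illustration.

```python
# A cartoon of decentralized channel sharing: each radio that wants to send
# picks a random backoff, listens, and transmits only if no one went earlier.
# No central allocator assigns turns. Parameters are illustrative only.

import random

def simulate(n_radios=8, slots=500, p_want=0.3, backoff_window=16):
    """Count slots with a successful transmission vs. slots lost to collisions."""
    delivered = collisions = 0
    for _ in range(slots):
        contenders = [r for r in range(n_radios) if random.random() < p_want]
        if not contenders:
            continue                                   # an idle slot
        backoffs = {r: random.randrange(backoff_window) for r in contenders}
        earliest = min(backoffs.values())
        winners = [r for r, b in backoffs.items() if b == earliest]
        if len(winners) == 1:
            delivered += 1       # the earliest sender finds the channel clear
        else:
            collisions += 1      # two radios chose the same moment; both retry later
    return delivered, collisions

if __name__ == "__main__":
    random.seed(1)
    ok, lost = simulate()
    print(f"useful slots: {ok}, collisions: {lost}")   # sharing works, without licenses
```

The only “regulation” here is the protocol itself, which is exactly the point of the cocktail-party analogy.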
These examples are of course small and limited. But there is literally a whole industry now devoted to spreading the lesson of this technology as broadly as possible. Some theorists believe the most efficient use of all spectrum would build upon these models — using ultra-wide-band technologies to maximize the capacity of radio spectrum. But even those who are skeptical of spectrum utopia are coming to see that our assumptions about how spectrum must be allocated are driven by ignorance about how spectrum actually works.
The clearest example of this false assumption is the set of intuitions we’re likely to have about the necessary limitations in spectrum utilization. These assumptions are reinforced by the idea of spectrum-property. The image we’re likely to have is of a resource that can be overgrazed. Too many users can clog the channels, just as too many cattle can overgraze a field.
Congestion is certainly a possible consequence of spectrum usage. But the critical point to recognize — and again, a point that echoes throughout this book — is that the possibility of congestion depends upon the design. WiFi networks can certainly become congested. But a different architecture for “sharing” spectrum need not. Indeed, under this design, more users don’t deplete capacity — they increase it[79].
The key to making this system possible is for every receiver to become a node in the spectrum architecture. Users then wouldn’t be just consumers of someone else’s broadcast. Instead, receivers are now also broadcasters. Just as peer-to-peer technologies such as BitTorrent harness the bandwidth of users to share the cost of distributing content, users within a certain mesh-network architecture for spectrum could actually increase the spectrum capacity of the network. Under this design, then, the more who use the spectrum, the more spectrum there is for others to use — producing not a tragedy of the commons, but a comedy of the commons.
The basic architecture of this mesh system imagines that every computer in the system is both a receiver and a transmitter. Of course, in one sense, that’s what these machines already are — a computer attached to a WiFi network both receives transmissions from and sends transmissions to the broadcasting node. But that architecture is a 1-to-many broadcasting architecture. The mesh architecture is something different. In a mesh architecture, each radio can send packets of data to any other radio within the mesh. Or, put differently, each is a node in the network. And with every new node, the capacity of the network could increase. In a sense, this is precisely the architecture of much of the Internet. Machines have addresses; they collect packets addressed to them from the Net[80]. Your machine shares the Net with every other machine, but the Net has a protocol about sharing this commons. Once this protocol is agreed on, no further regulation is required.
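A small sketch can show why adding nodes can add capacity rather than deplete it. In the toy mesh below, packets hop from radio to radio within range; the node names, positions, and radio range are assumptions of mine, and the routing is deliberately simple-minded (a breadth-first search rather than any real mesh protocol).

```python
# A toy mesh network: every radio is both a receiver and a relay. A packet
# reaches its destination by hopping through intermediate nodes, so each new
# user adds links and paths. All positions and names are invented.

import math
from collections import deque

RADIO_RANGE = 1.5    # assumed maximum distance over which two radios can hear each other

def in_range(a, b):
    return math.dist(a, b) <= RADIO_RANGE

def deliver(nodes, src, dst):
    """Breadth-first, hop-by-hop delivery from src to dst; returns the path or None."""
    frontier = deque([[src]])
    seen = {src}
    while frontier:
        path = frontier.popleft()
        here = path[-1]
        if here == dst:
            return path
        for name, pos in nodes.items():
            if name not in seen and in_range(nodes[here], pos):
                seen.add(name)
                frontier.append(path + [name])
    return None      # no chain of relays reaches the destination

if __name__ == "__main__":
    nodes = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
    print(deliver(nodes, "A", "C"))     # ['A', 'B', 'C']: B relays for A
    nodes["D"] = (3, 0)                 # a new user joins the mesh...
    print(deliver(nodes, "A", "D"))     # ...and is reachable only because C relays
```

Had the new node been out of range of every existing radio, delivery would fail; the capacity claim depends on nodes agreeing to relay for one another, which is the protocol agreement the paragraph above describes.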
We don’t have to go too deep into the technology to recognize the question that I mean this section to pose: If technology makes it possible for radios to share the spectrum — without either spectrum-licenses or spectrum-property — then what justification does the government have for imposing either burden on the use of spectrum? Or, to link it back to the beginning of this section, if spectrum users could share spectrum without any coordination by the government, why is it any more justified to impose a property system on spectrum than it is for the government to charge newspapers for the right to publish?
No doubt, the architecture that enables sharing is not totally free of government regulation. The government may well require that only certified devices be used in this network (as the FCC already does with any device that can radiate within a range of spectrum). It may push the technology to the capacity-increasing mesh architecture. It may even reasonably impose nuisance-like limits on the power of any transmitter. But beyond these simple regulations, the government would not try to limit who could use the spectrum. It would not ban the use of spectrum for people who hadn’t either paid or been licensed.
So here we have two architectures for spectrum — one where spectrum is allocated, and one where spectrum (like the market for newspapers) is shared. Which is more consistent with the First Amendment’s design?
Here, finally, we have an example of a translation that works. We have a choice between an architecture that is the functional equivalent of the architecture of the American framing and an architecture equivalent to the Soviet framing. One architecture distributes power and facilitates speech; the other concentrates power and raises the price of speech. Between these two, the American framers made a choice. The state was not to be in the business of licensing speakers either directly or indirectly. Yet that is just the business that the current rule for spectrum allocation allows.
A faithful reading of the framers’ Constitution, my colleague Yochai Benkler and I have argued[81], would strike down the regime of spectrum allocation[82]. A faithful reading would reject an architecture that so strongly concentrates power. The model for speech that the framers embraced was the model of the Internet — distributed, noncentralized, fully free and diverse. Of course, we should choose whether we want a faithful reading — translation does not provide its own normative support. But if fidelity is our aim, this is its answer.
What I described at the start of the book as modalities of constraint I have redescribed in this chapter as modalities of protection. While modalities of constraint can be used as swords against the individual (powers), modalities of protection can be used as shields (rights).
In principle we might think about how the four modalities protect speech, but I have focused here on architectures. Which architectures protect what speech? How does changing an architecture change the kind of speech being protected?
I have not tried to be comprehensive. But I have pushed for a view that addresses the relationship between architectures and speech globally and uses constitutional values to think not just about what is permitted given a particular architecture, but also about which architectures are permitted. Our real-space constitution should inform the values of our cyberspace constitution. At the least, it should constrain the state in its efforts to architect cyberspace in ways that are inconsistent with those values.
Let’s pause for a moment and look back over these three chapters. There is a pattern to the problems they present — a way of understanding how all three problems are the same.
In one sense, each has asked: How much control should we allow over information, and by whom should this control be exercised? There is a battle between code that protects intellectual property and fair use; there is a battle between code that might make a market for privacy and the right to report facts about individuals regardless of that market; there is a battle between code that enables perfect filtering of speech and architectures that ensure some messiness about who gets what. Each case calls for a balance of control.
My vote in each context may seem to vary. With respect to intellectual property, I argue against code that tracks reading and in favor of code that guarantees a large space for an intellectual commons. In the context of privacy, I argue in favor of code that enables individual choice — both to encrypt and to express preferences about what personal data is collected by others. Code would enable that choice; law could inspire that code. In the context of free speech, however, I argue against code that would perfectly filter speech — it is too dangerous, I claim, to allow perfect choice there. Better choice, of course, is better, so code that would empower better systems of reputation is good, as is code that would widen the legitimate range of broadcasting.
The aim in all three contexts is to work against centralized structures of choice. In the context of filtering, however, the aim is to work against structures that are too individualized as well.
You may ask whether these choices are consistent. I think they are, but it’s not important that you agree. You may believe that a different balance makes sense — more control for intellectual property or filtering perhaps, and less for privacy. My real interest is in conveying the necessity of such balancing and of the values implicit in the claim that we will always require a balance. Always there is a competition between the public and private; always the rights of the private must be balanced against the interests of the public. Always a choice must be made about how far each side will be allowed to reach. These questions are inherent to public law: How will a particular constellation of constitutional values be reckoned? How will a balance be struck in particular factual contexts?
I have argued this point while neglecting to specify who is responsible for any given imbalance. There are those who would say that there is too much filtering, or not enough privacy, or too much control over intellectual property, but these are not public concerns unless the government is responsible for these imbalances. Constitutional value in the United States extends only so far as state action extends. And I have not shown just how state action extends to these contexts.
I do not intend to. In my view, our tradition reveals at least an ambiguity about how far constitutional values are to extend. In a world where only governments are regulators, keeping the Constitution’s authority limited to state action makes some sense. But when the modalities of regulation are multiplied, there is no reason to ignore the reach of constitutional values. Our framers made no choice about this; there is no reason why regulation through code cannot be informed by constitutional values. No argument has been made for why this part of our life should be cut off from the limitations and protections traditionally provided by the Constitution.
Code strikes the balance between individual and collective rights that I have highlighted so far. In the next chapter, a different balance is struck — one again made salient by code. However, this time the balance is not between the state and the individual but between the state and the implicit regulations of the architectures of cyberspace. Now the threat is to a traditional sovereignty. How do we translate that tradition to fit a world where code is law?