8 Escape from the City of Ghettos

In order to find his own self, [a person] also needs to live in a milieu where the possibility of many different value systems is explicitly recognized and honored. More specifically, he needs a great variety of choices so that he is not misled about the nature of his own person.

—Christopher Alexander et al., A Pattern Language

In theory, there’s never been a structure more capable of allowing all of us to shoulder the responsibility for understanding and managing our world than the Internet. But in practice, the Internet is headed in a different direction. Sir Tim Berners-Lee, the creator of the World Wide Web, captured the gravity of this threat in a recent call to arms in the pages of Scientific American titled “Long Live the Web.” “The Web as we know it,” he wrote, “is being threatened…. Some of its most successful inhabitants have begun to chip away at its principles. Large social-networking sites are walling off information posted by their users from the rest of the Web…. Governments—totalitarian and democratic alike—are monitoring people’s online habits, endangering important human rights. If we, the Web’s users, allow these and other trends to proceed unchecked, the Web could be broken into fragmented islands.”

In this book, I’ve argued that the rise of pervasive, embedded filtering is changing the way we experience the Internet and ultimately the world. At the center of this transformation is the fact that for the first time it’s possible for a medium to figure out who you are, what you like, and what you want. Even if the personalizing code isn’t always spot-on, it’s accurate enough to be profitable, not just by delivering better ads but also by adjusting the substance of what we read, see, and hear.

As a result, while the Internet offers access to a dazzling array of sources and options, in the filter bubble we’ll miss many of them. While the Internet can give us new opportunities to grow and experiment with our identities, the economics of personalization push toward a static conception of personhood. While the Internet has the potential to decentralize knowledge and control, in practice it’s concentrating control over what we see and what opportunities we’re offered in the hands of fewer people than ever before.

Of course, there are some advantages to the rise of the personalized Internet. I enjoy using Pandora, Netflix, and Facebook as much as the next person. I appreciate Google’s shortcuts through the information jungle (and couldn’t have written this book without them). But what’s troubling about this shift toward personalization is that it’s largely invisible to users and, as a result, out of our control. We are not even aware that we’re seeing increasingly divergent images of the Internet. The Internet may know who we are, but we don’t know who it thinks we are or how it’s using that information. Technology designed to give us more control over our lives is actually taking control away.

Ultimately, Sun Microsystems cofounder Bill Joy told me, information systems have to be judged on their public outcomes. “If what the Internet does is spread around a lot of information, fine, but what did that cause to happen?” he asked. If it’s not helping us solve the really big problems, what good is it? “We really need to address the core issues: climate change, political instability in Asia and the Middle East, demographic problems, and the decline of the middle class. In the context of problems of this magnitude, you’d hope that a new constituency would emerge, but there’s a distraction overlay—false issues, entertainment, gaming. If our system, with all the freedom of choice, is not addressing the problems, something’s wrong.”

Something is wrong with our media. But the Internet isn’t doomed, for a simple reason: This new medium is nothing if not plastic. Its great strength, in fact, is its capacity for change. Through a combination of individual action, corporate responsibility, and governmental regulation, it’s still possible to shift course.

“We create the Web,” Sir Tim Berners-Lee wrote. “We choose what properties we want it to have and not have. It is by no means finished (and it’s certainly not dead).” It’s still possible to build information systems that introduce us to new ideas, that push us in new ways. It’s still possible to create media that show us what we don’t know, rather than reflecting what we do. It’s still possible to erect systems that don’t trap us in an endless loop of self-flattery about our own interests or shield us from fields of inquiry that aren’t our own.

First, however, we need a vision—a sense of what to aim for.

The Mosaic of Subcultures

In 1975, architect Christopher Alexander and a team of colleagues began publishing a series of books that would change the face of urban planning, design, and programming. The most famous volume, A Pattern Language, is a guidebook that reads like a religious text. It’s filled with quotes and aphorisms and hand-drawn sketches, a bible guiding devotees toward a new way of thinking about the world.

The question that had consumed Alexander and his team during eight years of research was why some places thrived and “worked” while others didn’t—why some cities and neighborhoods and houses flourished, while others were grim and desolate. The key, Alexander argued, was that design has to fit its literal and cultural context. And the best way to ensure that, they concluded, was to use a “pattern language,” a set of design specifications for human spaces.

Even for nonarchitects, the book is an entrancing read. There’s a pattern that describes the ideal nook for kids (the ceiling should be between 2 feet 6 inches and 4 feet high), and another for High Places “where you can look down and survey your world.” “Every society which is alive and whole,” Alexander wrote, “will have its own unique and distinct pattern language.”

Some of the book’s most intriguing sections illuminate the patterns that successful cities are built on. Alexander imagines two metropolises—the “heterogeneous city,” where people are mixed together irrespective of lifestyle and background, and the “city of ghettos,” where people are grouped together tightly by category. The heterogeneous city “seems rich,” Alexander writes, but “actually it dampens all significant variety, and arrests most of the possibilities for differentiation.” Though there’s a diverse mix of peoples and cultures, all of the parts of the city are diverse in the same way. Shaped by the lowest common cultural denominators, the city looks the same everywhere you go.

Meanwhile, in the city of ghettos, some people get trapped in the small world of a single subculture that doesn’t really represent who they are. Without connections and overlap between communities, the subcultures that make up the city don’t evolve. As a result, the ghettos breed stagnation and intolerance.

But Alexander offers a third possibility: a happy medium between closed ghettos and the undifferentiated mass of the heterogeneous city. He called it the mosaic of subcultures. In order to achieve this kind of city, Alexander explains, designers should encourage neighborhoods with cultural character, “but though these subcultures must be sharp and distinct and separate, they must not be closed; they must be readily accessible to one another, so that a person can move easily from one to another, and can settle in the one which suits him best.” Alexander’s mosaic is based on two premises about human life: First, a person can only fully become him- or herself in a place where he or she “receives support for his idiosyncrasies from the people and values which surround him.” And second, as the quotation at the beginning of this chapter suggests, you have to see lots of ways of living in order to choose the best life for yourself. This is what the best cities do: They cultivate a vibrant array of cultures and allow their citizens to find their way to the neighborhoods and traditions in which they’re most at home.

Alexander was writing about cities, but what’s beautiful about A Pattern Language is that it can be applied to any space in which humans gather and live—including the Internet. Online communities and niches are important. They’re the places where new ideas and styles and themes and even languages get formed and tested. They’re the places where we can feel most at home. An Internet built like the heterogeneous city described by Alexander wouldn’t be a very pleasant place to be—a whirling chaos of facts and ideas and communications. But by the same token, nobody wants to live in the city of ghettos—and that’s where personalization, if it’s too acute, will take us. At its worst, the filter bubble confines us to our own information neighborhood, unable to see or explore the rest of the enormous world of possibilities that exist online. We need our online urban planners to strike a balance between relevance and serendipity, between the comfort of seeing friends and the exhilaration of meeting strangers, between cozy niches and wide open spaces.

What Individuals Can Do

Social-media researcher danah boyd was right when she warned that we are at risk of the “psychological equivalent of obesity.” And while creating a healthy information diet requires action on the part of the companies that supply the food, that doesn’t work unless we also change our own habits. Corn syrup vendors aren’t likely to change their practices until consumers demonstrate that they’re looking for something else.

Here’s one place to start: Stop being a mouse.

On an episode of the radio program This American Life, host Ira Glass investigates how to build a better mousetrap. He talks to Andy Woolworth, the man at the world’s largest mousetrap manufacturer who fields ideas for new trap designs. The proposed ideas range from the impractical (a trap that submerges the mouse in antifreeze, which then needs to be thrown out by the bucket) to the creepy (a design that kills rodents using, yes, gas pellets).

But the punch line is that they’re all unnecessary. Woolworth has an easy job, because the existing traps are very cheap and work within a day 88 percent of the time. Mousetraps work because mice generally establish a food-seeking route within ten feet of where they are, returning to it up to thirty times a day. Place a trap in its vicinity, and chances are very good that you’ll catch your mouse.

Most of us are pretty mouselike in our information habits. I admit it: There are three or four Web sites that I check frequently each day, and I rarely vary them or add new ones to my repertoire. “Whether we live in Calcutta or San Francisco,” Matt Cohler told me, “we all kinda do the same thing over and over again most of the time. And jumping out of that recursion loop is not easy to do.” Habits are hard to break. But just as you notice more about the place you live when you take a new route to work, varying your path online dramatically increases your likelihood of encountering new ideas and people.

Just by stretching your interests in new directions, you give the personalizing code more breadth to work with. Someone who shows interest in opera and comic books and South African politics and Tom Cruise is harder to pigeonhole than someone who just shows interest in one of those things. And by constantly moving the flashlight of your attention to the perimeter of your understanding, you enlarge your sense of the world.

Going off the beaten track is scary at first, but the experiences we have when we come across new ideas, people, and cultures are powerful. They make us feel human. Serendipity is a shortcut to joy.

For some of the “identity cascade” problems discussed in chapter 5, regularly erasing the cookies your Internet browser uses to identify who you are is a partial cure. Most browsers these days make erasing cookies pretty simple—you just select Options or Preferences and then choose Erase cookies. And many personalized ad networks are offering consumers the option to opt out. I’m posting an updated and more detailed list of places to opt out on the Web site for this book, www.thefilterbubble.com.

But because personalization is more or less unavoidable, opting out entirely isn’t a particularly viable route for most of us. You can run all of your online activities in an “incognito” window, where less of your personal information is stored, but it’ll be increasingly impractical—many services simply won’t work the way they’re supposed to. (This is why, as I describe below, I don’t think the Do Not Track list currently under consideration by the FTC is a viable strategy.) And of course, Google personalizes based on your Internet address, location, and a number of other factors even if you’re entirely logged out and on a brand-new laptop.

A better approach is to choose sites that give users more control and visibility over how their filters work and how they use your personal information.

For example, consider the difference between Twitter and Facebook. In many ways, the two sites are very similar. They both offer people the opportunity to share blips of information and links to videos, news, and photographs. They both offer the opportunity to hear from the people you want to hear from and screen out the people you don’t.

But Twitter’s universe is based on a few very simple, mostly transparent rules—what one Twitter supporter called “a thin layer of regulation.” Unless you go out of your way to lock your account, everything you do is public to everyone. You can subscribe to anyone’s feed that you like without their permission, and then you see a time-ordered stream of updates that includes everything everyone you’re following says.

In comparison, the rules that govern Facebook’s information universe are maddeningly opaque and seem to change almost daily. If you post a status update, your friends may or may not see it, and you may or may not see theirs. (This is true even in the Most Recent view that many users assume shows all of the updates—it doesn’t.) Different types of content are likely to show up at different rates—if you post a video, for example, it’s more likely to be seen by your friends than a status update. And the information you share with the site itself is private one day and public the next. There’s no excuse, for example, for asking users to declare which Web sites they’re “fans” of with the promise that it’ll be shown only to their friends, and then releasing that information to the world, as Facebook did in 2009.

Because Twitter operates on the basis of a few simple, easily understandable rules, it’s also less susceptible to what venture capitalist Brad Burnham (whose Union Square Ventures was Twitter’s primary early investor) calls the tyranny of the default. There’s great power in setting the default option when people are given a choice. Dan Ariely, the behavioral economist, illustrates the principle with a chart showing organ donation rates in different European countries. In England, the Netherlands, and Austria, the rates hover around 10 percent to 15 percent, but in France, Germany, and Belgium, donation rates are in the high 90s. Why? In the first set of countries, you have to check a box giving permission for your organs to be donated. In the second, you have to check a box to say you won’t give permission.

If people will let defaults determine the fate of our friends who need lungs and hearts, we’ll certainly let them determine how we share information a lot of the time. That’s not because we’re stupid. It’s because we’re busy, have limited attention with which to make decisions, and generally trust that if everyone else is doing something, it’s OK for us to do it too. But this trust is often misplaced. Facebook has wielded this power with great intentionality—shifting the defaults on privacy settings in order to encourage masses of people to make their posts more public. And because software architects clearly understand the power of the default and use it to make their services more profitable, their claim that users can opt out of giving their personal information seems somewhat disingenuous. With fewer rules and a more transparent system, there are fewer defaults to set.

Facebook’s PR department didn’t return my e-mails requesting an interview (perhaps because MoveOn’s critical view of Facebook’s privacy practices is well known). But it would probably argue that it gives its users far more choice and control over how they use the service than Twitter does. And it’s true that Facebook’s settings pages list scores of different options for its users.

But to give people control, you have to make clearly evident what the options are, because options largely exist only to the degree that they’re perceived. This is the problem many of us used to face in programming our VCRs: The devices had all sorts of functions, but figuring out how to make them do anything was an afternoon-long exercise in frustration. When it comes to important tasks like protecting your privacy and adjusting your filters online, saying that you can figure it out if you read the manual for long enough isn’t a sufficient answer.

In short, at the time of this writing, Twitter makes it pretty straightforward to manage your filter and understand what’s showing up and why, whereas Facebook makes it nearly impossible. All other things being equal, if you’re concerned about having control over your filter bubble, better to use services like Twitter than services like Facebook.

We live in an increasingly algorithmic society, where our public functions, from police databases to energy grids to schools, run on code. We need to recognize that societal values about justice, freedom, and opportunity are embedded in how code is written and what it solves for. Once we understand that, we can begin to figure out which variables we care about and imagine how we might solve for something different.

For example, advocates looking to solve the problem of political gerrymandering—the backroom process of carving up electoral districts to favor one party or another—have long suggested that we replace the politicians involved with software. It sounds pretty good: Start with some basic principles, input population data, and out pops a new political map. But it doesn’t necessarily solve the basic problem, because what the algorithm solves for has political consequences: Whether the software aims to group by cities or ethnic groups or natural boundaries can determine which party keeps its seats in Congress and which doesn’t. And if the public doesn’t pay close attention to what the algorithm is doing, it could have the opposite of the intended effect—sanctioning a partisan deal with the imprimatur of “neutral” code.
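
To make that concrete, here is a toy sketch in Python (invented map names, features, and weights, not any real redistricting tool) showing how the same “neutral” optimizer picks different maps depending on what it is told to solve for.

```python
# A toy illustration, not a real redistricting tool: the same optimizer
# chooses different maps depending on what it is told to maximize.
candidate_maps = {
    "map_a": {"compactness": 0.9, "community_cohesion": 0.4, "partisan_balance": 0.3},
    "map_b": {"compactness": 0.5, "community_cohesion": 0.9, "partisan_balance": 0.6},
    "map_c": {"compactness": 0.6, "community_cohesion": 0.5, "partisan_balance": 0.9},
}

def score(features, weights):
    """Weighted sum of map features; the weights are where the politics live."""
    return sum(weights[k] * features[k] for k in weights)

# Two plausible-sounding "principled" weightings produce different winners.
weightings = {
    "geometry-first": {"compactness": 1.0, "community_cohesion": 0.2, "partisan_balance": 0.0},
    "fairness-first": {"compactness": 0.2, "community_cohesion": 0.3, "partisan_balance": 1.0},
}

for name, weights in weightings.items():
    best = max(candidate_maps, key=lambda m: score(candidate_maps[m], weights))
    print(f"{name}: chooses {best}")
```

Neither weighting is dishonest, but each quietly decides who wins, and that decision is exactly what deserves public scrutiny.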

In other words, it’s becoming more important to develop a basic level of algorithmic literacy. Increasingly, citizens will have to pass judgment on programmed systems that affect our public and national life. And even if you’re not fluent enough to read through thousands of lines of code, the building-block concepts—how to wrangle variables, loops, and memory—can illuminate how these systems work and where they might make errors.

Especially at the beginning, learning the basics of programming is even more rewarding than learning a foreign language. With a few hours and a basic platform, you can have that “Hello, World!” experience and start to see your ideas come alive. And within a few weeks, you can be sharing these ideas with the whole Web. Mastery, as in any profession, takes much longer, but the payoff for a limited investment in coding is fairly large: It doesn’t take long to become literate enough to understand what most basic bits of code are doing.
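
For the curious, that first “Hello, World!” moment looks roughly like the sketch below (Python here, though any beginner-friendly language would do); even a program this small already shows the variables, loops, and conditions mentioned above.

```python
# The classic first program, plus the basic building blocks mentioned above:
# a variable, a loop, and a condition.
greeting = "Hello, World!"   # a variable holds a value
print(greeting)

for day in range(1, 4):      # a loop repeats work a fixed number of times
    if day == 1:             # a condition branches on the value of a variable
        print("Day", day, "- wrote my first program")
    else:
        print("Day", day, "- still learning")
```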

Changing our own behavior is a part of the process of bursting the filter bubble. But it’s of limited use unless the companies that are propelling personalization forward change as well.

What Companies Can Do

It’s understandable that, given their meteoric rises, the Googles and Facebooks of the online world have been slow to realize their responsibilities. But it’s critical that they recognize their public responsibility soon. It’s no longer sufficient to say that the personalized Internet is just a function of relevance-seeking machines doing their job.

The new filterers can start by making their filtering systems more transparent to the public, so that it’s possible to have a discussion about how they’re exercising their responsibilities in the first place.

As Larry Lessig says, “A political response is possible only when regulation is transparent.” And there’s more than a little irony in the fact that companies whose public ideologies revolve around openness and transparency are so opaque themselves.

Facebook, Google, and their filtering brethren claim that to reveal anything about their algorithmic processes would be to give away business secrets. But that defense is less convincing than it sounds at first. Both companies’ primary advantage lies in the extraordinary number of people who trust them and use their services (remember lock-in?). According to Danny Sullivan’s Search Engine Land blog, Bing’s search results are “highly competitive” with Google’s, but it has a fraction of its more powerful rival’s users. It’s not a matter of math that keeps Google ahead, but the sheer number of people who use it every day. PageRank and the other major pieces of Google’s search engine are “actually one of the world’s worst kept secrets,” says Google fellow Amit Singhal.

Google has also argued that it needs to keep its search algorithm under tight wraps because if it was known it’d be easier to game. But open systems are harder to game than closed ones, precisely because everyone shares an interest in closing loopholes. The open-source operating system Linux, for example, is actually more secure and harder to penetrate with a virus than closed ones like Microsoft’s Windows or Apple’s OS X.

Whether or not it makes the filterers’ products more secure or efficient, keeping the code under tight wraps does do one thing: It shields these companies from accountability for the decisions they’re making, because the decisions are difficult to see from the outside. But even if full transparency proves impossible, it’s possible for these companies to shed more light on how they approach sorting and filtering problems.

For one thing, Google and Facebook and other new media giants could draw inspiration from the history of the newspaper ombudsman, a role that newsrooms began debating in the mid-1960s.

Philip Foisie, an executive at the Washington Post company, wrote one of the most memorable memos arguing for the practice. “It is not enough to say,” he suggested, “that our paper, as it appears each morning, is its own credo, that ultimately we are our own ombudsman. It has not proven to be, possibly cannot be. Even if it were, it would not be viewed as such. It is too much to ask the reader to believe that we are capable of being honest and objective about ourselves.” The Post found his argument compelling, and hired its first ombudsman in 1970.

“We know the media is a great dichotomy,” said the longtime Sacramento Bee ombudsman Arthur Nauman in a speech in 1994. On the one hand, he said, media has to operate as a successful business that provides a return on investment. “But on the other hand, it is a public trust, a kind of public utility. It is an institution invested with enormous power in the community, the power to affect thoughts and actions by the way it covers the news—the power to hurt or help the common good.” It is this spirit that the new media would do well to channel. Appointing an independent ombudsman and giving the world more insight into how the powerful filtering algorithms work would be an important first step.

Transparency doesn’t mean only that the guts of a system are available for public view. As the Twitter versus Facebook dichotomy demonstrates, it also means that individual users intuitively understand how the system works. And that’s a necessary precondition for people to control and use these tools—rather than having the tools control and use us.

To start with, we ought to be able to get a better sense of who these sites think we are. Google claims to make this possible with a “dashboard”—a single place to monitor and manage all of this data. In practice, its confusing and multitiered design makes it almost impossible for an average user to navigate and understand. In the United States, Facebook, Amazon, and other companies don’t allow users to download a complete compilation of their data, though privacy laws in Europe force them to. It’s entirely reasonable to expect that the data users provide to companies should be available to those users, an expectation that, according to researchers at the University of California at Berkeley, most Americans share. We ought to be able to say, “You’re wrong. Perhaps I used to be a surfer, or a fan of comics, or a Democrat, but I’m not any more.”

Knowing what information the personalizers have on us isn’t enough. They also need to do a much better job explaining how they use the data—what bits of information are personalized, to what degree, and on what basis. A visitor to a personalized news site could be given the option of seeing how many other visitors were seeing which articles—even perhaps a color-coded visual map of the areas of commonality and divergence. Of course, this requires admitting to the user that personalization is happening in the first place, and there are strong reasons in some cases for businesses not to do so. But they’re mostly commercial reasons, not ethical ones.

The Interactive Advertising Bureau is already pushing in this direction. An industry trade group for the online advertising community, the IAB has concluded that unless personalized ads disclose to users how they’re personalized, consumers will get angry and demand federal regulation. So it’s encouraging its members to include a set of icons on every ad to indicate what personal data the ad draws on and how to change or opt out of this feature set. As content providers incorporate the personalization techniques pioneered by direct marketers and advertisers, they should consider incorporating these safeguards as well.

Even then, sunlight doesn’t solve the problem unless it’s coupled with a focus in these companies on optimizing for different variables: more serendipity, a more humanistic and nuanced sense of identity, and an active promotion of public issues and cultivation of citizenship.

As long as computers lack consciousness, empathy, and intelligence, much will be lost in the gap between our actual selves and the signals that can be rendered into personalized environments. And as I discussed in chapter 5, personalization algorithms can cause identity loops, in which what the code knows about you constructs your media environment, and your media environment helps to shape your future preferences. This is an avoidable problem, but it requires crafting an algorithm that prioritizes “falsifiability,” that is, an algorithm that aims to disprove its idea of who you are. (If Amazon harbors a hunch that you’re a crime novel reader, for example, it could actively present you with choices from other genres to fill out its sense of who you are.)
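
Here is a rough sketch, in Python, of how a falsifiability-minded recommender might work; the genre names, scores, and function names are hypothetical, not any company’s actual system. The idea is simply to reserve a few slots for the model’s weakest hunches and to update its picture of you when a hunch is disproved.

```python
import random

# Hypothetical sketch: a recommender that tries to disprove its own model
# of the user instead of only confirming it.
GENRES = ["crime", "sci-fi", "history", "poetry", "biography"]

def recommend(profile, n_items=5, explore_fraction=0.4):
    """Mostly exploit the current profile, but reserve a few slots for the
    genres the model currently believes the user does NOT want."""
    ranked = sorted(GENRES, key=lambda g: profile.get(g, 0.0), reverse=True)
    n_items = min(n_items, len(ranked))
    n_explore = max(1, int(n_items * explore_fraction))
    exploit = ranked[: n_items - n_explore]                      # confident picks
    challenge = random.sample(ranked[len(exploit):], n_explore)  # hunches to falsify
    return exploit + challenge

def update(profile, genre, clicked, rate=0.1):
    """A click (or non-click) on a challenge item nudges the model's picture of the user."""
    current = profile.get(genre, 0.0)
    profile[genre] = current + rate * ((1.0 if clicked else 0.0) - current)

profile = {"crime": 0.9, "biography": 0.3, "sci-fi": 0.2, "history": 0.1, "poetry": 0.05}
print(recommend(profile))                # top genres plus a couple of deliberate challenges
update(profile, "poetry", clicked=True)  # a disproved hunch reshapes future recommendations
```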

Companies that hold great curatorial power also need to do more to cultivate public space and citizenship. To be fair, they’re already doing some of this: Visitors to Facebook on November 2, 2010, were greeted by a banner asking them to indicate if they’d voted. Those who had voted shared this news with their friends; because social pressure drives some people to the polls, it’s quite possible that Facebook increased the number of voters. Likewise, Google has been doing strong work to make information about polling locations more open and easily available, and featured its tool on its home page on the same day. Whether or not this is profit-seeking behavior (a “find your polling place” feature would presumably be a terrific place for political advertising), both projects drew the attention of users toward political engagement and citizenship.

A number of the engineers and technology journalists I talked to raised their eyebrows when I asked them if personalizing algorithms could do a better job on this front. After all, one said, who’s to say what’s important? For Google engineers to place a value on some kinds of information over others, another suggested, would be unethical—though of course this is precisely what the engineers themselves do all the time.

To be clear, I don’t yearn to go back to the good old days when a small group of all-powerful editors unilaterally decided what was important. Too many actually important stories (the genocide in Rwanda, for example) fell through the cracks, while too many actually unimportant ones got front-page coverage. But I also don’t think we should jettison that approach altogether. Yahoo News suggests there is some possibility for middle ground: The team combines algorithmic personalization with old-school editorial leadership. Some stories are visible to everyone because they’re surpassingly important. Others show up for some users and not others. And while the editorial team at Yahoo spends a lot of time interpreting click data and watching which articles do well and which don’t, they’re not subservient to it. “Our editors think of the audience as people with interests, as opposed to a flood of directional data,” a Yahoo News employee told me. “As much as we love the data, it’s being filtered by human beings who are thinking about what the heck it means. Why didn’t the article on this topic we think is important for our readers to know about do better? How do we help it find a larger audience?”

And then there are fully algorithmic solutions. For example, why not rely on everyone’s idea of what’s important? Imagine for a moment that next to each Like button on Facebook was an Important button. You could tag items with one or the other or both. And Facebook could draw on a mix of both signals—what people like, and what they think really matters—to populate and personalize your news feed. You’d have to bet that news about Pakistan would be seen more often—even accounting for everyone’s quite subjective definition of what really matters. Collaborative filtering doesn’t have to lead to compulsive media: The whole game is in what values the filters seek to pull out. Alternately, Google or Facebook could place a slider bar running from “only stuff I like” to “stuff other people like that I’ll probably hate” at the top of search results and the News Feed, allowing users to set their own balance between tight personalization and a more diverse information flow. This approach would have two benefits: It would make clear that there’s personalization going on, and it would place it more firmly in the user’s control.
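
Under the hood, that slider could be as simple as a single weight blending two scores per story: one for predicted personal appeal, one for crowd-flagged importance. The sketch below is hypothetical (made-up stories and scores, not Facebook’s or Google’s actual ranking), but it shows how moving the weight changes what surfaces at the top of a feed.

```python
# Hypothetical sketch of the slider idea: blend "what I'll probably like"
# with "what other people flagged as important."
stories = [
    {"title": "Celebrity gossip roundup", "like_score": 0.92, "important_score": 0.10},
    {"title": "Pakistan flood relief",    "like_score": 0.25, "important_score": 0.95},
    {"title": "Local school budget vote", "like_score": 0.40, "important_score": 0.70},
    {"title": "New sci-fi trailer",       "like_score": 0.85, "important_score": 0.15},
]

def rank_feed(stories, slider):
    """slider = 0.0 means 'only stuff I like'; 1.0 means 'what others say matters.'"""
    def blended(story):
        return (1 - slider) * story["like_score"] + slider * story["important_score"]
    return sorted(stories, key=blended, reverse=True)

for slider in (0.0, 0.5, 1.0):
    top = rank_feed(stories, slider)[0]["title"]
    print(f"slider={slider:.1f} -> top story: {top}")
```

Even at the midpoint, the flood-relief story outranks the gossip, which is the point: the mix of values becomes visible and adjustable rather than buried in the code.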

There’s one more thing the engineers of the filter bubble can do. They can solve for serendipity, by designing filtering systems to expose people to topics outside their normal experience. This will often be in tension with pure optimization in the short term, because a personalization system with an element of randomness will (by definition) get fewer clicks. But as the problems of personalization become better known, it may be a good move in the long run—consumers may choose systems that are good at introducing them to new topics. Perhaps what we need is a kind of anti-Netflix Prize—a Serendipity Prize for systems that are the best at holding readers’ attention while introducing them to new topics and ideas.
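
One way such a prize might keep score, sketched below with made-up data (a hypothetical metric, not an existing benchmark): of the items a reader actually engaged with, how many came from topics outside his or her prior history?

```python
# Hypothetical "serendipity" metric: the share of engaged-with items that
# came from topics outside the reader's prior history.
def serendipity_score(clicked_items, prior_topics):
    """clicked_items: list of (title, topic) pairs the reader engaged with.
    prior_topics: set of topics already in the reader's history."""
    if not clicked_items:
        return 0.0
    novel = [topic for _, topic in clicked_items if topic not in prior_topics]
    return len(novel) / len(clicked_items)

prior = {"tech", "movies"}
session = [
    ("Chip shortage explained", "tech"),
    ("Sumo wrestling primer", "sports"),
    ("Microfinance in Kenya", "development"),
]
print(serendipity_score(session, prior))  # 2 of 3 clicks were on new topics -> about 0.67
```

Because only clicked items count, a system can’t win by spraying random links; it has to hold attention and broaden the reader’s horizons at the same time.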

If this shift toward corporate responsibility seems improbable, it’s not without precedent. In the mid-1800s, printing a newspaper was hardly a reputable business. Papers were fiercely partisan and recklessly ideological. They routinely altered facts to suit their owners’ vendettas of the day, or just to add color. It was this culture of crass commercialism and manipulation that Walter Lippmann railed against in Liberty and the News.

But as newspapers became highly profitable and highly important, they began to change. It became possible, in a few big cities, to run papers that weren’t just chasing scandal and sensation—in part, because their owners could afford not to. Courts started to recognize a public interest in journalism and rule accordingly. Consumers started to demand more scrupulous and rigorous editing.

Urged on by Lippmann’s writings, an editorial ethic began to take shape. It was never shared universally or followed as well as it could have been. It was always compromised by the business demands of newspapers’ owners and shareholders. It failed outright repeatedly—access to power brokers compromised truth telling, and the demands of advertisers overcame the demands of readers. But in the end, it succeeded, somehow, in seeing us through a century of turmoil.

The torch is now being passed to a new generation of curators, and we need them to pick it up and carry it with pride. We need programmers who will build public life and citizenship into the worlds they create. And we need users who will hold them to it when the pressure of monetization pulls them in a different direction.

What Governments and Citizens Can Do

There’s plenty that the companies that power the filter bubble can do to mitigate the negative consequences of personalization—the ideas above are just a start. But ultimately, some of these problems are too important to leave in the hands of private actors with profit-seeking motives. That’s where governments come in.

Ultimately, as Eric Schmidt told Stephen Colbert, Google is just a company. Even if there are ways of addressing these issues that don’t hurt the bottom line—which there may well be—doing so simply isn’t always going to be a top-level priority. As a result, after we’ve each done our part to pop the filter bubble, and after companies have done what they’re willing to do, there’s probably a need for government oversight to ensure that we control our online tools and not the other way around.

In his book Republic.com, Cass Sunstein suggested a kind of “fairness doctrine” for the Internet, in which information aggregators have to expose their audiences to both sides. Though he later changed his mind, the proposal suggests one direction for regulation: Just require curators to behave in a public-oriented way, exposing their readers to diverse lines of argument. I’m skeptical, for some of the same reasons Sunstein abandoned the idea: Curation is a nuanced, dynamic thing, an art as much as a science, and it’s hard to imagine how regulating editorial ethics wouldn’t inhibit a great deal of experimentation, stylistic diversity, and growth.

As this book goes to press, the U.S. Federal Trade Commission is proposing a Do Not Track list, modeled after the highly successful Do Not Call list. At first blush, it sounds pretty good: It would set up a single place to opt out of the online tracking that fuels personalization. But Do Not Track would probably offer a binary choice—either you’re in or you’re out—and services that make money on tracking might simply disable themselves for Do Not Track list members. If most of the Internet goes dark for these people, they’ll quickly leave the list. And as a result, the process could backfire—“proving” that people don’t care about tracking, when in fact what most of us want is more nuanced ways of asserting control.

The best leverage point, in my view, is in requiring companies to give us real control over our personal information. Ironically, although online personalization is relatively new, the principles that ought to guide such control have been clear for decades. In 1973, the Department of Health, Education, and Welfare under Nixon recommended that regulation center on what it called Fair Information Practices:


• You should know who has your personal data, what data they have, and how it’s used.

• You should be able to prevent information collected about you for one purpose from being used for others.

• You should be able to correct inaccurate information about you.

• Your data should be secure.


Nearly forty years later, the principles are still basically right, and we’re still waiting for them to be enforced. We can’t wait much longer: In a society with an increasing number of knowledge workers, our personal data and “personal brand” are worth more than they ever have been. Especially if you’re a blogger or a writer, if you make funny videos or music, or if you coach or consult for a living, your online data trail is one of your most valuable assets. But while it’s illegal to use Brad Pitt’s image to sell a watch without his permission, Facebook is free to use your name to sell one to your friends.

In courts around the world, information brokers are pushing this view—“everyone’s better off if your online life is owned by us.” They argue that the opportunities and control that consumers get by using their free tools outweigh the value of their personal data. But consumers are entirely unequipped to make this calculation—while the control you gain is obvious, the control you lose (because, say, your personal data is used to deny you an opportunity down the road) is invisible. The asymmetry of understanding is vast.

To make matters worse, even if you carefully read a company’s privacy policy and decide that giving over rights to your personal information is worth it under those conditions, most companies reserve the right to change the rules of the game at any time. Facebook, for example, promised its users that if they made a connection with a Page, that information would only be shared with their friends. But in 2010, it decided that all of that data should be made fully public; a clause in Facebook’s privacy policy (as with many corporate privacy policies) allows it to change the rules retroactively. In effect, this gives the company nearly unlimited power to use personal data as it sees fit.

To enforce Fair Information Practices, we need to start thinking of personal data as a kind of personal property and protecting our rights in it. Personalization is based on an economic transaction in which consumers are at an inherent disadvantage: While Google may know how much your race is worth to Google, you don’t. And while the benefits are obvious (free e-mail!), the drawbacks (opportunities and content missed) are invisible. Thinking of personal information as a form of property would help make this a fairer market.

Although personal information is property, it’s a special kind of property, because you still have a vested interest in your own data long after it’s been exposed. You probably wouldn’t want consumers to be able to sell all of their personal data, in perpetuity. France’s “moral rights,” under which artists retain some control over what’s done with a piece after it’s been sold, might be a better template. (Speaking of France, while European laws are much closer to Fair Information Practices in protecting personal information, by many accounts the enforcement is much worse, partly because it’s much harder for individuals to sue for breaches of the laws.)

Marc Rotenberg, executive director of the Electronic Privacy Information Center, says, “We shouldn’t have to accept as a starting point that we can’t have free services on the Internet without major privacy violations.” And this isn’t just about privacy. It’s also about how our data shapes the content and opportunities we see and don’t see. And it’s about being able to track and manage this constellation of data that represents our lives with the same ease that companies like Acxiom and Facebook already do.

Silicon Valley technologists sometimes portray this as an unwinnable fight—people have lost control of their personal data, they’ll never regain it, and they just have to grow up and live with it. But legal protections for personal information don’t need to be foolproof to work, any more than laws against theft are useless because people sometimes steal things and get away with it. The force of law adds friction to the transmission of some kinds of information—and in many cases, a little friction changes a lot.

And there are laws that do protect personal information even in this day and age. The Fair Credit Reporting Act, for example, ensures that credit agencies have to disclose their credit reports to consumers and notify consumers when they’re discriminated against on the basis of reports. That’s not much, but given that previously consumers couldn’t even see if their credit report contained errors (and 70 percent do, according to U.S. PIRG), it’s a step in the right direction.

A bigger step would be putting in place an agency to oversee the use of personal information. The EU and most other industrial nations have this kind of oversight, but the United States has lagged behind, scattering responsibilities for protecting personal information among the Federal Trade Commission, the Commerce Department, and other agencies. As we enter the second decade of the twenty-first century, it’s past time to take this concern seriously.

None of this is easy: Private data is a moving target, and the process of balancing consumers’ and citizens’ interests against those of these companies will take a lot of fine-tuning. At worst, new laws could be more onerous than the practices they seek to prevent. But that’s an argument for doing this right and doing it soon, before the companies that profit from private information have even greater incentives to try to block it from passing.

Given the money to be made and the power that money holds over the American legislative system, this shift won’t be easy. So to rescue our digital environment from itself, we’ll ultimately need a new constituency of digital environmentalists—citizens of this new space we’re all building who band together to protect what’s great about it.

In the next few years, the rules that will govern the next decade or more of online life will be written. And the big online conglomerates are lining up to help write them. The communications giants who own the Internet’s physical infrastructure have plenty of political clout. AT&T outranks oil companies and pharmaceutical companies as one of the top four corporate contributors to American politics. Intermediaries like Google get the importance of political influence, too: Eric Schmidt is a frequent White House visitor, and companies like Microsoft, Google, and Yahoo have spent millions on influence-mongering in Washington, D.C. Given all of the Web 2.0 hype about empowerment, it’s ironic that the old adage still applies: In the fight for control of the Internet, everyone’s organized but the people.

But that’s only because most of us aren’t in the fight. People who use the Internet and are invested in its future outnumber corporate lobbyists by orders of magnitude. There are literally hundreds of millions of us across all demographics—political, ethnic, socioeconomic, and generational—who have a personal stake in the outcome. And there are plenty of smaller online enterprises that have every interest in ensuring a democratic, public-spirited Web. If the great mass of us decide that an open, public-spirited Internet matters and speak up about it—if we join organizations like Free Press (a nonpartisan grassroots lobby for media reform) and make phone calls to Congress and ask questions at town hall meetings and contribute donations to the representatives who are leading the way—the lobbyists don’t stand a chance.

As billions come online in India and Brazil and Africa, the Internet is transforming into a truly global place. Increasingly, it will be the place where we live our lives. But in the end, a small group of American companies may unilaterally dictate how billions of people work, play, communicate, and understand the world. Protecting the early vision of radical connectedness and user control should be an urgent priority for all of us.
