5 The Public Is Irrelevant

The presence of others who see what we see and hear what we hear assures us of the reality of the world and ourselves.

—Hannah Arendt

It is an axiom of political science in the United States that the only way to neutralize the influence of the newspapers is to multiply their number.

—Alexis de Tocqueville

On the night of May 7, 1999, a B-2 stealth bomber left Whiteman Air Force Base in Missouri. The aircraft flew on an easterly course until it reached the city of Belgrade in Serbia, where a civil war was under way. Around midnight local time, the bomber delivered its cargo: four GPS-guided bombs, into which had been programmed an address that CIA documents identified as a possible arms warehouse. In fact, the address was that of the Chinese embassy. The building was demolished, and three Chinese journalists were killed.

The United States immediately apologized, calling the event an accident. On Chinese state TV, however, an official statement called the bombing a “barbaric attack and a gross violation of Chinese sovereignty.” Though President Bill Clinton tried to reach Chinese President Jiang Zemin, Jiang repeatedly rejected his calls; Clinton’s videotaped apology to the Chinese people was barred from Chinese media for four days.

As anti-U.S. riots began to break out in the streets, China’s largest newspaper, the People’s Daily, created an online chat forum called the Anti-Bombing Forum. Already, in 1999, chat forums were huge in China—much larger than they’ve ever been in the United States. As New York Times journalist Tom Downey explained a few years later, “News sites and individual blogs aren’t nearly as influential in China, and social networking hasn’t really taken off. What remain most vital are the largely anonymous online forums… that are much more participatory, dynamic, populist and perhaps even democratic than anything on the English-language Internet.” Tech writer Clive Thompson quotes Shanthi Kalathil, a researcher at the Carnegie Endowment, who says that the Anti-Bombing Forum helped to legitimize, among “an elite, wired section of the population,” the Chinese government’s position that the bombing was deliberate. The forum was a form of crowd-sourced propaganda: Rather than just telling Chinese citizens what to think, it lifted the voices of thousands of patriots aligned with the state.

Most of the Western reporting on Chinese information management focuses on censorship: Google’s choice to remove, temporarily, search results for “Tiananmen Square,” or Microsoft’s decision to ban the word “democracy” from Chinese blog posts, or the Great Firewall that sits between China and the outside world and sifts through every packet of information that enters or exits the country. Censorship in China is real: There are plenty of words that have been more or less stricken from the public discourse. When Thompson asked whether the popular Alibaba search engine would show results for dissident movements, CEO Jack Ma shook his head. “No! We are a business!” he said. “Shareholders want to make money. Shareholders want us to make the customer happy. Meanwhile we do not have any responsibilities saying we should do this or that political thing.”

In practice, the firewall is not so hard to circumvent. Corporate virtual private networks—Internet connections encrypted to prevent espionage—operate with impunity. Proxies and firewall workarounds like Tor connect in-country Chinese dissidents with even the most hard-core antigovernment Web sites. But to focus exclusively on the firewall’s inability to perfectly block information is to miss the point. China’s objective isn’t so much to blot out unsavory information as to alter the physics around it—to create friction for problematic information and to route public attention to progovernment forums. While it can’t block all of the people from all of the news all of the time, it doesn’t need to.

“What the government cares about,” Atlantic journalist James Fallows writes, “is making the quest for information just enough of a nuisance that people generally won’t bother.” The strategy, says Xiao Qiang of the University of California at Berkeley, is “about social control, human surveillance, peer pressure, and self-censorship.” Because there’s no official list of blocked keywords or forbidden topics published by the government, businesses and individuals censor themselves to avoid a visit from the police. Which sites are available changes daily. And while some bloggers suggest that the system’s unreliability is a result of faulty technology (“the Internet will override attempts to control it!”), for the government this is a feature, not a bug. James Mulvenon, the head of the Center for Intelligence Research and Analysis, puts it this way: “There’s a randomness to their enforcement, and that creates a sense that they’re looking at everything.”

Lest that sensation be too subtle, the Public Security Bureau in Shenzhen, China, developed a more direct approach: Jingjing and Chacha, the cartoon Internet Police. As the director of the initiative told the China Digital Times, he wanted “to let all Internet users know that the Internet is not a place beyond law [and that] the Internet Police will maintain order in all online behavior.” Icons of the male-female pair, complete with jaunty flying epaulets and smart black shoes, were placed on all major Web sites in Shenzhen; they even had instant-message addresses so that six police officers could field questions from the online crowds.

“People are actually quite free to talk about [democracy],” Google’s China point man, Kai-Fu Lee, told Thompson in 2006. “I don’t think they care that much. Hey, U.S. democracy, that’s a good form of government. Chinese government, good and stable, that’s a good form of government. Whatever, as long as I get to go to my favorite Web site, see my friends, live happily.” It may not be a coincidence that the Great Firewall stopped blocking pornography recently. “Maybe they are thinking that if Internet users have some porn to look at, then they won’t pay so much attention to political matters,” Michael Anti, a Beijing-based analyst, told the AP.

We usually think about censorship as a process by which governments alter facts and content. When the Internet came along, many hoped it would eliminate censorship altogether—the flow of information would simply be too swift and strong for governments to control. “There’s no question China has been trying to crack down on the Internet,” Bill Clinton told the audience at a March 2000 speech at Johns Hopkins University. “Good luck! That’s sort of like trying to nail Jell-O to the wall.”

But in the age of the Internet, it’s still possible for governments to manipulate the truth. The process has just taken a different shape: Rather than simply banning certain words or opinions outright, it’ll increasingly revolve around second-order censorship—the manipulation of curation, context, and the flow of information and attention. And because the filter bubble is primarily controlled by a few centralized companies, it’s not as difficult to adjust this flow on an individual-by-individual basis as you might think. Rather than decentralizing power, as its early proponents predicted, in some ways the Internet is concentrating it.

Lords of the Cloud

To get a sense of how personalization might be used for political ends, I talked to a man named John Rendon.

Rendon affably describes himself as an “information warrior and perception manager.” From the Rendon Group’s headquarters in Washington, D.C.’s, Dupont Circle, he provides those services to dozens of U.S. agencies and foreign governments. When American troops rolled into Kuwait City during the first Iraq war, television cameras captured hundreds of Kuwaitis joyfully waving American flags. “Did you ever stop to wonder,” he asked an audience later, “how the people of Kuwait City, after being held hostage for seven long and painful months, were able to get handheld American flags? And for that matter, the flags of other coalition countries? Well, you now know the answer. That was one of my jobs.”

Much of Rendon’s work is confidential—he enjoys a level of beyond–Top Secret clearance that even high-level intelligence analysts sometimes fail to get. His role in George W. Bush–era pro-U.S. propaganda in Iraq is unclear: While some sources claim he was a central figure in the effort, Rendon denies any involvement. But his dream is quite clear: Rendon wants to see a world where television “can drive the policy process,” where “border patrols [are] replaced by beaming patrols,” and where “you can win without fighting.”

Given all that, I was a bit surprised when the first weapon he referred me to was a very quotidian one: a thesaurus. The key to changing public opinion, Rendon said, is finding different ways to say the same thing. He described a matrix, with extreme language or opinion on one side and mild opinion on the other. By using sentiment analysis to figure out how people in a country felt about an event—say, a new arms deal with the United States—and identify the right synonyms to move them toward approval, you could “gradually nudge a debate.” “It’s a lot easier to be close to what reality is” and push it in the right direction, he said, than to make up a new reality entirely.
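Rendon’s “matrix” lends itself to a toy sketch. Everything below—the lexicon, the sentiment scale, the function names—is invented for illustration; real sentiment analysis would estimate the audience’s position from survey or text data rather than take it as a given.

```python
# A toy sketch of the "thesaurus" tactic Rendon describes: the same fact can
# be phrased at different intensities, and a messenger can pick the wording
# one notch milder than where the audience currently sits.

# Synonym "matrix": one idea, ordered from mild to extreme.
SYNONYMS = {
    "arms_deal": ["partnership", "agreement", "arms deal", "weapons pact", "war pact"],
}

def current_index(sentiment: float, n: int) -> int:
    """Map sentiment in [-1, 1] (hostile..supportive) to a rung on the ladder:
    a hostile audience currently frames the topic in the most extreme terms."""
    return round((1 - (sentiment + 1) / 2) * (n - 1))

def nudge_wording(topic: str, sentiment: float) -> str:
    """Choose the synonym one step milder than the audience's current framing."""
    ladder = SYNONYMS[topic]
    idx = current_index(sentiment, len(ladder))
    return ladder[max(idx - 1, 0)]  # one notch toward the mild end
```

The point of the sketch is the gradualism Rendon emphasizes: each message stays close to where the audience already is, moving the framing only one step at a time.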

Rendon had seen me talk about personalization at an event we both attended. Filter bubbles, he told me, provided new ways of managing perceptions. “It begins with getting inside the algorithm. If you could find a way to load your content up so that only your content gets pulled by the stalking algorithm, then you’d have a better chance of shaping belief sets,” he said. In fact, he suggested, if we looked in the right places, we might be able to see traces of this kind of thing happening now—sentiment being algorithmically shifted over time.

But if the filter bubble might make shifting perspectives easier in a future Iraq or Panama, Rendon was clearly concerned about the impact of self-sorting and personalized filtering for democracy at home. “If I’m taking a photo of a tree,” he said, “I need to know what season we’re in. Every season it looks different. It could be dying, or just losing its leaves in autumn.” To make good decisions, context is crucial—that’s why the military is so focused on what they call “360-degree situational awareness.” In the filter bubble, you don’t get 360 degrees—and you might not get more than one.

I returned to the question about using algorithms to shift sentiment. “How does someone game the system when it’s all about self-generated, self-reinforcing information flows? I have to think about it more,” Rendon said. “But I think I know how I’d do it.”

“How?” I asked.

He paused, then chuckled: “Nice try.” He’d already said too much.

The campaign of propaganda that Walter Lippmann railed against in World War I was a massive undertaking: To “goose-step the truth,” hundreds of newspapers nationwide had to be brought onboard. Now that every blogger is a publisher, the task seems nearly impossible. In 2010, Google chief Eric Schmidt echoed this sentiment, arguing in the journal Foreign Affairs that the Internet eclipses intermediaries and governments and empowers individuals to “consume, distribute, and create their own content without government control.”

It’s a convenient view for Google—if intermediaries are losing power, then the company’s merely a minor player in a much larger drama. But in practice, a great majority of online content reaches people through a small number of Web sites—Google foremost among them. These big companies represent new loci of power. And while their multinational character makes them resistant to some forms of regulation, they can also offer one-stop shopping for governments seeking to influence information flows.

As long as a database exists, it’s potentially accessible by the state. That’s why gun rights activists talk a lot about Alfred Flatow. Flatow was an Olympic gymnast and German Jew who in 1932 registered his gun in accordance with the laws of the waning Weimar Republic. In 1938, German police came to his door. They’d searched through the record, and in preparation for the Holocaust, they were rounding up Jews with handguns. Flatow was killed in a concentration camp in 1942.

For National Rifle Association members, the story is a powerful cautionary tale about the dangers of a national gun registry. As a result of Flatow’s story and thousands like it, the NRA has successfully blocked a national gun registry for decades. If a fascistic anti-Semitic regime came into power in the United States, it’d be hard put to identify gun-holding Jews using its own databases.

But the NRA’s focus may have been too narrow. Fascists aren’t known for carefully following the letter of the law regarding extragovernmental databases. And using the data that credit card companies use—or for that matter, building models based on the thousands of data points Acxiom tracks—it’d be a simple matter to predict with significant accuracy who has a gun and who does not.

Even if you aren’t a gun advocate, the story is worth paying attention to. The dynamics of personalization shift power into the hands of a few major corporate actors. And this consolidation of huge masses of data offers governments (even democratic ones) more potential power than ever.

Rather than housing their Web sites and databases internally, many businesses and start-ups now run on virtual computers in vast server farms managed by other companies. The enormous pool of computing power and storage these networked machines create is known as the cloud, and it allows clients much greater flexibility. If your business runs in the cloud, you don’t need to buy more hardware when your processing demands expand: You just rent a greater portion of the cloud. Amazon Web Services, one of the major players in the space, hosts thousands of Web sites and Web servers and undoubtedly stores the personal data of millions. On one hand, the cloud gives every kid in his or her basement access to nearly unlimited computing power to quickly scale up a new online service. On the other, as Clive Thompson pointed out to me, the cloud “is actually just a handful of companies.” When Amazon booted the activist Web site WikiLeaks off its servers under political pressure in 2010, the site immediately collapsed—there was nowhere to go.

Personal data stored in the cloud is also actually much easier for the government to search than information on a home computer. The FBI needs a warrant from a judge to search your laptop. But if you use Yahoo or Gmail or Hotmail for your e-mail, you “lose your constitutional protections immediately,” according to a lawyer for the Electronic Frontier Foundation. The FBI can just ask the company for the information—no judicial paperwork needed, no permission required—as long as it can argue later that it’s part of an “emergency.” “The cops will love this,” says privacy advocate Robert Gellman about cloud computing. “They can go to a single place and get everybody’s documents.”

Because of the economies of scale in data, the cloud giants are increasingly powerful. And because they’re so susceptible to regulation, these companies have a vested interest in keeping government entities happy. When the Justice Department requested billions of search records from AOL, Yahoo, and MSN in 2006, the three companies quickly complied. (Google, to its credit, opted to fight the request.) Stephen Arnold, an IT expert who worked at consulting firm Booz Allen Hamilton, says that Google at one point housed three officers of “an unnamed intelligence agency” at its headquarters in Mountain View. And Google and the CIA have invested together in a firm called Recorded Future, which focuses on using data connections to predict future real-world events.

Even if the consolidation of this data-power doesn’t result in more governmental control, it’s worrisome on its own terms.

One of the defining traits of the new personal information environment is that it’s asymmetrical. As Jonathan Zittrain argues in The Future of the Internet—and How to Stop It, “nowadays, an individual must increasingly give information about himself to large and relatively faceless institutions, for handling and use by strangers—unknown, unseen, and all too frequently, unresponsive.”

In a small town or an apartment building with paper-thin walls, what I know about you is roughly the same as what you know about me. That’s a basis for a social contract, in which we’ll deliberately ignore some of what we know. The new privacyless world does away with that contract. I can know a lot about you without your knowing I know. “There’s an implicit bargain in our behavior,” search expert John Battelle told me, “that we haven’t done the math on.”

If Sir Francis Bacon is right that “knowledge is power,” privacy proponent Viktor Mayer-Schönberger writes that what we’re witnessing now is nothing less than a “redistribution of information power from the powerless to the powerful.” It’d be one thing if we all knew everything about each other. It’s another when centralized entities know a lot more about us than we know about each other—and sometimes, more than we know about ourselves. If knowledge is power, then asymmetries in knowledge are asymmetries in power.

Google’s famous “Don’t be evil” motto is presumably intended to allay some of these concerns. I once explained to a Google search engineer that while I didn’t think the company was currently evil, it seemed to have at its fingertips everything it needed to do evil if it wished. He smiled broadly. “Right,” he said. “We’re not evil. We try really hard not to be evil. But if we wanted to, man, could we ever!”

Friendly World Syndrome

Most governments and corporations have used the new power that personal data and personalization offer fairly cautiously so far—China, Iran, and other oppressive regimes being the obvious exceptions. But even putting aside intentional manipulation, the rise of filtering has a number of unintended yet serious consequences for democracies. In the filter bubble, the public sphere—the realm in which common problems are identified and addressed—is just less relevant.

For one thing, there’s the problem of the friendly world. Communications researcher George Gerbner was one of the first theorists to look into how media affect our political beliefs, and in the mid-1970s, he spent a lot of time thinking about shows like Starsky and Hutch. It was a pretty silly program, filled with the shared clichés of seventies cop TV—the bushy moustaches, the twanging soundtracks, the simplistic good-versus-evil plots. And it was hardly the only one—for every Charlie’s Angels or Hawaii Five-O that earned a place in cultural memory, there are dozens of shows, like The Rockford Files, Get Christie Love, and Adam-12, that are unlikely to be resuscitated for ironic twenty-first-century remakes.

But Gerbner, a World War II veteran–turned–communications theorist who became dean of the Annenberg School of Communication, took these shows seriously. Starting in 1969, he began a systematic study of the way TV programming affects how we think about the world. As it turned out, the Starsky and Hutch effect was significant. When you asked TV watchers to estimate the percentage of the adult workforce made up of cops, they vastly overestimated it relative to non–TV watchers with the same education and demographic background. Even more troubling, kids who saw a lot of TV violence were much more likely to be worried about real-world violence.

Gerbner called this the mean world syndrome: If you grow up in a home where there’s more than, say, three hours of television per day, for all practical purposes, you live in a meaner world—and act accordingly—than your next-door neighbor who lives in the same place but watches less television. “You know, who tells the stories of a culture really governs human behavior,” Gerbner later said.

Gerbner died in 2005, but he lived long enough to see the Internet begin to break that stranglehold. It must have been a relief: Although our online cultural storytellers are still quite consolidated, the Internet at least offers more choice. If you want to get your local news from a blogger rather than a local TV station that trumpets crime rates to get ratings, you can.

But if the mean world syndrome poses less of a risk these days, there’s a new problem on the horizon: We may now face what persuasion-profiling theorist Dean Eckles calls a friendly world syndrome, in which some of the biggest and most important problems fail to reach our view at all.

While the mean world on television arises from a cynical “if it bleeds, it leads” approach to programming, the friendly world generated by algorithmic filtering may not be as intentional. According to Facebook engineer Andrew Bosworth, the team that developed the Like button originally considered a number of options—from stars to a thumbs-up sign (which in Iran and Thailand is an obscene gesture). For a month in the summer of 2007, the button was known as the Awesome button. Eventually, however, the Facebook team gravitated toward Like, which is more universal.

That Facebook chose Like instead of, say, Important is a small design decision with far-reaching consequences: The stories that get the most attention on Facebook are the stories that get the most Likes, and the stories that get the most Likes are, well, more likable.

Facebook is hardly the only filtering service that will tend toward an antiseptically friendly world. As Eckles pointed out to me, even Twitter, which has a reputation for putting filtering in the hands of its users, has this tendency. Twitter users see most of the tweets of the folks they follow, but if my friend is having an exchange with someone I don’t follow, it doesn’t show up. The intent is entirely innocuous: Twitter is trying not to inundate me with conversations I’m not interested in. But the result is that conversations between my friends (who will tend to be like me) are overrepresented, while conversations that could introduce me to new ideas are obscured.

Of course, friendly doesn’t describe all of the stories that pierce the filter bubble and shape our sense of the political world. As a progressive political news junkie, I get plenty of news about Sarah Palin and Glenn Beck. The valence of this news, however, is very predictable: People are posting it to signal their dismay with Beck’s and Palin’s rhetoric and to build a sense of solidarity with their friends, who presumably feel the same way. It’s rare that my assumptions about the world are shaken by what I see in my news feed.

Emotional stories are the ones that generally thrive in the filter bubble. The Wharton School study on the New York Times’s Most Forwarded List, discussed in chapter 2, found that stories that aroused strong feelings—awe, anxiety, anger, happiness—were much more likely to be shared. If television gives us a “mean world,” filter bubbles give us an “emotional world.”

One of the troubling side effects of the friendly world syndrome is that some important public problems will disappear. Few people seek out information about homelessness, or share it, for that matter. In general, dry, complex, slow-moving problems—a lot of the truly significant issues—won’t make the cut. And while we used to rely on human editors to spotlight these crucial problems, their influence is now waning.

Even advertising isn’t necessarily a foolproof way of alerting people to public problems, as the environmental group Oceana found out. In 2004, Oceana was running a campaign urging Royal Caribbean to stop dumping its raw sewage into the sea; as part of the campaign, it took out a Google ad that said “Help us protect the world’s oceans. Join the fight!” After two days, Google pulled the ads, citing “language advocating against the cruise line industry” that was in violation of its general guidelines about taste. Apparently, advertisers that implicated corporations in public issues weren’t welcome.

The filter bubble will often block out the things in our society that are important but complex or unpleasant. It renders them invisible. And it’s not just the issues that disappear. Increasingly, it’s the whole political process.

The Invisible Campaign

When George W. Bush came out of the 2000 election with far fewer votes than Karl Rove expected, Rove set in motion a series of experiments in microtargeted media in Georgia—looking at a wide range of consumer data (“Do you prefer beer or wine?”) to try to predict voting behavior and identify who was persuadable and who could be easily motivated to get to the polls. Though the findings are still secret, legend has it that the methods Rove discovered were at the heart of the GOP’s successful get-out-the-vote strategy in 2002 and 2004.

On the left, Catalist, a firm staffed by former Amazon engineers, has built a database of hundreds of millions of voter profiles. For a fee, organizing and activist groups (including MoveOn) query it to help determine which doors to knock on and to whom to run ads. And that’s just the start. In a memo for fellow progressives, Mark Steitz, one of the primary Democratic data gurus, recently wrote that “targeting too often returns to a bombing metaphor—dropping message from planes. Yet the best data tools help build relationships based on observed contacts with people. Someone at the door finds out someone is interested in education; we get back to that person and others like him or her with more information. Amazon’s recommendation engine is the direction we need to head.” The trend is clear: We’re moving from swing states to swing people.

Consider this scenario: It’s 2016, and the race is on for the presidency of the United States. Or is it?

It depends on who you are, really. If the data says you vote frequently and that you may have been a swing voter in the past, the race is a maelstrom. You’re besieged with ads, calls, and invitations from friends. If you vote intermittently, you get a lot of encouragement to get out to the polls.

But let’s say you’re more like an average American. You usually vote for candidates from one party. To the data crunchers from the opposing party, you don’t look particularly persuadable. And because you vote in presidential elections pretty regularly, you’re also not a target for “get out the vote” calls from your own. Though you make it to the polls as a matter of civic duty, you’re not that actively interested in politics. You’re more interested in, say, soccer and robots and curing cancer and what’s going on in the town where you live. Your personalized news feeds reflect those interests, not the news from the latest campaign stop.
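The triage implicit in this scenario can be written as a decision rule. The thresholds and labels below are invented for illustration; a real campaign would compute modeled turnout and persuasion scores from voter files and consumer data.

```python
# A sketch of campaign triage: turnout history plus persuadability determine
# whether the election is even visible to you. Thresholds are made up.

def contact_strategy(turnout_rate: float, persuadable: bool) -> str:
    """turnout_rate: share of recent elections in which this voter voted."""
    if persuadable and turnout_rate > 0.5:
        return "saturate"          # reliable swing voter: ads, calls, mail
    if not persuadable and turnout_rate < 0.5:
        return "get_out_the_vote"  # loyal but unreliable: turnout nudges
    if not persuadable:
        return "ignore"            # reliable partisan: no one spends on you
    return "low_priority"          # persuadable but rarely votes
```

The average voter in the scenario lands squarely in the “ignore” bucket: too loyal to persuade, too reliable to need a nudge, and therefore invisible to both campaigns.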

In a filtered world, with candidates microtargeting the few persuadables, would you know that the campaign was happening at all?

Even if you visit a site that aims to cover the race for a general audience, it’ll be difficult to tell what’s going on. What is the campaign about? There is no general, top-line message, because the candidates aren’t appealing to a general public. Instead, there are a series of message fragments designed to penetrate personalized filters.

Google is preparing for this future. Even in 2010, it staffed a round-the-clock “war room” for political advertising, aiming to be able to quickly sign off on and activate new ads even in the wee hours of October nights. Yahoo is conducting a series of experiments to determine how to match the publicly available list of who voted in each district with the click signals and Web history data it picks up on its site. And data-aggregation firms like Rapleaf in San Francisco are trying to correlate Facebook social graph information with voting behavior—so that they can show you the political ad that best works for you based on the responses of your friends.

The impulse to talk to voters about the things they’re actually interested in isn’t a bad one—it’d be great if mere mention of the word politics didn’t cause so many eyes to glaze over. And certainly the Internet has unleashed the coordinated energy of a whole new generation of activists—it’s easier than ever to find people who share your political passions. But while it’s easier than ever to bring a group of people together, as personalization advances it’ll become harder for any given group to reach a broad audience. In some ways, personalization poses a threat to public life itself.

Because the state of the art in political advertising is half a decade behind the state of the art in commercial advertising, most of this change is still to come. But for starters, filter-bubble politics could effectively make even more of us into single-issue voters. Like personalized media, personalized advertising is a two-way street: I may see an ad about, say, preserving the environment because I drive a Prius, but seeing the ad also makes me care more about preserving the environment. And if a congressional campaign can determine that this is the issue on which it’s most likely to persuade me, why bother filling me in on all of the other issues?

In theory, market dynamics will continue to encourage campaigns to reach out to nonvoters. But an additional complication is that more and more companies are also allowing users to remove advertisements they don’t like. For Facebook and Google, after all, seeing ads for ideas or services you don’t like is a failure. Because people tend to dislike ads containing messages they disagree with, this creates even less space for persuasion. “If a certain number of anti-Mitt Republicans saw an ad for Mitt Romney and clicked ‘offensive, etc.,’” writes Vincent Harris, a Republican political consultant, “they could block ALL of Mitt Romney’s ads from being shown, and kill the entire online advertising campaign regardless of how much money the Romney campaign wanted to spend on Facebook.” Forcing candidates to come up with more palatable ways to make their points might result in more thoughtful ads—but it might also drive up the cost of these ads, making it too costly for campaigns to ever engage the other side.
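The dynamic Harris warns about amounts to a threshold rule. The threshold value and names here are invented; platforms’ actual ad-quality systems are more complex and undisclosed.

```python
# Toy version of report-driven ad suppression: if "offensive" reports are
# treated as a quality signal, a coordinated burst of reports from opponents
# can shut down an advertiser's entire campaign.

OFFENSIVE_REPORT_THRESHOLD = 0.02  # reports per impression (invented value)

def campaign_active(impressions: int, offensive_reports: int) -> bool:
    """Keep serving this advertiser's ads only while the report rate is low."""
    if impressions == 0:
        return True  # nothing served yet, nothing to judge
    return offensive_reports / impressions < OFFENSIVE_REPORT_THRESHOLD
```

Under a rule like this, a few hundred hostile clicks can outweigh any amount of ad spending—which is exactly why it shrinks the space for persuading the other side.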

The most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument. As the number of different segments and messages increases, it becomes harder and harder for the campaigns to track who’s saying what to whom. TV is a piece of cake to monitor in comparison—you can just record the opposition’s ads in each cable district. But how does a campaign know what its opponent is saying if ads are only targeted to white Jewish men between twenty-eight and thirty-four who have expressed a fondness for U2 on Facebook and who donated to Barack Obama’s campaign?

When a conservative political group called Americans for Job Security ran ads in 2010 falsely accusing Representative Pete Hoekstra of refusing to sign a no-new-taxes pledge, he was able to show TV stations the signed pledge and have the ads pulled off the air. It’s not great to have TV station owners be the sole arbiters of truth—I’ve spent a fair amount of time arguing with them myself—but it is better to have some bar for truthfulness than none at all. It’s unclear that companies like Google have the resources or the interest to play truthfulness referee on the hundreds of thousands of different ads that will run in election cycles to come.

As personal political targeting increases, not only will it be more difficult for campaigns to respond to and fact-check each other, it’ll be more challenging for journalists as well. We may see an environment where the most important ads aren’t easily accessible to journalists and bloggers—it’s easy enough for campaigns to exclude them from their targeting and difficult for reporters to fabricate the profile of a genuine swing voter. (One simple solution to this problem would simply be to require campaigns to immediately disclose all of their online advertising materials and to whom each ad is targeted. Right now, the former is spotty and the latter is undisclosed.)

It’s not that political TV ads are so great. For the most part, they’re shrill, unpleasant, and unlikable. If we could, most of us would tune them out. But in the broadcast era, they did at least three useful things. They reminded people that there was an election in the first place. They established for everyone what the candidates valued, what their campaigns were about, what their arguments were: the parameters of the debate. And they provided a basis for a common conversation about the political decision we faced—something you could talk about in the line at the supermarket.

For all of their faults, political campaigns are one of the primary places where we debate our ideas about our nation. Does America condone torture? Are we a nation of social Darwinists or of social welfare? Who are our heroes, and who are our villains? In the broadcast era, campaigns have helped to delineate the answers to those questions. But they may not do so for very much longer.

Fragmentation

The aim of modern political marketing, consumer trends expert J. Walker Smith tells Bill Bishop in The Big Sort, is to “drive customer loyalty—and in marketing terms, drive the average transaction size or improve the likelihood that a registered Republican will get out and vote Republican. That’s a business philosophy applied to politics that I think is really dangerous, because it’s not about trying to form a consensus, to get people to think about the greater good.”

In part, this approach to politics is on the rise for the same reason the filter bubble is: Personalized outreach gives better bang for the political buck. But it’s also a natural outcome of a well-documented shift in how people in industrialized countries think about what’s important. When people don’t have to worry about having their basic needs met, they care a lot more about having products and leaders that represent who they are.

Professor Ron Inglehart calls this trend postmaterialism, and it’s a result of the basic premise, he writes, that “you place the greatest subjective value on the things in short supply.” In surveys spanning forty years and eighty countries, people who were raised in affluence—who never had to worry about their physical survival—behaved in ways strikingly different from those of their hungry parents. “We can even specify,” Inglehart writes in Modernization and Postmodernization, “with far better than random success, what issues are likely to be most salient in the politics of the respective types of societies.”

While there are still significant differences from country to country, postmaterialists share some important traits. They’re less reverent about authority and traditional institutions—the appeal of authoritarian strongmen appears to be connected to a basic fear for survival. They’re more tolerant of difference: One especially striking chart shows a strong correlation between level of life satisfaction and comfort with living next door to someone who’s gay. And while earlier generations emphasize financial achievement and order, postmaterialists value self-expression and “being yourself.”

Somewhat confusingly, postmaterialism doesn’t mean anticonsumption. Actually, the phenomenon is the bedrock of our current consumer culture: Whereas we once bought things because we needed them to survive, now we mostly buy things as a means of self-expression. And the same dynamics hold for political leadership: Increasingly, voters evaluate candidates on whether they represent an aspirational version of themselves.

The result is what marketers call brand fragmentation. When brands were primarily about validating the quality of a product—“Dove soap is pure and made of the best ingredients”—advertisements focused more on the basic value proposition. But when brands became vehicles for expressing identity, they needed to speak more intimately to different groups of people with divergent identities they wanted to express. And as a result, they started to splinter. Which is why what’s happened to Pabst Blue Ribbon beer is a good way of understanding the challenges faced by Barack Obama.

In the early 2000s, Pabst was struggling financially. It had maxed out among the white rural population that formed the core of its customer base, and it was selling less than 1 million barrels of beer a year, down from 20 million in 1970. If Pabst wanted to sell more beer, it had to look elsewhere, and Neal Stewart, a midlevel marketing manager, did. Stewart went to Portland, Oregon, where Pabst numbers were surprisingly strong and an ironic nostalgia for white working-class culture (remember trucker hats?) was widespread. If Pabst couldn’t get people to drink its watery brew sincerely, Stewart figured, maybe it could get people to drink it ironically. Pabst began to sponsor hipster events—gallery openings, bike messenger races, snowboarding competitions, and the like. Within a year, sales were way up—which is why, if you walk into a bar in certain Brooklyn neighborhoods, Pabst is more likely to be available than other low-end American beers.

That isn’t Pabst’s only excursion in reinvention. In China, where it is branded a “world-famous spirit,” Pabst has made itself into a luxury beverage for the cosmopolitan elite. Advertisements compare it to “Scotch whisky, French brandy, Bordeaux wine,” and present it in a fluted champagne glass atop a wooden cask. A bottle runs about $44 in U.S. currency.

What’s interesting about the Pabst story is that it’s not rebranding of the typical sort, in which a product aimed at one group is “repositioned” to appeal to another. Plenty of white working-class men still drink Pabst sincerely, as an affirmation of down-home culture. Urban hipsters drink it with a wink. And wealthy Chinese yuppies drink it as a champagne substitute and a signifier of conspicuous consumption. The same beverage means very different things to different people.

Driven by the centrifugal pull of different market segments—each of which wants products that represent its identity—political leadership is fragmenting in much the same way as PBR. Much has been made of Barack Obama’s chameleonic political style. “I serve as a blank screen,” he wrote in The Audacity of Hope in 2006, “on which people of vastly different political stripes project their own views.” Part of that is a result of Obama’s intrinsic political versatility. But it’s also a plus in an age of fragmentation.

(To be sure, the Internet can also facilitate consolidation, as Obama learned when his comment about people “clinging to guns and religion” to donors in San Francisco was reported by the Huffington Post and became a top campaign talking point against him. At the same time, Williamsburg hipsters who read the right blogs can learn about Pabst’s Chinese marketing scheme. But while this makes fragmentation a more perilous process and cuts against authenticity, it doesn’t fundamentally change the calculus. It just makes it more of an imperative to target well.)

The downside of this fragmentation, as Obama has learned, is that it is harder to lead. Acting differently with different political constituencies isn’t new—in fact, it’s probably about as old as politics itself. But the overlap—content that remains constant across all of those constituencies—is shrinking dramatically. You can stand for lots of different kinds of people or stand for something, but doing both is harder every day.

Personalization is both a cause and an effect of the brand fragmentation process. The filter bubble wouldn’t be so appealing if it didn’t play to our postmaterial desire to maximize self-expression. But once we’re in it, the process of matching who we are to content streams can lead to the erosion of common experience, and it can stretch political leadership to the breaking point.

Discourse and Democracy

The good news about postmaterial politics is that as countries become wealthier, they’ll likely become more tolerant, and their citizens will be more self-expressive. But there’s a dark side to it too. Ted Nordhaus, a student of Inglehart’s who focuses on postmaterialism in the environmental movement, told me that “the shadow that comes with postmaterialism is profound self-involvement…. We lose all perspective on the collective endeavors that have made the extraordinary lives we live possible.” In a postmaterial world where your highest task is to express yourself, the public infrastructure that supports this kind of expression falls out of the picture. But while we can lose sight of our shared problems, they don’t lose sight of us.

A few times a year when I was growing up, the nine-hundred-person hamlet of Lincolnville, Maine, held a town meeting. This was my first impression of democracy: A few hundred residents crammed into the grade school auditorium or basement to discuss school additions, speed limits, zoning regulations, and hunting ordinances. In the aisle between the rows of gray metal folding chairs was a microphone on a stand, where people would line up to say their piece.

It was hardly a perfect system: Some speakers droned on; others were shouted down. But it gave all of us a sense of the kinds of people that made up our community that we wouldn’t have gotten anywhere else. If the discussion was about encouraging more businesses along the coast, you’d hear from the wealthy summer vacationers who enjoyed their peace and quiet, the back-to-the-land hippies with antidevelopment sentiments, and the families who’d lived in rural poverty for generations and saw the influx as a way up and out. The conversation went back and forth, sometimes closing toward consensus, sometimes fragmenting into debate, but usually resulting in a decision about what to do next.

I always liked how those town meetings worked. But it wasn’t until I read On Dialogue that I fully understood what they accomplished.

Born to Hungarian and Lithuanian Jewish furniture store owners in Wilkes-Barre, Pennsylvania, David Bohm came from humble roots. But when he arrived at the University of California–Berkeley, he quickly fell in with a small group of theoretical physicists, under the direction of Robert Oppenheimer, who were racing to build the atomic bomb. By the time he died at seventy-four in October 1992, many of his colleagues would remember Bohm as one of the great physicists of the twentieth century.

But if quantum math was his vocation, there was another matter that took up much of Bohm’s time. Bohm was preoccupied with the problems created by advanced civilization, especially the possibility of nuclear war. “Technology keeps on advancing with greater and greater power, either for good or for destruction,” he wrote. “What is the source of all this trouble? I’m saying that the source is basically in thought.” For Bohm, the solution became clear: It was dialogue. In 1992, one of his definitive texts on the subject was published.

To communicate, Bohm wrote, literally means to make something common. And while sometimes this process of making common involves simply sharing a piece of data with a group, more often it involves the group’s coming together to create a new, common meaning. “In dialogue,” he writes, “people are participants in a pool of common meaning.”

Bohm wasn’t the first theorist to see the democratic potential of dialogue. Jürgen Habermas, the dean of media theory for much of the twentieth century, had a similar view. For both, dialogue was special because it provided a way for a group of people to democratically create their culture and to calibrate their ideas in the world. In a way, you couldn’t have a functioning democracy without it.

Bohm saw an additional reason why dialogue was useful: It provided people with a way of getting a sense of the whole shape of a complex system, even the parts that they didn’t directly participate in. Our tendency, Bohm says, is to rip apart and fragment ideas and conversations into bits that have no relation to the whole. He used the example of a watch that has been shattered: Unlike the parts that made up the watch previously, the pieces have no relation to the watch as a whole. They’re just little bits of glass and metal.

It’s this quality that made the Lincolnville town meetings something special. Even if the group couldn’t always agree on where to go, the process helped to develop a shared map for the terrain. Each of the parts understood its relationship to the whole. And that, in turn, made democratic governance possible.

The town meetings had another benefit: They equipped us to deal more handily with the problems that did emerge. In the science of social mapping, the definition of a community is a set of nodes that are densely interconnected—my friends form a community if they don’t just know me but also have independent relationships with one another. Communication builds stronger community.

Ultimately, democracy works only if we citizens are capable of thinking beyond our narrow self-interest. But to do so, we need a shared view of the world we cohabit. We need to come into contact with other people’s lives and needs and desires. The filter bubble pushes us in the opposite direction—it creates the impression that our narrow self-interest is all that exists. And while this is great for getting people to shop online, it’s not great for getting people to make better decisions together.

“The prime difficulty” of democracy, John Dewey wrote, “is that of discovering the means by which a scattered, mobile, and manifold public may so recognize itself as to define and express its interests.” In the early days of the Internet, this was one of the medium’s great hopes—that it would finally offer a medium whereby whole towns—and indeed countries—could co-create their culture through discourse. Personalization has given us something very different: a public sphere sorted and manipulated by algorithms, fragmented by design, and hostile to dialogue.

Which raises an important question: Why would the engineers who designed these systems want to build them this way?