7 What You Want, Whether You Want It or Not

There will always be plenty of things to compute in the detailed affairs of millions of people doing complicated things.

—computing pioneer Vannevar Bush, 1945

All collected data had come to a final end. Nothing was left to be collected. But all collected data had yet to be completely correlated and put together in all possible relationships.

—from Isaac Asimov’s short story “The Last Question”

I recently received a friend invitation on Facebook from someone whose name I didn’t recognize, a curvy-figured girl with big eyes and thick lashes. Clicking to figure out who she was (and, I’ll admit, to look more closely), I read over her profile. It didn’t tell me a lot about her, but it seemed like the profile of someone I might plausibly know. A few of our interests were the same.

I looked again at the eyes. They were a little too big.

In fact, when I looked more closely, I realized her profile picture wasn’t even a photograph—it had been rendered by a 3-D graphics program. There was no such person. My new attractive would-be friend was a figment of software, crawling through friend connections to harvest data from Facebook users. Even the list of movies and books she liked appeared to have been ripped from the lists of her “friends.”

For lack of a better word, let’s call her an advertar—a virtual being with a commercial purpose. As the filter bubble’s membrane becomes thicker and harder to penetrate, advertars could become a powerful adaptive strategy. If I only get the news from my code and my friends, the easiest way to get my attention might be friends who are code.

The technologies that support personalization will only get more powerful in the years ahead. Sensors that can pick up new personal signals and data streams will become even more deeply embedded in the surface of everyday life. The server farms that support the Googles and Amazons will grow, while the processors inside them shrink; that computing power will be unleashed to make increasingly precise guesses about our preferences and even our interior lives. Personalized “augmented reality” technologies will project an overlay over our experience of the real world, not just the digital one. Even Nicholas Negroponte’s intelligent agents may make a comeback. “Markets are strong forces,” says Bill Joy, the legendary programmer who cofounded Sun Microsystems. “They take you somewhere very quickly. And if where they take you is not where you want to go, you’ve got a problem.”

In 2002, the sci-fi movie Minority Report featured personalized holographic advertisements that accosted pedestrians as they walked down the street. In Tokyo, the first Minority Report–style personalized billboard has gone up outside of the NEC corporation’s headquarters (minus, for now, the holography). It’s powered by the company’s PanelDirector software, which scans the faces of passersby and matches them to a database of ten thousand stored photos to make guesses about their age and gender. When a young woman steps in front of the display, it responds instantly by showing her ads tailored to her. IBM’s on the case, too; its prototype advertising displays use remotely readable identity cards to greet viewers by name.

In Reality Hunger, a book-length essay composed entirely of text fragments and reworked quotations, David Shields makes the case for the growing movement of artists who are “breaking larger and larger chunks of ‘reality’ into their work.” Shields’s examples are far-ranging, including The Blair Witch Project, Borat, and Curb Your Enthusiasm; karaoke, VH1’s Behind the Music, and public access TV; The Eminem Show and The Daily Show, documentary and mockumentary. These pieces, he says, are the most vital art of our time, part of a new mode characterized by “a deliberate unartiness” and “a blurring (to the point of invisibility) of any distinction between fiction and nonfiction: the lure and blur of the real.” Truthiness, in Shields’s view, is the future of art.

As goes art, so goes technology. The future of personalization—and of computing itself—is a strange amalgam of the real and the virtual. It’s a future where our cities and our bedrooms and all of the spaces in between exhibit what researchers call “ambient intelligence.” It’s a future where our environments shift around us to suit our preferences and even our moods. And it’s a future where advertisers will develop ever more powerful and reality-bending ways to make sure their products are seen.

The days when the filter bubble disappears when we step away from our computers, in other words, are numbered.

The Robot with Gaydar

Stanford Law professor Ryan Calo thinks a lot about robots, but he doesn’t spend much time musing about a future of cyborgs and androids. He’s more interested in Roombas, the little robotic vacuum cleaners currently on the market. Roomba owners name their machines like pets. They delight in watching the little bumbling devices wander around the room. Roombas provoke an emotional response, even a sense of relationship. And in the next few years, they’ll be joined by a small army of consumer-electronic brethren.

The increasing prevalence of humanlike machines in everyday life presents us with new dilemmas in personalization and privacy. The emotions provoked by “humanness,” whether virtual (advertars) or physical (humanlike robots), are powerful. And when people begin to relate to machines as they do to humans, they can be convinced to reveal implicit information that they would never directly give away.

For one thing, the presence of humanoid faces changes behavior, compelling people to behave more like they’re in public. The Chinese experiment with Jingjing and Chacha, the cartoon Internet police, is one example of this power. On the one hand, Calo points out, people are much less likely to volunteer private information when being interrogated by a virtual agent than when simply filling out a form. This is part of why the intelligent-agent craze didn’t work out the first time around: In many cases, it’s easier to get people to share personal information if they feel as though they’re privately entering it into an impersonal machine rather than sharing it with people.

On the other hand, when Harvard researchers Terence Burnham and Brian Hare asked volunteers to play a game in which they could choose to donate money or keep it, a picture of the friendly-looking robot Kismet increased donations by 30 percent. Humanlike agents tend to make us clam up about the intimate details of our lives, because they make us feel as if we’re actually around other people. But that same presence can be a comfort: For elderly folks living alone or a child recovering in a hospital, a virtual or robotic friend can be a great relief from loneliness and boredom.

This is all to the good. But humanlike agents also have a great deal of power to shape our behavior. “Computers programmed to be polite, or to evidence certain personalities,” Calo writes, “have profound effects on the politeness, acceptance, and other behavior of test subjects.” And because they engage with people, they can pull out implicit information that we’d never intend to divulge. A flirty robot, for example, might be able to read subconscious cues—eye contact, body language—to quickly identify personality traits of its interlocutor.

The challenge, Calo says, is that it’s hard to remember that humanlike software and hardware aren’t human at all. Advertars or robotic assistants may have access to the whole set of personal data that exists online—they may know more about you, more precisely, than your best friend. And as persuasion and personality profiling get better, they’ll develop an increasingly nuanced sense of how to shift your behaviors.

Which brings us back to the advertar. In an attention-limited world, lifelike, and especially humanlike, signals stand out—we’re hardwired to pay attention to them. It’s far easier to ignore a billboard than an attractive person calling your name. And as a result, advertisers may well decide to invest in technology that allows them to insert human advertisements into social spaces. The next attractive man or woman who friends you on Facebook could turn out to be an ad for a bag of chips.

As Calo puts it, “people are not evolved to twentieth-century technology. The human brain evolved in a world in which only humans exhibited rich social behaviors, and a world in which all perceived objects were real physical objects.” Now all that’s shifting.

The Future Is Already Here

The future of personalization is driven by a simple economic calculation. Signals about our personal behavior and the computing power necessary to crunch through them are becoming cheaper than ever to acquire. And as that cost collapses, strange new possibilities come within reach.

Take facial recognition. Using MORIS, a $3,000 iPhone app, the police in Brockton, Massachusetts, can snap a photo of a suspect and check his or her identity and criminal record in seconds. Tag a few pictures with Picasa, Google’s photo-management tool, and the software can already pick out who’s who in a collection of photos. And according to Eric Schmidt, the same is true of Google’s cache of images from the entire Web. “Give us fourteen images of you,” he told a crowd of technologists at the Techonomy Conference in 2010, “and we can find other images of you with ninety-five percent accuracy.”

As of the end of 2010, however, this feature isn’t available in Google Image Search. Face.com, an Israeli start-up, may offer the service before the search giant does. It’s not every day that a company develops a highly useful and world-changing technology and then waits for a competitor to launch it first. But Google has good reason to be concerned: The ability to search by face will shatter many of our cultural illusions about privacy and anonymity.

Many of us will be caught in flagrante delicto. It’s not just that your friends (and enemies) will be able to easily find pictures other people have taken of you—as if the whole Internet has been tagged on Facebook. They will also be able to find pictures other people took of other people, in which you happen to be walking by or smoking a cigarette in the background.

After the data has been crunched, the rest is easy. Want to search for two people—say your boyfriend and that overly friendly intern you suspect him of dallying with, or your employee and that executive who’s been trying to woo him away? Easy. Want to build a Facebook-style social graph by looking at who appears most often with whom? A cinch. Want to see which of your coworkers posted profiles on anonymous dating sites—or, for that matter, photos of themselves in various states of undress? Want to see what your new friend used to look like in his drugged out days? Want to find mobsters in the Witness Protection program, or spies in deep cover? The possibilities are nearly limitless.

To be sure, doing face recognition right takes an immense amount of computing power. The tool in Picasa is slow—on my laptop, it crunches for minutes. So for the time being, it may be too expensive to do it well for the whole Web. But face recognition has Moore’s law, one of the most powerful laws in computing, on its side: Every year, as processor speed per dollar doubles, it’ll get twice as cheap to do. Sooner or later, mass face recognition—perhaps even in real time, which would allow for recognition on security and video feeds—will roll out.
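The cost curve described above is easy to sketch. The snippet below is a back-of-the-envelope illustration, not a forecast: it simply assumes, as the paragraph does, a clean halving of cost every year for a fixed computing job (the starting price is invented for the example).

```python
def cost_after(years, initial_cost=1.0, halving_period_years=1.0):
    """Cost of running a fixed computation after `years` of steady halving."""
    return initial_cost * 0.5 ** (years / halving_period_years)

# A hypothetical face-recognition job costing $1,000,000 today:
for y in (0, 5, 10, 20):
    print(f"year {y}: ${cost_after(y, 1_000_000):,.2f}")
```

Under that assumption, a million-dollar job costs under a thousand dollars a decade later, which is why “too expensive for the whole Web” is only ever a temporary condition.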

Facial recognition is especially significant because it’ll create a kind of privacy discontinuity. We’re used to a public semianonymity—while we know we may be spotted in a club or on the street, it’s unlikely that we will be. But as security-camera and camera-phone pictures become searchable by face, that expectation will slip away. Shops with cameras facing the doors—and aisles—will be able to watch precisely where individual customers wander, what they pick up, and how this correlates with the data already collected about them by firms like Acxiom. And this powerful set of data—where you go and what you do, as indicated by where your face shows up in the bitstream—can be used to provide ever more custom-tailored experiences.

It’s not just people that will be easier than ever to track. It’s also individual objects—what some researchers are calling the “Internet of things.”

As sci-fi author William Gibson once said, “The future is already here—it’s just not very evenly distributed.” It shows up in some places before others. And one of the places this particular aspect of the future has shown up first, oddly enough, is the Coca-Cola Village Amusement Park, a holiday village, theme park, and marketing event that opens seasonally in Israel. At the park, which was sponsored by Facebook and Coke, teenagers attending in the summer of 2010 were given bracelets containing a tiny piece of circuitry that allowed them to Like real-world objects. Wave the bracelet at the entrance to a ride, for example, and a status update posted to your account testifies that you’re about to embark. Take a picture of your friends with a special camera and wave the bracelet at it, and the photo’s posted with your identity already tagged.

Embedded in each bracelet is a radio-frequency identification (RFID) chip. RFID chips don’t need batteries, and there’s only one way to use them: call-and-response. Provide a little wireless electromagnetic power, and the chip chirps out a unique identifying code. Correlate the code with, say, a Facebook account, and you’re in business. A single chip can cost as little as $.07, and they’ll cost far less in the years to come.
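The call-and-response pattern is simple enough to mimic in a few lines. This is a toy sketch of the logic, not real RFID code: the tag ID, the `RFIDChip` class, and the account mapping are all invented for illustration.

```python
class RFIDChip:
    """A passive tag: when energized by a reader, it chirps back a fixed ID."""
    def __init__(self, uid):
        self.uid = uid

    def respond(self, powered):
        # No power, no response—passive tags have no battery of their own.
        return self.uid if powered else None

# Correlate tag IDs with accounts (hypothetical data).
accounts = {"04:A2:F9": "facebook.com/alice"}

bracelet = RFIDChip("04:A2:F9")
uid = bracelet.respond(powered=True)  # the reader supplies power wirelessly
print(accounts.get(uid))
```

Everything interesting happens in the lookup table: the chip itself knows nothing but its own serial number.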

Suddenly it’s possible for businesses to track each individual object they make across the globe. Affix a chip to an individual car part, and you can watch as the part travels to the car factory, gets assembled into a car, and makes its way to the show floor and then someone’s garage. No more inventory shrinkage, no more having to recall whole models of products because of the errors of one factory.

Conversely, RFID provides a framework by which a home could automatically inventory every object inside it—and track which objects are in which rooms. With a powerful enough signal, RFID could be a permanent solution to the lost-keys problem—and bring us face-to-face with what Forbes writer Reihan Salam calls “the powerful promise of a real world that can be indexed and organized as cleanly and coherently as Google has indexed and organized the Web.”

This phenomenon is called ambient intelligence. It’s based on a simple observation: The items you own, where you put them, and what you do with them is, after all, a great signal about what kind of person you are and what kind of preferences you have. “In the near future,” writes a team of ambient intelligence experts led by David Wright, “every manufactured product—our clothes, money, appliances, the paint on our walls, the carpets on our floors, our cars, everything—will be embedded with intelligence, networks of tiny sensors and actuators, which some have termed ‘smart dust.’”

And there’s a third set of powerful signals that is getting cheaper and cheaper. In 1990, it cost about $10 to sequence a single base pair—one “letter”—of DNA. By 1999, that number had dropped to $.90. In 2004, it crossed the $.01 threshold, and now, as I write in 2010, it costs one ten-thousandth of $.01. By the time this book comes out, it’ll undoubtedly cost exponentially less. By some point mid-decade, we ought to be able to sequence any random whole human genome for less than the cost of a sandwich.
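The quoted figures imply not just a falling price but an accelerating fall. As a rough sanity check on the paragraph’s numbers, this sketch computes the implied cost-halving time between each pair of data points (taking “one ten-thousandth of $.01” as $0.000001):

```python
import math

# (year, dollars per base pair), from the figures quoted above
costs = [(1990, 10.0), (1999, 0.90), (2004, 0.01), (2010, 0.000001)]

for (y0, c0), (y1, c1) in zip(costs, costs[1:]):
    halvings = math.log2(c0 / c1)          # how many times the cost halved
    print(f"{y0}-{y1}: cost halved every {(y1 - y0) / halvings:.2f} years")
```

The halving time shrinks from roughly two and a half years in the 1990s to under six months by the late 2000s—a decline that outpaces even Moore’s law.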

It seems like something out of Gattaca, but the allure of adding this data to our profiles will be strong. While it’s increasingly clear that our DNA doesn’t determine everything about us—other cellular information sets, hormones, and our environment play a large role—there are undoubtedly numerous correlations between genetic material and behavior to be made. It’s not just that we’ll be able to predict and avert upcoming health issues with far greater accuracy—though that alone will be enough to get many of us in the door. By adding together DNA and behavioral data—like the location information from iPhones or the text of Facebook status updates—an enterprising scientist could run statistical regression analysis on an entire society.

In all this data lie patterns yet undreamed of. Properly harnessed, it will fuel a level of filtering acuity that’s hard to imagine—a world in which nearly all of our objective experience is quantified, captured, and used to inform our environments. The biggest challenge, in fact, may be thinking of the right questions to ask of these enormous flows of binary digits. And increasingly, code will learn to ask these questions itself.

The End of Theory

In December 2010, researchers at Harvard, Google, Encyclopædia Britannica, and the American Heritage Dictionary announced the results of a four-year joint effort. The team had built a database spanning the entire contents of over five hundred years’ worth of books—5.2 million books in total, in English, French, Chinese, German, and other languages. Now any visitor to Google’s Ngram Viewer page can query it and watch how phrases rise and fall in popularity over time, from neologism to the long fade into obscurity. For the researchers, the tool suggested even grander possibilities—a “quantitative approach to the humanities,” in which cultural changes can be scientifically mapped and measured.
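At its core, the tool answers a simple question: how often does a phrase appear in the books of a given year? The sketch below shows that raw signal on an invented three-record corpus—a stand-in, obviously, for the 5.2 million scanned books.

```python
from collections import defaultdict

# Toy corpus: (year, text) records standing in for millions of scanned books
corpus = [
    (1900, "the telegraph and the telephone"),
    (1950, "the television and the telephone"),
    (2000, "the internet and the telephone"),
]

def ngram_counts(phrase):
    """Occurrences of `phrase` per year—the raw signal behind an n-gram plot."""
    counts = defaultdict(int)
    for year, text in corpus:
        counts[year] += text.count(phrase)
    return dict(counts)

print(ngram_counts("telephone"))
```

Plot those per-year counts (normalized by the total words published that year) and you have a frequency curve; compare the curve for the same phrase across two languages and you have the censorship detector the researchers describe.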

The initial findings suggest how powerful the tool can be. By looking at the references to previous dates, the team found that “humanity is forgetting its past faster with each passing year.” And, they argued, the tool could provide “a powerful tool for automatically identifying censorship and propaganda” by identifying countries and languages in which there was a statistically abnormal absence of certain ideas or phrases. Leon Trotsky, for example, shows up far less in midcentury Russian books than in English or French books from the same time.

The project is undoubtedly a great service to researchers and the casually curious public. But serving academia probably wasn’t Google’s only motive. Remember Larry Page’s declaration that he wanted to create a machine “that can understand anything,” which some people might call artificial intelligence? In Google’s approach to creating intelligence, the key is data, and the 5 million digitized books contain an awful lot of it. To grow your artificial intelligence, you need to keep it well fed.

To get a sense of how this works, consider Google Translate, which can now do a passable job translating automatically among nearly sixty languages. You might imagine that Translate was built with a really big, really sophisticated set of translating dictionaries, but you’d be wrong. Instead, Google’s engineers took a probabilistic approach: They built software that could identify which words tended to appear in connection with which, and then sought out large chunks of data that were available in multiple languages to train the software on. One of the largest chunks was patent and trademark filings, which are useful because they all say the same thing, they’re in the public domain, and they have to be filed globally in scores of different languages. Set loose on a hundred thousand patent applications in English and French, Translate could determine that when “word” showed up in the English document, “mot” was likely to show up in the corresponding French paper. And as users correct Translate’s work over time, it gets better and better.
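The statistical heart of that approach can be shown in miniature. This is a deliberately crude sketch—real statistical translation uses far more sophisticated alignment models—but it captures the core idea: count which foreign words co-occur with which English words across sentence pairs, then pick the strongest association. The three-sentence “corpus” is invented.

```python
from collections import Counter
from itertools import product

# Toy parallel corpus (a hypothetical stand-in for patent filings)
pairs = [
    ("the word is new", "le mot est nouveau"),
    ("a word a day", "un mot par jour"),
    ("the day is long", "le jour est long"),
]

# Count every English-French word pairing across aligned sentences.
cooc = Counter()
for en, fr in pairs:
    for e, f in product(en.split(), fr.split()):
        cooc[e, f] += 1

def best_match(english_word):
    """French word that co-occurs most often with `english_word`."""
    candidates = {f: n for (e, f), n in cooc.items() if e == english_word}
    return max(candidates, key=candidates.get)

print(best_match("word"))
```

With only three sentence pairs, “mot” already wins for “word” because it appears in both sentences containing “word,” while every other French word appears in only one. Scale the same counting up to a hundred thousand patents and the associations sharpen dramatically.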

What Translate is doing with foreign languages Google aims to do with just about everything. Cofounder Sergey Brin has expressed his interest in plumbing genetic data. Google Voice captures millions of minutes of human speech, which engineers are hoping they can use to build the next generation of speech recognition software. Google Research has captured most of the scholarly articles in the world. And of course, Google’s search users pour billions of queries into the machine every day, which provide another rich vein of cultural information. If you had a secret plan to vacuum up an entire civilization’s data and use it to build artificial intelligence, you couldn’t do a whole lot better.

As Google’s protobrain increases in sophistication, it’ll open up remarkable new possibilities. Researchers in Indonesia can benefit from the latest papers in Stanford (and vice versa) without waiting for translation delays. In a matter of a few years, it may be possible to have an automatically translated voice conversation with someone speaking a different language, opening up whole new channels of cross-cultural communication and understanding.

But as these systems become increasingly “intelligent,” they also become harder to control and understand. It’s not quite right to say they take on a life of their own—ultimately, they’re still just code. But they reach a level of complexity at which even their programmers can’t fully explain any given output.

This is already true to a degree with Google’s search algorithm. Even to its engineers, the workings of the algorithm are somewhat mysterious. “If they opened up the mechanics,” says search expert Danny Sullivan, “you still wouldn’t understand it. Google could tell you all two hundred signals it uses and what the code is and you wouldn’t know what to do with them.” The core software engine of Google search is hundreds of thousands of lines of code. According to one Google employee I talked to who had spoken to the search team, “The team tweaks and tunes, they don’t really know what works or why it works, they just look at the result.”

Google promises that it doesn’t tilt the deck in favor of its own products. But the more complex and “intelligent” the system gets, the harder it’ll be to tell. Pinpointing where bias or error exists in a human brain is difficult or impossible—there are just too many neurons and connections to narrow it down to a single malfunctioning chunk of tissue. And as we rely on intelligent systems like Google’s more, their opacity could cause real problems—like the still-mysterious machine-driven “flash crash” that caused the Dow to drop 600 points in a few minutes on May 6, 2010.

In a provocative article in Wired, editor-in-chief Chris Anderson argued that huge databases render scientific theory itself obsolete. Why spend time formulating human-language hypotheses, after all, when you can quickly analyze trillions of bits of data and find the clusters and correlations? He quotes Peter Norvig, Google’s research director: “All models are wrong, and increasingly you can succeed without them.” There’s plenty to be said for this approach, but it’s worth remembering the downside: Machines may be able to see results without models, but humans can’t understand without them. There’s value in making the processes that run our lives comprehensible to the humans who, at least in theory, are their beneficiaries.

Supercomputer inventor Danny Hillis once said that the greatest achievement of human technology is tools that allow us to create more than we understand. That’s true, but the same trait is also the source of our greatest disasters. The more the code driving personalization comes to resemble the complexity of human cognition, the harder it’ll be to understand why or how it’s making the decisions it makes. A simple coded rule that bars people from one group or class from certain kinds of access is easy to spot, but when the same action is the result of a swirling mass of correlations in a global supercomputer, it’s a trickier problem. And the result is that it’s harder to hold these systems and their tenders accountable for their actions.

No Such Thing as a Free Virtual Lunch

In January 2009, if you were listening to one of twenty-five radio stations in Mexico, you might have heard the accordion ballad “El más grande enemigo.” Though the tune is polka-ish and cheery, the lyrics depict a tragedy: a migrant seeks to illegally cross the border, is betrayed by his handler, and is left in the blistering desert sun to die. Another song from the Migra corridos album tells a different piece of the same sad tale:

To cross the border

I got in the back of a trailer

There I shared my sorrows

With forty other immigrants

I was never told

That this was a trip to hell.

If the lyrics aren’t exactly subtle about the dangers of crossing the border, that’s the point. Migra corridos was produced by a contractor working for the U.S. Border Patrol, as part of a campaign to stem the tide of immigrants along the border. The song is a prime example of a growing trend in what marketers delicately call “advertiser-funded media,” or AFM.

Product placement has been in vogue for decades, and AFM is its natural next step. Advertisers love product placement because in a media environment in which it’s harder and harder to get people to pay attention to anything—especially ads—it provides a kind of loophole. You can’t fast-forward past product placement. You can’t miss it without missing some of the actual content. AFM is just a natural extension of the same logic: Media have always been vehicles for selling products, the argument goes, so why not just cut out the middleman and have product makers produce the content themselves?

In 2010, Walmart and Procter & Gamble announced a partnership to produce Secrets of the Mountain and The Jensen Project, family movies that will feature characters using the companies’ products throughout. Michael Bay, the director of Transformers, has started a new company called the Institute, whose tagline is “Where Brand Science Meets Great Storytelling.” Hansel and Gretel in 3-D, its first feature production, will be specially crafted to provide product-placement hooks throughout.

Now that the video-game industry is far more profitable than the movie industry, it provides a huge opportunity for in-game advertising and product placement as well. Massive Incorporated, a game advertising platform acquired by Microsoft for between $200 million and $400 million, has placed ads on in-game billboards and city walls for companies like Cingular and McDonald’s, and has the capacity to track which individual users saw which advertisements for how long. Splinter Cell, a game by Ubisoft, works placement for products like Axe deodorant into the architecture of the cityscape that characters travel through.

Even books aren’t immune. Cathy’s Book, a young-adult title published in September 2006, has its heroine applying “a killer coat of Lipslicks in ‘Daring.’” That’s not a coincidence—Cathy’s Book was published by Procter & Gamble, the corporate owner of Lipslicks.

If the product placement and advertiser-funded media industries continue to grow, personalization will offer whole new vistas of possibility. Why name-drop Lipslicks when your reader is more likely to buy Cover Girl? Why have a video-game chase scene through Macy’s when the guy holding the controller is more of an Old Navy type? When software engineers talk about architecture, they’re usually talking metaphorically. But as people spend more of their time in virtual, personalizable places, there’s no reason that these worlds can’t change to suit users’ preferences. Or, for that matter, a corporate sponsor’s.

A Shifting World

The enriched psychological models and new data flows measuring everything from heart rate to music choices open up new frontiers for online personalization, in which what changes isn’t just a choice of products or news clips, but the look and feel of the site on which they’re displayed.

Why should Web sites look the same to every viewer or customer? Different people don’t respond only to different products—they respond to different design sensibilities, different colors, even different types of product descriptions. It’s easy enough to imagine a Walmart Web site with softened edges and warm pastels for some customers and a hard-edged, minimalist design for others. And once that capacity exists, why stick with just one design per customer? Maybe it’s best to show me one side of the Walmart brand when I’m angry and another when I’m happy.

This kind of approach isn’t a futuristic fantasy. A team led by John Hauser at MIT’s business school has developed the basic techniques for what they call Web site morphing, in which a shopping site analyzes users’ clicks to figure out what kinds of information and styles of presentation are most effective and then adjusts the layout to suit a particular user’s cognitive style. Hauser estimates that Web sites that morph can increase “purchase intentions” by 21 percent. Industrywide, that’s worth billions. And what starts with the sale of consumer products won’t end there: News and entertainment sources that morph ought to enjoy an advantage as well.
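A crude version of the morphing loop fits in a few lines. To be clear, this is not Hauser’s method—his team infers cognitive styles with Bayesian statistics—but a simplified explore-and-exploit stand-in, with invented layout names, that shows the basic feedback cycle: show a design, record whether it worked, and lean toward the winner.

```python
import random

class MorphingSite:
    """Serve the layout with the best observed click-through rate so far."""
    def __init__(self, layouts, explore_rate=0.1):
        self.stats = {l: {"shows": 0, "clicks": 0} for l in layouts}
        self.explore_rate = explore_rate

    def _rate(self, layout):
        s = self.stats[layout]
        return s["clicks"] / s["shows"] if s["shows"] else 0.0

    def choose(self):
        # Occasionally try a random layout; otherwise exploit the best one.
        if random.random() < self.explore_rate:
            return random.choice(list(self.stats))
        return max(self.stats, key=self._rate)

    def record(self, layout, clicked):
        self.stats[layout]["shows"] += 1
        self.stats[layout]["clicks"] += int(clicked)

site = MorphingSite(["warm-pastel", "minimalist"])
site.record("warm-pastel", clicked=True)
site.record("minimalist", clicked=False)
```

Run this loop over millions of visitors and thousands of design variables, and each shopper gradually gets the storefront most likely to make that particular shopper buy.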

On one hand, morphing makes us feel more at home on the Web. Drawing from the data we provide, every Web site can feel like an old friend. But it also opens the door to a strange, dreamlike world, in which our environment is constantly rearranging itself behind our backs. And like a dream, it may be less and less possible to share with people outside of it—that is, everyone else.

Thanks to augmented reality, that experience may soon be par for the course offline as well.

“On the modern battlefield,” Raytheon Avionics manager Todd Lovell told a reporter, “there is way more data out there than most people can use. If you are just trying to see it all through your eyes and read it in bits and bytes, you’re never going to understand it. So the key to the modern technology is to take all that data and turn it into useful information that the pilot can recognize very quickly and act upon.” What Google does for online information, Lovell’s Scorpion project aims to do for the real world.

Fitting like a monocle over one of a jet pilot’s eyes, the Scorpion display device annotates what a pilot sees in real time. It color-codes potential threats, highlights when and where the aircraft has a missile lock, assists with night vision, and reduces the need for pilots to look at a dashboard in an environment where every microsecond matters. “It turns the whole world into a display,” jet pilot Paul Mancini told the Associated Press.

This is augmented-reality technology, and it’s moving rapidly from the cockpits of jet planes to consumer devices that can tune out the noise and turn up the signal of everyday life. Using your iPhone camera and an app developed by Yelp, the restaurant recommendation service, you can see eateries’ ratings haphazardly displayed over their real-world storefronts. A new kind of noise-canceling headphone can sense and amplify human voices while tuning other street or airplane noise down to a whisper. The Meadowlands football stadium is spending $100 million on new applications that give fans who attend games in person the ability to slice and dice the game in real time, view key statistics as they happen, and watch the action unfold from a variety of angles—the full high-information TV experience overlaid on a real game.

At DARPA, the defense research and development agency, technologies are being developed that make Scorpion look positively quaint. Since 2002, DARPA has been pushing forward research in what it calls augmented cognition, or AugCog, which uses cognitive neuroscience and brain imaging to figure out how best to route important information into the brain. AugCog begins with the premise that there are basic limits as to how many tasks a person can juggle at a time, and that “this capacity itself may fluctuate from moment to moment depending on a host of factors including mental fatigue, novelty, boredom and stress.”

By monitoring activity in brain areas associated with memory, decision making, and the like, AugCog devices can figure out how to make sure to highlight the information that most matters. If you’re absorbing as much visual input as you can, the system might decide to send an audio alert instead. One trial, according to the Economist, gave users of an AugCog device a 100 percent improvement in recall and a 500 percent increase in working memory. And if it sounds far-fetched, just remember: The folks at DARPA also helped invent the Internet.
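The routing decision at the heart of that idea is easy to caricature. This sketch is purely illustrative—DARPA’s actual systems infer load from brain imaging, not from tidy numbers—but it shows the shape of the rule the paragraph describes: send each alert down whichever sensory channel has spare capacity. The function name and thresholds are invented.

```python
def route_alert(visual_load, audio_load, threshold=0.8):
    """Pick a sensory channel with spare capacity for an incoming alert.

    Loads are fractions of channel capacity in use (0.0 to 1.0).
    """
    if visual_load < threshold:
        return "visual"
    if audio_load < threshold:
        return "audio"
    return "defer"  # both channels saturated: hold non-critical alerts

# A pilot scanning a busy display: vision saturated, hearing free.
print(route_alert(visual_load=0.95, audio_load=0.2))
```

The hard part, of course, is not the routing rule but measuring the loads—which is exactly what the brain-monitoring research is for.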

Augmented reality is a booming field, and Gary Hayes, a personalization and augmented-reality expert in Australia, sees at least sixteen different ways it could be used to provide services and make money. In his vision, guide companies could offer augmented-reality tours, in which information about buildings, museum artifacts, and streets is superimposed on the environs. Shoppers could use phone apps to immediately get readouts on products they’re interested in—including what the objects cost elsewhere. (Amazon.com already provides a rudimentary version of this service.) Augmented-reality games could layer clues into real-world environments.

Augmented-reality tech provides value, but it also provides an opportunity to reach people with new attention-getting forms of advertising. For a price, digital sportscasts are already capable of layering corporate logos onto football fields. But this new technology offers the opportunity to do that in a personalized way in the real world: You turn on the app to, say, help find a friend in a crowd, and projected onto a nearby building is a giant Coke ad featuring your face and your name.

And when you combine the personalized filtering of what we see and hear with, say, face recognition, things get pretty interesting: You begin to be able to filter not just information, but people.

As the cofounder of OkCupid, one of the Web’s most popular dating sites, Chris Coyne has been thinking about filtering for people for a while. Coyne speaks in an energetic, sincere manner, furrowing his brows when he’s thinking and waving his hands to illustrate. As a math major, he got interested in how to use algorithms to solve problems for people.

“There are lots of ways you can use math to do things that turn a profit,” he told me over a steaming bowl of bibimbap in New York’s Koreatown. Many of his classmates went off to high-paid jobs at hedge funds. “But,” he said, “what we were interested in was using it to make people happy.” And what better way to make people happy than to help them fall in love?

The more Coyne and his college hallmates Sam Yagan and Max Krohn looked at other dating sites, the more annoyed they got: It was clear that other dating sites were more interested in getting people to pay for credits than in helping them hook up. And once you did pay, you’d often see profiles of people who were no longer on the site or who would never write you back.

Coyne and his team decided to approach the problem with math. The service would be free. Instead of offering a one-size-fits-all solution, they’d use number crunching to develop a personalized matching algorithm for each person on the site. And just as Google optimizes for clicks, they’d do everything they could to maximize the likelihood of real conversations—if you could solve for that, they figured, profits would follow. In essence, they built a modern search engine for mates.

When you log on to OkCupid, you’re asked a series of questions about yourself. Do you believe in God? Would you ever participate in a threesome? Does smoking disgust you? Would you sleep with someone on the first date? Do you have an STD? (Answer yes, and you get sent to another site.) You also indicate how you’d like a prospective partner to answer the same questions and how important their answers are to you. Using these questions, OkCupid builds a custom-weighted equation to figure out your perfect match. And when you search for people in your area, it uses the same algorithm to rank the likelihood of your getting along. OkCupid’s powerful cluster of servers can rank ten thousand people with a two-hundred-question match model and return results in less than a tenth of a second.
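To make the idea concrete, here is a minimal sketch of how a custom-weighted match equation of this kind might work. The importance weights, question names, and the geometric-mean scoring are illustrative assumptions, not OkCupid’s actual algorithm:

```python
# Hypothetical sketch of a weighted two-way match score.
# Weight values and the formula are assumptions for illustration.
WEIGHTS = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50}

def satisfaction(my_prefs, their_answers):
    """Fraction of importance-weighted points the other person earns
    by answering the way I said I wanted a partner to answer.
    my_prefs maps question -> (desired answer, importance)."""
    earned = possible = 0
    for question, (wanted, importance) in my_prefs.items():
        w = WEIGHTS[importance]
        possible += w
        if their_answers.get(question) == wanted:
            earned += w
    return earned / possible if possible else 0.0

def match_percent(a_prefs, a_answers, b_prefs, b_answers):
    # Combine both directions with a geometric mean, so a match that
    # satisfies only one person can't score highly.
    s_ab = satisfaction(a_prefs, b_answers)
    s_ba = satisfaction(b_prefs, a_answers)
    return round(100 * (s_ab * s_ba) ** 0.5)
```

Because each person’s equation is just a per-question weight table, scoring one candidate is a handful of lookups and multiplications, which is why a server cluster can rank ten thousand people against a two-hundred-question model in a fraction of a second.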

They have to, because OkCupid’s traffic is booming. Hundreds of thousands of answers to poll questions flow into their system each night. Thousands of new users sign up each day. And the system is getting better and better.

Looking into the future, Coyne told me, you’ll have people walking around with augmented displays. He described a guy on a night out: You walk into a bar, and a camera immediately scans the faces in the room and matches them against OkCupid’s databases. “Your accessories can say, that girl over there is an eighty-eight percent match. That’s a dream come true!”

Vladimir Nabokov once commented that “reality” is “one of the few words that mean nothing without quotes.” Coyne’s vision may soon be our “reality.” There’s tremendous promise in this vision: Surgeons who never miss a suture, soldiers who never imperil civilians, and everywhere a more informed, information-dense world. But there’s also danger: Augmented reality represents the end of naive empiricism, of the world as we see it, and the beginning of something far more mutable and weird: a real-world filter bubble that will be increasingly difficult to escape.

Losing Control

There’s plenty to love about this ubiquitously personalized future.

Smart devices, from vacuum cleaners to lightbulbs to picture frames, offer the promise that our environments will be exactly the way we want them, wherever we are. In the near future, ambient-intelligence expert David Wright suggests, we might even carry our room-lighting preferences with us; when there are multiple people in a room, a consensus could be automatically reached by averaging preferences and weighting for who’s the host.

AugCog-enabled devices will help us track the data streams that we consider most important. In some situations—say, medical or fire alerts that find ways to escalate until they capture our attention—they could save lives. And while brainwave-reading AugCog is probably some way off for the masses, consumer variants of the basic concept are already being put into place. Google’s Gmail Priority Inbox, which screens e-mails and highlights the ones it assesses as more important, is an early riff on the theme. Meanwhile, augmented-reality filters offer the possibility of an annotated and hyperlinked reality, in which what we see is infused with information that allows us to work better, assimilate information more quickly, and make better decisions.

That’s the good side. But there’s always a bargain in personalization: In exchange for convenience, you hand over some privacy and control to the machine.

As personal data become more and more valuable, the behavioral data market described in chapter 1 is likely to explode. When a clothing company determines that knowing your favorite color produces a $5 increase in sales, it has an economic basis for pricing that data point—and for other Web sites to find reasons to ask you for it. (While OkCupid is mum about its business model, it likely rests on offering advertisers the ability to target its users based on the hundreds of personal questions they answer.)

While many of these data acquisitions will be legitimate, some won’t be. Data are uniquely suited to gray-market activities, because they need not carry any trace of where they have come from or where they have been along the way. Wright calls this data laundering, and it’s already well under way: Spyware and spam companies sell questionably derived data to middlemen, who then add it to the databases powering the marketing campaigns of major corporations.

Moreover, because the transformations applied to your data are often opaque, it’s not always clear exactly what decisions are being made on your behalf, by whom, or to what end. This matters plenty when we’re talking about information streams, but it matters even more when this power is infused into our sensory apparatus itself.

In 2000, Bill Joy, the Sun Microsystems cofounder, wrote a piece for Wired magazine titled “Why the Future Doesn’t Need Us.” “As society and the problems that face it become more and more complex and machines become more and more intelligent,” he wrote, “people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones.”

That may often be the case: Machine-driven systems do provide significant value. The whole promise of these technologies is that they give us more freedom and more control over our world—lights that respond to our whims and moods, screens and overlays that allow us to attend only to the people we want to, so that we don’t have to do the busywork of living. The irony is that they offer this freedom and control by taking it away. It’s one thing when a remote control’s array of buttons gets in the way of something basic like flipping channels. It’s another thing when what the remote controls is our lives.

It’s fair to guess that the technology of the future will work about as well as the technology of the past—which is to say, well enough, but not perfectly. There will be bugs. There will be dislocations and annoyances. There will be breakdowns that cause us to question whether the whole system was worth it in the first place. And we’ll live with the threat that systems made to support us will be turned against us—that a clever hacker who cracks the baby monitor now has a surveillance device, that someone who can interfere with what we see can expose us to danger. The more power we have over our own environments, the more power someone who assumes the controls has over us.

That is why it’s worth keeping the basic logic of these systems in mind: You don’t get to create your world on your own. You live in an equilibrium between your own desires and what the market will bear. And while in many cases this provides for healthier, happier lives, it also provides for the commercialization of everything—even of our sensory apparatus itself. There are few things uglier to contemplate than AugCog-enabled ads that escalate until they seize control of your attention.

We’re compelled to return to Jaron Lanier’s question: For whom do these technologies work? If history is any guide, we may not be the primary customer. And as technology gets better and better at directing our attention, we need to watch closely what it is directing our attention toward.
