6 Hello, World!

SOCRATES: Or again, in a ship, if a man having the power to do what he likes, has no intelligence or skill in navigation [αρετης κυβερνητικης, aretēs kybernētikēs], do you see what will happen to him and to his fellow-sailors?

—Plato, First Alcibiades, the earliest known use of the word cybernetics

It’s the first fragment of code in nearly every programming book, the thing every aspiring programmer learns on day one. In the C++ programming language, it looks like this:

#include <iostream>

int main()
{
    std::cout << "Hello, World!" << std::endl;
}

Although the code differs from language to language, the result is the same: a single line of text against a stark white screen:

Hello, World!

A god’s greeting to his invention—or perhaps an invention’s greeting to its god. The delight you experience is electric—the current of creation, running through your fingers into the keyboard, into the machine, and back out into the world. It’s alive!

That every programmer’s career begins with “Hello, World!” is no coincidence: the power to create new universes is often what draws people to code in the first place. Type in a few lines, or a few thousand, strike a key, and something seems to come to life on your screen—a new space unfolds, a new engine roars. If you’re clever enough, you can make and manipulate anything you can imagine.

“We are as Gods,” wrote futurist Stewart Brand on the cover of his Whole Earth Catalog in 1968, “and we might as well get good at it.” Brand’s catalog, which sprang out of the back-to-the-land movement, was a favorite among California’s emerging class of programmers and computer enthusiasts. In Brand’s view, tools and technologies turned people, normally at the mercy of their environments, into gods in control of them. And the computer was a tool that could become any tool at all.

Brand’s impact on the culture of Silicon Valley and geekdom is hard to overestimate—though he wasn’t a programmer himself, his vision shaped the Silicon Valley worldview. As Fred Turner details in the fascinating From Counterculture to Cyberculture, Brand and his cadre of do-it-yourself futurists were disaffected hippies—social revolutionaries who were uncomfortable with the communes sprouting up in Haight-Ashbury. Rather than seeking to build a new world through political change, which required wading through the messiness of compromise and group decision making, they set out to build a world on their own.

In Hackers, his groundbreaking history of the rise of engineering culture, Steven Levy points out that this ideal spread from the programmers themselves to the users “each time some user flicked the machine on, and the screen came alive with words, thoughts, pictures, and sometimes elaborate worlds built out of air—those computer programs which could make any man (or woman) a god.” (In the era described by Levy’s book, the term hacker didn’t have the transgressive, lawbreaking connotations it acquired later.)

The God impulse is at the root of many creative professions: Artists conjure up color-flecked landscapes, novelists build whole societies on paper. But it’s always clear that these are creations: A painting doesn’t talk back. A program can, and the illusion that it’s “real” is powerful. ELIZA, one of the first rudimentary AI programs, was programmed with a battery of therapist-like questions and some basic contextual cues. But students spent hours talking to it about their deepest problems: “I’m having some troubles with my family,” a student might write, and ELIZA would immediately respond, “Tell me more about your family.”
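The trick is shallower than it feels. Here is a minimal sketch of ELIZA-style keyword matching, written in C++ to match the earlier example; the rules and replies below are invented, and Joseph Weizenbaum’s original program was considerably more elaborate:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A minimal, hypothetical sketch of ELIZA-style keyword matching.
// The real ELIZA used ranked keywords and sentence transformations;
// this only illustrates the principle.
int main()
{
    // Each rule pairs a keyword with a canned, therapist-like reply.
    std::vector<std::pair<std::string, std::string>> rules = {
        {"family", "Tell me more about your family."},
        {"mother", "How do you feel about your mother?"},
        {"always", "Can you think of a specific example?"},
    };

    std::string line;
    while (std::getline(std::cin, line)) {
        std::string reply = "Please go on."; // default when nothing matches
        for (const auto& rule : rules) {
            if (line.find(rule.first) != std::string::npos) {
                reply = rule.second;
                break;
            }
        }
        std::cout << reply << std::endl;
    }
}

A handful of rules like these, plus a default prompt, is enough to sustain a surprisingly long “conversation.”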

Especially for people who’ve been socially ostracized due to quirks or brains or both, there are at least two strong draws to the world-building impulse. When social life is miserable or oppressive, escapism is a reasonable response—it’s probably not coincidental that role-playing games, sci-fi and fantasy literature, and programming often go together.

The infinitely expandable universe of code provides a second benefit: complete power over your domain. “We all fantasize about living without rules,” says Siva Vaidhyanathan. “We imagine the Adam Sandler movie where you can move around and take people’s clothes off. If you don’t think of reciprocity as one of the beautiful and rewarding things about being a human being, you wish for a place or a way of acting without consequence.” When the rules of high school social life seem arbitrary and oppressive, the allure of making your own rules is pretty powerful.

This approach works pretty well as long as you’re the sole denizen of your creation. But like the God of Genesis, coders quickly get lonely. They build portals into their homespun worlds, allowing others to enter. And that’s where things get complicated: On the one hand, the more inhabitants in the world you’ve built, the more power you have. But on the other hand, the citizens can get uppity. “The programmer wants to set up some rules, to either a game or a system, and then let it run without interference from anything,” says Douglas Rushkoff, an early cyberbooster-turned-cyberpragmatist. “If you have a program that needs a minder to come in and help it run, then it’s not a very good program, is it? It’s supposed to just run.”

Coders sometimes harbor God impulses; they sometimes even have aspirations to revolutionize society. But they almost never aspire to be politicians. “While programming is considered a transparent, neutral, highly controllable realm… where production results in immediate gratification and something useful,” writes NYU anthropologist Gabriella Coleman, “politics tends to be seen by programmers as buggy, mediated, tainted action clouded by ideology that is not productive of much of anything.” There’s some merit to that view, of course. But for programmers to shun politics completely is a problem—because increasingly, given the disputes that inevitably arise when people come together, the most powerful ones will be required to adjudicate and to govern.

Before we get to how this blind spot affects our lives, though, it’s worth looking at how engineers think.

The Empire of Clever

Imagine that you’re a smart high school student on the low end of the social totem pole. You’re alienated from adult authority, but unlike many teenagers, you’re also alienated from the power structures of your peers—an existence that can feel lonely and peripheral. Systems and equations are intuitive, but people aren’t—social signals are confusing and messy, difficult to interpret.

Then you discover code. You may be powerless at the lunch table, but code gives you power over an infinitely malleable world and opens the door to a symbolic system that’s perfectly clear and ordered. The jostling for position and status fades away. The nagging parental voices disappear. There’s just a clean, white page for you to fill, an opportunity to build a better place, a home, from the ground up.

No wonder you’re a geek.

This isn’t to say that geeks and software engineers are friendless or even socially inept. But there’s an implicit promise in becoming a coder: Apprentice yourself to symbolic systems, learn to carefully understand the rules that govern them, and you’ll gain power to manipulate them. The more powerless you feel, the more appealing this promise becomes. “Hacking,” Steven Levy writes, “gave you not only an understanding of the system but an addictive control as well, along with the illusion that total control was just a few features away.”

As anthropologist Coleman points out, beyond the Jocks-and-Nerds stereotypes, there are actually many different geek cultures. There are open-source software advocates, most famously embodied by Linux creator Linus Torvalds, who spend untold hours collaboratively building free software tools for the masses, and there are Silicon Valley start-up entrepreneurs. There are antispam zealots, who organize online posses to seek out and shut down Viagra purveyors. And then there’s the more antagonistic wing: spammers; “trolls,” who spend their time looking for fun ways to leverage technology at others’ expense; “phreaks,” who are animated by the challenge of breaking open telecommunications systems; and hackers who break into government systems to prove it can be done.

Generalizations that span these different niches and communities run the risk of stereotyping and tend to fall short. But at the heart of these subcultures is a shared method for looking at and asserting power in the world, which influences how and why online software is made.

The through-line is a focus on systematization. Nearly all geek cultures are structured as an empire of clever wherein ingenuity, not charisma, is king. The intrinsic efficiency of a creation is more important than how it looks. Geek cultures are data driven and reality based, valuing substance over style. Humor plays a prominent role—as Coleman points out, jokes demonstrate an ability to manipulate language in the same way that an elegant solution to a tricky programming problem demonstrates mastery over code. (The fact that humor also often serves to unmask the ridiculous pieties of the powerful is undoubtedly also part of its appeal.)

Systematization is especially alluring because it doesn’t offer power just in the virtual sphere. It can also provide a way to understand and navigate social situations. I learned this firsthand when, as an awkward seventeen-year-old with all the trappings of geek experience (the fantasy books, the introversion, the obsession with HTML and BBSes), I flew across the country to accept the wrong job.

In a late-junior-year panic, I’d applied for every internship I could find. One group, a nuclear disarmament organization based in San Francisco, had gotten back to me, and without much further investigation, I’d signed up. It was only when I walked into the office that I realized I’d signed up to be a canvasser. Off the top of my head, I couldn’t imagine a worse fit, but because I had no other prospects, I decided to stick out the day of training.

Canvassing, the trainer explained, was as much a science as an art. And the laws were powerful. Make eye contact. Explain why the issue matters to you. And after you ask for money, let your target say the first thing. I was intrigued: Asking people for money was scary, but the briefing hinted at a hidden logic. I committed the rules to memory.

When I walked through my first grassy Palo Alto lawn, my heart was in my throat. Here I was at the doorstep of someone I’d never met, asking for $50. The door opened and a harried woman with long gray hair peeped out. I took a deep breath, and launched into my spiel. I asked. I waited. And then she nodded and went to get her checkbook.

The euphoria I felt wasn’t about the $50. It was about something bigger—the promise that the chaos of human social life could be reduced to rules that I could understand, follow, and master. Conversation with strangers had never come naturally to me—I didn’t know what to talk about. But the hidden logic of getting someone I’d never met to trust me with $50 had to be the tip of a larger iceberg. By the end of a summer traipsing through the yards of Palo Alto and Marin, I was a master canvasser.

Systematization is a great method for building functional software. And the quantitative, scientific approach to social observation has given us many great insights into human phenomena as well. Dan Ariely researches the “predictably irrational” decisions we make on a daily basis; his findings help us make better decisions. The blog at OkCupid.com, a math-driven dating Web site, identifies patterns in the e-mails flying back and forth between people to make them better daters (“Howdy” is a better opener than “Hi”).
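The math behind a finding like OkCupid’s is not exotic; it is counting. Here is a hedged sketch in C++ (the site’s actual pipeline isn’t public, and the log entries below are invented):

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical message-log entry: the opening word a sender used
// and whether the message ever got a reply.
struct Message {
    std::string opener;
    bool gotReply;
};

int main()
{
    // Invented sample data standing in for millions of real messages.
    std::vector<Message> log = {
        {"Howdy", true}, {"Hi", false}, {"Howdy", true}, {"Hi", true},
    };

    // opener -> {replies, total messages sent}
    std::map<std::string, std::pair<int, int>> counts;
    for (const auto& m : log) {
        counts[m.opener].second += 1;
        if (m.gotReply) counts[m.opener].first += 1;
    }

    for (const auto& [opener, c] : counts) {
        std::cout << opener << ": "
                  << 100.0 * c.first / c.second << "% reply rate\n";
    }
}

Run over a real corpus, tallies like these are what turn piles of e-mail into advice about “Howdy” versus “Hi.”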

But there are dangers in taking the method too far. As I discussed in chapter 5, the most human acts are often the most unpredictable ones. Because systematizing works much of the time, it’s easy to believe that by reducing and brute-forcing an understanding of any system, you can control it. And as a master of a self-created universe, it’s easy to start to view people as a means to an end, as variables to be manipulated on a mental spreadsheet, rather than as breathing, thinking beings. It’s difficult both to systematize and to appeal to the fullness of human life—its unpredictability, emotionality, and surprising quirks—at the same time.

David Gelernter, a Yale computer scientist, barely survived an encounter with an explosive package sent by the Unabomber; his eyesight and right hand are permanently damaged as a result. But Gelernter is hardly the technological utopian Ted Kaczynski believed him to be.

“When you do something in the public sphere,” Gelernter told a reporter, “it behooves you to know something about what the public sphere is like. How did this country get this way? What was the history of the relationship between technology and the public? What’s the history of political exchange? The problem is, hackers don’t tend to know any of that. And that’s why it worries me to have these people in charge of public policy. Not because they’re bad, just because they’re uneducated.”

Understanding the rules that govern a messy, complex world makes it intelligible and navigable. But systematizing inevitably involves a trade-off—rules give you some control, but you lose nuance and texture, a sense of deeper connection. And when a strict systematizing sensibility entirely shapes social space (as it often does online), the results aren’t always pretty.

The New Architects

The political power of design has long been obvious to urban planners. If you take the Wantagh State Parkway from Westbury to Jones Beach on Long Island, at intervals you’ll pass under several low, vine-covered overpasses. Some of them have as little as nine feet of clearance. Trucks aren’t allowed on the parkway—they wouldn’t fit. This may seem like a design oversight, but it’s not.

There are about two hundred of these low bridges, part of the grand design for the New York region pioneered by Robert Moses. Moses was a master deal maker, a friend of the great politicians of the time, and an unabashed elitist. According to his biographer, Robert A. Caro, Moses’s vision for Jones Beach was as an island getaway for middle-class white families. He included the low bridges to make it harder for low-income (and mostly black) New Yorkers to get to the beach, as public buses—the most common form of transport for inner-city residents—couldn’t clear the overpasses.

The passage in Caro’s The Power Broker describing this logic caught the eye of Langdon Winner, a Rolling Stone reporter, musician, professor, and philosopher of technology. In a pivotal 1980 article titled “Do Artifacts Have Politics?” Winner considered how Moses’s “monumental structures of concrete and steel embody a systematic social inequality, a way of engineering relationships among people that, after a time, became just another part of the landscape.”

On the face of it, a bridge is just a bridge. But often, as Winner points out, architectural and design decisions are underpinned by politics as much as aesthetics. Like goldfish that grow only large enough for the tank they’re in, we’re contextual beings: how we behave is dictated in part by the shape of our environments. Put a playground in a park, and you encourage one kind of use; build a memorial, and you encourage another.

As we spend more of our time in cyberspace—and less of our time in what geeks sometimes call meatspace, or the offline world—Moses’s bridges are worth keeping in mind. The algorithms of Google and Facebook may not be made of steel and concrete, but they regulate our behavior just as effectively. That’s what Larry Lessig, a law professor and one of the early theorists of cyberspace, meant when he famously wrote that “code is law.”

If code is law, software engineers and geeks are the ones who get to write it. And it’s a funny kind of law, created without any judicial system or legislators and enforced nearly perfectly and instantly. Even with antivandalism laws on the books, in the physical world you can still throw a rock through the window of a store you don’t like. You might even get away with it. But if vandalism isn’t part of the design of an online world, it’s simply impossible. Try to throw a rock through a virtual storefront, and you just get an error.
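A toy example makes the point concrete. In this C++ sketch (the storefront and its actions are entirely invented), browsing and buying are the only actions that exist; vandalism isn’t forbidden by a rule, it simply cannot be expressed:

#include <iostream>

// In an online world, only the actions the designer enumerated can
// ever happen. "Vandalize" isn't refused at the door; it has no
// meaning anywhere in the system.
enum class Action { Browse, Buy };

void handle(Action a)
{
    switch (a) {
        case Action::Browse:
            std::cout << "Showing storefront.\n";
            break;
        case Action::Buy:
            std::cout << "Processing purchase.\n";
            break;
    }
}

int main()
{
    handle(Action::Browse);
    handle(Action::Buy);
    // handle(Action::ThrowRock); // won't compile: the law is the code
}

The commented-out line isn’t blocked by an enforcement mechanism; it fails to compile, because no such action was ever defined.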

Back in 1980, Winner wrote, “Consciously or unconsciously, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, and so forth over a very long time.” This isn’t to say that today’s designers have malevolent impulses, of course—or even that they’re always explicitly trying to shape society in certain ways. It’s just to say that they can—in fact, they can’t help but shape the worlds they build.

To paraphrase Spider-Man creator Stan Lee, with great power comes great responsibility. But the programmers who brought us the Internet and now the filter bubble aren’t always game to take on that responsibility. The Hacker Jargon File, an online repository of geek culture, puts it this way: “Hackers are far more likely than most non-hackers to either (a) be aggressively apolitical or (b) entertain peculiar or idiosyncratic political ideas.” Too often, the executives of Facebook, Google, and other socially important companies play it coy: They’re social revolutionaries when it suits them and neutral, amoral businessmen when it doesn’t. And both approaches fall short in important ways.

Playing It Coy

When I first called Google’s PR department, I explained that I wanted to know how Google thought about its enormous curatorial power. What was the code of ethics, I asked, that Google uses to determine what to show to whom? The public affairs manager on the other end of the phone sounded confused. “You mean privacy?” No, I said, I wanted to know how Google thought about its editorial power. “Oh,” he replied, “we’re just trying to give people the most relevant information.” Indeed, he seemed to imply, no ethics were involved or required.

I persisted: If a 9/11 conspiracy theorist searched for “9/11,” was it Google’s job to show him the Popular Mechanics article that debunks his theory or the movie that supports it? Which was more relevant? “I see what you’re getting at,” he said. “It’s an interesting question.” But I never got a clear answer.

Much of the time, as the Jargon File entry claims, engineers resist the idea that their work has moral or political consequences at all. Many see themselves as interested in efficiency and design, in building cool stuff rather than in wading into messy ideological disputes and inchoate values. And it’s true that if the political consequences of, say, a somewhat faster video-rendering engine exist, they’re pretty obscure.

But at times, this attitude can verge on a “Guns don’t kill people, people do” mentality—a willful blindness to how their design decisions affect the daily lives of millions. The fact that Facebook’s button is named Like prioritizes some kinds of information over others. The fact that Google has moved from PageRank—which is designed to show the societal consensus result—to a mix of PageRank and personalization represents a shift in how Google understands relevance and meaning.
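Google’s actual ranking formula is a closely guarded secret, so any rendering of it is guesswork. Still, a hedged sketch of what “a mix of PageRank and personalization” could look like is a weighted blend of a global consensus score and a per-user score (the weights and numbers below are invented for illustration):

#include <iostream>

// Hypothetical blend of a global, consensus-based score
// (PageRank-style) with a per-user personalization score.
// The weighting scheme is invented, not Google's.
double relevance(double pagerank, double personalScore, double w)
{
    // w = 0 reproduces consensus-only ranking;
    // w = 1 ranks purely by the individual user's profile.
    return (1.0 - w) * pagerank + w * personalScore;
}

int main()
{
    double w = 0.5;
    double debunkArticle  = relevance(0.9, 0.2, w); // consensus favors it
    double conspiracyFilm = relevance(0.3, 0.9, w); // this user's clicks favor it
    std::cout << "Debunking article: " << debunkArticle << "\n"
              << "Conspiracy film:   " << conspiracyFilm << "\n";
}

Slide the weight toward the personal score and the conspiracy film starts to outrank the debunking article for the user whose clicks favor it, which is exactly the editorial question Google’s spokesman shrugged off.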

This amorality would be par for the corporate course if it didn’t coincide with sweeping, world-changing rhetoric from the same people and entities. Google’s mission to organize the world’s information and make it accessible to everyone carries a clear moral and even political connotation—a democratic redistribution of knowledge from closed-door elites to the people. Apple’s devices are marketed with the rhetoric of social change and the promise that they’ll revolutionize not only your life but our society as well. (The famous Super Bowl ad announcing the release of the Macintosh computer ends by declaring that “1984 won’t be like 1984.”)

Facebook describes itself as a “social utility,” as if it’s a twenty-first-century phone company. But when users protest Facebook’s constantly shifting and eroding privacy policy, Zuckerberg often shrugs it off with the caveat emptor posture that if you don’t want to use Facebook, you don’t have to. It’s hard to imagine a major phone company getting away with saying, “We’re going to publish your phone conversations for anyone to hear—and if you don’t like it, just don’t use the phone.”

Google tends to be more explicitly moral in its public aspirations; its motto is “Don’t be evil,” while Facebook’s unofficial motto is “Don’t be lame.” Nevertheless, Google’s founders also sometimes play a get-out-of-jail-free card. “Some say Google is God. Others say Google is Satan,” says Sergey Brin. “But if they think Google is too powerful, remember that with search engines, unlike other companies, all it takes is a single click to go to another search engine. People come to Google because they choose to. We don’t trick them.”

Of course, Brin has a point: No one is forced to use Google, just as no one is forced to eat at McDonald’s. But there’s also something troubling about this argument, which minimizes the responsibility he might have to the millions of users who rely on the service Google provides and who in turn drive the company’s billions in advertising revenue.

To further muddle the picture, when the social repercussions of their work are troubling, the chief architects of the online world often fall back on the manifest-destiny rhetoric of technodeterminism. Technologists, Siva Vaidhyanathan points out, rarely say something “could” or “should” happen—they say it “will” happen. “The search engines of the future will be personalized,” says Google Vice President Marissa Mayer, in the passive voice.

Just as some Marxists believed that the economic conditions of a society would inevitably propel it through capitalism and toward a world socialist regime, it’s easy to find engineers and technodeterminist pundits who believe that technology is on a set course. Sean Parker, the cofounder of Napster and rogue early president of Facebook, tells Vanity Fair that he’s drawn to hacking because it’s about “re-architecting society. It’s technology, not business or government, that’s the real driving force behind large-scale societal shifts.”

Kevin Kelly, the founding editor of Wired, wrote perhaps the boldest book articulating the technodeterminist view, What Technology Wants, in which he posits that technology is a “seventh kingdom of life,” a kind of meta-organism with desires and tendencies of its own. Kelly believes that the technium, as he calls it, is more powerful than any of us mere humans. Ultimately, technology—a force that “wants” to eat power and expand choice—will get what it wants whether we want it to or not.

Technodeterminism is alluring and convenient for newly powerful entrepreneurs because it absolves them of responsibility for what they do. Like priests at the altar, they’re mere vessels of a much larger force that it would be futile to resist. They need not concern themselves with the effects of the systems they’ve created. But technology doesn’t solve every problem of its own accord. If it did, we wouldn’t have millions of people starving to death in a world with an oversupply of food.

It shouldn’t be surprising that software entrepreneurs are incoherent about their social and political responsibilities. A great deal of this tension undoubtedly comes from the fact that the nature of online business is to scale up as quickly as possible. Once you’re on the road to mass success and riches—often as a very young coder—there simply isn’t much time to fully think all of this through. And the pressure of the venture capitalists breathing down your neck to “monetize” doesn’t always offer much space for rumination on social responsibility.

The $50 Billion Sand Castle

Once a year, the Y Combinator start-up incubator hosts a daylong conference called Startup School, where successful tech entrepreneurs pass wisdom on to the aspiring audience of bright-eyed Y Combinator investees. The agenda typically includes many of the top CEOs in Silicon Valley, and in 2010, Mark Zuckerberg was at the top of the list.

Zuckerberg was in an affable mood, dressed in a black T-shirt and jeans and enjoying what was clearly a friendly crowd. Even so, when Jessica Livingston, his interviewer, asked him about The Social Network, the movie that had made him a household name, a range of emotions crossed his face. “It’s interesting what kind of stuff they focused on getting right,” Zuckerberg began. “Like, every single shirt and fleece they had in that movie is actually a shirt or fleece that I own.”

Where there was an egregious discrepancy between fiction and reality, Zuckerberg told her, was how his own motivations were painted. “They frame it as if the whole reason for making Facebook and building something was that I wanted to get girls, or wanted to get into some kind of social institution. And the reality, for people who know me, is that I’ve been dating the same girl since before I started Facebook. It’s such a big disconnect…. They just can’t wrap their head around the idea that someone might build something because they like building things.”

It’s entirely possible that the line was just a clever bit of Facebook PR. And there’s no question that the twenty-six-year-old billionaire is motivated by empire building. But the comment struck me as candid: For programmers as for artists and craftsmen, making things is often its own best reward.

Facebook’s flaws and its founder’s ill-conceived views about identity aren’t the result of an antisocial, vindictive mind-set. More likely, they’re a natural consequence of the odd situation successful start-ups like Facebook create, in which a twenty-something guy finds himself, in a matter of five years, in a position of great authority over the doings of 500 million human beings. One day you’re making sand castles; the next, your sand castle is worth $50 billion and everyone in the world wants a piece of it.

Of course, there are far worse business-world personality types with whom to entrust the fabric of our social lives. With a reverence for rules, geeks tend to be principled—to carefully consider and then follow the rules they set for themselves, and to stick to them even under social pressure. “They have a somewhat skeptical view of authority,” Stanford professor Terry Winograd said of his former students Page and Brin. “If they see the world going one way and they believe it should be going the other way, they are more likely to say ‘the rest of the world is wrong’ rather than ‘maybe we should reconsider.’”

But the traits that fuel the best start-ups—aggression, a touch of arrogance, an interest in empire building, and of course brilliant systematizing skills—can become a bit more problematic when you rule the world. Like pop stars who are vaulted onto the global stage, world-building engineers aren’t always ready or willing to accept the enormous responsibility they come to hold when their creations start to teem with life. And not infrequently, engineers who are deeply mistrustful of power in the hands of others see themselves as supreme rationalists, impervious to its effects.

It may be that this is too much power to entrust to any small, homogeneous group of individuals. Media moguls who get their start with a fierce commitment to the truth become the confidants of presidents and lose their edge; businesses begun as social ventures become preoccupied with delivering shareholder value. In any case, one consequence of the current system is that we can end up placing a great deal of power in the hands of people who can have some pretty far-out, not entirely well-developed political ideas. Take Peter Thiel, one of Zuckerberg’s early investors and mentors.

Thiel has penthouse apartments in San Francisco and New York and a silver gullwing McLaren, the fastest car in the world. He also owns about 5 percent of Facebook. Despite his boyish, handsome features, Thiel often looks as though he’s brooding. Or maybe he’s just lost in thought. In his teenage years, he was a high-ranking chess player but stopped short of becoming a grandmaster. “Taken too far, chess can become an alternate reality in which one loses sight of the real world,” he told an interviewer for Fortune. “My chess ability was roughly at the limit. Had I become any stronger, there would have been some massive tradeoffs with success in other domains in life.” In high school, he read Solzhenitsyn’s Gulag Archipelago and J. R. R. Tolkien’s Lord of the Rings series, visions of corrupt and totalitarian power. At Stanford, he started a libertarian newspaper, The Stanford Review, to preach the gospel of freedom.

In 1998, Thiel cofounded the company that would become PayPal, which he sold to eBay for $1.5 billion in 2002. Today Thiel runs a multi-billion-dollar hedge fund, Clarium, and a venture capital firm, Founders Fund, which invests in software companies throughout Silicon Valley. Thiel has made some legendarily good picks—among them, Facebook, in which he was the first outside investor. (He’s also made some bad ones—Clarium has lost billions in the last few years.) But for Thiel, investing is more than a day job. It’s a vocation. “By starting a new Internet business, an entrepreneur may create a new world,” Thiel says. “The hope of the Internet is that these new worlds will impact and force change on the existing social and political order.”

His comments raise the question of what kind of change Thiel would like to see. While many billionaires are fairly circumspect about their political views, Thiel has been vocal—and it’s safe to say that there are few with views as unusual as Thiel’s. “Peter wants to end the inevitability of death and taxes,” Thiel’s sometime collaborator Patri Friedman (grandson of Milton) told Wired. “I mean, talk about aiming high!”

In an essay posted on the libertarian Cato Institute’s Web site, Thiel describes why he believes that “freedom and democracy are no longer compatible.” “Since 1920,” he writes, “the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.” Then he outlines his hopes for the future: space exploration, “sea-steading,” which involves building movable microcountries on the open ocean, and cyberspace. Thiel has poured millions into technologies to sequence genes and prolong life. He’s also focused on preparing for the Singularity, the moment a few decades from now when some futurists believe that humans and machines are likely to meld.

In an interview, he argues that should the Singularity arrive, one would be well advised to be on the side of the computers: “Certainly we would hope that [an artificially intelligent computer] would be friendly to human beings. At the same time, I don’t think you’d want to be known as one of the human beings that is against computers and makes a living being against computers.”

If all this sounds a little fantastical, it doesn’t worry Thiel. He’s focused on the long view. “Technology is at the center of what will determine the course of the 21st century,” he says. “There are aspects of it that are great and aspects that are terrible, and there are some real choices humans have to make about which technologies to foster and which ones we should be more careful about.”

Peter Thiel is entitled to his idiosyncratic views, of course, but they’re worth paying attention to because they increasingly shape the world we all live in. There are only four other people on the Facebook board besides Mark Zuckerberg; Thiel is one of them, and Zuckerberg publicly describes him as a mentor. “He helped shape the way I think about the business,” Zuckerberg said in a 2006 Bloomberg News interview. As Thiel says, we have some big decisions to make about technology. And as for how those decisions get made? “I have little hope,” he writes, “that voting will make things better.”

“What Game Are You Playing?”

Of course, not all engineers and geeks have the views about democracy and freedom that Peter Thiel does—he’s surely an outlier. Craig Newmark, the founder of the free Web site craigslist, spends most of his time arguing for “geek values” that include service and public-spiritedness. Jimmy Wales and the editors at Wikipedia work to make human knowledge free to everyone. The filtering goliaths make huge contributions here as well: The democratic ideal of an enlightened, capable citizenry is well served by the broader set of relationships Facebook allows me to manage and the mountains of formerly hard-to-access research papers and other public information that Google has freed.

But the engineering community can do more to strengthen the Internet’s civic space. And to get a sense of the path ahead, I talked to Scott Heiferman.

Heiferman, the founder of MeetUp.com, is soft-spoken in a Midwestern sort of way. That’s fitting, because he grew up in Homewood, Illinois, a small town on the outskirts of Chicago. “It was a stretch to call it suburban,” he says. His parents operated a paint store.

As a teenager, Heiferman devoured material about Steve Jobs, eating up the story about how Jobs wooed a senior executive from Pepsi by asking him if he wanted to change the world or sell sugar water. “Throughout my life,” he told me, “I’ve had a love-hate relationship with advertising.” At the University of Iowa in the early 1990s, Heiferman studied engineering and marketing, but at night he ran a radio show called Advertorial Infotainment in which he would remix and cut advertisements together to create a kind of sound art. He put the finished shows online and encouraged people to send in ads to remix, and the attention got him his first job, managing the Web site at Sony.com.

After a few years as Sony’s “interactive-marketing frontiersman,” Heiferman founded i-traffic, one of the major early advertising companies of the Web. Soon i-traffic was the agency of record for clients like Disney and British Airways. But although the company was growing quickly, he was dissatisfied. The back of his business card had a mission statement about connecting people with brands they’d love, but he was increasingly uncertain that was a worthy endeavor—perhaps he was selling sugar water after all. He left the company in 2000.

For the remainder of the year and into 2001, Heiferman was in a funk. “I was exhibiting what you could call being depressed,” he says. When he heard the first word of the World Trade Center attacks on 9/11, he ran up to his lower-Manhattan rooftop and watched in horror. “I talked to more strangers in the next three days,” he says, “than in the previous five years of living in New York.”

Shortly after the attacks, Heiferman came across the blog post that changed his life. It argued that the attacks, as awful as they were, might bring Americans back together in their civic life, and referenced the bestselling book Bowling Alone. Heiferman bought a copy and read it cover to cover. “I became captivated,” he says, “by the question of whether we could use technology to rebuild and strengthen community.” MeetUp.com, a site that makes it easy for local groups to meet face-to-face, was his answer, and today MeetUp serves over 79,000 local groups. There’s the Martial Arts MeetUp in Orlando and the Urban Spirituality MeetUp in Barcelona and the Black Singles MeetUp in Houston. And Heiferman is a happier man.

“What I learned being in the ad business,” he says, “is that people can just go a long time without asking themselves what they should put their talent towards. You’re playing a game, and you know the point of the game is to win. But what game are you playing? What are you optimizing for? If you’re playing the game of trying to get the maximum downloads of your app, you’ll make the better farting app.”

“We don’t need more things,” he says. “People are more magical than iPads! Your relationships are not media. Your friendships are not media. Love is not media.” In his low-key way, Heiferman is getting worked up.

Evangelizing this view of technology—that it ought to do something meaningful to make our lives more fulfilling and to solve the big problems we face—isn’t as easy as it might seem. Beyond MeetUp itself, Heiferman founded the New York Tech MeetUp, a group of ten thousand software engineers who meet every month to preview new Web sites. At a recent meeting, he made an impassioned plea for the assembled group to focus on solving the problems that matter—education, health care, the environment. It didn’t get a very good reception—in fact, he was just about booed off the stage. “‘We just want to do cool stuff’ was the attitude,” Heiferman told me later. “‘Don’t bother me with this politics stuff.’”

Technodeterminists like to suggest that technology is inherently good. But despite what Kevin Kelly says, technology is no more benevolent than a wrench or a screwdriver. It’s only good when people make it do good things and use it in good ways. Melvin Kranzberg, a professor who studied the history of technology, put it best nearly thirty years ago, and his statement is now known as Kranzberg’s first law: “Technology is neither good nor bad; nor is it neutral.”

For better or worse, programmers and engineers are in a position of remarkable power to shape the future of our society. They can use this power to help solve the big problems of our age—poverty, education, disease—or they can, as Heiferman says, make a better farting app. They’re entitled to do either, of course. But it’s disingenuous to have it both ways—to claim your enterprise is great and good when it suits you and claim you’re a mere sugar-water salesman when it doesn’t.

Actually, building an informed and engaged citizenry—in which people have the tools to help manage not only their own lives but their own communities and societies—is one of the most fascinating and important engineering challenges. Solving it will take a great deal of technical skill mixed with humanistic understanding—a real feat. We need more programmers to go beyond Google’s famous slogan, “Don’t be evil.” We need engineers who will do good.

And we need them soon: If personalization remains on its current trajectory, as the next chapter describes, the near future could be stranger and more problematic than many of us would imagine.
