CHAPTER 2

The Future of Identity, Citizenship and Reporting

In the next decade, the world’s virtual population will outnumber the population of Earth. Practically every person will be represented in multiple ways online, creating vibrant and active communities of interlocking interests that reflect and enrich our world. All of those connections will create massive amounts of data—a data revolution, some call it—and empower citizens in ways never before imagined. Yet despite these advancements, a central and singular caveat exists: The impact of this data revolution will be to strip citizens of much of their control over their personal information in virtual space, and that will have significant consequences in the physical world. This may not be true in every instance or for every user, but on a macro level it will deeply affect and shape our world. The challenge we face as individuals is determining what steps we are willing to take to regain control over our privacy and security.

Today, our online identities affect but rarely overshadow our physical selves. What people do and say on their social-networking profiles can draw praise or scrutiny, but for the most part truly sensitive or personal information stays hidden from public view. Smear campaigns and online feuds typically involve public figures, not ordinary citizens. In the future, our identities in everyday life will come to be defined more and more by our virtual activities and associations. Our highly documented pasts will have an impact on our prospects, and our ability to influence and control how we are perceived by others will decrease dramatically. The potential for someone else to access, share or manipulate parts of our online identities will increase, particularly due to our reliance on cloud-based data storage. (In nontechnical language, cloud computing refers to software hosted on the Internet that the user does not need to closely manage. Storing documents or content “in the cloud” means that data is stored on remote servers rather than on local ones or on a person’s own computer, and it can be accessed by multiple networks and users. With cloud computing, online activities are faster, spread more easily and are better equipped to handle heavy traffic loads.) This vulnerability—both perceived and real—will mandate that technology companies work even harder to earn the trust of their users. If they do not exceed expectations in terms of both privacy and security, the result will be either a backlash or abandonment of their product. The technology industry is already hard at work to find creative ways to mitigate risks, such as through two-factor authentication, which requires you to provide two of the following to access your personal data: something you know (e.g., password), have (e.g., mobile device) and are (e.g., thumbprint). We are also encouraged that many of the world’s best engineers are working on the next set of solutions. And at a minimum, strong encryption will be nearly universally adopted as a better but not perfect solution. (“Encryption” refers to the scrambling of information so that it can be decoded and read only by someone holding the correct key.)
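To make the two-factor idea concrete, here is a minimal sketch, in Python, of the “something you have” component: a time-based one-time code of the kind an authenticator app displays (the approach standardized in RFC 6238). The shared secret and parameters below are illustrative placeholders, not any particular provider’s scheme.

```python
# A minimal time-based one-time password (TOTP) generator, the kind of
# "something you have" factor produced by an authenticator app. The secret
# below is an illustrative placeholder shared between the user and the service.
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval           # changes every 30 seconds
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The service recomputes the same code from its own copy of the secret and
# checks it alongside the password ("something you know").
print(totp("JBSWY3DPEHPK3PXP"))
```

A stolen password alone fails this check; the attacker would also need the device holding the secret, which is the point of the second factor.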

The basics of online identity could also change. Some governments will consider it too risky to have thousands of anonymous, untraceable and unverified citizens—“hidden people”; they’ll want to know who is associated with each online account, and will require verification, at a state level, in order to exert control over the virtual world. Your online identity in the future is unlikely to be a simple Facebook page; instead it will be a constellation of profiles, from every online activity, that will be verified and perhaps even regulated by the government. Imagine all of your accounts—Facebook, Twitter, Skype, Google+, Netflix, New York Times subscription—linked to an “official profile.” Within search results, information tied to verified online profiles will be ranked higher than content without such verification, which will result in most users naturally clicking on the top (verified) results. The true cost of remaining anonymous, then, might be irrelevance; even the most fascinating content, if tied to an anonymous profile, simply won’t be seen because of its excessively low ranking.
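The ranking consequence described above can be illustrated with a toy sketch; the scoring boost, fields and example results here are hypothetical, not any search engine’s actual formula.

```python
# Toy illustration of verification-weighted ranking: results tied to a verified
# "official profile" receive a score boost, so unverified content sinks even
# when it matches the query better. All weights and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float      # 0.0-1.0, from the usual ranking signals
    verified: bool        # tied to a verified official profile?

def rank(results: list[Result], verification_boost: float = 0.5) -> list[Result]:
    def score(r: Result) -> float:
        return r.relevance + (verification_boost if r.verified else 0.0)
    return sorted(results, key=score, reverse=True)

ranked = rank([
    Result("Anonymous blog post", relevance=0.9, verified=False),
    Result("Verified columnist", relevance=0.6, verified=True),
])
print([r.title for r in ranked])   # the verified item outranks the closer match
```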

The shift from having one’s identity shaped off-line and projected online to an identity that is fashioned online and experienced off-line will have implications for citizens, states and companies as they navigate the new digital world. And how people and institutions handle privacy and security concerns in this formative period will determine the new boundaries for citizens everywhere. We want to explore here what full connectivity will mean for citizens in the future, how they will react to it and what consequences it will have for dictators and democrats alike.


The Data Revolution

The data revolution will bring untold benefits to the citizens of the future. They will have unprecedented insight into how other people think, behave and adhere to norms or deviate from them, both at home and in every society in the world. The newfound ability to obtain accurate and verified information online, easily, in native languages and in endless quantity, will usher in an era of critical thinking in societies around the world that before had been culturally isolated. In societies where the physical infrastructure is weak, connectivity will enable people to build businesses, engage in online commerce and interact with their government at an entirely new level.

The future will usher in an unprecedented era of choices and options. While some citizens will attempt to manage their identity by engaging in the minimum amount of virtual participation, others will find the opportunities to participate worth the risk of the exposure they incur. Citizen participation will reach an all-time high as anyone with a mobile handset and access to the Internet will be able to play a part in promoting accountability and transparency. A shopkeeper in Addis Ababa and a precocious teenager in San Salvador will be able to disseminate information about bribes and corruption, report election irregularities and generally hold their governments to account. Video cameras installed in police cars will help keep the police honest, if the camera phones carried by citizens don’t already. In fact, technology will empower people to police the police in a plethora of creative ways never before possible, including through real-time monitoring systems allowing citizens to publicly rate every police officer in their hometown. Commerce, education, health care and the justice system will all become more efficient, transparent and inclusive as major institutions opt in to the digital age.

People who try to perpetuate myths about religion, culture, ethnicity or anything else will struggle to keep their narratives afloat amid a sea of newly informed listeners. With more data, everyone gains a better frame of reference. A Malawian witch doctor might find his community suddenly hostile if enough people find and believe information online that contradicts his authority. Young people in Yemen might confront their tribal elders over the traditional practice of child brides if they determine that the broad consensus of online voices is against it, and that it therefore reflects poorly upon them personally. Or followers of an Indian holy man might find a way to cross-reference his credentials on the Internet, abandoning him if it is revealed that he misled them. While many worry about the phenomenon of confirmation bias (when people, consciously or otherwise, pay attention to sources of information that reinforce their existing worldview) as online sources of information proliferate, a recent Ohio State University study suggests that this effect is weaker than perceived, at least in the American political landscape. In fact, confirmation bias is as much about our responses to information passively received as it is about our tendency to proactively select information sources. So as millions of people come online, we have reason to be optimistic about the social changes ahead.

Governments, too, will find it more difficult to maneuver as their citizens become more connected. Destroying documents, kidnapping, demolishing monuments—restrictive and repressive actions like these will lose much of their functional and symbolic power in the new digital age. Those documents would be recoverable, having been stored in the cloud, and the pressure that an active and globalized Internet community can produce when rallied against injustice will make governments think twice before snatching anyone or detaining him indefinitely. A Taliban-like government would still be able to destroy monuments like the Bamiyan Buddhas, but in the future those monuments will have been scanned with sophisticated technology that preserves every nook and cranny in virtual memory, allowing them to be rebuilt later by hand or with 3-D printers, or even projected as holograms. Perhaps the UNESCO World Heritage Centre will add these practices to its restoration efforts. The structure of Syria’s oldest synagogue, for example, currently in a museum in Damascus, could be projected as a hologram or reconstructed using 3-D printing at its original site in Dura-Europos. What’s true now in most developed countries—the presence of an active civil society keen to fact-check and investigate its government—will be true almost everywhere, aided significantly by the prevalence of cheap and powerful handsets. And on a more basic level, citizens anywhere will be able to compare themselves and their way of life with the rest of the world. Practices widely considered barbaric or backward will seem even more so when seen in that context.


Identity will be the most valuable commodity for citizens in the future, and it will exist primarily online. Online experience will start with birth, or even earlier. Periods of people’s lives will be frozen in time, and easily surfaced for all to see. In response, companies will have to create new tools for control of information, such as lists that would enable people to manage who sees their data. The communication technologies we use today are invasive by design, collecting our photos, comments and friends into giant databases that are searchable and, in the absence of outside regulation, fair game for employers, university admissions personnel and town gossips. We are what we tweet.

Ideally, all people would have the self-awareness to closely manage their online identities and the virtual lives they lead, monitoring and shaping them from an early age so as not to limit their opportunities in life. Of course, this is impossible. For children and adolescents, the incentives to share will always outweigh the vague, distant risks of self-exposure, even with salient examples of the consequences in public view. By the time a man is in his forties, he will have accumulated and stored a comprehensive online narrative, all facts and fictions, every misstep and every triumph, spanning every phase of his life. Even the rumors will live forever.

In deeply conservative societies where social shame is weighed heavily, we could see a kind of “virtual honor killing”—dedicated efforts to ruin a person’s online identity either preemptively (by exposing perceived misdeeds or planting false information) or reactively (by linking his or her online identity to content detailing a crime, real or imagined). Ruined online reputations might not lead to physical violence by the perpetrator, but a young woman facing such accusations could find herself branded with a digital scarlet letter that, thanks to the unfortunate but hard-to-prevent reality of data permanence, she’d never be able to escape. And that public shame could lead one of her family members to kill her.

And what about the role of parents? Being a parent is hard enough, as anyone who has kids knows. While the online world has made it even tougher, it is not a hopeless endeavor. Parents will have the same responsibilities in the future, but they will need to be even more involved if they are going to make sure their children do not make mistakes online that could hurt their physical future. As children live significantly faster lives online than their physical maturity allows, most parents will realize that the most valuable way to help their child is to have the privacy-and-security talk even before the sex talk. The old-fashioned tactic of parents talking to their children will retain enormous value.

School systems will also adapt to play an important role. Parent-teacher associations will advocate for privacy and security classes to be taught alongside sex-education classes in their children’s schools. Such classes will teach students to optimize their privacy-and-security settings and train them to become well versed in the dos and don’ts of the virtual world. And teachers will frighten them with real-life stories of what happens if they don’t take control of their privacy and security at an early age.

Certainly some parents will try to game the system as well with more algorithmic solutions that may or may not have an effect. The process of naming a child offers one such example. As the functional value of online identity increases, parental supervision will play a critical role in the early stages of life, beginning with a child’s name. Steven D. Levitt and Stephen J. Dubner, the authors of the popular economics book Freakonomics, famously dissected how ethnically popular names (specifically, names common in African-American communities) can be an indicator of children’s chances for success in life. Looking ahead, parents will also consider how online search rankings will affect their child’s future. The truly strategic will go beyond reserving social-networking profiles and buying domain names (e.g., www.JohnDavidSmith.com), and instead select names that affect how easy or hard it will be to find their children online. Some parents will deliberately choose unique names or unusually spelled traditional names so that their children have an edge in search results, making them easy to locate and promotable online without much direct competition. Others will go the opposite route, choosing basic and popular names that allow their children to live in an online world with some degree of shelter from Internet indexes—just one more “Jane Jones” among thousands of similar entries.

We’ll also see a proliferation of businesses that cater to privacy and reputation concerns. This industry exists already, with companies like Reputation.com using a range of proactive and reactive tactics to remove or dilute unwanted content from the Internet.1 During the 2008 economic crash, it was reported that several Wall Street bankers hired online reputation companies to minimize their appearance online, paying up to $10,000 per month for the service. In the future, this industry will diversify as the demand explodes, with identity managers becoming as common as stockbrokers and financial planners. Active management of one’s online presence—say, by receiving quarterly reports from your identity manager tracking the changing shape of your online identity—will become the new normal for the prominent and those who aspire to be prominent.

A new realm of insurance will emerge, too. Companies will offer to insure your online identity against theft and hacking, fraudulent accusations, misuse or appropriation. For example, parents may take out an insurance policy against reputational damage caused by what their children do online. Perhaps a teacher will take out an insurance policy that covers her against a student hacking into her Facebook account and changing details of her online profile to embarrass or defame her. We have identity-theft protection companies today; in the future, insurance companies will offer customers protection against very specific misuses. Any number of people could be attracted to such an insurance policy, from the genuinely in need to the generally paranoid.

Online identity will become such a powerful currency that we will even see the rise of a new black market where people can buy real or invented identities. Citizens and criminals alike will be attracted to such a network, since the false identity that could provide cover for a known drug smuggler could also shelter a political dissident. The identity will be manufactured or stolen, and it will come complete with backdated entries and IP (Internet protocol) activity logs, false friends and purchase histories, and other means of making it appear convincing. If a Mexican whistle-blower’s family needed to flee the violence of Ciudad Juárez and feared cartel retribution, a set of fake online identities would certainly help cover their tracks and provide them with a clean slate.

Naturally, this kind of escape route is a high-risk endeavor in the digital age: Embarking on a new life would require total disconnection from previous ties, because even the smallest gesture (like a search query for a relative) could give away a person’s position. Furthermore, anyone assuming a false identity would need to avoid all places with facial-recognition technology lest a scan of his or her face flag an earlier profile. And there would be no dark alleyways in this illicit market, either: All identities could be purchased over an encrypted connection between mutually anonymous parties, paid for with difficult-to-trace virtual currency. Brokers and buyers in this exchange would face risks similar to those black marketeers face today, including undercover agents and dishonest dealings (perhaps made all the more likely by the anonymous nature of these virtual-world transactions).


Some people will cheer for the end of control that connectivity and data-rich environments engender. They are the people who believe that information wants to be free,2 and that greater transparency in all things will bring about a more just, safe and free world. For a time, WikiLeaks’ cofounder Julian Assange was the world’s most visible ambassador for this cause, but supporters of WikiLeaks and the values it champions come in all stripes, including right-wing libertarians, far-left liberals and apolitical technology enthusiasts. While they don’t always agree on tactics, to them, data permanence is a fail-safe for society. Despite some of the known negative consequences of this movement (threats to individual security, ruined reputations and diplomatic chaos), some free-information activists believe the absence of a delete button ultimately strengthens humanity’s progress toward greater equality, productivity and self-determination. We believe, however, that this is a dangerous model, especially given that there is always going to be someone with bad judgment who releases information that will get people killed. This is why governments have systems and valuable regulations in place that, while imperfect, should continue to govern who gets to make the decision about what is classified and what is not.

We spoke with Assange in June 2011, while he was under house arrest in the United Kingdom. Our above-mentioned position aside, we must account for what free-information activists may try to do in the future, and therefore, Assange is a useful starting point. We will not revisit the ongoing debates of today (about which there are already many books and articles), which focus largely on the Western reaction to WikiLeaks, the contents of the cables that have been leaked, how destructive the leaks were and what punishments should await those involved in such activities. Instead, our interest is in the future and what the next phase of free-information movements—beginning with, but not restricted to, the Assange types—may try to achieve or destroy. Over the course of the interview, Assange shared his two basic arguments on this subject, which are related: First, our human civilization is built upon our complete intellectual record; thus the record should be as large as possible to shape our own time and inform future generations. Second, because different actors will always try to destroy or otherwise cover up parts of that shared history out of self-interest, it should be the goal of everyone who seeks and values truth to get as much as possible into the record, to prevent deletions from it, and then to make this record as accessible and searchable as possible for people everywhere.

Assange’s is not a war on secrecy, per se—“There are all sorts of reasons why non-powerful organizations engage in secrecy,” he told us, “and in my view it’s legitimate; they need it because they’re powerless”—but instead it is a fight against the secrecy that shields actions not in the public’s interest. “Why are powerful organizations engaged in secrecy?” he asked rhetorically. The answer he offered is that the plans they have would be opposed if made public, so secrecy floats them to the implementation stage, at which point it’s too late to alter the course effectively. Organizations whose plans won’t incur public opposition don’t carry that burden, so they don’t need to be secretive, he added. As these two types of organizations battle, the one with genuine public support will eventually come out on top, Assange said. Releasing information, then, “is positive to those engaged in acts which the public supports and negative to those engaged in acts the public doesn’t support.”

As to the charge that those secretive organizations can simply take their operations off-line and avoid unwelcome disclosure, Assange is confident in his movement’s ability to prevent this. Not a possibility, he said; serious organizations will always leave a paper trail. By definition, he explained, “systematic injustice is going to have to involve a lot of people.” Not every participant will have full access to the plans, but each will have to know something in order to do his job. “If you take your information off paper, if you take it outside the electronic or physical paper trail, institutions decay,” he said. “That’s why all organizations have rigorous paper trails for the instructions from the leadership.” Paper trails ensure that instructions are carried out properly; therefore, as Assange said, “if they internally balkanize so that information can’t be leaked, there’s a tremendous cost to the organizational efficiency of doing that.” And inefficient organizations mean less powerful ones.

Openness, on the other hand, introduces new challenges for this movement of truth-seekers, from Assange’s perspective. “When things become more open, then they start to become more complex, because people start hiding what they’re doing—their bad behavior—through complexity,” he said. He pointed to bureaucratic doublespeak and the offshore financial sector as clear examples. These systems are technically open, he said, but in fact are impenetrable; they are hard to attack but even harder to use efficiently. Obfuscation at this level, where the complexity is legal but still covering something up, is a much more difficult problem to solve than straightforward censorship.

Unfortunately, people like Assange and organizations like WikiLeaks will be well placed to take advantage of some of the changes in the next decade. And even supporters of their work are faced with difficult questions about the methods and implications of online disclosures, particularly as we look beyond the case study of WikiLeaks and into the future. One of the most difficult is the question of discretionary power: Who gets to decide what information is suitable for release, and what must be redacted, even temporarily? Why is it Julian Assange, specifically, who gets to decide what information is relevant to the public interest? And what happens if the person who makes such decisions is willing to accept indisputable harm to innocents as a consequence of his disclosures? Most people would agree that some level of supervision is necessary for any whistle-blowing platforms to serve a positive role in society, but there is no guarantee that supervision will be there (a glance at the recklessness of hackers3 who publish others’ personal information online in bulk confirms this).

If there is a central body facilitating the release of information, someone or some group of people, with their own ideas and biases, must be making those decisions. So long as humans, and not computers, are running things in our world, we will face these questions of judgment, no matter how transparent or technically sound the platforms are.

Looking ahead, some people might assume that the growth of connectivity around the world will spur a proliferation of WikiLeaks-like platforms. With more users and more classified or confidential information online, the argument goes, dozens of smaller secret-publishing platforms will emerge to meet the increase in supply and demand. A compelling and frightening idea, but wrong. There are natural barriers to growth in the field of whistle-blowing websites, including exogenous factors that limit the number of platforms that can successfully coexist. Regardless of what one thinks of WikiLeaks, consider all the things it needed in order to become a known, global brand: more than one geopolitically relevant large-scale leak to grab international attention; a track record of leaks to show commitment to the cause, to generate public trust and to give incentives to other potential leakers by demonstrating WikiLeaks’ ability to protect them; a charismatic figurehead who could embody the organization and serve as its lightning rod, as Assange called himself; a constant upload of new leaks (often in bulk) to remain relevant in the public eye; and, not least, a broadly distributed and technically sophisticated digital platform for leakers, organization staff and the public to handle the leaked materials (while all remaining anonymous to one another) that could evade shutdown by authorities in multiple countries. It is very difficult to build such an intricate and responsive system, both technically and because the value of most components depends on the capabilities of others. (What good is a sophisticated platform without motivated leakers, or a set of valuable secrets without the system to discreetly process and disseminate them?) The balance struck by WikiLeaks between public interest, private disclosure and technical protections took years to reach, so it is hard to imagine future upstarts, offshoots or rivals building an equivalent platform and brand much faster than WikiLeaks did—particularly now that authorities around the world are attuned to the threat such organizations pose.

Moreover, even if new organizations managed to build such platforms, it is highly unlikely that the world could support more than a handful at any given time. There are a few reasons for this. First, even the juiciest disclosures require a subsequent media cycle in order to have impact. If the landscape of secret-spilling websites became too decentralized, media outlets would find it difficult to keep track of these sites and their leaks, and to gauge their trustworthiness as sources. Second, leakers will naturally coalesce around organizations that they believe will generate maximum impact for their disclosures while providing them with the maximum amount of protection. These websites can compete for leakers, with promises of ever better publicity and anonymity, but it’s only logical that a potential whistle-blower would look for successful examples and follow the lead of other leakers before him. What source would risk his chance, even his life, on an untested group? And organizations that cannot consistently attract high-level leaks will lose attention and funding, slowly but surely atrophying in the process. Assange described this dynamic from his organization’s perspective as a positive one, providing a check on WikiLeaks as surely as it kept them in business. “Sources speak with their feet,” he said. “We’re disciplined by market forces.”

Regionality may determine the future of whistle-blowing websites more than anything else. Governments and corporations in the West are, for the most part, now wise to the risks that lax cybersecurity invites, and though their systems are by no means impenetrable, significant resources are being invested in both the public and the private sector to better protect records, user data and infrastructure. The same is not true for most developing countries, and we can expect that as these populations come online in the next decade, some will experience their own version of the WikiLeaks phenomenon: sources with access to newly digitized records and the incentive to leak sensitive materials to cause a political impact. The ensuing storms may be limited to a particular country or region, but they will nonetheless be disruptive and significant for the environments they touch. They may even catalyze a physical revolution or riot. We should also expect government authorities to deploy against such sites the same tactics used against WikiLeaks (even if the organizations and their servers are based elsewhere): filtering, direct attacks, financial blockades and legal prosecution.

Eventually, though, the technology used by these platforms will be so sophisticated that they will be effectively unblockable. When WikiLeaks lost its principal website URL, WikiLeaks.org, due to a series of distributed denial-of-service (DDoS) attacks and the pullout of its Internet service provider (which hosted the site) in 2010, its supporters immediately set up more than a thousand “mirror” sites (copies of the original site hosted at remote locations), with URLs like WikiLeaks.fi (in Finland), WikiLeaks.ca (in Canada) and WikiLeaks.info. (In a DDoS attack, a large number of compromised computer systems attack a single target, overloading the system with information requests and causing it to shut down, denying service to legitimate users.) Because WikiLeaks was designed as a distributed system—meaning its operations were distributed across many different computers, instead of concentrated in one centralized hub—shutting down the platform was much more difficult than it seemed to most laymen. Future whistle-blowing websites will surely move beyond mirror sites and use new methods to replicate and obfuscate their operations to shield themselves from authorities. One way to accomplish this would be to create a storage system where fragments of files are copied and distributed in such a way that if one file directory is shut down, the files can be reassembled from those fragments. These platforms will develop new ways to ensure anonymous submission for potential leakers; WikiLeaks constantly updated its submission methods, warning users to avoid earlier cryptographic routes—among them SSL, or secure sockets layer, and hidden services on the highly encrypted Tor network—once they had determined that those were insufficiently secure.
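As a rough illustration of the fragment-and-reassemble idea, the following sketch splits a file into chunks, replicates each chunk across several simulated hosts and rebuilds the file even when one host is taken offline. The host names, chunk size and replication factor are invented for illustration; a real system would add encryption, authentication and a far more resilient index.

```python
# Sketch of fragment-and-replicate storage: split a file into chunks, copy each
# chunk to several simulated hosts, and reassemble from whichever replicas
# survive. Host names and parameters are hypothetical.
import hashlib

HOSTS = ["mirror-a.example", "mirror-b.example", "mirror-c.example"]
CHUNK_SIZE = 4
REPLICAS = 2

def scatter(data: bytes):
    """Split data into chunks and copy each chunk onto REPLICAS hosts."""
    storage = {host: {} for host in HOSTS}        # host -> {chunk index: bytes}
    manifest = []                                 # ordered list of chunk hashes
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for index, chunk in enumerate(chunks):
        manifest.append(hashlib.sha256(chunk).hexdigest())
        for r in range(REPLICAS):                 # round-robin replica placement
            storage[HOSTS[(index + r) % len(HOSTS)]][index] = chunk
    return storage, manifest

def reassemble(storage, manifest, down_hosts=frozenset()):
    """Rebuild the file from whichever replicas remain reachable."""
    pieces = []
    for index, expected_hash in enumerate(manifest):
        chunk = next((storage[h][index] for h in HOSTS
                      if h not in down_hosts and index in storage[h]), None)
        if chunk is None or hashlib.sha256(chunk).hexdigest() != expected_hash:
            raise RuntimeError(f"chunk {index} lost or corrupted")
        pieces.append(chunk)
    return b"".join(pieces)

storage, manifest = scatter(b"leaked-document-contents")
print(reassemble(storage, manifest, down_hosts={"mirror-b.example"}))
```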

And what of the individuals leading this charge? The Assanges of the world will still exist in the future, but their support bases will remain small. The more welcome whistle-blowers of the future will be the ones who follow the example of people like Alexei Navalny, a Russian blogger and anticorruption activist who enjoys considerable sympathy in the West. Disillusioned with Russia’s liberal opposition parties, Navalny, a real-estate lawyer, started his own blog dedicated to exposing corruption in major Russian companies, initially supplying the disclosures himself by taking small stakes in the businesses and invoking shareholder rights to force them to share information. He later crowd-sourced his approach, instructing supporters to try to do the same, with some success. Eventually, his blog grew into a full-blown secret-spilling platform, where visitors were encouraged to donate toward its operating costs via PayPal. Navalny’s profile grew as his collection of scoops swelled, most notably with a set of leaked documents that revealed the misuse of $4 billion at the state-owned oil pipeline company Transneft in 2010. By late 2011, Navalny’s public stature placed him at the center of preelection protests, and his nickname for Vladimir Putin’s United Russia party, the “Party of Crooks and Thieves,” had gone viral, adopted widely throughout the country.

Navalny’s approach, at least in the beginning of his new activism, was distinctive in that for all his zeal he had not turned the focus of his whistle-blowing operation toward Putin himself. His targets had largely been commercial, although given that the Russian public and private sector are not always easily distinguished, the information implicated some government officials as well. Moreover, despite the harassment he experienced—he had been arrested, imprisoned, spied on and investigated for embezzlement—he remained free for years. His critics may have called him a liar, a hypocrite or a CIA stooge, but Navalny remained in Russia (unlike so many other high-profile Kremlin opponents) and his blog was not censored.

Some think Navalny did not constitute much of a threat to the Kremlin; his name recognition among Russians remained quite low, though his supporters argue that such figures merely reflect low Internet penetration across the country and the success of state media censorship (Navalny was banned from appearing on state-run television). But a more interesting theory is that, for a time at least, Navalny found a way to toe the line as an anticorruption activist, knowing what to leak—and from whom—and what areas to avoid. Unlike prominent Putin critics such as the jailed billionaire Mikhail Khodorkovsky and the self-exiled oligarch Boris Berezovsky, Navalny seems to have found a way to challenge the Kremlin, while fighting corruption, without veering into overly sensitive areas that might place him in grave danger. (Aside from a badly doctored photograph that appeared in a pro-Kremlin newspaper showing Navalny laughing with Berezovsky, there is little to suggest he had any ties to those critics.) His presence seemed to be tolerated by the Russian government until July 2012, when it deployed all available tools to discredit him, formally charging him with embezzlement in a case concerning a state-owned timber business in the Kirov region, where he had formerly worked as an advisor to the governor. The charges, carrying a maximum sentence of ten years in prison, reflected how much of a threat the resilient antigovernment protest movement had become. The world will continue to watch the trajectories of figures like Navalny to see whether this approach provides some measure of insulation from attack for digital activists.

There is also the frightening possibility that sites will emerge created by people who share the design and scale of these whistle-blower platforms but not their motivations. Rather than functioning as a clearinghouse for whistle-blowers, such platforms would serve as hosts to all manner of pilfered digital content—leaked active military operations, hacked bank accounts, stolen passwords and home addresses—without any particular agenda beyond anarchy. Operators of these sites would not be ideologues or political activists; they would be agents of chaos. Today, hackers and information criminals publish their ill-gotten gains fairly indiscriminately—the 150,000 Sony customer records released by the hacker group LulzSec in 2011 were simply made downloadable as a file through a peer-to-peer file-sharing service—but in the future, if a centralized platform emerged that offered them WikiLeaks-level security and publicity, it would present a real problem. Redaction, verification and other precautionary measures taken by WikiLeaks and its media partners would surely not be performed on these unregulated sites (indeed, Assange told us he redacted only to reduce the international pressure that was financially strangling him and said he would have preferred no redactions), and lack of judgment around sensitive materials might well get people killed. Information criminals would almost certainly traffic in bulk leaks in order to cause maximum disruption. To some extent, leaking selectively reflects purpose while releasing material in bulk is effectively thumbing one’s nose at the entire system of secure information.

But context matters, too. How different would the reaction have been, from Western governments in particular, if WikiLeaks had published stolen classified documents from the regimes in Venezuela, North Korea and Iran? If Bradley Manning, the alleged source of WikiLeaks’ materials about the United States government and military, had been a North Korean border guard or a defector from Iran’s Revolutionary Guard Corps, how differently would politicians and pundits in the United States have viewed him? Were a string of whistle-blowing websites dedicated to exposing abuses within those countries to appear, surely the tone of the Western political class would shift. Taking into account the precedent President Barack Obama set in his first term in office—a clear “zero tolerance” approach toward unauthorized leaks of classified information from U.S. officials—we would expect that future Western governments would ultimately adopt a dissonant posture toward digital disclosures, encouraging them abroad in adversarial countries, but prosecuting them ferociously at home.


The Reporting Crisis

Where we get our information and what sources we trust will have a profound impact on our future identities. What’s in store for the news in the Internet era is well-covered ground, and the battles we see today over monetization strategies and content syndication will continue to play out in the coming decade. But as technology lowers entry barriers in every industry, how will the media landscape as we know it today change?

It is manifestly clear that mainstream media outlets will increasingly find themselves a step behind in the reporting of news worldwide. These organizations simply cannot move quickly enough in a connected age, no matter how talented their reporters and stringers are or how many sources they have. Instead, the world’s breaking news will continually come from platforms like Twitter: open networks that facilitate information-sharing instantly, widely and in accessible packages. If everyone in the world has a data-enabled phone or access to one—a not-so-distant reality—then the ability to “break news” will be left to luck and chance, as one civilian in Abbottabad, Pakistan, discovered after he unwittingly live-tweeted the covert raid that killed Osama bin Laden.4

Eventually, this lag time—before the mainstream media can get the story—will alter the nature of audiences’ loyalty, as readers and viewers seek more immediate methods of information delivery. Every future generation will be able to produce and consume more information than the previous one, and people will have little patience or use for media that cannot keep up. The loyalty that audiences retain will derive from the analysis and perspective these outlets offer, and, most critically, the trust they have in these institutions. These audiences will trust the credibility of the information, the accuracy of the analysis and the prioritization of news stories. In other words, some people will split their loyalty between new platforms for breaking news and established media organizations for the rest of the story.

News organizations will remain an important and integral part of society in a number of ways, but many outlets will not survive in their current form—and those that do survive will have adjusted their goals, methods and organizational structure to meet the changing demands of the new global public. As language barriers break down and cell towers rise, there will be no end to the number of new voices, potential sources, citizen journalists and amateur photographers looking to contribute. This is good: With so many news outlets scaling back their operations, particularly their international footprint, such outside contributors will be needed. The global audience benefits as well, through exposure to a greater range of issues and perspectives. The effect of having so many new actors involved, connected through a range of online platforms into the great, diffuse media system, is that major media outlets will report less and validate more.

Reporting duties will become more widely distributed than they are today, which will expand the scope of coverage but probably reduce the quality on a net level. The role of the mainstream media will primarily become one of an aggregator, custodian and verifier, a credibility filter that sifts through all of this data and highlights what is and is not worth reading, understanding and trusting. Particularly for the elite—the business leaders, policymakers and intellectuals who rely on established media—validation will be critical, as will the media’s ability to provide cogent analysis. In fact, the elite will probably rely more on established news organizations simply because of the massive swell of low-grade reporting and information in the system. Twitter can no more produce analysis than a monkey can type out a work of Shakespeare (although a heated Twitter exchange between two smart, credible people can come close); the strength of open, unregulated information-sharing platforms is their responsiveness, not their insight or depth.

Mainstream media outlets will have to find ways to integrate all of the new global voices they can now reach, a challenging but necessary task. Ideally, the business of journalism will become less extractive and more collaborative; in a story about rising sea levels in Bangkok, instead of just quoting a Thai river-cruise operator, the newspaper would link its article to the man’s own news platform or personal live stream. Of course, the chance for error increases with the inclusion of new, untrained voices—many respected journalists today believe that a full-bodied embrace of citizen journalism is detrimental to the field, and their concerns are not unwarranted.

Global connectivity will introduce entirely new contributors to the supply chain. One new subcategory to emerge will be a network of local technical encryption specialists, who deal exclusively in encryption keys. Their value to journalists would lie not in content or sources but in providing the confidentiality mechanisms that keep communication between the parties secure. Dissidents in repressive countries—for example, today’s Belarus and Zimbabwe—will always be more willing to share their stories if they know they can do so safely and anonymously. Many people could potentially offer this technology, but local encryption specialists will be highly valued because trust is important. This is not too different from what we see throughout the Middle East today, where virtual private network (VPN) dealers roam busy marketplaces, along with other traders of illicit goods, selling dissidents and rebellious youth the access they need to connect their devices to a secure network. Media organizations that cover international issues will rely on these scrappy young VPN and encryption dealers as they now rely on foreign stringers to build their news coverage.

A new type of stringer will evolve as well. The conventional stringer today is an uncredited journalist whom newspapers pay to report, often from a foreign or unstable country. Stringers risk their lives to gain access to certain sources or visit dangerous places, taking these risks because professional reporters cannot or will not go there. An additional category of stringer may well emerge: men and women who deal exclusively in digital content and online sources. Instead of braving dangers on the ground, they’ll take advantage of rising global connectivity to find, engage and extract information from sources they know only online. They would connect journalists with sources, as stringers do today. Obviously, given the additional layer of distance and obfuscation the virtual world presents, media outlets would have to exercise even greater caution than they usually do with regard to embellishment, validation of sources and ethics.


Imagine celebrities in the future starting their own news portal online about a particular ethnic conflict that they care deeply about. Perhaps they believe that the mainstream media isn’t doing enough to publicize it or that it has gotten the narrative wrong. They decide to cut out the traditional middlemen and deliver stories directly to the public; let’s call it Brangelina news. They hire their own people to work in the conflict zone, and they provide daily reports that their staff at home shape into news articles to publish on their platform. Their overhead would be low, certainly lower than that of major news outlets, and they might not even need to compensate reporters and stringers, some of whom would work for free in exchange for the visibility. In short order, they become the ultimate source of information and news on the conflict because they are both highly visible and credible enough in their work to be taken seriously.

Mainstream media outlets will find such new serious competitors in the future—not just tweeters and amateur onsite observers—and that will complicate the media environment in this period. As we said, many will still favor and support the established news organizations, out of loyalty and trust in the institutions, and the serious work of journalism—the investigative reporting, the high-level interviews, the prescient contextualization of complicated events—will remain in the domain of the mainstream media. But for others, the diversification of content sources will represent a choice between a serious outlet and a “celebrity” outlet, and the seemingly insatiable appetite for tabloid-like content (in the United States, the U.K. and elsewhere) suggests that many consumers will probably choose the celebrity one. Visibility, not consistency or strength of content, will drive the popularity of such publishers.

Just as they do today with charities and business ventures, celebrities will look to starting their own media outlet as a logical extension of their “brand.” (We are using as broad a definition of “celebrity” as possible here: We mean all highly visible public figures, which today could mean anyone from reality-TV stars to famous evangelical preachers.) To be sure, some of these new outlets will be solid attempts to contribute to public discourse, but many will be vapid and nearly content free, merely exercises in self-promotion and commercialized fame.

We will see a period in which people flock to these new celebrity outlets for their novelty value and to be part of a trend. Those that stay won’t mind that the content and professionalism are a few notches below those of established media organizations. Media critics will decry these changes and lament the death of journalism, but this will be premature, because once the audience shifts, so too will the burden of reporting. If a celebrity outlet doesn’t provide enough news, or consistently makes errors that are publicly exposed, the audience will leave. Loyalties are fickle when it comes to media, and this will only become truer as the field grows more crowded. If enough celebrity outlets lose the faith and trust of their audience, the resulting exodus will lead back to the professional media outlets, which will have undergone their own transformations (more aggregation, wider scope, faster response time) in the interim. Not all who left will return, just as not all who take issue with the mainstream media will jettison familiar information sources for new and trendy ones. Ultimately, it remains to be seen just how much impact these new celebrity competitors will have on the media landscape in the long term, but their emergence as players in the game of accruing viewers, readers and advertisers will undoubtedly cause a stir.


Expanded connectivity promises more than just challenges for media outlets; it offers new possibilities for the role of media more generally, particularly in countries where the press is not free. One reason that corrupt officials, powerful criminals and other malevolent forces in a society can continue to operate without fear of prosecution is that they control local information sources, either directly as owners and publishers or indirectly through harassment, bribery, intimidation or violence. This is as true in countries with largely state-owned media, like Russia, as it is in those where criminal syndicates hold enormous power and territory, like Mexico. The result—the lack of an independent press—reduces both accountability and the risk that public knowledge of misdeeds will lead to pressure and the political will to prosecute.

Connectivity can help upend such a power imbalance in a number of ways, and one of the most interesting concerns digital encryption and what it will enable underground or at-risk media organizations to do. Imagine an international NGO whose mission is to facilitate confidential reporting from places where it is difficult or dangerous to be a journalist. What differentiates this organization from others today, like watchdog groups and nonprofit media patrons, is the encrypted platform it builds and deploys to be used by media inside these countries. The platform’s design is novel yet surprisingly simple. In order to protect the identities of journalists (who are the most exposed in the chain of reporting), every reporter for a given outlet is registered in the system with a unique code. Their names, mobile numbers and other identifiable details are encrypted behind this code, and the only people able to decrypt that information are key individuals at the NGO headquarters (not anyone at the news outlet), which, crucially, is based outside of the country. Inside the country, reporters are known only by this unique code—they use it to file stories and interact with their sources and local editors. As a result, if, for example, a journalist reports on an election irregularity in Venezuela (as many did during the October 2012 presidential election, although not anonymously), those charged with carrying out the president’s dirty work have no way of knowing whom to target because they can’t access the reporter’s information, nor does anyone the reporter dealt with know who he or she really is. Media outlets don’t maintain formal physical offices, since those could be targeted. Outlets necessarily have to vet their reporters initially, but after a journalist is introduced into the system, he is switched to a new editor (who has not met him) and his personal details evaporate into the platform.

The NGO outside of the country operates this platform from a safe distance, allowing the various participants to interact safely through a veil of encryption. Treating reporters in the same way as confidential sources (protecting identities, preserving content) is not itself a new idea, but the ability to encrypt that identifiable data, and use an online platform to facilitate anonymous news-gathering, is only becoming possible now. The stories and other sensitive materials that journalists uncover can easily be stored in servers outside the country (someplace where there are strong legal protections around data), further limiting the exposure of those inside. Initially, perhaps this NGO would release its platform as a free product and operate it for different news outlets, financed by third-party donations. Eventually the NGO might take all of the working platforms and federate them, building a super-platform composed of unidentifiable journalists from countries around the world. While we certainly do not advocate a popular shift toward anonymity, we assume in this case that the security situation is so dire and the society so repressive that the move is an act of desperation and necessity. An editor in New York would be able to log in, search for a reporter in Ukraine and find someone with a track record of published stories and even snippets from former colleagues. Without even knowing the journalist’s name, the editor could rely on the available stories and the trust he has in this platform to decide whether to work with him. He could request an encrypted call with the reporter, also possible through the platform, to begin building a relationship.
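Here is a minimal sketch of the identity scheme such a platform implies, assuming an off-the-shelf symmetric-encryption recipe (Fernet, from the Python cryptography package). The reporter record, field names and code format are invented for illustration.

```python
# Minimal sketch of the scheme described above: a reporter is known in-country
# only by a random code; the personal details behind that code are encrypted
# with a key held solely at the NGO's headquarters abroad. The record fields
# and code format are invented for illustration.
import json
import secrets
from cryptography.fernet import Fernet  # pip install cryptography

HQ_KEY = Fernet.generate_key()          # never leaves the NGO headquarters
hq_vault = Fernet(HQ_KEY)

registry = {}                           # code -> encrypted identity blob

def register_reporter(name: str, mobile: str) -> str:
    """Enroll a reporter and return the only identifier used in-country."""
    code = "R-" + secrets.token_hex(4)
    identity = json.dumps({"name": name, "mobile": mobile}).encode()
    registry[code] = hq_vault.encrypt(identity)
    return code

def reveal_identity(code: str) -> dict:
    """Possible only with the headquarters key, held outside the country."""
    return json.loads(hq_vault.decrypt(registry[code]))

code = register_reporter("Jane Doe", "+58-555-0100")
print(code)                      # editors and sources see only this code
print(reveal_identity(code))     # requires HQ_KEY
```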

This kind of disaggregated, mutually anonymous news-gathering system would not be difficult to build or maintain, and because the personal details of journalists (as well as their editors) are encrypted and their reporting is stored on remote servers, those who stand to lose as a more independent press emerges will find themselves increasingly immobilized. How does one retaliate against a digital platform, particularly in an age when everyone can read the news on their mobile devices? Connectivity is relatively low in many places that lack free media today, but as that changes, the reach of local reporting on sensitive matters will be even wider—international, in fact. These two trends—safer reporting backed by encryption and a wider readership due to gains in connectivity—ensure that even if a country’s legal system is too corrupt or inept to properly prosecute bad actors, they can be publicly tried online through the media. Warlords operating in eastern Congo may not all be hauled into the International Criminal Court, but their lives will become more unpleasant if their every deed is captured and chronicled by unidentifiable and unreachable journalists, and the stories written about them travel to the far ends of the online world. At a minimum, other criminals who might otherwise do business with them will be deterred by their digital radioactivity, meaning they are too visible and under too much public scrutiny to be desirable business partners.


Privacy Revisited—Different Implications for Different Citizens

Security and privacy are a shared responsibility between companies, users and the institutions around us. Companies like Google, Apple, Amazon and Facebook are expected to safeguard data, prevent their systems from being hacked into and provide the most effective tools for users to maximize control of their privacy and security. But it is up to users to leverage these tools. Each day you choose not to use them, you lose some privacy and security as the data keeps piling up. And you cannot assume there is a simple delete button. The option to “delete” data is largely an illusion—lost files, deleted e-mails and erased text messages can be recovered with minimal effort. Data is rarely erased on computers; operating systems tend to remove only a file’s listing from the internal directory, keeping the file’s contents in place until the space is needed for other things. (And even after a file has been overwritten, it’s still occasionally possible to recover parts of the original content due to the magnetic properties of disk storage. Computer experts call this problem “data remanence.”) Cloud computing only reinforces the permanence of information, adding another layer of remote protection for users and their data.
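The point about deletion can be made concrete with a short sketch: removing a file merely unlinks its directory entry, while a best-effort scrub overwrites the bytes first. Even the scrub is hedged, since solid-state drives, journaling file systems and cloud copies can preserve data the operating system no longer shows.

```python
# Why "delete" is mostly an illusion: os.remove() only unlinks the directory
# entry, leaving the file's bytes on disk until they happen to be overwritten.
# A best-effort scrub overwrites the contents first, though on SSDs, journaling
# file systems and cloud-synced folders even this is not a guarantee.
import os

def unlink_only(path: str) -> None:
    os.remove(path)                       # listing gone; contents may linger

def best_effort_scrub(path: str, passes: int = 1) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))     # overwrite the original bytes
            f.flush()
            os.fsync(f.fileno())          # push the overwrite to the device
    os.remove(path)

with open("draft.txt", "wb") as f:
    f.write(b"something you may not want recovered")
best_effort_scrub("draft.txt")
```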

Such mechanisms of retention were designed to save us from our own carelessness when operating computers. In the future, people will increasingly trust cloud storage over their own physical machinery—much as they trust a bank’s ATM over cash kept at home—placing their faith in companies to store some of their most sensitive information and avoiding the risks of hard-drive crashes, computer theft or document loss. This multilayer backup system will make online interactions more efficient and productive, not to mention less emotionally fraught.

Near-permanent data storage will have a big impact on how citizens operate in virtual space. There will be a record of all activity and associations online, and everything added to the Internet will become part of a repository of permanent information. The possibility that one’s personal content will be published and become known one day—either by mistake or through criminal interference—will always exist. People will be held responsible for their virtual associations, past and present, which raises the risk for nearly everyone since people’s online networks tend to be larger and more diffuse than their physical ones. The good and bad behavior of those they know will affect them positively or negatively. (And no, stricter privacy settings on social-networking sites will not suffice.)

This will be the first generation of humans to have an indelible record. Colleagues of Richard Nixon may have been able to erase those eighteen and a half minutes of a tape recording regarding the Watergate break-in and cover-up, but today’s American president faces a permanent record of every e-mail sent from his BlackBerry, accessible to the public under the Presidential Records Act.

Since information wants to be free, don’t write anything down you don’t want read back to you in court or printed on the front page of a newspaper, as the saying goes. In the future this adage will broaden to include not just what you say and write, but the websites you visit, whom you include in your online network, what you “like,” and what others who are connected to you do, say and share.

People will become obsessively concerned about where personal information is stored. A wave of businesses and start-ups will emerge promising solutions, from present-day applications such as Snapchat, which deletes a photo or message seconds after it is viewed, to more creative solutions that add both a layer of encryption and a shorter countdown. At best, such solutions will only mitigate the risk of private information being released more broadly. Part of this is due to counter-innovations such as apps that automatically take a screenshot of every message and photo received, faster than your brain can instruct your fingers to command your device. More fundamentally, attempts to keep personal information private will always run up against the “analog hole”: for information to be consumed, it must eventually be displayed or played back, and at that point it can be captured again. As long as this holds true, there will always be the risk of someone taking a screenshot or otherwise copying and spreading the content.
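
As a toy illustration, assuming a shared key exchanged out of band and a hypothetical ten-second countdown, the sketch below shows how an “expiring” message might work and why it cannot defeat the analog hole: once the text is on screen, nothing prevents a screenshot or a second camera.

```python
# A toy "expiring message," for illustration only. The xor_cipher below is a
# placeholder, not real encryption, and the key is assumed to be shared out of
# band. The real limitation is the analog hole: once view() returns the text
# and it is displayed, any screenshot tool or second camera can capture it.

import time
from dataclasses import dataclass

@dataclass
class EphemeralMessage:
    ciphertext: bytes
    expires_at: float   # Unix time after which the app refuses to display it

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(text: str, key: bytes, ttl_seconds: int = 10) -> EphemeralMessage:
    return EphemeralMessage(xor_cipher(text.encode(), key), time.time() + ttl_seconds)

def view(msg: EphemeralMessage, key: bytes) -> str:
    if time.time() > msg.expires_at:
        return "[message expired]"
    return xor_cipher(msg.ciphertext, key).decode()

key = b"shared-secret"               # hypothetical pre-shared key
msg = send("meet at eight", key)
print(view(msg, key))                # readable now, and capturable by a screenshot
```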

If we are on the web we are publishing and we run the risk of becoming public figures—it’s only a question of how many people are paying attention, and why. Individuals will still have some discretion over what they share from their devices, but it will be impossible to control what others capture and share. In February 2012, a young Saudi newspaper columnist named Hamza Kashgari posted an imaginary conversation with the Prophet Muhammad on his personal Twitter account, at one point writing that “I have loved aspects of you, hated others, and could not understand many more.” His tweets sparked instant outrage (some people considered his posts blasphemous or a sign of apostasy, both serious sins in conservative Islam). He deleted them within six hours of posting—but not before thousands of angry responses, death threats and the creation of a Facebook group called “The Saudi People Demand Hamza Kashgari’s Execution.” Kashgari fled to Malaysia but was deported three days later to Saudi Arabia, where charges of blasphemy (a capital crime) awaited him. Despite his immediate apology after the incident and a subsequent August 2012 apology, the Saudi government refused to release him. In the future, it won’t matter whether messages like these are public for six hours or six seconds; they will be preserved as soon as electronic ink hits digital paper. Kashgari’s experience is just one of many sad and cautionary stories.

Data permanence will persist as an intractable challenge everywhere and for all people, as we said, but the type of political system and level of government control in place will greatly determine how it affects people. To examine these differences in detail, we’ll consider an open democracy, a repressive autocracy and a failed state.

In an open democracy, where free expression and responsive governance feed the public’s impulse to share, citizens will increasingly serve as judge and jury of their peers. More available data about everyone will only intensify the trends we see today: Every opinion will find space in an expansive virtual landscape, real-time updating will foster hyperactive social and civil spheres, and the ubiquity of social networking will allow everyone to play celebrity, paparazzo and voyeur, all at once. Each person will produce a voluminous amount of data about himself—his past and present, his likes and choices, his aspirations and daily habits. Like today, much of this will be “opt-in,” meaning the user deliberately chooses to share content for some undefined social or commercial reason; but some of it won’t be. Also like today, many online platforms will relay data back to companies and third parties about user activity without their express knowledge. People will share more than they’re even aware of. For governments and companies, this thriving data set is a gift, enabling them to better respond to citizen and customer concerns, to precisely target specific demographics of the population, and, with the emergent field of predictive analytics, to predict what the future will hold.5

As we said earlier, never before will so much data be available to so many people. Citizens will draw conclusions about one another from accurate and inaccurate sources, from “legitimate” sources like LinkedIn profiles and “illegitimate” ones like errant YouTube comments long forgotten. More than a few aspiring politicians will fall on their swords as past behavior documented online is later brought to light. Certainly, with time, the normalization trend that softened public attitudes toward leaders’ infidelity or past drug use—who can forget President Bill Clinton’s caveat that he “didn’t inhale”?—will take hold. Perhaps the voting public will shrug off a scandalous post or photo based on a time stamp that predates the candidate’s eighteenth birthday. Public acceptance for youthful indiscretions documented on the Internet will move a few paces forward, but probably not until a painful liminal period passes. In some ways, this is the logical next stage of an era characterized by the loss of heroes. What began with mass media and Watergate will continue into the new digital age, where even more data about individuals, from nearly every part of their lives, is available for scrutiny. The fallibility of humans over a lifetime will provide an endless stream of details online to puncture mythical hero status.

Any would-be professional, particularly one in a position of trust, will have to account for his past if he is to get ahead. Would it matter to you if your family physician spent his weekends typing long screeds against immigrants, or if your son’s soccer coach spent his twenties working as a tour guide in Bangkok’s red-light district? This granular level of knowledge about our peers and leaders will produce unanticipated consequences within society. Documented pasts will affect many people in the workplace and in day-to-day life, and some citizens will spend their entire lives acutely aware of the potentially volatile parts of their lives, wondering what might surface online one day.

In democratic countries, corruption, crime and personal scandals will be more difficult to get away with in an age of comprehensive citizen engagement. The amount of information about people that enters the public domain—tax records, flight itineraries, phone geo-location records (global-positioning-system data collected by a user’s mobile phone) and so much more, including what is revealed through hacking—will undoubtedly provide countless suspicious citizens with more than enough to go on. Activists, watchdog groups and private individuals will work hand in hand to hold their leaders to account, and they’ll have the tools necessary to determine whether what their government tells them is the truth. Public trust may initially fall, but it will emerge stronger as the next generation of leaders takes these developments into consideration.

When the scope of such changes becomes fully realized, large portions of the population will demand government action to protect personal privacy, at a much louder volume than anything we hear today. Laws will not change the permanence of digital information, but sensible regulations can install checks that will ensure some modicum of privacy for citizens who seek it. Today’s government officials, with a few exceptions, don’t understand the Internet—not its architecture or its manifold uses. This will change. In ten years, more politicians will understand how communication technologies work and how they empower citizens and other nongovernmental actors. The result will be public figures in government who can lead more informed debates on issues of privacy, security and user protection.

In democracies in the developing world, where both democratic institutions and technology are newer, government regulation around privacy will be more random. In each country, a particular incident will initially raise the issues at stake in dramatic fashion and drive public demand, similar to what has happened in the United States. A federal statute was passed in 1994 prohibiting state departments of motor vehicles from sharing personal information after a series of high-profile abuses of that information, including the murder of a prominent actress by a stalker. In 1988, following the leak of the late Judge Robert Bork’s video-rental information during the Supreme Court nomination process, Congress passed the Video Privacy Protection Act, criminalizing disclosure of personally identifiable rental information without customer consent.6

While all of this digital chaos will be a nuisance to democratic societies, it will not destroy the democratic system. Institutions and polities will be left intact, if slightly battered. And once democracies determine the appropriate laws to regulate and control new trends, the result may even be an improvement, with a strengthened social contract and greater efficiency and transparency in society. But this will take time, because norms are not quick to change, and each democracy will move at its own pace.


Without question, the increased access to people’s lives that the data revolution brings will give some repressive autocracies a dangerous advantage in targeting their citizens.

While this is a bad outcome and one we hope will be mitigated by developments discussed elsewhere in the book, we must understand that citizens living in autocracies will have to fight even harder for their privacy and security. Rest assured, demand for tools and software to help safeguard citizens living under digital repression will give rise to a growing and aggressive industry. And that is the power of this new information revolution: For every negative, there will be a counterresponse that has the potential to be a substantial positive. More people will fight for privacy and security than look to restrict it, even in the most repressive parts of the world.

But authoritarian regimes will put up a vicious fight. They will leverage the permanence of information and their control over mobile and Internet service providers to create an environment of heightened vulnerability for their citizens. What little privacy existed before will be long gone, because the handsets that citizens have with them at all times will double as the surveillance bugs regimes have long wished they could put in people’s homes. Technological solutions will protect only a distinct technically savvy minority, and only temporarily.

Regimes will compromise devices before they are sold, giving them access to what everybody says, types and shares in public and in private. Citizens will be oblivious to how they might be vulnerable to giving up their own secrets. They will accidentally provide usable intelligence on themselves—particularly if they have an active online social life—and the state will use that to draw damning conclusions about who they are and what they might be up to. State-initiated malware and human error will give regimes more intelligence on their citizens than they could ever gather through non-digital means. Networks of citizens, offered desirable incentives by the state, will inform on their fellows. And the technology already exists for regimes to commandeer the cameras on laptops, virtually invade a dissident’s home without his or her knowledge, and both listen to and watch everything that is said and done there.

Repressive governments will be able to determine who has censorship-circumvention applications on their handsets or in their homes, so even the non-dissident just trying to illegally download The Sopranos will come under increased scrutiny. States will be able to set up random checkpoints or raids to search people’s devices for encryption and proxy software, the presence of which could earn them fines, jail time or a spot on a government database of offenders. Everyone who is known to have downloaded a circumvention measure will suddenly find life more difficult—they will not be able to get a loan, rent a car or make an online purchase without some form of harassment. Government agents could go classroom to classroom at every school and university in the country, expelling all students whose mobile-phone activity indicates that they’ve downloaded such software. Penalties could extend to these students’ networks of family and friends, further discouraging that behavior for the wider population.

And, in the slightly less totalitarian autocracies, if the governments haven’t already mandated “official” government-verified profiles, they’ll certainly try to influence and control existing online identities with laws and monitoring techniques. They could pass laws that require social-networking profiles to contain certain personal information, like home address and mobile number, so that users are easier to monitor. They might build sophisticated computer algorithms that allow them to roam citizens’ public profiles looking for omissions of mandated information or the presence of inappropriate content.

States are already engaging in this type of behavior, if somewhat covertly. As the Syrian uprising dragged on into 2013, a number of Syrian opposition members and foreign aid workers reported that their laptops were infected with computer viruses. (Many hadn’t realized it until their online passwords suddenly stopped working.) Information technology (IT) specialists outside of Syria checked the infected disks and confirmed the presence of malware, in this case different types of Trojan horse viruses (programs that appear legitimate but are in fact malicious) that stole information and passwords, recorded keystrokes, took screenshots, downloaded new programs and remotely turned on webcams and microphones, and then sent all of that information back to an IP address that, according to the IT analysts, belonged to the state-owned telecom, Syrian Telecommunications Establishment. In this case, the spyware arrived through executable files (the user had to independently open a file to download the virus), but that doesn’t mean the targeted individuals had been careless. One aid worker had downloaded a file, which appeared to be a dead link (meaning it no longer worked), in an online conversation about the humanitarian need in the country with a person she thought was a verified opposition activist. Only after the conversation did she learn to her chagrin that she had probably spoken with a government impersonator who possessed stolen or coerced passwords; the real activist was in prison.

People living under these conditions will be left to fend for themselves against the tag team of their government and its corrupt corporate allies. What governments can’t build in-house, they can outsource to willing suppliers. Guilt by association will take on a new meaning with this level of monitoring. Just being in the background of a person’s photo could matter if a government’s facial-recognition software were to identify a known dissident in the picture. Being documented in the wrong place at the wrong time, whether by photo, voice or IP address, could land unwitting citizens in an unwanted spotlight. Though this scenario is profoundly unfair, we worry that it will happen all too often, and could encourage self-censoring behaviors among the rest of society.

If connectivity enhances the state’s power, enabling it to mine its citizens’ data with a fly-on-the-wall vantage point, it also constricts the state’s ability to control the news cycle. Information blackouts, propaganda and “official” histories will fail to compete with the public’s access to outside information, and cover-ups will backfire in the face of an informed and connected population. Citizens will be able to capture, share and remark upon an event before the government can decide what to say or do about it, and thanks to the ubiquity of cheap mobile devices, this grassroots power will be fairly evenly distributed throughout even large countries. In China, where the government has one of the world’s most sophisticated and far-reaching censorship systems in place, attempts to cover up news stories deemed potentially damaging to the state have been missing the mark with increasing frequency.

In July 2011, the crash of a high-speed train in Wenzhou, in southeast China, resulted in the deaths of forty people and gave weight to a widely held fear that the country’s infrastructure projects were moving too quickly for proper safety reviews. Yet the accident was downplayed by official channels, its coverage in the media actively minimized. It took tens of millions of posts on weibos, Chinese microblogs similar to Twitter, for the state to acknowledge that the crash had been the result of a design flaw and not bad weather or an electricity outage, as had previously been reported. Further, it was revealed that the government sent directives to the media shortly after the crash, specifically stating, “There must be no seeking after the causes [of the accident], rather, statements from authoritative departments must be followed. No calling into doubt, no development [of further issues], no speculation and no dissemination [of such things] on personal microblogs!” The directives also instructed journalists to maintain a feel-good tone about the story: “From now on, the Wenzhou train accident should be reported along the theme of ‘major love in the face of major disaster.’ ” But where the mainstream media fell in line, the microbloggers did not, leading to a deeply embarrassing incident for the Chinese government.

For a country like China, this mix of active citizens armed with technological devices and tight government control is exceptionally volatile. If state control relies on the perception of total command of events, every incident that undermines that perception—every misstep captured by camera phone, every lie debunked with outside information—plants seeds of doubt that encourage opposition and dissident elements in the population, and that could develop into widespread instability.


There may be only a handful of failed states in the world today, but they offer an intriguing model for how connectivity can operate in a power vacuum. Indeed, telecommunications seems to be just about the only industry that can thrive in a failed state. In Somalia, telecommunications companies have come to fill many of the gaps that decades of war and failed government have created, providing information, financial services and even electricity.

In the future, as the flood of inexpensive smart phones reaches users in failed states, citizens will find ways to do even more. Phones will help to enable the education, health care, security and commercial opportunities that the citizens’ governments cannot provide. Mobile technology will also give much-needed intellectual, social and entertainment outlets to populations who have been psychologically traumatized by their environment. Connectivity alone cannot revive a failed state, but it can drastically improve the situation for its citizens. As we’ll discuss later, new methods to help communities handle conflict and post-conflict challenges—developments like virtual institution building and skilled labor databases in the diaspora—will emerge to accelerate local recovery.

In power vacuums, though, opportunists take control, and in these cases connectivity will be an equally powerful weapon in their hands. Newly connected citizens in failed states will have all the vulnerabilities of undeletable data, but none of the security that could insulate them from those risks. Warlords, extortionists, pirates and criminals will—if they’re smart enough—find ways to consolidate their own power at the expense of other people’s data. This could mean targeting specific populations, such as wealthier subclans or influential religious leaders, with more precision and virtually no accountability. If the online data (say, transfer records for a mobile money platform) showed that a particular extended family received a comparatively large sum of money from relatives in the diaspora, local thugs could stop by and demand tribute—paid, probably, over a mobile money system as well. Today’s warlords grow rich by acting as the requisite pass-through for all sorts of valuable resources, and in the future, while drugs, minerals and money will all still matter, so too will valuable personal data. Warlords of the future may not even use the data they have, instead selling it to outside parties willing to pay a premium. And, most important, these opportunists will be able to appear even more anonymous and elusive than they do today, because they’ll unfortunately have the resources and incentive to get anonymity in ways ordinary people do not.


Power vacuums, warlords and collapsed states may sound like a foreign and unrelated world to many in Silicon Valley, but this will soon change. Today, technology companies constantly underscore their focus on, and responsibility to, the virtual world’s version of citizenry. But as five billion new people come online, companies will find that the attributes of these users and their problems are much more complex than those of the first two billion. Many of the next five billion people live in impoverished, censored and unsafe conditions. As the providers of access, tools and platforms, technology companies will have to shoulder some of the physical world’s burdens as they play out online if they want to stay true to the doctrine of responsibility to all users.

Technology companies will need to exceed the expectations of their customers in both privacy and security protections. It is unsurprising that the companies responsible for the architecture of the virtual world will shoulder much of the blame for the less welcome developments in our future. Some of the anger directed toward technology firms will be justified—after all, these businesses will be profiting from expanding their networks quickly—but much will be misplaced. It is, after all, much easier to blame a single product or company for a particularly evil application of technology than to acknowledge the limitations of personal responsibility. And of course there will always be some companies that allow their desire for profit to supersede their responsibility to users, though such companies will have a harder time achieving success in the future.

In truth, some technology companies are more acutely aware than others of the responsibility they bear toward their own users and the online community around the world; this is in part why nearly all online products and services today require users to accept terms and conditions and abide by those contractual guidelines. People have a responsibility as consumers and individuals to read a company’s policies and positions on privacy and security before they willingly share information. As the proliferation of companies continues, citizens will have more options and thus due diligence will be more important than ever. A smart consumer will look not just at the quality of a product, but also at how easy that product makes it for you to control your privacy and security. Still, in the court of public opinion and environments where the rule of law is shaky, these preexisting stipulations count for little, and we can expect more attention to be focused on the makers and purveyors of such tools in the coming decades.

This trend will certainly affect how technology companies form, grow and navigate what will be a tumultuous period. Certain subsections of the technology industry that receive particularly negative attention will have trouble recruiting engineers, attracting users and monetizing their products, even though such atrophying will not solve the problem (and will only hurt the community of users in the end, by denying them the full benefits of innovation). Thick skin will be a necessity for technology companies in the coming years of the digital age, because they will find themselves beset by public concerns over privacy, security and user protections. It simply won’t be possible to avoid these discussions, nor will companies be able to avoid taking a position on the issues.

They’ll also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest. Google encounters lawsuits from governments around the world with some frequency over alleged breaches of copyright or national laws, and it works hard to assure its users that Google serves their interests first and foremost, while staying within the boundaries of the laws itself. But if Google stopped all product development whenever it found itself faced with a government suit, it would never build anything.

Companies will have to learn how to manage public expectations of the possibilities and limits of their products. When formulating policies, technology companies will, like governments, increasingly have to factor in all sorts of domestic and international dynamics, such as the political risk environment, diplomatic relationships between states, and the rules that govern citizens’ lives. The central truth of the technology industry—that technology is neutral but people are not—will periodically be lost amid all the noise. But our collective progress as citizens in the digital age will hinge on our not forgetting it.


Coping Strategies

People and institutions around the world will rise to meet the new challenges they face with innovative private- and public-sector coping strategies. We can loosely group them into four categories: corporate, legal, societal and personal.

Technology corporations will have to more than live up to their privacy and security responsibilities if they want to avoid unwanted government regulation that could stifle industry dynamism. Companies are already taking proactive steps, such as offering a digital “eject button” that allows users to liberate all of their data from a given platform; adding a preferences manager; and not selling personally identifying information to third parties or advertisers. But given today’s widespread privacy and security concerns, there is still a great deal of work to be done. Perhaps a group of companies will make a pledge not to sell data to third parties, in a corporate treaty of sorts.

The second coping strategy will focus on the legal options. As the impact of the data revolution settles in, states will come under increasing pressure to protect their citizens from the permanence of what appears on the Internet and from their own newly exposed vulnerabilities. In democracies, this means new laws. They will be imperfect, overly idealistic and probably often quite rushed, but they will generally represent societies’ best attempts to react effectively to the chaotic and unpredictable changes that connectivity produces.

As discussed above, the trail of information that will shape our online identities in the future begins well before any citizen has the judgment to understand it. The scrutiny that young people will face in the next decade will be unlike anything we’ve seen. If you think it is hard to get past a co-op board today, just imagine when it has the equivalent of your life story at hand. Because this development will affect a large portion of the population, there will be sufficient public pressure and political will to generate a range of new laws for the digital age.

As this next generation comes fully into adulthood, with digital documentation of every irresponsible thing they did during adolescence, it’s hard to believe that some politicians won’t champion the cause of sealing virtual juvenile records. Everything an individual shares before the age of eighteen might then become unusable, sealed and not for public disclosure on pain of fines or even prison. Laws would make it illegal for any employer, court, housing authority or university to take that content into account. Of course, these laws would be difficult to enforce, but their very presence would lend a hand in changing norms, so that most adolescent mishaps caught online may ultimately be viewed by society with the same lens as experimental drug and alcohol use.

Other laws may emerge as attempts to safeguard privacy and increase the liability for those releasing confidential information. Stealing someone’s cell phone could be considered on a par with identity theft, and online intrusions (stolen passwords, hijacked accounts) could well carry the same charge as breaking and entering.7 Each country will determine its own cultural threshold for what type of information is permissible to share, and what type is inappropriate or just too personal. What the Indian government considers obscene or perhaps pornographic, the French might let pass without a second thought. Consider the case of a society that is deeply concerned about privacy but is also saturated with camera-equipped smart phones and inexpensive camera drones that can be purchased at any toy store. The categories that exist for paparazzi photographers (“public” versus “private” space) could be extended and applied to everyone, with certain designated “safe zones” where photography requires a subject’s consent (or, in the case of Saudi Arabia, consent from a female subject’s male guardian). People would use specific apps on their phones to get permission, and because digital photos generate a time stamp and digital watermark, determining whether someone took an illegal picture would be simple work. Digital watermarking refers to the insertion of bits carrying information about a file’s owner—name, date, rights and so on—into a digital image, audio or video file. Watermarks act as protection against manipulation because, while they are imperceptible to the viewer, they can be extracted and read with special software, so when tampering is suspected, technical experts can determine whether or not a file is an unadulterated copy.
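
As a rough illustration of the idea, using least-significant-bit embedding (one of the simplest schemes) and treating an image as a flat list of pixel values rather than a real file, the sketch below hides and then recovers an ownership string; production watermarking systems are far more robust than this.

```python
# Least-significant-bit (LSB) watermark embedding, reduced to the bare idea.
# Pixels are modeled as a flat list of 0-255 values so the example has no
# image-library dependency; real watermarking schemes are far more robust.

def embed(pixels: list[int], message: str) -> list[int]:
    """Hide each bit of the message in the lowest bit of successive pixels."""
    bits = [int(b) for ch in message.encode() for b in format(ch, "08b")]
    assert len(bits) <= len(pixels), "image too small for this message"
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit          # clear the LSB, then set it
    return out

def extract(pixels: list[int], length: int) -> str:
    """Read back `length` bytes of hidden data from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode()

cover = [120, 37, 255, 0, 64, 200, 13, 99] * 32    # stand-in for image pixel data
owner_tag = "(c) example owner"                    # hypothetical ownership string
marked = embed(cover, owner_tag)                   # visually indistinguishable from cover
print(extract(marked, len(owner_tag.encode())))    # -> "(c) example owner"
```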

For the third type of coping strategy, at the societal level, we need to ask how non-state actors (such as communities and nonprofit organizations) will respond to the consequences of the data revolution. We think a wave of civil-society organizations will emerge in the next decade designed to shield connected citizens from their governments and from themselves. Powerful lobbying groups will advocate content and privacy laws. Rights organizations that document repressive surveillance tactics will call for better citizen protection. There will be support groups to help different demographics deal with the consequences of undeletable data. Educational organizations will try to teach school-age children to avoid over-sharing. (“Never give your data to a stranger.”) The recent campaign in the United States against cyber-bullying is truly a harbinger of what is to come: broad public acknowledgment, grassroots social campaigns to promote awareness, and tepid political attempts to contain it. Within schools, we expect that teachers and administrators will treat cyber-bullying with the same weight and penalties as physical altercations, only instead of a child’s being sent to the principal’s office after recess, he will be sent there when he arrives in the morning for something he wrote online the previous night at home.

In addition to mitigating the negative consequences of a more connected world, non-state actors will be responsible for generating many of the most promising new ideas that harness these technological changes for the better. In developing countries, aid organizations are already leading the way with innovative pilot projects that capitalize on the growing global connectivity. During the 2011 famine in East Africa, the United States Agency for International Development (USAID) administrator Rajiv Shah reported that his organization was using a mix of mobile money platforms and the traditional “hawala” money-transfer system in Somalia to get past the violent Islamist group al-Shabaab’s ban on aid for affected populations. (The hawala system is an Islamic-world network of trust-based money-transfer agents who operate outside of formal financial institutions.) The high rate of growth of mobile adoption and basic connectivity in the country has forged new opportunities for both the population and those seeking to help. Nonprofit and philanthropic organizations in particular will continue to push the boundaries of technology-driven solutions in the new digital age, well suited as they are to the task, being more flexible than government agencies and more able to absorb risk than businesses.

The fourth category of coping strategy is the personal. Citizens will demonstrate an increased reliance on anonymous peer-to-peer communication methods. In a world with no delete button, peer-to-peer (P2P) networking will become the default mode of operation for anyone looking to operate under or off the radar. Contemporary mobile P2P technologies like Bluetooth allow two physical devices to speak directly to each other rather than having to communicate over the Internet. This is in contrast to P2P file-sharing networks such as BitTorrent, which operate over the Internet. Common to both forms of peer-to-peer technologies is that users connect to each other (acting as both suppliers and receivers) without using a fixed third-party service. For citizens in the future, P2P networking will offer an enticing combination of instant communication and independence from third-party controls or monitoring.
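
To ground the distinction, here is a minimal sketch, assuming two devices on the same local network and a hypothetical address and port, of peers exchanging a message directly over a socket with no server in the middle; real P2P systems add peer discovery, NAT traversal and encryption on top.

```python
# Direct device-to-device messaging over a local network: one peer listens,
# the other connects straight to its address. No server sits in the middle.
# The address and port below are hypothetical placeholders.

import socket

def listen(port: int = 9000) -> None:
    """Run on peer A: accept one direct connection and print what arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            print(f"peer {addr[0]} says:", conn.recv(1024).decode())

def send(peer_ip: str, message: str, port: int = 9000) -> None:
    """Run on peer B: connect directly to peer A and send one message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((peer_ip, port))
        sock.sendall(message.encode())

# Peer A runs listen(); peer B runs send("192.168.1.20", "hello, no middleman").
```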

All smart phones today are equipped with some form of peer-to-peer capability, and as the wave of cheap smart phones saturates the emerging markets in the next decade, even more people will be able to take advantage of these increasingly sophisticated tools. Bluetooth is already massively popular in many parts of the developing world because even very basic phones can often use it. In much of West Africa, where mobile adoption has vastly outpaced computer use and Internet growth, many people treat their phones like stereo systems because easy peer-to-peer sharing allows them to store, swap and listen to music entirely through their phones.

Mobile jukeboxes in Mali may be a response to specific infrastructure challenges, but people everywhere will begin to favor P2P networking, some for personal reasons (discomfort with undeletable records) and others for pragmatic ones (secure communications). Citizens in repressive societies already use common P2P communication platforms and encrypted messaging systems like Research in Motion (RIM)’s BlackBerry Messenger (BBM) to interact with less fear of government intrusion, and in the future, new forms of technologies that utilize P2P models will also become available to them.

Today, the discussions around wearable technologies are focused on a luxury market: wristwatches we’ll wear that vibrate or apply a pulse when our alarm clock goes off (of which some versions already exist), earrings that monitor our blood pressure and so on.8 New applications of augmented reality (AR) technology (the superimposing of touch, sound or images from the virtual world over a physical, real-world environment) promise even richer wearable experiences. In April 2012 Google unveiled its own AR prototype called Project Glass—eyeglasses with a built-in display over one eye that can convey information, handle messages through voice command and shoot and record video through its camera—and similar devices from other companies are on the way. In the future, the intersection of wearable technology, AR and peer-to-peer communications will combine sensory data, rich information channels and secure communications to generate exceptionally interesting and useful devices. In a country where religious police or undercover agents roam public areas, for example, good spatial awareness is critical, so a wearable-technology inventor will design a discreet wristwatch that its wearer can use to send a warning pulse to others around him when he spots a regime agent in his vicinity. An entirely new nonverbal language will emerge around sensory data—perhaps two pulses tell you a government agent is nearby, and three will mean “Run.” Using GPS data, the watch would also share the location of its wearer with others, who might be wearing AR glasses that could identify which direction the agent is coming from. All these communications will be peer-to-peer. This makes them more secure and reliable than technologies that depend on being connected to the Internet.

Your device will know things about your surroundings that you have no way of knowing on your own: where people are, who they are and what their virtual profiles contain. Today, users already share their iTunes libraries with strangers over Wi-Fi networks, and in the future, they’ll be able to share much more. In places like Yemen, where socially conservative norms limit many teenagers’ ability to socialize with the opposite sex, young people may elect to hide their personal information on peer-to-peer networks when at home or at the mosque—who knows who could be looking?—but reveal it when in public parks and cafés, and at parties.

Yet P2P technology is a limited replacement for the richness and convenience of the Internet, despite its myriad advantages. We often need stored and searchable records of our activities and communications, particularly if we want to share something or refer to it later. And, unfortunately, not even P2P communications are a perfect shield against infiltration and monitoring. If authorities (or criminal organizations) can identify one side of a conversation, they can usually find the other party as well. This is true for messaging, for voice-over-Internet-protocol (VoIP) calls—meaning phone calls made over the Internet (e.g., Google Voice and Skype)—and for video chats. Users assume they are safe, but unless the exchange is encrypted, anyone with access to intermediate parts of the network can listen in. For instance, the owner of a Wi-Fi hot spot can listen to any unencrypted conversations of users connected to the hot spot. One of the most insidious forms of cyber attack that P2P users can encounter is known as a “man-in-the-middle” attack, a form of active eavesdropping. In this situation a third-party attacker inserts himself between two participants in a conversation and automatically relays messages between them, without either participant realizing it. This third party acts like an invisible intermediary, having tricked each participant into believing that the attacker is actually the other party to the conversation. So as the conversation occurs (whether through text, voice or video), that third-party attacker can sit back and watch, occasionally siphoning off information and storing it elsewhere. (Or, more maliciously, the attacker could insert false information into the conversation.) Man-in-the-middle attacks occur in all protocols, not just peer-to-peer, yet they seem all the more malicious in P2P communications simply because people using those platforms believe they are secure.
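
A toy sketch, using deliberately tiny numbers and no real cryptography, shows why an unauthenticated key exchange is exactly where such an intermediary slips in: each side ends up sharing a “secret” with the attacker rather than with the other participant, which is why real systems verify identities (through certificates or key fingerprints) before trusting a channel.

```python
# A toy, insecure illustration of a man-in-the-middle attack on an
# unauthenticated Diffie-Hellman-style exchange. The numbers are deliberately
# tiny; real systems use large parameters and, crucially, authentication.

import random

P = 23   # toy public prime modulus
G = 5    # toy public generator

def keypair():
    private = random.randrange(2, P - 1)
    return private, pow(G, private, P)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
m_priv, m_pub = keypair()   # Mallory, relaying traffic between them

# Mallory swaps in her own public key in both directions, so each side
# unknowingly agrees on a secret with her instead of with the other.
alice_secret = pow(m_pub, a_priv, P)
bob_secret = pow(m_pub, b_priv, P)

assert alice_secret == pow(a_pub, m_priv, P)   # Mallory can read Alice's traffic
assert bob_secret == pow(b_pub, m_priv, P)     # ...and Bob's, relaying in between
print("Both sides think the channel is private; Mallory sits in the middle.")
```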

And even the protection that encryption offers isn’t a sure bet, especially given some of the checks that will still exist in the physical realm. In the United States, the FBI and some lawmakers have already hinted at introducing bills that would force communications services like BlackBerry and Skype to comply with wiretap orders from law-enforcement officials, either by building in message-interception capabilities or by providing keys that enable authorities to unscramble encrypted messages.

P2P networking has a history of challenging governments, especially around copyright issues for democracies (e.g., Napster, Pirate Bay) and political dissent for autocracies (e.g., Tor). In the United States, the pioneer of P2P file sharing, Napster, was shut down in 2001 by an injunction demanding that the company prevent all trading of copyrighted material on its network. (Napster told a district court that it was capable of blocking the transfer of 99.4 percent of copyrighted material, but the court said that wasn’t good enough.) In Saudi Arabia and Iran, religious police have found it extremely difficult to prevent young people from using Bluetooth-enabled phones to call and text complete strangers within range, oftentimes for the purpose of flirting, but also for close-proximity coordination between protesters. Unless all mobile devices in the country are confiscated (a task the secret police realize is impossible), the flirtatious Saudi and Iranian youth have at least one small edge on their state-sponsored babysitters.

BlackBerry mobile devices offer both encrypted communication and telephone services, and the unique encryption they offer users has led many governments to target them directly. In 2009, the United Arab Emirates’ partially state-owned telecom Etisalat sent nearly 150,000 of its BlackBerry users a prompt for a required update for “service enhancements.” These enhancements were actually spyware that allowed unauthorized access to private information stored on users’ phones. (When this became public knowledge, the maker of BlackBerry, RIM, distanced itself from Etisalat and told users how to remove the software.) Just a year later, the U.A.E. and its neighbor Saudi Arabia both called for bans on BlackBerry phones altogether, citing the devices’ encryption protocol. India chimed in as well, giving RIM an ultimatum to provide access to encrypted communications or see its services suspended. (In all three countries, the ban was averted.)

Repressive states will display little hesitation in their attempts to ban or gain control of P2P communications. Democratic states will have to act more deliberately. We already have a prominent example of this in the August 2011 riots in the United Kingdom. British protesters rallied to demand justice for twenty-nine-year-old Mark Duggan, who had been shot and killed by British police in Tottenham. Several days later the crowds turned violent, setting fire to local shops, police cars and a bus. Violence and looting spread across the country over subsequent nights, eventually reaching Birmingham, Bristol and other cities. The riots resulted in five deaths, an estimated £300 million ($475 million) in property damage and a great deal of public confusion. The scale of the disorder across the country—as well as the speed with which it spread—caught the police and government wholly off guard, and communication tools like Twitter, Facebook and particularly BlackBerry were singled out as a major operational factor in the spread of the riots. While the riots were occurring, the MP for Tottenham called on BlackBerry to suspend its messaging service during night hours to stop the rioters from communicating. When the violence had subsided, the British prime minister, David Cameron, told Parliament he was considering blocking these services altogether in certain situations, particularly “when we know [people] are plotting violence, disorder and criminality.” His goal, he said, was to “give the police the technology to trace people on Twitter or BBM, or close it down.” (After meeting with industry representatives, Cameron said industry cooperation with law enforcement was sufficient.)

The examples of the U.A.E. and the U.K. illustrate real concern on the part of governments, but it is important to clarify that this concern has been about encryption and social networking. In the future, however, communication will also take place on mobile P2P networks, meaning that citizens will be able to network without having to rely on the Internet (this was not the case in the U.A.E. and the U.K.). It stands to reason that every state, from the least democratic to the most, may fight the growth of device-to-device communication. Governments will claim that without restrictions or loopholes for special circumstances, capturing criminals and terrorists (among other legitimate police activities) and prosecuting them will become more difficult, planning and executing crimes will be easier and a person’s ability to publish slanderous, false or other harmful information in the public sphere without accountability will improve. Democratic governments will fear uncontrollable libel and leaking, autocracies internal dissent. But if illegal activity is the primary concern for governments, the real challenge will be the combination of virtual currency with anonymous networks that hide the physical location of services. For example, criminals are already selling illegal drugs on the Tor network in exchange for Bitcoins (a virtual currency), avoiding cash and banks altogether. Copyright infringers will use the same networks.

As we think about how to address these kinds of challenges, we cannot afford to take a black-and-white view; context matters. For example, in Mexico, drug cartels are among some of the most effective users of anonymous encryption, both P2P and through the Internet. In 2011, we met with Bruno Ferrari, then the country’s secretary of the economy, and he described to us how the Mexican government has struggled to engage the population in the fight against the cartels—fear of retribution is enough to prevent people from reporting crimes or tipping off law enforcement to cartel activity in their neighborhoods. Corruption and untrustworthiness in the police department further limit the options for citizens. “Without anonymity,” Ferrari told us, “there is no clear mechanism in which people can trust the police and report the crimes committed by the drug cartels. True anonymity is vital to getting the citizens to be part of the solution.” The drug cartels were already using anonymous communications, so anonymity levels the playing field. “The arguments behind restricting anonymous encryption make sense,” he added, “but just not in Mexico.”


Police State 2.0

All things considered, the balance of power between citizens and their governments will depend on how much surveillance equipment a government is able to buy, sustain and operate. Genuinely democratic states may struggle to deal with the loss of privacy and control that the data revolution enables, but as a result they will have more empowered citizens, better politicians and stronger social contracts. Unfortunately, the majority of states in the world are either not democratic or democratic in name only, and the relative impact of connectivity—both positive and negative—for citizens in those countries will be far greater than we’ll see elsewhere.

In the long run, the presence of communication technologies will chip away at most autocratic governments, since, as we have seen, the odds against a restrictive, information-shy regime dealing with an empowered citizenry armed with personal fact-checking devices get progressively worse with each embarrassing incident. In other words, it’s no coincidence that today’s autocracies are for the most part among the least connected societies in the world. In the near term, however, such regimes will be able to exploit the growth of connectivity to their advantage, as they already exploit the law and the media. There is a trend in authoritarian governance to harness the power of connectivity and data, rather than ban information technology out of fear, a shift from totalitarian obviousness to more subtle forms of control that the journalist William J. Dobson captured in his excellent book The Dictator’s Learning Curve. As Dobson describes it, “Today’s dictators and authoritarians are far more sophisticated, savvy, and nimble than they once were. Faced with growing pressures, the smartest among them neither hardened their regimes into police states nor closed themselves off from the world; instead, they learned and adapted. For dozens of authoritarian regimes, the challenge posed by democracy’s advance led to experimentation, creativity and cunning.” Dobson identifies numerous avenues through which modern dictators consolidate power while feigning legitimacy: a quasi-independent judicial system, the semblance of a popularly elected parliament, broadly written laws that are applied selectively and a media landscape that allows for an opposition press as long as regime opponents understand where the unspoken limits are. Unlike the strongman regimes and pariah states of old, Dobson writes, modern authoritarian states are “conscious, man-made projects that must be carefully built, polished, and reinforced.”

But Dobson covers only a small number of case studies in his work and we are less certain that the new digital age will yield such advantages to all autocratic regimes. How dictators handle connectivity will greatly determine their future in the new digital age, particularly if their states want to compete for status and business on the global stage. The centralization of power, the delicate balancing of patronage and repression, the outward projection of the state itself—every element of autocratic governance will depend on the control that regimes have over the virtual world their population inhabits.

In the span of a decade, the world’s autocracies will go from having a minority to a majority of their citizens online, and for dictators looking to stay in power, this will be a turbulent transition. Yet building the kind of system that can monitor and contain all types of dissident energy is thankfully not easy and will require very specialized solutions, expensive consultants, technologies not widely available and a great deal of money. Cell towers, servers and microphones will be needed, as well as large data centers to store information; specialized software will be necessary to process the data gathered; trained people will have to operate all of this, and basic resources like electricity and connectivity will need to be constantly and abundantly available. If autocrats want to build a surveillance state, it’s going to cost them—we hope more than they can afford.

There are some autocracies with poor populations but vast amounts of oil, minerals or other resources that they can trade. As in the arms-for-minerals trade, we can imagine the growth of a technology-for-minerals exchange between technology-poor but resource-rich countries (Equatorial Guinea is one example) and technology-rich but resource-hungry countries (China is an obvious one). Not many states will be able to pull off this kind of trade, and hopefully those that do will not be able to sustain or effectively operate what they have.

Once the infrastructure is in place, repressive regimes will need to manage the glut of information they acquire with the help of supercomputers. In countries where connectivity was established early, governments have had time to acclimate to the types of data their citizens produce; the pace of technological adoption and progress has been somewhat gradual. But these newly wired regimes will not have that luxury; they’ll need to move quickly to make use of their data if they want to be effective in its management. To address this, they’ll build powerful computer banks with much faster processing power than the average laptop, and they’ll buy or build software that facilitates the data-mining and real-time monitoring they desire. Everything a regime would need to build an incredibly intimidating digital police state is commercially available now, and export restrictions are currently insufficiently monitored and enforced.

Once one regime builds its surveillance state, it will share what it learned with others. We know that autocratic governments share information, governance strategies and military hardware, and it’s only logical that the configuration that one state designs will (if it works) proliferate among its allies and assorted others. Companies that sell data-mining software, surveillance cameras and other products will flaunt their work with governments to attract new business.

The most important form of data to collect for an autocrat isn’t Facebook posts or Twitter comments—it’s biometric information. “Biometric” refers to information that can be used to uniquely identify individuals through their physical and biological attributes. Fingerprinting, photographs and DNA testing are all familiar biometric data types today. Indeed, the next time you visit Singapore, you might be surprised to find that airport security requires both a filled-out customs form and a scan of your voice. In the future, voice-recognition and facial-recognition software will largely surpass all of these earlier forms in accuracy and use.

The facial-recognition systems of today use a camera to zoom in on an individual’s eyes, mouth and nose, and extract a “feature vector,” a set of numbers that describes key aspects of the image, such as the precise distance between the eyes. (Remember, in the end, digital images are just numbers.) Those numbers can be fed back into a large database of faces in search of a match. To many this sounds like science fiction, and it’s true that the accuracy of this software is limited today (by, among other things, pictures shot in profile), but the progress in this field in just the past few years is remarkable. A team at Carnegie Mellon demonstrated in a 2011 study that the combination of “off-the-shelf” facial-recognition software and publicly available online data can match a large number of faces very quickly, thanks to technical advancements like cloud computing. In one experiment, unidentified pictures from dating sites (where people often use pseudonyms) were compared with profile shots from social-networking sites, which can be publicly accessed on search engines (i.e., no log-in required), yielding a statistically significant number of matches. The study noted that it would be unfeasible for a human to do this search manually, but with cloud computing, it takes just seconds to compare millions of faces. The accuracy improves for people with many pictures of themselves available online—which, in the age of Facebook, is practically everyone.
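
A minimal sketch of the matching step alone, assuming feature vectors have already been extracted by some recognition library and using made-up four-number vectors with hypothetical identities, illustrates how a database search for the nearest face works: compare distances and keep the closest entry under a threshold.

```python
# The matching step only: feature vectors are assumed to come from some face-
# recognition library (extraction is not shown), and the enrolled identities
# and numbers below are made up for illustration.

import math

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two feature vectors; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query: list[float], database: dict[str, list[float]], threshold: float = 0.6):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, vector in database.items():
        d = distance(query, vector)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

enrolled = {                      # identity -> previously extracted feature vector
    "person_a": [0.11, 0.52, 0.33, 0.71],
    "person_b": [0.85, 0.13, 0.64, 0.22],
}
unknown_face = [0.12, 0.50, 0.35, 0.70]   # a new photo reduced to the same form

print(best_match(unknown_face, enrolled))  # -> "person_a"
```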

Like so many technological advances, the promise of comprehensive biometric data offers innovative solutions to entrenched sociopolitical problems—and it makes dictators salivate. For each repressive regime that gathers biometric data to better oppress its population, however, a similar investment will be made by an open, stable and progressive country for very different reasons.

India’s unique identification (UID) program is the largest biometric identification undertaking in the world. Constituted in 2009, the campaign, collectively called Aadhaar (meaning “foundation” or “support”), aims to provide every Indian citizen—1.2 billion and counting—with a unique twelve-digit identity number linked to the person’s biometric data, including fingerprints and iris scans. This vast program was conceived as a way to solve the problems of inefficiency, corruption and fraud endemic in the existing system, in which overlapping jurisdictions resulted in up to twenty different forms of identification issued by various local and national agencies.

Many in India believe that as the program progresses, Aadhaar will help citizens who have been excluded from government institutions and aid networks. For castes and tribes traditionally lowest on the socioeconomic scale, Aadhaar represents a chance to receive state aid like public housing and food rations—things that had been technically available but still out of reach, since many potential recipients lacked identification. Others who had trouble obtaining identification, like internal migrant workers, will be able to open a bank account, obtain a driver’s license, apply for government support, vote and pay taxes with Aadhaar. When enrolling in the scheme, an individual may open a bank account that is tied to his or her UID number. This enables the government to easily track subsidies and benefits.


In a political system racked by corruption and crippled by its own sheer size—less than 3 percent of the Indian population is registered to pay income tax—this effort seems like a possible win-win for all honest parties. Poor and rural citizens gain an identity, government systems become more efficient and all aspects of civic life (including voting and paying taxes) become more transparent and inclusive. But Aadhaar has its detractors, people who consider the program Orwellian in scope and character and a ploy to enhance the surveillance capacities of the Indian state at the expense of individual freedoms and privacy. (Indeed, the government can use Aadhaar to track the movements, phones and monetary transactions of suspected terrorists.) Defenders point out that Indians do not have to have an Aadhaar card, since public agencies aren’t allowed to require one before providing services. Concerns over whether the Indian government is intruding on civil liberties echo those of opponents of a similar project in the United Kingdom, the Identity Cards Act of 2006. (After a several-year struggle to implement the program, Britain’s newly elected coalition government scrapped the plan in 2010.)

In India, these concerns seem to be outweighed by the promise of the plan’s benefits, but their presence in the debate shows that even in a democracy, the public remains apprehensive about large biometric databases and about whether they’ll ultimately serve the citizens or the state. So what happens when less democratic governments begin collecting biometric data in earnest? Many already have, beginning with passports.

States won’t be the only ones trying to acquire biometric data. Warlords, drug cartels and terrorist groups will seek to build or access biometric databases in order to track recruits, monitor potential victims and keep an eye on their own organizations. The same logic applies here as to dictators: If they have something to trade, they can get the technology.

Given the strategic value of these databases, states will need to protect their citizens’ information as carefully as they would safeguard weapons of mass destruction. Mexico is currently moving toward a biometric data system for its population in order to strengthen law enforcement, better monitor its borders and identify criminals and drug-cartel leaders. But since the cartels have already infiltrated large swaths of the police and national institutions, there is a very real fear that an unauthorized actor could gain access to the valuable biometric data of the Mexican population. Eventually, some illicit group will successfully steal or illegally acquire a biometric database from a government, and perhaps only then will states fully invest in high-level security measures to protect this data.

All societies will reach agreement on the need to keep biometric data out of the hands of certain groups, and most will try hard to keep individual citizens from gaining access as well. Regulation will, like regulation of other types of user data, vary by country. In the European Union, which already boasts a series of robust biometric databases, member states are required by law to ensure that no individual’s right to privacy is violated. States must get the full and informed consent of citizens before they can enter biometric information into the system, leaving citizens the option to revoke consent in the future without penalty. Member states are further required to hear complaints and see that victims are compensated. The United States will probably adopt similar laws due to shared privacy concerns, but in repressive countries, it’s likely that such databases will be controlled by the ministry of the interior, ensuring that they are primarily used as a tool for the police and security forces. Government officials in those regimes will also have access to facial-recognition software, databanks of citizens’ personal information and real-time surveillance methods through people’s technological devices. Secret police will often find a handset more valuable than a gun.


For all of the discussions about privacy and security, we rarely look at the two together and ask the question: What makes people nervous about the Internet? From the world’s most repressive societies to the most democratic, citizens are nervous about the unknowns, the dangers and crises that come with entangling their lives in a web of connected strangers. For those who are already connected, living in both the physical and the virtual worlds has become part of who we are and what we do. As we grow accustomed to this change, we also learn that the two worlds are not mutually exclusive, and that what happens in one has consequences in the other.

What seem like well-defined debates today over security and privacy will broaden into questions of who controls and influences virtual identities, and thus citizens themselves. Democracies will become more influenced by the wisdom of crowds (for better or for worse), poor autocracies will struggle to acquire the resources needed to extend control effectively into the virtual world, and wealthier dictatorships will build modern police states that tighten their grip on citizens’ lives. These changes will spur new behaviors and progressive laws, but given the sophistication of the technologies involved, in most cases citizens stand to lose many of the protections they rely on today. How populations, private industry and states handle the forthcoming changes will be determined largely by their social norms, legal frameworks and particular national characteristics.

We will now turn to a discussion of how global connectivity will affect the way states operate, negotiate and wrestle with each other. Diplomacy has never been as interesting as it will be in the new digital age. States, which are constantly playing power politics in the international system, will find themselves having to retool their domestic and foreign policies in a world where their physical and virtual tactics are not always aligned.


1 Most of these techniques fall under the umbrella of search-engine optimization (SEO) processes. To influence the ranking algorithm of search engines, the most common method is to seed positive content around the target (e.g., a person’s name), encourage links to it and frequently update it, so that the search-engine spiders are likely to identify the material as popular and new, which pushes down the older, less relevant content. Using prominent keywords and adding back-links (incoming links to a website) to popular sites can also influence the ranking. This is all legal and generally considered fair. There is an underside to SEO, however—“black-hat SEO”—where efforts to manipulate rankings include less legal or fair practices like sabotaging other content (by linking it to red-flag sites like child pornography), adding hidden text or cloaking (tricking the spiders so that they see one version of the site while the end user sees another).

2 This dictum is commonly attributed to Stewart Brand, the founder and editor of the Whole Earth Catalog, recorded at the first Hackers’ Conference, in 1984.

3 While in the technical community the term “hacker” means a person who develops something quickly and with an air of spontaneity, we use it here in its colloquial meaning to imply unauthorized entry into systems.

4 Among the tweets the Pakistani IT consultant Sohaib Athar sent the night of the bin Laden raid: “Helicopter hovering above Abbottabad at 1AM (is a rare event).”

5 “Predictive analytics” is a young field at the intersection of statistics, data mining and computer modeling. At its core, it uses data to make useful predictions about the future. For example, predictive analytics could use data on ridership fluctuations on the New York City subway to predict how many trains would be needed on a given day, accounting for seasonality, employment and the weather forecast.
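
As a concrete, hedged illustration of that idea, the short Python sketch below fits a simple linear model to a handful of invented daily observations and uses it to estimate ridership, and from that a rough train count, for a hypothetical day. The feature choices, figures and the riders-per-train constant are all assumptions made up for this example, not actual transit data or an actual forecasting model.

```python
# Illustrative sketch only: "predictive analytics" as a simple linear model that
# estimates subway ridership from a few explanatory variables. All numbers are
# invented; a real system would be trained on historical ridership records.
import numpy as np

# Each row: [is_weekday (0/1), temperature in °F, employment index], one per observed day.
features = np.array([
    [1, 70, 0.95],
    [1, 40, 0.95],
    [0, 75, 0.95],
    [0, 35, 0.94],
    [1, 60, 0.96],
])
riders = np.array([5.6, 5.9, 3.1, 2.8, 5.7])  # millions of riders on those days

# Fit a least-squares linear model: riders ≈ X @ coefficients (last column is the intercept).
X = np.column_stack([features, np.ones(len(features))])
coefficients, *_ = np.linalg.lstsq(X, riders, rcond=None)

# Predict ridership for a cold weekday, then convert it into a rough train count.
tomorrow = np.array([1, 38, 0.95, 1.0])
predicted_millions = tomorrow @ coefficients
trains_needed = int(np.ceil(predicted_millions * 1_000_000 / 1_500))  # assume ~1,500 riders per train
print(predicted_millions, trains_needed)
```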

6 Interestingly, the VPPA statute came into play in a Texas lawsuit in 2008, when a woman filed a class-action suit against Blockbuster for sharing her rental and sales record with Facebook without her permission. The parties settled.

7 In the United States, the “trespass to chattels” tort has in some cases already been applied to cyberspace.

8 Wearable technology overlaps with the similar emergent industry of haptic technology, but the two are not synonymous. Haptics refers to technology that interacts with a user’s sense of touch, usually through pulses or the application of pressure. Wearable technologies often include haptic elements but are not limited to them (a jacket for cyclists that lights up in the evening, for instance, is wearable but not haptic); nor are all haptic technologies wearable.
