EVGENY MOROZOV
Contributing editor, The New Republic; syndicated columnist; author, To Save Everything, Click Here: The Folly of Technological Solutionism
I worry that as the problem-solving power of our technologies increases, our ability to distinguish between important and trivial or even nonexistent problems diminishes. Just because we have “smart” solutions to fix every single problem under the sun doesn’t mean that all of those problems deserve our attention. In fact, some of them may not be problems at all; that certain social and individual situations are awkward, imperfect, noisy, opaque, or risky might be by design. Or, as the geeks like to say, some bugs are not bugs; some bugs are features.
I find myself preoccupied with the invisible costs of “smart” solutions in part because Silicon Valley mavericks are not lying to us: Technologies are becoming not only more powerful but also more ubiquitous. We used to think that, somehow, digital technologies lived in a nature reserve of some kind; first we called this imaginary place “cyberspace,” and then we switched to the more neutral label of “Internet.” Only in the last few years, with the proliferation of geolocational services, self-driving cars, and smart glasses, have we grasped that such reserves were perhaps a myth and that digital technologies would be everywhere: in our fridges, on our belts, in our books, in our trash bins.
All this smart awesomeness will make our environment more plastic and more programmable. It will also tempt us to design out all imperfections—just because we can!—from our interactions, social institutions, and politics. Why have an expensive law-enforcement system if we can design smart environments where no crimes are committed, simply because those people deemed “risky”—based, no doubt, on their online profiles—are barred from access and thus unable to commit crimes in the first place? So we are faced with a dilemma: Do we want some crime or no crime? What would we lose—as a democracy—in a world without crime? Would public debate suffer once the media and the courts no longer had legal cases to examine? This is an important question that I’m afraid Silicon Valley, with its penchant for efficiency and optimization, might not get right.
Or take another example: If, through the right combination of reminders, nudges, and virtual badges, we can get people to be “perfect citizens”—recycle, show up at elections, care about urban infrastructure—should we take advantage of the possibilities offered by smart technologies? Or should we, perhaps, accept that slacking off and idleness, in small doses, are productive in that they create spaces and openings where citizens can still be appealed to by deliberation and moral argument, not just the promise of a better shopping discount courtesy of their smartphone app?
If problem solvers can get you to recycle via a game, why would they bother with the less effective path of engaging you in moral reasoning? The difference is that those earning points in a game might end up knowing nothing about the “problem” they were solving, while those who had been through the argument would have at least a small chance of grasping the issue’s complexity and doing something that would matter in the years to come, not just today.
Alas, smart solutions don’t translate into smart problem solvers. In fact, the opposite might be true: Blinded by the awesomeness of our tools, we might forget that some problems and imperfections are just the normal costs of accepting the social contract of living with other human beings, treating them with dignity, and ensuring that, in our recent pursuit of a perfect society, we do not shut the door to change. Change usually happens in rambunctious, chaotic, and imperfectly designed environments; sterile environments, where everyone is content, are not known for innovation, of either the technological or the social variety. When it comes to smart technologies, there’s such a thing as too “smart,” and it isn’t pretty.