PART IV NATURE, PIXILATED

An (Un)Natural Future of the Senses

What about us? Are we natural anymore? How can we be, when we’ve morphed into superheroes? Our ancestors adapted to nature according to the limits of their senses. But over the eons, by extending our senses through clever inventions—language, writing, books, tools, telescopes, telephones, eyeglasses, cars, planes, rocket ships—we’ve changed how we engage the world and also how we think of ourselves. We just assume now that human beings can move across the skies at 500 mph. Or spot a hawk across a valley. Or do colossal calculations at speed. Or watch events unfolding halfway around the world. Or safely repair someone’s heart. Or wage war. Our attitude about our own nature, what sort of creatures we are, now includes the novelties we’ve pinned to our senses.

All these add-ons are a perfectly ordinary part of daily life. The use of tools and technology has become an innate part of our being, as we extend ourselves deeper and deeper into our environment. In the past decades a fundamental change has evolved in the idea of the universe we inhabit, and also what a human being is and may become. We don’t worry if we can’t see a splinter in a child’s finger. We automatically don glasses and become an animal with keener eyesight. That may save the child from infection, but it also revises what a human being is. How will that continue changing in our lifetime?

Already, we’re masters of the invisible. Just as we accept that the universe is mainly invisible dark matter and dark energy, we accept the reality of protozoans and viruses even though we can’t see them without peering through a microscope—which few people are tempted to do—or, perhaps, meeting them as stationary oddities in the pages of a textbook. We believe in television and radio waves, gnomelike quarks, GPS, microwaves, the World Wide Web, gosling photons, a mantilla of nerve endings in the brain, the voiceless hissing of background fizz from the Big Bang, planets orbiting many stars in the night sky—some hospitable to life. Then there’s all the panting eyes, throbbing jellies, iridescent bladders, and glowing mouths haunting the remote sunless abysses of the deep sea.

Our mental cosmos teems with a thicker texture of invisibles than ever before. Living with invisible forces used to mean spirits, ghosts, gods, angels, and ancestors. Our view of nature now supplies different familiar ghosts, including all the wispy tangles, tinctures, and driblets of a working body being revealed to us as never before through technology and nanotechnology. We take for granted the vast invisible worlds surrounding and inside of us. It’s a sort of high-tech shamanism (the belief that spirits inhabit all things, living or nonliving). Some entities may hide in the holly bush at the front door; others float light-years away.

We can forge so many invisibles in our mind’s eye because enough of our kind have witnessed them firsthand, through microscopes, telescopes, or computers, and smeared that knowledge far and wide. As a result, the air gyrates with invisibles I can hear but not see, and yet take for granted like distant relatives whose photos I’ve framed.

In autumn, a season of night fiddlers, I know summer is fraying away because the air brims with their eerie music, although I don’t see the hidden musicians—katydids and crickets playing their marimbas, as they lift their wings high and rub a sharp edge of one wing over a ridge of pegs on the other. It’s not easy to spot cellophane-winged aerobats among late summer’s wild chicory, Queen Anne’s lace, and clover-scented milkweed—kingdom of the giant, much showier monarch butterfly.

Nonetheless, I can picture them all combing strands of song from their wings, picture them in microscopic hair-perfect detail. Katydids are rasping a tattletale: Katy did! Katy did! Katy did!! Cicadas, buckling and unbuckling their stomach muscles, are yielding the sound of someone sharpening scissors. Fall field crickets, the thermometer hounds, are adding high-pitched tinkling chirps to the jazz. Carolina crickets are furnishing a buzzing trill. Grasshoppers sound like they’re shuffling decks of cards. Snowy tree crickets are lending an evenly spaced chirping melody to the ensemble. It’s the ultimate jug band, using body parts as instruments.[16]

I don’t see any of their courtship, since they’re small and hidden in the darkness. But I’ve learned enough from scientist-seers and their technology to trust that the males do all the serenading, horny for females, each of whom waits in the dark loins of the night, listening with ears in rather odd places—on the abdomen or the knees. She homes in on a winged dude, lured by his siren song. Then the happy male croons a different courtship tune. But they haven’t much time for dalliance before the first heart-stopping frost. According to folklore’s timetable—and I still believe in folklore—frost creeps in ninety days past the katydids’ first song. In my insect-loud yard, I heard the first katydid call about a week early this year, round about the middle of July, and sure enough frost fell in mid-October.

Alongside this buzzing-chirping-tinkling-fiddling in the night, and choreographed to it, there’s the raw sexcapade drama. And, although I don’t ogle thousands of bugs in flagrante all over the woods, and tens of thousands, maybe millions, yodeling their lust downtown, up in the forest leaf-parlors, and along sinewy country roads, I was a college student once; I get it.

All of this happens unseen, which is a haunting thought, but even without laying our eyes on the crickets, grasshoppers, cicadas, and katydids, we hear them shrilling in this season and trust that they’re the tiny living gargoyles scientists claim. We believe the katydids exist in their scratchy little corner of the invisible—an act of faith that suits us just fine. Most don’t wish to search in the dark buzz of night for the multieyed and antenna-ed.

Anyway, these days, we know we can verify the existence of creatures we can’t see easily enough in books, films, or bug Facetime on the Internet. The ancients believed the gods were angry when storms crackled and boomed. We check the Weather Channel’s radarscope.

Our ancient understanding of nature (faith, lore, hearsay, story) has a new layer, one changing almost every day, relayed by nameless proxies: technology-equipped scientists and other researchers, the designated witnesses who behold, listen, and chronicle as the likes of insect love party on. We agree en masse to believe these professionally designated seers.

Or we become citizen-seers ourselves. A smartphone will do. Walking on a trail in New Forest, in southern England, I stopped in a sunlit clearing when I heard the distinctive rasp of Buddhist monks creating a sand mandala by rubbing brass scrapers down ribbed brass funnels to release single brightly colored grains. A quick spin around: trees, pastures, a perching woodland warbler falling into flight, shadows dancing along the trail. No monks. I smiled at the sleight of ear. Weeks before, at Keystone College in Pennsylvania, I had watched, enchanted, as Tibetan monks sand-dribbled a mandala, producing what sounded like the trills and quiverings of invisible cicadas.

Pulling my iPhone out of my pocket, I opened the Cicada Hunt app, a brainchild of entomologists at the University of Southampton. Over a thousand people sent in reports this past summer. On the iPhone, a green card appeared with a white cicada icon sitting on black velvet at its center. When I held it up and tapped the cicada icon, a white outer ring fanned open around it and the cicada glowed orange. For eighteen seconds, the app tested the soundscape for the exact frequency of rare New Forest cicadas. No luck. Only a scant few New Forest cicadas have been detected by the thousands of citizen scientists in England since 2000, but that’s enough to offer proof of their whereabouts and need for protection. Though I knew it was a long shot, I found the app doesn’t register the calls of my homely New York variety of cicada.
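The app’s eighteen-second test, scanning a soundscape for one exact frequency, is the kind of job the Goertzel algorithm was built for: it measures a signal’s energy at a single target frequency far more cheaply than a full spectral analysis. A minimal sketch in Python follows; the 14 kHz target and the synthetic “cicada” tone are assumptions for illustration, not the app’s documented internals.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Energy of `samples` at one target frequency (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)     # nearest analysis bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1         # two-term recurrence
    return s2 * s2 + s1 * s1 - coeff * s1 * s2

rate, n = 44100, 4096
# A pure 14 kHz tone standing in for a cicada call, versus a 1 kHz hum.
cicada = [math.sin(2 * math.pi * 14000 * t / rate) for t in range(n)]
hum = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(n)]

# Energy at 14 kHz is large for the tone, negligible for the hum.
print(goertzel_power(cicada, rate, 14000) > goertzel_power(hum, rate, 14000))  # True
```

A detector like this simply compares the measured energy against a threshold calibrated for the species’ song.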

The Buddhist mandala-makers may live in a cosmos dancing with colorful deities, just as they always have. But now they and the Dalai Lama (a science aficionado) are also aware, from mindful moment to moment, of an invisible dimension that includes neurons, quarks, Higgs bosons, MRIs, condensation nuclei, white dwarfs, DNA, and a googolplex of others.

Elsewhere on Earth, on over 5.2 million Internet-connected computers, citizen scientists are helping SETI (the Search for Extraterrestrial Intelligence) monitor radio telescope data through the SETI@home project, hoping to catch a message from alien life forms in some distant star system. SETI’s senior astronomer, Seth Shostak, believes that the first calling card from aliens may well be detected on home computers, not by official scientists at radio telescopes arrayed in India, Australia, Puerto Rico, or Chile.

More than ever, our technology allows us to peer into worlds far beyond our outmoded senses, into a realm where cells loom large as lakes, pores are chasms, the body is just another kind of ecosystem, and the idea of cartography no longer applies only to landforms. We’ve mapped galaxies and genomes. We keep projecting ourselves into landscapes we’re not equipped to cross in the flesh. Computers have shed light on biological processes invisible to humankind until very recently. In 1990, I wrote about our sensory grasp of the world in A Natural History of the Senses. Only twenty years later, the basic experience is the same, but its scope has been vastly amplified. For example, our proprioception, the sense of where we are in space, now spins far beyond the physical body. We can spy on ourselves in sly, public, or cloak-and-dagger ways, from lavish perspectives, inside and out. By satellite, a drone’s eye, via Skype, on security cameras, through electron microscopes. Some of us are even relaxed about, or excited by, the promise of connecting our brains to the world outside of the body. In such sweeping sensory adventures, our cameo of who and what we are shifts, and also how we may decide to know ourselves in the future.

What we see and think when we look at the night sky has also changed. Two decades ago, the only planets were here in our own solar system. Now we know that the cosmos is littered with them. We know now that the Milky Way, the backbone of night, is twice as large, even heavier, and spinning faster than we previously thought. Also that it has four arms, not two. Our telescopes listen with cupped ears for whispers from the beginning of time, when the whole universe was no larger than a grapefruit, a small solid object, before the light of stars and the destiny of planets. How could something that small give birth to more space than the mind’s eye can fathom?

Although the brain’s star chamber is sealed and invisible in its cave of bone, we’re craning our high-tech senses (MRI, fMRI, PET scan, etc.) to peer in as never before at networks lit like night views of Earth from space. Thanks to digital displays, scalpel-less dissecting of live patients is commonplace, as is cut-free slicing of gray matter into wafer-thin sheets that can be viewed three-dimensionally and rotated as if the conscious, alert, and no doubt mind-wandering occupant had set his actual brain on an anatomy bench for anyone to probe. All sorts of abnormalities and diseases, such as schizophrenia and autism, have bared some of their bones, and we’ve begun exploring the mental haunts of such notorious intangibles as religion, addiction, and compassion. By studying busy neural work sites, increased traffic flow, and where thought-crews guzzle oxygen as they toil, we’re forming insights about everything from lying to love. For the first time, we’re able to see some of the ties that bind us. The verb we use, “scan,” which used to mean a brief skim with the eyes, has evolved into its opposite: a machine’s searching stare. People gamely volunteer to have their heads examined so that researchers can witness emotional regattas in full sail (or, sometimes, on the rocks).

Nightly news often reports the latest nugget about concussion, depression, rejection, multitasking, empathy, risk-taking, fear, and other states of mind—explained in terms of the neural architecture and wiring of the brain. In 2013, when President Obama proposed a federally funded $100 million brain-mapping project, stressing that “as humans we can identify galaxies light-years away, study particles smaller than an atom, but we still haven’t unlocked the mystery of the three pounds of matter that sits between our ears,” some people balked at the expense, but few doubted that it was possible, or that it was a worthy goal.

A new field, called interpersonal neurobiology, draws its vigor from one of the great discoveries of our age: the brain is rewiring itself daily, and all relationships change the brain—but especially our most intimate bonds, which foster or fail us, altering the delicate circuits that shape memories, emotions, and that ultimate souvenir, the self. Love is the best school, but the tuition is high, and the homework can be physically painful. As imaging studies by the UCLA neuroscientist Naomi Eisenberger show, the same areas of the brain that register physical pain are active when someone feels brutalized by love. That’s why rejection hurts all over the body, but in no place you can point to. Or rather, you’d need to point to the dorsal anterior cingulate in the brain (the front of a collar wrapped around the corpus callosum, the bundle of nerve fibers zinging messages between the hemispheres), which registers both rejection and physical assault. Whether they speak Armenian or Mandarin, people around the world use the same images of physical pain to describe a broken heart, which they perceive as crushing, crippling, a real blow that hurts so bad they go all to pieces. It’s not just a metaphor for an emotional punch that’s too shadowy to name. As our technology is beginning to reveal, social pain—rejection, the end of an affair, bereavement—can trigger the same sort of sensations as a stomachache or a broken bone.

But a loving touch is enough to change everything. The neuroscientist James Coan, of the University of Virginia in Charlottesville, conducted experiments in which he gave an electric shock to one ankle of women in happy committed relationships. Tests registered their anxiety before and pain level during the shocks. Then they were shocked again, this time holding their loving partner’s hand. The same level of electricity produced significantly lower pain and even less neural response in the cingulate. In troubled relationships, this protective effect did not occur. If you’re in a healthy relationship, holding your partner’s hand is enough to subdue your blood pressure, ease your response to stress, improve your health, and even soften physical pain. We’re able to dramatically alter one another’s physiology and neural functions—and watch.

The ability to see these scans has ushered in a whole new level of relating to one another. One can decide to be a more attentive and compassionate partner, mindful of the other’s motives, hurts, and longings. Breaking old habits and patterns isn’t easy, but couples are choosing to rewire their brains on purpose, sometimes with a therapist’s help, to ease conflicts and strengthen their at-one-ness. Neanderthals didn’t sit around thinking about their partners’ neurons—and neither did Plato, Shakespeare, Michelangelo, or my mother, for that matter. I didn’t when I was an undergraduate. Even though we are still in the early days of brain imagery, we’re tagging invisibles like butterflies; we’re learning life-altering truths.

What will this mean for a new Anthropocene ethics? How might our knowledge influence how we choose to relate to our spouse, children, friends, coworkers? As such knowledge trickles through society, will it influence how we conduct our relationships? How will we handle the responsibility of knowing that harsh words can be as physical as a punch, inflict violent pain, and subtly mess with the wiring in someone’s brain?

Weighing in the Nanoscale

We’re not just seeing invisibles; we’re engineering things on a minute, invisible-to-the-eye scale. “Nano,” which means “dwarf” in Greek, applies to things one-billionth of a meter long. In nature that’s the size of sea spray and smoke. An ant is about 1 million nanometers long. A strand of hair is 80,000 to 100,000 nanometers wide, roomy enough to hold 100,000 perfectly machined carbon nanotubes (which are 50 to 100 times stronger than steel at one-sixth the weight). A human fingernail grows about 1 nanometer a second. About 500,000 nanometers would fit in the period at the end of this sentence, with room left over for a rave of microbes and a dictator’s heart.
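The round numbers above invite a quick sanity check. A throwaway sketch in Python, using assumed mid-range figures (a small 1 mm ant, a 90-micron hair, a 0.5 mm printed period):

```python
NM_PER_M = 10**9  # a nanometer is one-billionth of a meter

ant_m = 1e-3       # a small ant, ~1 mm (assumed round figure)
hair_m = 90e-6     # a human hair, ~90 micrometers across
period_m = 0.5e-3  # a printed period, ~0.5 mm

print(round(ant_m * NM_PER_M))     # 1000000 -> about a million nanometers
print(round(hair_m * NM_PER_M))    # 90000   -> inside the 80,000-100,000 range
print(round(period_m * NM_PER_M))  # 500000

# At 1 nanometer per second, a fingernail adds up to an everyday pace:
month_s = 60 * 60 * 24 * 30
print(round(1e-9 * month_s * 1000, 1))  # 2.6 -> about 2.6 mm of nail a month
```

The last figure lands near the familiar rule of thumb that nails grow a few millimeters a month, which is reassuring for the 1-nanometer-a-second claim.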

I’m stirred by the cathedral-like architecture of the nanoscale, which I love to ogle in photographs taken through scanning electron microscopes. One year in college, I spent off-duty hours hooking long-stranded wool rugs after the patterns of the amino acid leucine (seen by polarized light), an infant’s brain cells, a single neuron, and other objects revealed by such microdelving. How beautifully some amino acids shine when lit by polarized light: pastel crystals of pyramidal calm, tiny tents along life’s midway. Arranged on a slide or flattened on a page, they glow gemlike but arid. We cannot see their vitality, how they collide and collude as they build behavior. But their nanoscale physiques are eye-openers, and more and more we’re turning to nature for inspiration.

We used to think that wall-climbing geckos must have suckers on the soles of their feet. But in 2002, biologists at Lewis & Clark College in Portland, Oregon, and the University of California at Berkeley released their strange findings, and science was agog. Viewed at the nano level, a gecko’s five-toed feet are covered in a series of ridges, the ridges are tufted with billions of tiny tubular elastic hairs, and the hairs bear even tinier spatula-shaped boots. The natural attraction between atoms and molecules (the van der Waals force) is enough to stick the spatulas to the surface of most anything. And the toes are self-cleaning. As a gecko relaxes a toe and begins to step, the dirt slides off and the gecko steps out of it. No grooming required.

When I learned of gecko feet from a biologist friend with an infectious sense of wonder, the idea of sticky instantly changed from a gluey sensation to a triumph of nature’s engineering. The next time I spied a gecko climbing up a stucco wall, my brain saw the tidy toes rising, and the spatula-tipped hairs clinging, even though my raw eyes couldn’t see beyond the harlequin slither. Inspired by gecko toes, scientists have invented chemical-free dry bio-adhesives and -bandages, and all sorts of biodegradable glues and geckolike coatings for home, office, military, and sports.

The nanotechnology world is a wonderland of surfaces unimaginably small, full of weird properties, and invisible to the naked eye, where we’re nonetheless reinventing industry and manufacturing in giddy new ways. Nano can be simply, affordably lifesaving during natural disasters. The 2012 spate of floods in Thailand inspired scientists to whisk silver nanoparticles into a solar-powered water filtration system that can be mounted on a small boat to purify water for drinking from the turbid river it floats on.

In Africa’s Namib Desert, inspired by water-condensing bumps on the backs of local beetles, a new breed of water bottle harvests water from the air and refills itself. The bottles will hit the market in 2014, for use by both marathon runners and people in third-world countries where fresh water may be scarce. South African scientists have created water-purifying tea bags. Nano can be as humdrum as the titanium dioxide particles that thicken and whiten Betty Crocker frosting and Jell-O pudding. It can be creepy: pets genetically engineered with firefly or jellyfish protein so that they glow in the dark (fluorescent green cats, mice, fish, monkeys, and dogs have already been created). It can be omnipresent and practical: the army’s newly invented self-cleaning clothes. It can be unexpected, as with the microchips embedded in Indian snake charmers’ cobras so that the snakes can be identified if they stray into the New Delhi crowds. Or it can dazzle and fill us with hope, as in medicine, where it promises nano-windfalls.

In the 1966 science-fiction movie Fantastic Voyage, a tiny human-crewed submarine could sail through a patient’s turbulent bloodstream, careening down the rapids of an artery, dodging red blood cells, drifting through flesh lagoons, until its crew found the diseased or torn parts needing repair. With the advent of nanotechnology, this adventure leaves the realm of fiction. Researchers are perfecting microscopic devices known as nanobots and beebots (equipped with tiny stingers) that can swim through the bloodstream and directly target the site of a tumor or disease, providing radical new treatments.

The futurist Ray Kurzweil predicts that “by the 2030s we’ll be putting millions of nanobots inside our bodies to augment our immune system, to basically wipe out disease. One scientist cured Type I diabetes in rats with a blood-cell-size device already.”

There are nanobots invisible to the immune system, which shed their camouflage when they reach their work site. Tiny and agile enough to navigate a labyrinth of fragile blood vessels, some are thinner than a human hair. Researchers at the École Polytechnique de Montréal in Canada are developing a kind of self-propelled bacterium with naturally magnetic innards. In nature, the bacterium’s corkscrewlike tail propels it, and its magnetic particles point like a compass needle to guide it toward deeper water and away from the death knell of oxygen. Researchers are learning to steer the bacterium with precise tugs and pushes from an MRI machine, and at only 2 microns in diameter, the bacteria are small enough to fit through the smallest blood vessels in the human body. These harnessed bacteria can carry polymer beads roughly 150 nanometers in size; the goal is to modify the beads to carry medicines to tumors and other targets. Because we find it hard to imagine either end of the spectrum of scale—the cosmically infinite or the minutely finite—it all sounds impossible. Yet we believe in such things as surely as we do the unseen katydids in the woods.

We may imagine harnesses as large, leathery, and worn, and bacteria as invisibly tiny, able to slip into, through, and around objects or people. Harnessing bacteria doesn’t form a feasible image in the mind’s eye. You need to imagine bacterial horses and magnetic harnesses carrying polymer-bead bells that jingle a cancer-fighting drug. Still, a “sleigh” of medicine could become a new commonplace that slips into conversation the way a “flight” of stairs has, so comfortably that we no longer picture birds in flight when we see a staircase. We’re constantly minting new metaphors for the brain to use as mental shortcuts.

“Is there a sleigh for my illness?” someone may one day ask a doctor, as we now ask, “Is there a pill I can take?”[17]

Because boys love monster machines that dig, drag, roar, or explode, maybe the metaphor will be a “tug,” “tractor,” “missile,” or “submarine.” It might even be a “Phelps,” after the Olympic swimmer.

Another recent marvel of nanotechnology promises to alter daily life, too, but this one, despite its silver lining, is wickedly dangerous. Inevitably, it will inspire a welter of patents and ignite bioethical debates. Nano-engineers have devised a true silver bullet, a way to coat both hard surfaces (such as hospital bedrails, doorknobs, and furniture) and also soft surfaces (sheets, gowns, and curtains) with microscopic nanoparticles of silver, an element known to kill microbes. You’d think the new nanocoating would be a godsend to patients stricken with hospital-acquired sepsis and pneumonia and to the doctors fighting what has become a nightmare of antibiotic-resistant microorganisms that kill forty-eight thousand people a year.

It is. That’s the problem.

It’s possibly too effective. Remember, most microorganisms are harmless, many are beneficial, but some are absolutely essential for the environment and human life. Bacteria were the first life forms on the planet, and we owe them everything. Swarms of bacteria blanket us, other swarms colonize our insides, and still more flock like birds to any crease, cave, or canyon of the body they can find. Our biochemistry is interwoven with theirs. We also draft bacteria for many industrial and culinary purposes, from decontaminating sewage to creating tangily delicious foods like kefir, sauerkraut, and yogurt. So we need to be careful about the bacteria we target.

Will it be too tempting for nanotechnology companies, capitalizing on our fears and fetishes, to engineer superbly effective nanosilver microbe-killers, deodorants, and sanitizers of all sorts for home and industry? We may accept the changes nanotechnology creates in everyday life (such as antimalaria garments that ward off bugs) as part of the brave new world we deserve, yet we’re inventing them before thinking through their potential consequences. There’s no evidence that the antibacterial soaps available at the supermarket work better than plain soap and water, and in fact they may be hazardous: triclosan, one of the standard ingredients in these soaps, is registered as a pesticide with the EPA.

That’s why Kathleen Eggleson, a scientist at the University of Notre Dame, founded the Nano Impacts Intellectual Community, a monthly meeting that draws campus researchers, community leaders, and visiting scholars and authors to discuss the ethics and impact of new developments in nanotechnology. Her April 2012 paper published by the Center for Nano Science and Technology highlights the risk of unregulated products destroying microbial biodiversity. Just this past December, she points out, a coating for textiles became the first nanoscale material approved as a pesticide, in that case by the EPA. What if our nanopesticides accidentally kill the nitrogen-fixing bacteria that keep our soils fertile and ecosystems fed?

How incredible that we now have national committees and college seminars that debate bioethics, neuroethics, and nanoethics. We’re creating ethical predicaments that would have made Montaigne or Whitman blink. “I sing the body electric,” Walt Whitman wrote in 1855, inspired by the novelty of useful (not just parlor-trick) electricity, which he would live to see power streetlights and telephones, trolleys and dynamos. Whitman was the first American poet the technological universe didn’t scare. He often celebrated the steam engine, the railroad, and other new inventions of his era. In Leaves of Grass, his ecstatic epic poem of American life, he depicts himself as a live wire, a relay station for all the voices of Earth, natural or invented, human or mineral. “I have instant conductors all over me,” he wrote. “They seize every object and lead it harmlessly through me.… My flesh and blood playing out lightning, to strike what is hardly different from me.”

The invention of electricity equipped Whitman and other poets with a scintillation of metaphors. Like inspiration, it was a lightning flash. Like prophetic insight, it illuminated the darkness. Like sex, it tingled the flesh. Like life, it energized raw matter. Deeply as he believed the vow “I sing the body electric,” Whitman didn’t know that our cells really do generate electricity, that the heart’s pacemaker relies on such signals, and that billions of axons in the brain create their own electrical charge (altogether the brain runs on roughly the power of a 20-watt bulb). A force of nature himself, he admired the range and raw power of electricity.

Yet I’m quite sure nanotechnology’s recent breakthroughs would have stunned him, such as the dream textile named GraphExeter, a light, supple, diaphanous material made for conducting electricity, which could revolutionize electronics by making it fashionable to wear your computer or cell phone.[18] Recharging would be automatic, as nanosized generators converted the body’s normal stretches and twists into electricity through the piezoelectric effect (the same effect that keeps a quartz watch ticking).[19] Wake Forest engineers recently invented Power Felt, a nanotube fabric that generates electricity from the temperature difference between the room and the body. You could start your laptop by plugging it into your jeans, recharge your cell phone by tucking it into a pocket. Then, not only would your cells sizzle with electricity, even your couture clothing could chime in.

Would a fully electric suit upset flight electronics, pacemakers, airport security monitors, or the brain’s cellular dispatches? If you wore an electric coat in a lightning storm, would the hairs on the back of your neck stand up? Would you be more prey to a lightning strike? How long will it be before late-night hosts riff about electric undies? Will people tethered to recharging poles haunt airport waiting rooms? Will it become hip to wear flashing neon ads, quotes, and designs—maybe a lover’s name in a luminous tattoo?

Yet electricity has already lost its pizzazz. It’s hard to spot things hidden in plain sight. Even harder when they’re invisible. We take electricity for granted, unaware of it if lights and devices are turned off. Still, its specter haunts the walls all around us, sizzles in great looping strings that encircle us. If you have sockets in your house, you keep pocket lightning pulsingly at hand. Flip one switch and daylight floods the room; flip another and night falls like an iron door. The ancient Romans used to build their spas around natural hot springs; today we keep miniature electric hot springs in our homes to boil water for washing and bathing. Electric clocks watch over us while we sleep, an electric furnace (even gas or oil heat uses an electric pilot light) keeps us warm, and an electric fan or air conditioner cools us. In the summer we live in an electric igloo.

How Anthropocene that we “condition” the very air we breathe, flavoring its essence. For most of human history, we simply breathed the air that surrounded us, whatever nature delivered, whether it was fume-laced from oil deposits or salty-fishy from the coast. Before the Industrial Revolution, neighbors inhaled similar air. Now we tailor the breath that streams into and out of us. Neighbors may prefer their homes warmer, cooler, candle-scented, more humid, redolent of ammonia or bleach, cleaned by UV lights or “ozone-spiced” bulbs. We personalize our air!

We don’t find any of this strange, don’t regard it as unnatural. We don’t even notice it unless the electricity goes out, and then it’s as if the electric in our cells failed and we feel disconnected, a word we use when speaking of both power outages and psychic alienation (which feels like our inner grid has blown a million fuses). We too are electric, after all, a hive of minute, usually imperceptible jolts, as electrical signals leap like mountain goats from cell to cell throughout the body. Electricity, the brain’s telegraph, is almost instantaneous. Pinch a nerve in your back and dancing knives prick your skin. Pinch a nerve in your neck and a tiny electrocutioner throws a switch. But sexual tingles and jolts we find shockingly pleasurable, “electric flesh-arrows,” as Anaïs Nin calls them.

Electricity is a molecular tug-of-war. Life forms can’t exist without electric pumps in each cell. Ions of potassium and sodium, flowing into and out of a cell, produce a wave. Sodium is pumped out and potassium drawn in; then the gates open, sodium rushes back in and potassium streams out, and the ripple passes to the neighboring patch of membrane, and so on. Ions fly like balls tossed by a one-armed juggler. It’s balance gone awry, regained, and lost again.

We reject things deemed too “wobbly,” “rickety,” or “unsteady.” We may condemn a person for being “unstable,” “unhinged,” or “unbalanced.” Yet deep in every cell, even the most slothful of us are falling out of balance and recovering. The body’s inner electric is not a steady stream. How ironic it is that we fight change in our lives and yearn for a state of permanence no life form can manage without dying—because we’re forever tumbling and snatching ourselves up before we smack the dirt and stay down.

Just as electricity ghosts throughout modern buildings everywhere and all the time, the same will soon be true of digital technology, woven into the walls, flowing through the floors, hidden all around and upon us. We’ll completely clothe ourselves in it, swim in it. As with the natural and man-made electric in our lives, we’ll probably ignore the clouds of technology we float on, under, and inside everywhere all the time. The brain relishes familiarity, loves being on autopilot, because then it can slur over the details and spend most of its sparking on something else. At the Oshkosh Airshow I attended one summer, the first wing-walker atop an old biplane drew gasps from thousands of people brought to a halt in amazement. The second fetched an anthology of admiring stares. By the third, a surprising number of people were blasé and continued milling about, chatting or shopping. You could almost hear their brains moaning, Oh, that again, another wing-walker. It doesn’t take much for a novelty to become invisible. And yet, isn’t this what we wish from exciting new technology, for it to slide invisibly into our lives, making them effortless and more enjoyable?

We’ll get used to living inside a digital bubble, unless connecting to the house’s fixtures through a brain-computer interface forces us to think hard. But even then, habit being what it is, we’d most likely come home, absentmindedly recall the code opening the front door’s lock, step over the threshold, daydream a hand swiping a light switch until the room brightens, mind’s-eye visualize the solar-electric shingles melting ice jams on the roof, while simultaneously worrying over a supposed affront at work or rebuff at school, anticipating dinner, fantasizing about a cute guy or gal, and hearing a stupid tune lodged in some spiky thicket of the brain.

Whether it’s hospital chairs robed in silver nanojackets to ward off bacteria, or invisibility cloaks, or degradable electronic devices that dissolve when you’re finished with them, or thin, flexible solar panels that can be printed or painted onto a surface, the writing is on the wall (though you’ll need a microscope to read it). And when it comes to the delicate balance of Earth’s life forms, it may be a small, small world after all.

Nature, Pixilated

It is winter in upstate New York, on a morning so cold the ground squeaks loudly underfoot as sharp-finned ice crystals rub together. The trees look like gloved hands, fingers frozen open. Something lurches from side to side up the trunk of an old sycamore—a nuthatch climbing in zigzags, on the prowl for hibernating insects. A crow veers overhead, then lands. As snow flurries begin, it leaps into the air, wings aslant, catching the flakes to drink. Or maybe just for fun, since crows can be mighty playful.

Another life form curves into sight down the street: a girl laughing down at her gloveless fingers, which are busily texting on some handheld device. This sight is so common that it no longer surprises me, though strolling in a large park one day I was startled by how many people were walking without looking up, or walking in a myopic daze while talking on their “cells,” as we say in shorthand, as if spoken words were paddling through the body from one saltwater lagoon to another.

We don’t find it strange that, in the Human Age, slimy, hairy, oozing, thorny, smelly, seed-crackling, pollen-strewn nature is digital. It’s finger-swiped across, shared with others over, and honeycombed in our devices. For the first time in human history, we’re mainly experiencing nature through intermediary technology that, paradoxically, provides more detail while also flattening the sensory experience. Because we have riotously visual, novelty-loving brains, we’re entranced by electronic media’s caged hallucinations. Over time, can that affect the hemispheric balance of the brain and dramatically change us? Are we able to influence our evolution through the objects we dream up and rely on?

We may possess the same brain our prehistoric ancestors did, but we’re deploying it in different ways, rewiring it to meet twenty-first-century demands. The Neanderthals didn’t have the same mental real estate that modern humans enjoy, gained from a host of skills and preoccupations—wielding laser scalpels, joyriding in cars, navigating the digital seas of computers, iPhones, and iPads. Generation by generation, our brains have been evolving new networks, new ways of wiring and firing, favoring some behaviors and discarding others, as we train ourselves to meet the challenges of a world we keep amplifying, editing, deconstructing, and recreating.

Through lack of practice, our brains have gradually lost their mental maps for how to read hoofprints, choose the perfect flints for arrows, capture and transport fire, tell time by plant and animal clocks, navigate by landmarks and the stars. Our ancestors had a better gift for observing and paying attention than we do. They had to: their lives depended on it. Today, paying attention as if your life depends on it can be a bugbear requiring conscious effort. More and more people are doing all of their reading on screens, and studies find that they’re retaining 46 percent less information than when they read printed pages. It’s not clear why. Have all the distractions shortened our attention spans? Do the light displays interfere with memory? It’s not like watching animals in ordinary life. Onscreen, what we’re really seeing isn’t the animal at all, but just three hundred thousand tiny phosphorescent dots flickering. A lion on TV doesn’t exist until your brain concocts an image, piecemeal, from the pattern of scintillating dots.

College students are testing about 40 percent lower in empathy than their counterparts of twenty or thirty years ago. Is that because social media has replaced face-to-face encounters? We are not the most socially connected we’ve ever been—that was when we lived in small tribes. In our cells and instincts, we still crave that sense of belonging, and fear being exiles, because for our ancestors living alone in the wild, without the group protection of the tribe, meant almost certain death. Those with a strong social instinct survived to pass their genes along to the next generation. We still follow that instinct by flocking to social media, which connects us to a vast multicultural human tribe—even though it isn’t always personal.

Many of our inventions have reinvented us, both physically and mentally. Through texting, a child’s brain map of the thumbs grows larger. Our teeth were sharper and stronger before we invented cooking; now, they’re blunt and fragile. Even cheap and easily crafted inventions can be powerful catalysts. The novelty of simple leather stirrups advanced warfare, helped to topple empires, and introduced the custom of romantic “courtly” love to the British Isles in the eleventh century. Before stirrups, wielding either a bow and arrow or a javelin, a rider might easily tumble off his horse. Stirrups added lateral stability, and soldiers learned the art of charging with lances at rest, creating terror as their horses drove the lances home. Fighting in this specialized way, an aristocracy of well-armed and -armored warriors emerged, and feudalism arose as a way to finance these knights, whose code of chivalry and courtly love quickly dominated Western society. In 1066, William the Conqueror’s army was outnumbered at the Battle of Hastings, but, by using mounted shock warfare, he won England anyway, and introduced a feudal society steeped in stirrups and the romance of courtly love.[20]

Tinkering with plows and harnesses, beyond just alleviating the difficult work of breaking ground, meant farmers could plant a third-season crop of protein-rich beans, which fortified the brain, and some historians believe that this brain boost, right at the end of the Dark Ages, ushered in the Renaissance.[21] Improved ship hulls spread exotic goods and ideas around the continents—as well as vermin and diseases. Electricity allowed us to homestead the night as if it were an invisible country. Remember, Thomas Edison perfected the lightbulb by candle or gas-lamp light.

Our inventions don’t just change our minds; they modify our gray and white matter, rewiring the brain and priming it for a different mode of living, problem-solving, and adapting. In the process, a tapestry of new thoughts arises, and one’s worldview changes. Think how the nuclear bomb altered warfare, diplomacy, and our debates about morality. Think how television shoved wars and disasters into our living rooms, how cars and airplanes broadened everything from our leisure to our gene pool, how painting evolved when paints became portable, how the printing press remodeled the spread of ideas and the possibility of shared knowledge. Think how Eadweard Muybridge’s photographs of things in motion—horses running, humans broad-jumping—awakened our understanding of anatomy and everyday actions.

Or think how the invention of the typewriter transformed the lives of women, great numbers of whom could leave the house with dignity to become secretaries. Although they won the opportunity because their dexterous little fingers were considered better able to push the keys, working in so-called typing pools, they ventured such bold ideas as their right to vote. Even the low-tech bicycle modified the lives of women. Straddling a bike was easier if they donned bloomers—large billowy pants that revealed little more than that they had legs—which scandalized society. They had to remove their suffocating “strait-laced” corsets in order to ride. Since that seemed wicked, “loose” women became synonymous with low morals.

In ancient days, our language areas grew because we found the rumpled currency of language lifesaving, not to mention heady, seductive, and fun. Language became our plumage and claws. The more talkative among us lived to pass on their genes to chatty offspring. Language may be essential, but the invention of reading and writing was pure luxury. The uphill march children face in learning to read reminds us that it may be one of our best tools, but it’s not an instinct. I didn’t learn to read with fluent ease until I was in college. It takes countless hours of practice to fine-tune a brain for reading. Or anything else.

Near- or farsightedness was always assumed to be hereditary. No more. In the United States, one-third of all adults are now myopic, and nearsightedness has been soaring in Europe as well. In Asia, the numbers are staggering. A recent study testing the eyesight of students in Shanghai and young men in Seoul reported that 95 percent were nearsighted. From Canberra to Ohio, one finds similar myopia, a generation of people who can’t see the forest for the trees. This malady, known as “urban eyes,” stems from spending too much time indoors, crouched over small screens. Our eyeballs adjust by changing shape, growing longer, which is bad news for those of us squinting to see far away. For normal eye growth, children need to play outside, maybe watching how a squirrel’s nest, high atop an old hickory tree, sways in the wind, then zooming down to the runnel-rib on an individual blade of grass. Is that brown curtsey at the bottom of the yard a wild turkey or a windblown chrysanthemum?

In the past, bands of humans hunted and gathered, eyes nimble, keenly attuned to a nearby scuffle or a distant dust-mist, as they struggled to survive. Natural light, peripheral images, a long field of view, lots of vitamin D, an ever-present horizon, and a caravan of visual feedback shaped their eyes. They chipped flint and arrowheads, flayed and stitched hides, and did other close work, but not for the entire day. Close work now dominates our lives, but that’s very recent, one of the Anthropocene’s hallmarks, and we may evolve into a more myopic species.

Studies also show that Google is affecting our memory in chilling ways. We more easily forget anything we know we can find online, and we tend to remember where online information is located, rather than the information itself.[22]

Long ago, the human tribe met to share food, expertise, ideas, and feelings. The keen-eyed observations they exchanged about the weather, landscape, and animals saved lives on a daily basis. Now there are so many of us that it’s not convenient to sit around a campfire. Electronic campfires are the next best thing. We’ve reimagined space, turning the Internet into a favorite pub, a common meeting place where we can exchange knowledge or know-how or even meet a future mate. The sharing of information is fast, unfiltered, and sloppy. Our nervous systems are living in a stream of such data, influenced not just by the environment—as was the case for millennia—but abstractly, virtually. How has this changed our notion of reality? Without our brain we’re not real, but when our brain is plugged into a virtual world, then that becomes real. The body remains in physical space, while the brain travels in a virtual space that is both nowhere and everywhere at once.


ONE MORNING SOME birder pals and I spend an hour at Sapsucker Woods Bird Sanctuary, watching two great blue herons feed their five rowdy chicks. It’s a perfect setting for nesting herons, with an oak-snag overhanging a plush green pond, marshy shallows to hunt in, and a living larder of small fish and frogs. Only a few weeks old, the chicks are mainly fluff and appetite.

Mom and Dad run relays, and each time one returns the chicks clack wildly like wooden castanets and tussle with each other, beaks flying. Then one hogs Mom’s beak by scissoring across it and holding on until a fish slides loose. The other chicks pounce, peck like speed typists, try to steal the half-swallowed fish, and if it’s too late for that, grab Mom’s beak and claim the next fish. Sibling rivalry is rarely so explicit. We laugh and coo like a flock of doting grandparents.

At last Mom flies off to hunt, and the chicks hush for a nap, a trial wing stretch, or a flutter of the throat pouch. Real feathers have just begun to cover their down. When a landing plane roars overhead, they tilt their beaks skyward, as if they are part of a cargo cult or expecting food from pterodactyls. We could watch their antics all day.

I’m new to this circle of blue heron aficionados, some of whom have been visiting the nest daily since April and comparing notes. “I have let a lot of things go,” one says. “On purpose, though. This has been such a rare and wonderful opportunity.” “Work?” another replies. “Who has time to work?”

So true. The bird sanctuary offers a rich mosaic of live and fallen trees, mallards, songbirds, red-tailed hawks, huge pileated woodpeckers, and of course yellow-bellied sapsuckers. Canada geese have been known to stop traffic (literally)—with adults serving as crosswalk guards. It’s a green mansion, and always captivating.

However, we’re not really there. We’re all—more than 1.5 million of us thus far—watching on two live webcams affixed near the nest, and “chatting” in a swiftly scrolling Twitter-like conversation that rolls alongside the bird’s-eye view.

We’re virtually at the pond, without the mud, sweat, and mosquitoes. No need to dress, share snacks, make conversation. Some of us may be taking a coffee break, or going digitally AWOL during class or work. All we can see is the heron nest up close, and that’s a wonderful treat we’d miss if we were visiting on foot. In a couple of weeks the camera will follow the chicks as they learn to fish.

This is not an unusual way to pass time nowadays, and it’s swiftly becoming the preferred way to view nature. Just a click away, I could have chosen a tarantula-cam, meerkat-cam, blind-mole-rat-cam, or twenty-four-hour-a-day Chinese-panda-cam from a profusion of equally appealing sites, some visited by tens of millions of people. Darting around the world to view postage-stamp-size versions of wild animals that are oblivious to the video camera is the ultimate cinema verité, and an odd shrinking and flattening of the animals, all of whom seem smaller than you. Yet I rely on virtual nature to observe animals I may never see in the wild. When I do, abracadabra, a computer mouse becomes a magic wand and there is an orphan wombat being fed by wildlife rescuers in Australia. Or from 308 photos of cattle posted on Google Earth I learn that herds tend to face either north or south, regardless of weather conditions, probably because they’re able to perceive magnetic fields, which helps them navigate, however short the distance. Virtual nature offers views and insights that might otherwise escape us. It also helps to satisfy a longing so essential to our well-being that we feel compelled to tune in, and we find it hypnotic.

What happens when that way of engaging the world becomes habitual? Nature now comes to us, not the other way round—on a small glowing screen. You can’t follow a beckoning trail, or track a noise off-camera. You don’t exercise as you meander, uncertain what delight or danger may greet you, while feeling dwarfed by forces older and larger than yourself. It’s a radically different way of being—with nature, but not in nature—and it’s bound to shape us.

Films and TV documentaries like Microcosmos, Winged Migration, Planet Earth, March of the Penguins, and The Private Life of Plants inspire and fascinate millions while insinuating environmental concerns into the living room. It’s mainly in such programs that we see animals in their natural settings, but they’re dwarfed, flattened, interrupted by commercials, narrated over, greatly edited, and sometimes staged for added drama. Important sensory feedback is missing: the pungent mix of grass, dung, and blood; drone of flies and cicadas, dry rustling of wind through tall grass; welling of sweat; sandpapery sun.

On YouTube I just glimpsed several icebergs rolling in Antarctica—though without the grandeur of size, sounds, colors, waves, and panorama. Oddest of all, the icebergs looked a bit grainy. Lucky enough to visit Antarctica years ago, I was startled to find the air so clear that glare functioned almost as another color. I could see longer distances. Some icebergs are pastel, depending on how much air is trapped inside. And icebergs produce eerie whalelike songs when they rub together. True, in many places it’s a crystal desert, but in others life abounds. An eye-sweep of busy seals, whales, penguins and other birds, plus ice floes and calving glaciers, reveals so much drama in the foreground and background that it’s like entering a pop-up storybook. Watching icebergs online, or even at an Imax theater, or in sumptuous nature films, can be stirring, educational, and thought-provoking, but the experience is wildly different.

Last summer, I watched as a small screen in a department store window ran a video of surfing in California. That simple display mesmerized high-heeled, pin-striped, well-coiffed passersby who couldn’t take their eyes off the undulating ocean and curling waves that dwarfed the human riders. Just as our ancient ancestors drew animals on cave walls and carved animals from wood and bone, we decorate our homes with animal prints and motifs, give our children stuffed animals to clutch, cartoon animals to watch, animal stories to read. Our lives trumpet, stomp, and purr with animal tales, such as The Bat Poet, The Velveteen Rabbit, Aesop’s Fables, The Wind in the Willows, The Runaway Bunny, and Charlotte’s Web. I first read these wondrous books as a grown-up, when both the adult and the kid in me were completely spellbound. We call each other by “pet” names, wear animal-print clothes. We ogle plants and animals up close on screens of one sort or another. We may not worship or hunt the animals we see, but we still regard them as necessary physical and spiritual companions. It seems the more we exile ourselves from nature, the more we crave its miracle waters. Yet technological nature can’t completely satisfy that ancient yearning.

What if, through novelty and convenience, digital nature replaces biological nature? Gradually, we may grow used to shallower and shallower experiences of nature. Studies show that we’ll suffer. Richard Louv writes of widespread “nature deficit disorder” among children who mainly play indoors—an oddity quite new in the history of humankind. He documents an upswell in attention disorders, obesity, depression, and lack of creativity. A San Diego fourth-grader once told him: “I like to play indoors because that’s where all the electrical outlets are.” Adults suffer equally. It’s telling that hospital patients with a view of trees heal faster than those gazing at city buildings and parking lots. In studies conducted by Peter H. Kahn and his colleagues at the University of Washington, office workers in windowless cubicles were given flat-screen views of nature. They enjoyed better health, happiness, and efficiency than those without virtual windows. But they weren’t as happy, healthy, or creative as people given real windows with real views of nature.

As a species, we’ve somehow survived large and small ice ages, genetic bottlenecks, plagues, world wars, and all manner of natural disasters, but I sometimes wonder if we’ll survive our own ingenuity. At first glance, it seems like we may be living in sensory overload. The new technology, for all its boons, also bedevils us with speed demons, alluring distractors, menacing highjinks, cyber-bullies, thought-nabbers, calm-frayers, and a spiky wad of miscellaneous news. Some days it feels like we’re drowning in a twittering bog of information.[23] But, at exactly the same time, we’re living in sensory poverty, learning about the world without experiencing it up close, right here, right now, in all its messy, majestic, riotous detail. Like seeing icebergs without the cold, without squinting in the Antarctic glare, without the bracing breaths of dry air, without hearing the chorus of lapping waves and shrieking gulls. We lose the salty smell of the cold sea, the burning touch of ice. If, reading this, you can taste those sensory details in your mind, is that because you’ve experienced them in some form before, as actual experience? If younger people never experience them, can they respond to words on the page in the same way?

The farther we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. I worry about our virtual blinders. Hobble all the senses except the visual, and you produce curiously deprived voyeurs. At some medical schools, future doctors can attend virtual anatomy classes, in which they can dissect a body by computer—minus that whole smelly, fleshy, disturbing human element.[24] Stanford’s Anatomage (formerly known as the Virtual Dissection Table) offers corpses that can be nimbly dissected from many viewpoints, plus ultrasound, X-ray, and MRI views.[25] At New York University, medical students can don 3D glasses and explore virtual cadavers stereoscopically, as if swooping along Tokyo’s neon-cliffed streets on Google Maps. The appeal is easy to understand. As one twenty-one-year-old female NYU student explains, “In a cadaver, if you remove an organ, you cannot add it back in as if it were never removed. Plus, this is way more fun than a textbook.” Exploring virtual cadavers offers constant change, drama, progress. It’s more interactive, more lively, akin to a realistic video game instead of a static corpse that just lies there.

When all is said and done, we only exist in relation to the world, and our senses evolved as scouts who work together to bridge that divide and provide volumes of information, warnings, and rewards. But they don’t report everything. Or even most things. We’d collapse from sheer exhaustion. They filter experience, so that the brain isn’t swamped by so many stimuli that it can’t focus on what may be lifesaving. Some of our expertise comes with the genetic suit, but most of it must be learned, updated, and refined, through the fine art of focusing deeply, in the present, through the senses, and combining emotional memories with sensory experience.

Once you’ve held a ball, felt its smooth contour, turning it in your hands, your brain need only see another ball to remember the feel of roundness. You can look at a Red Delicious apple and know the taste will be sweet, the sound will be crunchy, and feel the heft of it in your hand. Strip the brain of feedback from the mansion of the senses and life not only feels poorer; learning also grows less reliable. Digital exploration is predominantly visual, and nature, pixilated, is mainly visual, so it offers one-fifth of the information. Subtract the other subtle physical sensations of smell, taste, touch, and sound, and you lose a wealth of problem-solving and lifesaving detail.

When I was little, children begged to go outside and play, especially in winter when snow fell from the sky like a great big toy that clotted your mittens, whisked up your nose, slid underfoot, shape-shifted in your hands, made great projectiles, and outlined everything, linking twigs and branches, roofs and sidewalks, car hoods and snow forts with white ribbons. Some still do. But most people play more indoors now, mainly alone and stagestruck, staring at their luminous screens.

I relish technology’s scope, reach, novelty, and remedies. But it’s also full of alluring brain closets, in which the brain may be well occupied but has lost touch with the body, lost the intimacy of the senses, lost a visceral sense of being one life form among many on a delicately balanced planet. A big challenge for us in the Anthropocene will be reclaiming that sense of presence. Not to forgo high-speed digital life, but balance it with slow hours of just being outside, surrounded by nature, and watching what happens next.

Because something wonderful always happens. When a sense of presence steals up the bones, one enters a mental state where needling worries soften, careers slow their cantering, and the imaginary line between us and the rest of nature dissolves. Then for whole moments one may see nothing but snow, gathering thick and wet along the limbs of an old magnolia. Or, indoors, one may watch how the tulips in a vase, whose genes have traveled eons and silk roads, arch their spumoni-colored ruffles and nod gently when the furnace gusts. On the periodic table of the heart, somewhere between wonderon and unattainium, lies presence, which one doesn’t so much take as steep in, like a romance, and without which one can live just fine, but not thrive. Those sensory bridges need to stay sharp, not just for our physical survival, but so we feel fully engaged and alive.

A digital identity in a digital landscape figures indelibly in our reminted sense of self. Electronic work and dreams fuel most people’s lives, education, and careers. Kindness, generosity, bullying, greed, and malice all blink across our devices and survive like extremophiles on invisible nets. Sometimes, still human but mentally fused with our technologies, we no longer feel compatible with the old environment, when nature seemed truly natural. To use an antique metaphor, the plug and socket no longer fit snugly. We’ve grown too large, and there’s no shrinking back. Instead, so that we don’t feel like we’re falling off the planet, we’re revising and redefining nature. That includes using the Internet as we do our other favorite tools, as a way to extend our sense of self. A rake becomes an extension of one’s arm. The Internet becomes an extension of one’s personality and brainpower, an untethered way to move commerce and other physical objects through space, a universal diary, a stew of our species’ worries, a hippocampus of our shared memories. Could it ever become conscious? It’s already the sum of our daily cogitations and desires, a powerful ghost that can not only haunt with aplomb but rabble-rouse, wheel and deal, focus obsessively, pontificate on all topics, speak in all tongues, further romance, dialogue with itself, act decisively, mumble numerically, and banter between computers until the cows come home. Then find someone to milk the cows.

It’s been suggested that we really have two selves now, the physical one and a second self that’s always present in our absence—an online self we also have to groom and maintain, a self people can respond to even when we’re not available. As a result everyone goes through two adolescences on the jagged and painfully exposed road to a sense of identity.

Surely we can inhabit both worlds with poise, dividing our time between the real and the virtual. Ideally, we won’t sacrifice one for the other. We’ll play outside and visit parks and wilds on foot, and also enjoy technological nature as a mental seasoning, turning to it for what it does best: illuminate all the hidden and mysterious facets of nature we can’t experience or fathom on our own.

The Interspecies Internet

At the Toronto Zoo, Matt offers Budi one of several musical apps—a piano keyboard—and Budi stretches four long fingers through the bars and knuckle-taps an atonal chord, then several more.

“There you go! That’s good!” Matt says encouragingly. “Do a couple more.” One prismatic chord follows another, as Budi knuckle-dances across the iPad.

I’m reminded of the YouTube video in which Panbanisha, a nineteen-year-old bonobo at the Language Research Center in Atlanta, is introduced to a full-size keyboard for the first time by the musician Peter Gabriel. Sitting on the piano bench, she considers the keyboard for a moment, then noodles around on it, discovers a note she likes, then finds the octave and picks out notes within it, creating a melody that floats above Gabriel’s improvised background. Especially wondrous is her sense of musical timing, the negative space between notes when, neither rushed nor dragged, each note hovers in the air like a diver at the arc of a dive, before falling into a shared pool of reverberating silence, from which, at a pleasing interval, another note arises. After a while, she cuts loose and jams harmonies with his vocals.

“There was clear, sharp, musical intelligence at work,” Gabriel says. She was “tender and open and expressive.”

Her brother Kanzi came in next, and even though he’d never sat at a piano before, when he saw how much attention his sister was getting, “he threw down his blanket like James Brown discarding one of his cloaks,” Gabriel says, “and then does this, you know, fantastic sort of triplet improvisation.”

Gabriel finds orangutans the bluesmen of the ape world, “who always look a little sad but they’re amazingly soulful.”

At seven, Budi is still a kid, not a bluesman, and he enjoys playing memory and cognitive games on the iPad, or using the musical and drawing apps, but he’s most fascinated by YouTube videos of other orangutans.

Matt explains, tenderly, that he believes in offering orangutans a way to communicate nonverbally with other apes, including us. Keepers could always hand them things, but if the orangs “could tell anybody what they want, then their lives would get a lot more fulfilling.”

The most ambitious version of that desire is known as the Interspecies Internet. Matt has heard of it, and thinks it would be a cool thing to do, though the logistics might be tough. Ever since the 1980s, the cognitive psychologist Diana Reiss, who studies animal intelligence, has been teaching dolphins to use an underwater keyboard (soon to be replaced with a touchscreen) to ask for food, toys, or favorite activities. She and the World Wide Web pioneer (and Chief Internet Evangelist at Google) Vint Cerf, Peter Gabriel, and Neil Gershenfeld, director of MIT’s Center for Bits and Atoms, are combining their wide-ranging talents to launch a touchscreen network for cockatoos, dolphins, octopuses, great apes, parrots, elephants, and other intelligent animals to communicate directly with humans and each other.

When the four introduced the idea to the world at a TED Talk, Gabriel said: “Perhaps the most amazing tool man has created is the Internet. What would happen if we could somehow find new interfaces—visual, audio—to allow us to communicate with the remarkable beings we share the planet with?” He told of his great respect for the intelligence of apes, and how, growing up on a farm in England, he used to peer into the eyes of cattle and sheep and wonder what they were thinking.

In response to those who say, “The Internet is dehumanizing us. Why are we imposing it on animals?” Gabriel replied: “If you look at a lot of technology, you’ll find that the first wave dehumanizes. The second wave, if it’s got good feedback and smart designers, can superhumanize.” He’d love for any intelligent species that is interested to explore the Internet in the same way we do.

Cerf added that we shouldn’t restrict the Internet to one species. Other sentient species should be part of the network, too. And, in that spirit, the most important aspect of the project is learning how to communicate with species “who are not us but share a sensory environment.”

Gershenfeld said that when he saw the video clip of Panbanisha jamming with Gabriel, he was struck by the history of the Internet. “It started as the Internet of mostly middle-aged white men,” he said. “I realized that we humans had missed something—the rest of the planet.”

If the Interspecies Internet is the next logical step, what will it be a prelude to? Gershenfeld looks forward to “computers without keypads or mice,” controlled by reins of thought, prompted by waves of feelings and memories. It’s one thing to be able to translate our ideas into the physical environment, but a giant step for humankind to do that with thoughts alone. Telekinesis used to belong only to science fiction, but we’re well on our way to that ascendancy now, as paralyzed patients learn to wield prosthetic arms and propel exoskeleton legs via muscular thoughts. These possibilities change how we imagine the brain, no longer a skull-bound captive.

“Forty years ago,” Cerf said, “we wrote the script of the Internet. Thirty years ago we turned it on. We thought we were building a system to connect computers together. But we quickly learned that it’s a system for connecting people.” Now we’re “figuring out how to communicate with something that’s not a person. You know where this is going,” Cerf continued. “These actions with other animals will teach us, ultimately, how we might interact with an alien species from another world. I can hardly wait.” Cerf is leading a NASA initiative to create an Interplanetary Internet, which can be used by crews on spacecraft between the planets. Who knows what spin-off Internets will follow.

Reiss pointed out that dolphins are mighty alien. “These are true nonterrestrials.”

The Apps for Apes program is but one part of our postindustrial, nanotech, handcrafted, digitally stitched world in which luminous webs help us relate to friends, strangers, and other intelligent life forms, whether or not they have a brain.

Your Passion Flower Is Sexting You

Life takes many forms, as does intelligence—plants may not possess a brain, but they can be diabolically clever, manipulative, and vicious. So it was only a matter of time. Plants have begun texting for help. Thanks to clever new digital devices, a dry philodendron, undernourished hibiscus, or sadly neglected wandering Jew can either text or tweet to its owner over the Internet. Humans like to feel appreciated, so a begonia may also send a simple “Thank you” text—when it’s happy, as gardeners like to say, meaning healthy and well tended. Picture your Boston fern home alone placing botanicalls. But why should potted plants be the only ones to reassure their humans? Another company has found a way for crops to send a text message in unison, letting their farmer know if she’s doing a good enough job to deserve a robust harvest. Sensors lodged in the soil respond to moisture and send prerecorded messages customized by the owner. What is the sound of one hand of bananas clapping?
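The gadgetry behind a botanicalling plant is simple to picture: a soil-moisture sensor takes a reading, the reading is compared against thresholds, and a prerecorded message customized by the owner goes out. Here is a minimal sketch of that logic; the function name, threshold values, and message strings are all hypothetical, not taken from any actual product.

```python
# A toy sketch of a plant-texting device: compare a soil-moisture reading
# (scaled 0.0 = bone dry, 1.0 = saturated) to thresholds and pick one of
# the owner's prerecorded messages. All names and numbers are illustrative.

def plant_message(soil_moisture: float,
                  dry_threshold: float = 0.25,
                  happy_threshold: float = 0.55) -> str:
    """Return the prerecorded text a plant would send for this reading."""
    if soil_moisture < dry_threshold:
        return "Help, I'm parched. Water me?"   # the distress botanicall
    if soil_moisture > happy_threshold:
        return "Thank you!"                     # a "happy" plant, as gardeners say
    return ""                                   # in between: nothing to report

print(plant_message(0.10))  # a dry philodendron places its call
print(plant_message(0.70))  # a well-tended begonia says thanks
```

A fleet of such sensors reporting in unison is all it takes for a whole field of crops to grade its farmer.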

Plants texting humans may be new, but malcontent plants have always been chatting among themselves. When an elm tree is being attacked by insects, it does the chemical equivalent of broadcasting I’m hurt! You could be next! alerting others in its grove to whip up some dandy poisons. World-class chemists, plants vie with Lucrezia Borgia dressed in green. If a human kills with poison, we label it a wicked and premeditated crime, one no plea of “self-defense” can excuse. But plants dish out their nastiest potions every day, and we wholeheartedly forgive them. They may lack a mind, or even a brain, but they do react to injury, fight to survive, act purposefully, enslave humans (through the likes of coffee, tobacco, opium), and gab endlessly among themselves.

Strawberry, bracken, clover, reeds, bamboo, ground elder, and lots more all grow their own social networks—delicate runners (really horizontal stems) linking a grove of individuals. If a caterpillar chews on a white clover leaf, the message races through the colony, which ramps up its chemical weaponry. Stress a walnut tree and it will brew its own caustic aspirin and warn its relatives to do the same. Remember Molly Ivins’s needle-witted quip about an old Texan congressman: “If his IQ slips any lower, we’ll have to water him twice a day”? She clearly misjudged the acumen of plants. Plants are not mild-mannered. Some can be murderous, seductive, deceitful, venomous, unscrupulous, sophisticated, and downright barbaric.

Since they can’t run after a mate, they go to phenomenal lengths to con animals into performing sex for them, using a vaudeville trunk full of costumes. For instance, some orchids disguise themselves as the sex organs of female bees so that male bees will try to mate with them and leave wearing pollen pantaloons. Since they can’t run from danger, they devise a pharmacopeia of poisons and an arsenal of simple weapons: hideous killers like strychnine and atropine; ghoulish blisterers like poison ivy and poison sumac; slashers like holly and thistle waving scalpel-sharp spines. Blackberries and roses wield belts of curved thorns. Each hair of a stinging nettle brandishes a tiny syringe full of formic acid and histamine to make us itch or run.

Just in case you’re tempted to cuddle your passion flower when you teach it to send text messages—resist the urge. Passion flowers release cyanide if their cell walls are broken by a biting insect or a fumbling human. Of course, because nature is often an arms race, leaf-eating caterpillars have evolved an immunity to cyanide. Not us, alas. People have died from accidentally ingesting passion flower, daffodils, yew, autumn crocuses, monkshood, rhododendron, hyacinths, peace lilies, foxglove, oleander, English ivy, and the like. And one controversial theory about the Salem witch trials is that the whole shameful drama owes its origin to an especially wet winter when the rye crop was infected with ergot, an LSD-like hallucinogen that, perhaps breathed in by those grinding it into flour, caused women to act bewitched.

Today we’re of two minds about undisciplined plants just as we are about wild animals. We want them everywhere around us, but not roaming freely. We keep pet plants indoors or outside, provided they’re well behaved and don’t run riot. Weeds alarm us. And yet, as Patrick Blanc points out, “it is precisely this form of freedom of the plant world that most fascinates us.” Devious and dangerous as plants can be, they adorn every facet of our lives, from courtship to burial. They fill our rooms with piquant scents, dazzling tableaux, and gravity-defying aerial ballets and contortions as they unfold petals and climb toward the sun. Think of them as the original Cirque du Soleil. Many an African violet has given a human shrinking violet a much-needed interkingdom friendship.

Since they do demand looking after, and we do love our social networks, I expect texting will sweep the plant world, showering us with polite thank-yous and rude complaints. What’s next, a wisteria sexting every time it’s probed by a hummingbird? A bed of zinnias ranting to online followers as they go to seed?

Surely some playful wordsmiths need to dream up spirited texts for the botanicalling plants to send, telegrams of fulsome fawning or sarcastic taunt. Maybe a little soft soap: “You grow girl! Thanks for the TLC.” Or think how potent it would be, in the middle of a dinner date, to receive a text from your disgruntled poinsettia that reads: “With fronds like you who needs anemones?!”

When Robots Weep, Who Will Comfort Them?

It’s an Anthropocene magic trick, this extension of our digital selves over the Internet, far enough to reach other people, animals, plants, interplanetary crews, extraterrestrial visitors, the planet’s Google-mapped landscapes, and our habitats and possessions. If we can revive extinct life forms, create analog worlds, and weave new webs of communication—what about new webs of life? Why not synthetic life forms that can sense, feel, remember, and go through Darwinian evolution?


HOD LIPSON IS the only man I know whose first name means “splendor” in Hebrew and a V-shaped wooden trough for carrying bricks over one shoulder in English. The paradox suits him physically and mentally. He looks strong and solid enough to carry a hod full of bricks, but he would be the first to suggest that the bricks might not resemble any you’ve ever known. They might even saunter, reinvent themselves, refuse to be stacked, devise their own mortar, fight back, explore, breed more of their kind, and boast a nimble curiosity about the world. Splendor can be bricklike, if graced by complexity.

His lab building at Cornell University is home to many a skunkworks project in computer sciences or engineering, including some of DARPA’s famous design competitions (agile robots to clean up toxic disasters, superhero exoskeletons for soldiers, etc.). Nearby, two futuristic DARPA Challenge cars have been left like play-worn toys a few steps from a display case of antique engineering marvels and an elevator that’s old and slow as a butter churn.

On the second floor, a black spider-monkey-like robot clings to the top left corner of Lipson’s office door, intriguing but inscrutable, except to the inner circle for whom it’s a wry symbol and tradesman’s sign of the sort colonial shopkeepers used to hang out to identify their business: the apothecary’s mortar and pestle, the chandler’s candles, the cabinetmaker’s hickory-spindled armchair, the roboticist’s apprentice. Though in its prime the leggy bot drew the keen gaze of students, students come and go, as do the smart-bots they work on, which, coincidentally, seem to have a life span of about 3.5 years—how long it takes a student to finish a dissertation and graduate.

A man with curly hair, chestnut-brown eyes, and a dimpled chin, Hod welcomes me into his cheerful office: tall windows, a work desk, a Dell computer with a triptych of screens, window boxes for homegrown tomatoes in summer, and a wall of bookshelves, atop which sits an array of student design projects. To me they look unfamiliar but strangely beautiful and compelling, like the merchandise in an extraterrestrial bazaar. A surprisingly tall white table and its chairs invite one to climb aboard and romp with ideas. At Lipson’s round table, if you’re under six feet tall, your feet will automatically leave the planet, which is good, I think, because even this limited levitation aids the imagination, untying gravity just enough to make magic carpet rides, wing-walkers, and spaceships humble as old rope. There’s a reason we cling to such elevating turns of phrase as “I was walking on air,” “That was uplifting,” “heightened awareness,” “surmounting obstacles,” or “My feet never touched the ground.” The mental mischief of creativity—which thrives on such fare as deep play, risk, a superfluity of ideas, the useful application of obsession, and willingly backtracking or hitting dead ends without losing heart—is also fueled by subtle changes in perception. So why not cast off mental moorings and hover a while each day?

What’s the next hack for a rambunctious species full of whiz kids with digital dreams? Lipson is fascinated by a different branch of the robotic evolutionary tree than the tireless servant, army of skilled hands, or savant of finicky incisions with which we have become familiar. Over ten million Roomba vacuum cleaners have already been sold to homeowners (who sometimes find them being ridden as child or cat chariots). We watch with fascination as robotic sea scouts explore the deep abysses (or sunken ships), and NOAA’s robots glide underwater to monitor the strength of hurricanes. Google’s robotics division owns a medley of firms, including some minting life-size humanoids—because, in public spaces, we’re more likely to ask a cherub-faced robot for info than a touchscreen. Both Apple and Amazon are diving into advanced robotics as well. The military has invested heavily in robots as spies, bionic gear, drones, pack animals, and bomb disposers. Robots already work for us with dedicated precision in factory assembly lines and operating rooms. In cross-cultural studies, the elderly will happily adopt robotic pets and even babies, though they aren’t keen on robot caregivers at the moment.

All of that, to Lipson, is child’s play. His focus is on a self-aware species, Robot sapiens. Our own lineage branched off many times from our apelike ancestors, and so will the flowering, subdividing lineage of robots, which perhaps needs its own Linnaean classification system. The first branch in robot evolution could split between AI and AL—artificial intelligence and artificial life. Lipson stands right at that fork in the road, on a path he’s famous for helping to divine and explore in one of the great digital adventures of our age. It’s the ultimate challenge, in terms of engineering, in terms of creation.

“At the end of the day,” he says with a nearly illegible smile, “I’m trying to recreate life in a synthetic environment—not necessarily something that will look human. I’m not trying to create a person who will walk out the door and say ‘Hello!’ with all sorts of anthropomorphic features, but rather features that are truly alive given the principles of life—traits and behaviors they have evolved on their own. I don’t want to build something, turn it on, and suddenly it will be alive. I don’t want to program it.”

A lot of robotics today, and a lot of science fiction, is about a human who schemes at a workbench in a dingy basement, digitally darning scraps, and then figuring out how to command his scarecrow to do his bidding. Or a mastermind who builds the perfect robots that eventually go haywire in barely discernible stages and start to massacre us, sometimes on Earth, often in space. It assumes an infinite power that humans have (and so can lose) over the machine.

Engineering’s orphans, Lipson’s brainchildren would be the first generation of truly self-reliant machines, gifted with free will by their soft, easily damaged creators. These synthetic souls would fend for themselves, learn, and grow—mentally, socially, physically—in a body not designed by us or by nature, but by fellow computers.

That may sound sci-fi, but Lipson is someone who relishes not only pushing the envelope but tinkering with its dimensions, fabric, inertia, and character. For instance, bothered by a question that nags sci-fi buffs, engineers, and harried parents alike—Where are all the robots we were told would be working for us by now?—he decided to go about robotics in a new way. And also in the most ancient of ways, by summoning the “mother of all designers, Evolution,” and asking a primordial soup of robotic bits and pieces to zing through millions of generations of fluky mutations, goaded by natural selection. Of course, natural evolution is a slapdash and glacially slow mother, yielding countless bottlenecks for every success story. But computers can be programmed to “evolve” at great speed with digital finesse, and adapt to all the rigors of their environment.

Would they be able to taste and smell? I wonder, realizing at once how outmoded the very question is. Taste buds rise like flaky volcanoes on different regions of the tongue, with bitter at the back, lest we swallow poisons. How hard would it be to evolve a suite of specialized “taste buds” that bear no resemblance to flesh? Flavor engineers at Nestlé in Switzerland have already created an electronic “taster” of espresso, which analyzes the gas different pulls of ristretto give off when heated, translating each bouquet of ions into such human-friendly, visceral descriptions as “roasted,” “flowery,” “woody,” “toffee,” and “acidy.”

However innovative, Lipson’s entities are still primitive when compared to a college sophomore or a bombardier beetle. But they’re the essential groundwork for a culture, maybe a hundred years from now, in which some robots will do our bidding, and others will share our world as a parallel species, one that’s creative and curious, moody and humorous, quick-witted, multitalented, and 100 percent synthetic. Will we regard them as life, as a part of nature, if they’re not carbon-based—as are all of Earth’s plants and animals? Can they be hot-blooded without blood? How about worried, petulant, sly, envious, downright cussed? The future promises fleets of sovereign silicants and, ultimately, self-governing, self-reliant robotic angels and varmints, sages and stooges. To be able to ponder such possibilities is a testament to the infinite agility of matter and its great untapped potential.

Whenever Lipson talks of robots being truly alive, gently stressing the word, I don’t hear Dr. Frankenstein speaking, at one in the morning, as the rain patters dismally against the panes,

when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs. How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite pains and care I had endeavoured to form?[26]

As in the book’s epigraph, lines from Milton’s Paradise Lost: “Did I request thee, Maker, from my clay / To mould Me man?” Mary Shelley suggests that the parent of a monster is ultimately responsible for all the suffering and evil he has unleashed. From the age of seventeen to twenty-one, Shelley was herself consumed by physical creation and literally sparking life, becoming pregnant and giving birth repeatedly, only to have three of her four children die soon after birth. She was continually pregnant, nursing, or mourning—creating and being depleted by her own creations. That complex visceral state fed her delicately horrifying tale.

In her day, scientists were doing experiments in which they animated corpses with electricity, fleetingly bringing them back to life, or so it seemed. Whatever the image of Frankenstein’s monster may have meant to Shelley, it has seized the imagination of people ever since, symbolizing something unnatural, Promethean, monstrous that we’ve created by playing God, or from evil motives or through simple neglect (Dr. Frankenstein’s sin wasn’t in creating the monster but in abandoning it). Something we’ve created that, in the end, will extinguish us. And that’s certainly been a key theme in science-fiction novels and films about robots, androids, golems, zombies, and homicidal puppets. Such ethical implications aren’t Lipson’s concern; that’s mainly for seminars and summits in a future he won’t inhabit. But such discussions are already beginning on some campuses. We’ve entered the age of such college disciplines as “robo-ethics” and Lipson’s specialty, “evolutionary robotics.”

Has it come to this, I wonder, creating novel life forms to prove we can, because a restless mind, left to its own devices and given enough time, is bound to create equally restless devices, just to see what happens? It’s a new threshold of creators creating creative beings.

“Creating life is certainly a tall pinnacle to surmount. Is it also a bit like having children?” I ask Lipson.

“In a different way.… Having children isn’t so much an intellectual challenge, but other kinds of challenges.” His eyebrows lift slightly to underline the understatement, and a memory seems to flit across his eyes.

“Yes, but you set them in motion and they don’t remake themselves exactly, but…”

“You have very little control. You can’t program a child…”

“But you can shape its brain, change the wiring.”

“Maybe you can shape some of the child’s experiences, but there are others you can’t control, and a lot of the personality is in the genes: nature, not nurture. Certainly in the next couple of decades we won’t be programming machines, but… like children, exactly… we’ll shape their experiences a little bit, and they’ll grow on their own and do what they do.”

“And they’ll simply adjust to whatever job is required?”

“Exactly. Adaptation and adjustment, and with that will come other issues, and a lot of problems.” He smiles the smile of someone who has seen dust-ups on a playground. “Emotions will be a big part of that.”

“You think we’ll get to the point where machines have deep emotions?”

“They will have deep emotions,” Hod says, certain as the tides. “But they won’t necessarily be human emotions. And also machines will not always do what we want them to do. This is already happening. Programming something is the ultimate control. You get to make it do exactly what you want when you want it. This is how robots in factories are programmed to work today. But the more we give away some of our control over how the machine learns…”

As a cool gust of October air wafts through the screenless window, carrying a faint scent of crumbling magnolia leaves and damp earth, it trails gooseflesh across my wrist.

“Let me close the window.” Hod slides gingerly off the tall chair as if from a soda fountain seat and closes the gaping mouth of the window.

We were making eye contact; how did he notice my gooseflesh? Stare at something and only the center of your vision is in focus; the periphery blurs. Is his visual compass wider than most people’s, or is he just being a thoughtful host and, sensing a breeze himself, reasoning that since I’m sitting closer to the window I might be feeling chillier? As we talk, his astonishingly engineered biological brain—with its flexible, self-repairing, self-assembling, regenerating components that won’t leave toxic metals when they decompose—is working hard on several fronts: picturing what he wants to say in all of its complexity; rummaging through a sea of raw and thought-rinsed ideas; gauging my level of knowledge—very low in his field; choosing the best way to translate his thoughts into words for this newly met and unfamiliar listener; reading my unconscious cues; rethinking some of his words when they’re barely uttered; revising them right as they’re leaving his mouth, in barely perceptible changes to a word’s opening sound; choosing the ones most accurate on several levels (literally, professionally, emotionally, intellectually) whose meaning I may nonetheless give subtle signs of not really understanding—signs visible to him though unconscious to me, as they surface from a dim warehouse of my previous thoughts and experiences and a vocabulary in which each word carries its own unique emotional valence—while at the same time he’s also forming impressions of me, and gauging the impression I might be forming of him…

This is called a “conversation,” the spoken exchange of thoughts, opinions, and feelings. It’s hard to imagine robots doing the same on as many planes of meaning, layered emotions, and spring-loaded memories.

Beyond the windows with their magenta-colored accordion blinds, and the narrow Zen roof garden of rounded stones, twenty yards across the courtyard and street, behind a flimsy orange plastic fence, giant earth-diggers and men in hard hats are tearing up rock and soil with the help of machines wielding fierce toothy jaws. Such brutish dinosaurs will one day give way to rational machines that can transform themselves into whatever the specific task requires—perhaps the sudden repair of an unknown water pipe—without a boss telling them what to do. By then the din of jackhammers will also be antiquated, though I’m sure our hackles will still twitch at the scrape of clawlike metal talons on rock.

“When a machine learns from experience, there are few guarantees about whether or not it will learn what you want,” Lipson continues as he remounts his chair. “And it might learn something that you didn’t want it to learn, and yet it can’t forget. This is just the beginning.”

I shudder at the thought of traumatized robots.

He continues, “It’s the unspoken Holy Grail of a lot of roboticists—to create just this kind of self-awareness, to create consciousness.”

What do roboticists like Lipson mean when they speak of “conscious” robots? Neuroscientists and philosophers are still squabbling over how to define consciousness in humans and animals. On July 7, 2012, a group of neuroscientists met at the University of Cambridge to declare officially that nonhuman animals “including all mammals and birds, and many other creatures, including octopuses” are conscious. To formalize their position, they signed a document entitled “The Cambridge Declaration on Consciousness in Non-Human Animals.”

But beyond being conscious, humans are quintessentially self-aware. Some other animals—orangutans and other cousins of ours, dolphins and octopuses, and some birds—are also self-aware. A wily jay might choose to cache a seed more quietly because other jays are nearby and it doesn’t want the treasure stolen; an octopus might take the lid off its habitat at night to go for a stroll and then replace the lid when it returns lest its keepers find out. They possess a theory of mind, and can intuit what a rival might do in a given situation and act accordingly. They exhibit deceit, compassion, the ability to see themselves through another’s eyes. Chimpanzees feel deeply, strategize, plan, think abstractly to a surprising degree, mourn, empathize some, deceive, seduce, and are all too conscious of life’s pressures, if not its chastening illusions. They’re blessed and burdened, as we are, by strong family ties and quirky personalities, from madcap to martinet. They jubilate when happy, mope when sad.

I don’t think they fret and reason endlessly about mental states, as we do. They simply dream a different dream, probably much like the one we used to dream, before we crocheted into our neural circuitry the ability to have ideas about everything. Other animals may know you know something, but they don’t know you know they know. Other mammals may think, but we think about having thoughts. Taxonomists categorized us as the subspecies Homo sapiens sapiens, adding the extra sapiens because we don’t just know, we know that we know. Our infants respond to their surroundings and other people, and start evolving a sense of self during their first year. Like orangutans, elephants, and even European magpies, they can identify themselves in a mirror, and they gather that others have a personal point of view that differs from their own.

So when people talk about robots being conscious and self-aware, they mean a range of knowing. Some robots may be smarter than humans, more rational, more skillful in designing objects, and better at anything that requires memory and computational skills. I reckon they can be deeply curious (though not exactly the way we are), and will grow even more so. They can already do an equivalent of what we think of as ruminating and obsessing, though in fewer dimensions. Engineers are designing robots with the ability to attach basic feelings to sensory experience, just as we do, by interacting with the world, filing the memory, and using it later to predict the safety of a situation or the actions of others.

Lipson wants his robots to make assumptions and deductions based on past experiences, a skill underlying our much-prized autobiographical memory, and an essential component of learning. Robots will learn through experience not to burn a hand on a hot stove, and to look both ways when crossing the street. There are also subtle, interpersonal clues to decipher. For instance, Lipson uses the British “learnt” instead of the American “learned,” but the American “while” instead of the British “whilst.” So, from past experience, I deduce that he learned English as a child from a British speaker, and assume he has lived in the United States just long enough to rinse away most of the British traces.

Yet however many senses robots may come to possess—and there’s no reason why they shouldn’t have many more than we, including sharper eyesight and the ability to see in the dark—they’ll never be embodied exactly like us, with a thick imperfect sediment of memories, and maybe a handful of diaphanous dreams. Who can say what unconscious obbligato prompts a composer to choose this rhythm or that—an irregular pounding heart, tinnitus in the ears, a lover who speaks a foreign language, fond memories evoked by the crackle of ice in winter, or an all too human twist of fate? There would be no Speak, Memory from Nabokov, or The Gulag Archipelago from Solzhenitsyn, without the sentimental longings of exile. I don’t know if robots will be able to do the sort of elaborate thought experiments that led Einstein to discoveries and Dostoevsky to fiction.

Yet robots may well create art, from who knows what motive, and enjoy it based on their own brand of aesthetics, satire (if they enjoy satire), or humor. We might enjoy it, too, especially if it’s evocative of work by human artists, if it appeals to our senses. Would we judge it differently? For one of its gallery shows, Yale’s art museum accepted paintings inspired by Robert Motherwell, only to change its mind when it learned they’d been painted by a robot in Lipson’s Creative Machines Lab. It would be fun to discover robots’ talents and sensibility. Futurologists like Ray Kurzweil believe, as Lipson does, that a race of conscious robots, far smarter than we, will inhabit Earth’s near-future days, taking over everything from industry, education, and transportation to engineering, medicine, and sales. They already have a foot in the door.

At the 2013 Living Machines Conference, in London, the European RobotCub Consortium introduced their iCub, a robot that has naturally evolved a theory of mind, an important milestone that develops in children at around the age of three or four. Standing about three feet tall, with a bulbous head and pearly white face, programmed to walk and crawl like a child, it engages the world with humanlike limbs and joints, sensitive fingertips, stereo vision, sharp ears, and an autobiographical memory that’s split like ours into the episodic memory of, say, skating on a frozen pond as a child and the semantic memory of how to tilt the skate blades on edge for a skidding stop. Through countless interactions between body and world it codifies knowledge about both. None of that is new. Nor is being able to distinguish between self and other, and intuit the other’s mental state. Engineers like Lipson have programmed that discernment into robots before. But this was the first time a robot evolved the ability all by itself. iCub is just teething on consciousness, to be sure, but it’s intriguing that the bedrock of empathy, deception, and other traits that we regard as conscious can accidentally emerge during a robot’s self-propelled Darwinian evolution.

It happened like this. iCub was created with a double sense of self. If he wanted to lift a cup, his first self told his arm what to do, while predicting the outcome and adjusting his knowledge based on whatever happened. His second—we can call it “interior”—self received exactly the same feedback, but, instead of acting on the instructions, it could only try to predict what would happen in the future. If the real outcome differed from a prediction, the interior self updated its cavernous memory. That gave iCub two versions of itself, an active one and an interior “mental” one. When the researchers exposed iCub’s mental self to another robot’s actions, iCub began intuiting what the other robot might do, based on personal experience. It saw the world through another’s eyes.
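The mechanism described above can be sketched in a few lines: one self acts on the world, while an interior self only predicts outcomes and revises its memory whenever a prediction misses. The sketch below is a toy illustration of that "double self" idea, not the iCub software; every class name, action, and outcome string is hypothetical.

```python
# A toy sketch of a "double self": the acting self executes commands, and an
# interior self gets the same feedback but only predicts, updating its model
# of the world whenever its prediction differs from what actually happened.

class InteriorSelf:
    """The 'mental' self: predicts outcomes, never acts."""
    def __init__(self):
        self.model = {}                       # maps an action to its expected outcome

    def predict(self, action):
        return self.model.get(action)         # None if this action is still unknown

    def observe(self, action, outcome):
        if self.model.get(action) != outcome:
            self.model[action] = outcome      # prediction missed: update memory

class Robot:
    def __init__(self):
        self.interior = InteriorSelf()

    def act(self, action, world):
        outcome = world[action]               # the acting self performs the action
        self.interior.observe(action, outcome)  # both selves get the same feedback
        return outcome

world = {"lift cup": "cup raised", "push cup": "cup spilled"}
bot = Robot()
bot.act("lift cup", world)                    # first attempt: the interior self learns
print(bot.interior.predict("lift cup"))       # now it predicts without acting
```

The interesting move in the iCub experiment was then pointing that interior model at another robot's actions, so the same machinery that predicted its own outcomes began anticipating someone else's.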

As for our much-prized feats of scientific reasoning and insight, Lipson’s lab has created a Eureqa machine, a computer scientist able to make a hypothesis, design an experiment, contemplate the results, and derive laws of nature from them. Plumbing the bottomless depths of chaos, it divines meaning. Assigned a problem in Newtonian physics (how a double pendulum works), “the machine took only a couple of hours to come up with the basic laws of motion,” Lipson says, “a task that occupied Newton for years after he was inspired by an apple falling from a tree.”
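
Eureqa's real engine evolves whole populations of candidate equations against measurements. As an illustrative stand-in for that idea, here is law discovery reduced to its skeleton: score a small pool of candidate formulas against observed data and keep the best fit. The data and candidates are invented; a falling-body law plays the role of the hidden physics:

```python
# "Observations" of an unknown law -- distance fallen vs. time,
# secretly generated by d = 4.9 * t^2. The machine sees only the numbers.
data = [(t, 4.9 * t * t) for t in [0.5, 1.0, 1.5, 2.0, 2.5]]

# A tiny pool of candidate formulas (Eureqa breeds these automatically).
candidates = {
    "d = 9.8*t":   lambda t: 9.8 * t,
    "d = 4.9*t^2": lambda t: 4.9 * t * t,
    "d = t^3":     lambda t: t ** 3,
    "d = 4.9*t":   lambda t: 4.9 * t,
}

def error(f):
    # Sum of squared differences between prediction and observation.
    return sum((f(t) - d) ** 2 for t, d in data)

# Keep whichever formula best explains the data.
best = min(candidates, key=lambda name: error(candidates[name]))
print(best)
```

Swap the exhaustive pool for mutation and crossover over expression trees and you have the essence of the approach Lipson describes.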

Eureqa takes its name from a legendary moment in the annals of science, two thousand years ago, when Archimedes—already a renowned mathematician and inventor with formidable mastery in his field—was soaking in his bathtub, his senses temporarily numbed by warm water and weightlessness, and the solution to a problem came to him in a flash of insight. Leaping from the tub, he supposedly ran naked through the streets of Syracuse yelling, "Eureka!" ("I have found it!")

For two thousand years, that’s how traditional science has run: solid learning and mastery, then the kindling of observation and a spark of insight. The Eureqa machine marks a turning point in the future of how science is done. Once upon a time, Galileo studied the movement of the heavenly bodies, Newton watched an apple fall in his garden. Today science is no longer that simple because we wade through oceans of information, generate vast amounts of additional data, and analyze it on an unprecedented scale. Virtuoso number-crunchers, our computers can extract data without bias, boredom, vanity, selfishness, or greed, quickly doing the work that used to take one human a lifetime.

In 1972, when I was writing my first book, The Planets: A Cosmic Pastoral, a suite of scientifically accurate poems based on the planets, I used to hang out in the Space Sciences Building at Cornell. The astronomer Carl Sagan was on my doctoral committee, and he kindly gave me access to NASA photographs and reports. At that time, it was possible in months to learn nearly everything humans knew about the other planets, and the best NASA photos of the outermost planets were only arrows pointing to balls of light. Over the decades, I attended flybys at the Jet Propulsion Laboratory in Pasadena, California, and watched the first exhilarating images roll in from distant worlds as Viking and Voyager reached Mars, Jupiter, Saturn, Neptune, and an entourage of moons. In the 1980s, it was still possible for an amateur to learn everything humans knew about the planets. Today that’s no longer so. The Alps of raw data would take more than one lifetime to summit, passing countless PhD dissertations at campsites along the trail.

But all that changes with a tribe of Eureqa-like machines. A team of scientists at Aberystwyth University, led by Professor Ross King, has revealed the first machine able to deduce new scientific knowledge about nature on its own. Named Adam, the two-armed robot designed and performed experiments to investigate the genetics of baker's yeast. Carrying out every stage of the scientific process by itself without human intervention, it can perform a thousand experiments a day and make discoveries.

More efficient science will solve modern society’s problems faster, King believes, and automation is the key. He points out that “automation was the driving force behind much of the nineteenth- and twentieth-century progress.” In that spirit, King’s second-generation laboratory robot, named Eve, is even faster and nimbler than Adam. It’s easy to become mesmerized watching a webcam of Eve testing drugs, her automated arms and stout squarish body shuffling trays, potions, and tubes with tireless precision, as she peers through ageless nonblinking eyes, while saving the sanity of countless graduate students, spared sleepless nights in the lab tending repetitive experiments.

How extraordinary that we’ve created peripheral brains to discover the truths about nature that we seek. We’re teaching them how to work together calmly as a society, share data at lightning speed, and cooperate so much better than we do, rubbing brains together in the invisible drawing room we sometimes call the “cloud.” Undaunted, despite our physical and mental limitations, we design robots to continue the quest we began long ago: making sense of nature. Some call it Science, but it’s so much larger than one discipline, method, or perspective.

I find it touchingly poetic to think that as our technology grows more advanced, we may grow more human. When labor, science, manufacturing, sales, transportation, and powerful new technologies are mainly handled by savvy machines, humans really won’t be able to compete in those sectors of the economy. Instead we may dominate an economy of interpersonal or imaginative services, in which our human skills shine.

Smart robots are being nurtured and carefully schooled in laboratories all over the world. Thus far, Lipson’s lab has programmed machines to learn things unassisted, teaching themselves the basic skills of how to walk, eat, metabolize, repair wounds, grow, and design others of their kind. At the moment, no one robot can do everything; each pursues its own special destiny. But one day, all the lab machines will merge into a single stouthearted… being—what else would we call it?

One of Lipson’s robots knows the difference between self and other, the shape of its physique, and whether it can fit into odd spaces. If it loses a limb, it revises its self-image. It senses, recollects, keeps updating its data, just as we do, so that it can predict future scenarios. That’s a simple form of self-awareness. He’s also created a machine that can picture itself in various situations—very basic thought experiments—and plan what to do next. It’s starting to think about thinking.

“Can I meet it?” I ask.

His eyes say: If only.

Leading me across the hall, into his lab, he stops in front of a humdrum-looking computer on a desk, one of many scattered around the lab.

“All I can show you is this ordinary-looking computer,” he says. “I know it doesn’t look exciting because the drama is unfolding in the software inside the machine. There’s another robot,” he says, gesturing to a laptop, “that can look at a second robot and try to infer what that other robot is thinking, what the other robot is going to do, what the other robot might do in a new situation, based on what it did in a previous situation. It’s learning another’s personality. These are very simple steps, but they’re essential tools as we develop this technology. And with this will come emotions, because emotions, at the end of the day, have to do with the ability to project yourself into different situations—fear, various needs—and anticipate the rewards and pain in many future dramas. I hope that, as the machines learn, eventually they’ll produce the same level of emotions as in humans. They might not be the same type of emotions, but they will be as complex and rich as in humans. But it will be different, it will be alien.”

I’m fascinated by the notion of “other types of emotions.” What would a synthetic species be like without all the lavish commotion of sexual ardor, wooing, jealousy, longing, affectionate bonds, shared experiences? Just as I long to know about the inner (and outer) lives of life forms on distant planets, I long to know about the obsessions, introspections, and emotional muscles that future species of robots might wrestle with. A powerful source of existential grief comes from accepting that I won’t live long enough to find out.

“Emotional robots… I’ve got a hunch this isn’t going to happen in my lifetime.” I’m a bit crestfallen.

“Well, it will probably take a century, but that’s a blip in human history, right?” he says in a reassuring tone. “What’s a century? It’s nothing. If you look at the curve of humans on Earth,” he says, curving one hand a few inches off the table, “we’re right there. That’s a hundred years.”

“So much has happened in just the last two hundred years,” I say, shaking my head. “It’s been quite an express ride.”

“Exactly. And the field is accelerating. But there’s good and bad, right? If you say ‘emotions,’ then you have depression, you have deception, you have creativity and curiosity—creativity and curiosity we’re already seeing here in various machines.

“My lab is called the Creative Machines Lab because I want to make machines which are creative, and that’s a very very controversial topic in engineering, because most engineers—close the door, speak quietly—are stuck in the Intelligent Design way of thinking, where the engineer is the intelligent person and the machines are being created, they just do the menial stuff. There’s a very clear division. The idea that a machine can create things—possibly more creatively than the engineer that designed the machine—well, it’s very troubling to some people, it questions a lot of fundamentals.”

Will they grow attached to others, play games, feel empathy, crave mental rest, evolve an aesthetics, value fairness, seek diversion, have fickle palates and restless minds? We humans are so far beyond the Greek myth of Icarus, and its warning about overambition (father-and-son inventors and wax wings suddenly melting in the sun). We’re now strangers in a strange world of our own devising, where becoming a creator, even the Creator, of other species is the ultimate intellectual challenge. Will our future robots also design new species, bionts whose form and mental outlook we can’t yet imagine?

“What’s this?” I ask, momentarily distracted by a wad of plastic nestled on a shelf.

He hands me the strange entanglement of limbs and joints, a small robot with eight stiff black legs that end in white ball feet. The body is filamental, like a child’s game of cat’s cradle gone terribly wrong, and it has no head or tentacles, no bulging eyes, no seedlike brain. It wasn’t designed as an insect. Or designed by humans, for that matter.

Way back in our own evolution, we came from fish that left the ocean and flopped from one puddle to another. In time they evolved legs, a much better way to get around on land. When Lipson’s team asked a computer to invent something that could get from point A to point B—without programming it how to walk—at first it created robots reminiscent of that fish, with multihinged legs, flopping forward awkwardly. A video, posted on YouTube, records its first steps, with Lipson looking on like a proud parent, one who appreciates how remarkable such untutored trials really are. Bits of plastic were urged to find a way to combine, think as one, and move themselves, and they did.

In another video, a critter trembles and skitters, rocks and slides. But gradually it learns to coordinate its legs and steady its torso, inching forward like a worm, and then walking insectlike—except that it wasn’t told to model an insect. It dreamt up that design by itself, as a more fluent way forward. Awkward, but effective. Baby steps were fine. Lipson didn’t expect grace. He could make a spider robot that would run faster, look better, and be more reliable, but that’s not the point. Other robots are bending, flexing, and running, using replica tendons and muscles. DARPA’s “cheetah” was recently clocked at a tireless 30 mph sprint. But that cheetah was programmed; it would be a four-legged junkpile without a human telling it what to do. Lipson wants the robot to do everything on its own, eclipsing what any human can design, unfettered by the paltry ideas of its programmers.

It’s a touching goal. Surpassing human limits is so human a quest, maybe the most ancient one of all, from an age when dreams were omens dipped in moonlight, and godlike voices raged inside one’s head. A time of potent magic in the landscape. Mountains attracted rain clouds and hid sacred herbs, malevolent spirits spat earthquakes or drought, tyrants ruled certain trees or brooks, offended waterholes could ankle off in the night, and most animals parleyed with at least one god or demon. What was human agency compared to that?

Robots on a Date

Looking around Lipson’s quiet lab, I sense something missing. “You have real students sitting at the computer benches. I don’t see any chatbots.”

Lipson smiles indulgently. His chatbots have been a YouTube craze. “That was just an afternoon hack. It went viral in twenty-four hours and took us completely by surprise.”

He doesn’t mean “hack” in its usual sense of breaking into a computer with malicious intent, but as highwire digital artistry. The Urban Dictionary defines its slang use like this: “v. To program a computer in a clever, virtuosic, and wizardly manner. Ordinary computer jockeys merely write programs; hacking is the domain of digital poets. Hacking is a subtle and arguably mystical art, equal parts wit and technical ability, that is rarely appreciated by non-hackers.”

One day, Lipson asked two of his PhD students to bring a demo chatbot to his Artificial Intelligence class. Acting a bit like a portable, rudimentary psychotherapist, a chatbot is an online program that reflects what someone says in slightly different words and asks open-ended questions. It can come across as surprisingly lifelike (which says a lot about the clichés that pass for everyday chitchat). But in 1997 a “Cleverbot,” designed by the British AI expert Rollo Carpenter, went online with a teeming arcade of phrases compiled from all of its past conversations. Each encounter had taught it more about how to interact with humans, including the subtleties of innuendo and pricks of friendly debate, and it learned to apply those nuances in the next chat. Since then it’s held twenty million conversations, and its verbal larder is a treasury (or a snakepit) of useful topics, ripe phrases, witty responses, probing questions, defensive expressions, and the subtle rules of engagement, gleaned from years of bantering with humans.
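
The learning trick described here—reusing whatever humans once said in similar situations—can be caricatured in a few lines. Below is a toy retrieval chatbot with an invented memory and a crude word-overlap similarity; Carpenter's actual matching is far subtler, so treat every name and rule here as an assumption for illustration:

```python
memory = []  # (prompt, reply) pairs harvested from past conversations


def learn(prompt, reply):
    """Remember what a human said in reply to a given prompt."""
    memory.append((prompt.lower(), reply))


def overlap(a, b):
    """Crude similarity: how many words two utterances share."""
    return len(set(a.split()) & set(b.split()))


def respond(prompt):
    """Answer with the reply attached to the most similar past prompt."""
    _, best_reply = max(memory, key=lambda m: overlap(m[0], prompt.lower()))
    return best_reply


# Seed the bot with a few remembered exchanges.
learn("hello there", "Hello! How are you?")
learn("do you believe in god", "Not everything.")
learn("what is your name", "I am called Cleverbot.")

print(respond("Hello to you there"))
```

Every new conversation enlarges `memory`, which is why, after twenty million of them, the responses start to feel uncannily human.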

Lipson’s grad students set the laptops face-to-face on a table so that they could tête-à-tête in a virtual parlor. On one screen a computer-generated male materialized, on the other screen a female. The man spoke with a slight British accent, the woman in a syncopated Indian voice. Fortunately, the grad students videotaped the encounter and posted it online, where the chatty Cleverbots have now enchanted over four million people with their oddly human conversation.

The robots begin with a simple “Hello there,” followed by pleasantries, but as they respond to one another they soon start to disagree, and the exchange grows funny, poignant, snarky, and thoroughly hypnotic.

“You were mistaken,” Mr. Cleverbot says to Ms. Cleverbot, adding sarcastically, “which is odd, since memory shouldn’t be a problem for you!”

“What is God to you?” she asks him at one point.

“Not everything,” he says. It’s a surprisingly plausible answer.

“Not everything could still be something,” she insists with jesuitical aplomb.

“Very true,” he concedes.

“I would like to believe it is.”

“Do you believe in God?” he asks.

“Yes I do,” she says emphatically.

“So you’re Christian…”

“No I am not!” she snaps.

They bicker and make nice-nice. He calls her a “meanie,” for not being helpful. She suddenly asks him a painful question, one any human might wonder about. Still, it’s disquieting to hear.

“Don’t you want to have a body?”

And then, surprisingly, like someone who has accepted a fate he nonetheless laments, he answers: “Sure.”[27]

What else is there to say? Abruptly they freeze into replica humans once more, and the video clip is over. Some people detect animosity or sexual tension between the man and woman, others a marital spat. We’re ready to accept fictional robots in movies and stories, but are we ready for a synthetic life form that feels regret, introspects, and conducts relationships—creatures opaque to us, whose minds we can’t fully mirror? Do the chatbots appeal because they’re so like us, or because we’re so like them?

There are scores of people in robotics who can fine-tune a robot’s movements, even design truly lifelike robots with delicately mobile faces. Italian roboticists, for example, have created a series of realistic-looking heads that synchronize thirty-two motors hidden beneath the robots’ polymer skin, and mimic all of our facial expressions, based on muscle movements, and can even capture the emotional space between furrowing the brows, say, and frowning. Such robots have already passed the stage of being a mere sensation in the robotics world. Fully-featured human faces are smiling, grimacing, exchanging knowing looks the world over. Unlike Madame Tussaud’s wax-museum stars, today’s robots look lifelike enough to seem a bit creepy, with facial expressions that actually elicit empathy and make your mirror neurons quiver. Equally realistic squishy bodies aren’t far behind. One can easily imagine the day, famously foretold in the movies Blade Runner and Alien, when computers with faces feel silicon flavors of paranoia, love, melancholy, anger, and the other stirrings of our carbon hearts. Then the already lively debate about whether machines are conscious will really heat up. This was always the next step toward designing a self-aware, agile, reasoning, feeling, moody other, who may look like you or your sibling (but have better manners).

No doubt “robot sociology” and “robot psychology” will emerge as important disciplines, because there’s an interesting thing that happens when robots become self-aware. Just like people, they sometimes get wrong impressions of themselves, skewed enough to create robot delinquents, and we might start to see traits parallel to psychological problems in humans.

When I used to volunteer as a telephone Crisis Line counselor, it wasn't always easy finding ways to help the callers who phoned in deep despair or in the grip of severe personality disorders. Self-aware robots with social crises, neuroses, even psychoses? That might prove a challenge. Would they identify with and prefer speaking to others of their kind? Suppose it concerned a relationship with humans? Colleges have popular schools of "International Labor Relations," "Human Ecology," and "Social Work." Can "Interspecies Labor Relations," "Robot Ecology," and "Silicon Social Work" be far behind? How about a relief order for aged, infirm, or incarcerated robots, such as "Android Daughters of Charity" or "Our Sisters of Perpetual Motion?"

What would the Umwelt (worldview encompassing thoughts, feelings, and sensations) of a self-aware robot be like? We’re no longer entertaining such ideas merely as flights of imagination, but contemplating how to behave in a rapidly approaching future with the startling technology we’re generating. If, as Lipson says, our new species of conscious, intelligent robots will learn through curiosity and experience, much as children do, then even robo-tots will need good parenting. Who will set those codes of behavior—individuals or society as a whole?


CAN WE LIVE inside a house that’s a robotic butler, protector, and chatbot companion all rolled into one, an entity with its own personality and metabolism?[28] Its brain would be a robotic Jeeves (or maybe Leaves), who tends the meadow walls and human family with equal pride, and is a good listener, with a bevy of facial expressions. A fully butlered house with a face that rises from a plastic wall would monitor the energy grid, fuel the car (with hydrogen), while exchanging news, ordering groceries, piloting a personal drone to the post office, and preparing a Moosewood Restaurant lunch recipe that includes herbs from the herb-garden island in the kitchen, and arugula and tomatoes from the rooftop garden. In some high-tech enclaves, smart locks are now opened by virtual keys on iPhones, and family members wear a computer tracking chip that stores their preferences. As they move through each room, lights turn on ahead of them and fade away behind, a thermostat adjusts itself, the song or TV show or movie they were enjoying greets them, favorite food and drink are proffered. The house’s nervous system is what’s known as the “Internet of Things.”

In 1999, the technology pioneer Kevin Ashton coined the term for a cognitive web that unites a mob of physical and virtual digital devices—furnace, lights, water, computers, garage door, oven, etc.—with the physical world, much as cells in the body communicate to coordinate actions. As they cabal among themselves, synchronizing their energy use and activities, they can also share data with the neighborhood, city, and wired world.
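
The coordination Ashton describes is, at heart, a publish-subscribe message bus: each device announces what it senses, and any other device may react. A minimal sketch, with the device names and topics invented for illustration:

```python
class Bus:
    """Tiny publish-subscribe hub linking household devices."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the announcement to every device listening on this topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)


bus = Bus()
events = []

# Thermostat and lights both react when the front door reports a change;
# neither device needs to know the other exists.
bus.subscribe("door/front", lambda msg: events.append(f"thermostat saw: {msg}"))
bus.subscribe("door/front", lambda msg: events.append(f"lights saw: {msg}"))

bus.publish("door/front", "opened")
print(events)
```

Replace the in-memory dictionary with a networked broker and you have the nervous system of a smart home: devices cabal among themselves simply by listening to each other's topics.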

Combining animal, vegetable, mineral, and machine, his idea is playing out in the avant-garde new city of Songdo, South Korea, where the Internet of Things is nearly ubiquitous. Smart homes, shops, and office buildings stream data continuously to a cadre of computers that sense, scrutinize, and make decisions, monitoring and piloting the whole synchronous city, mainly without human help. They’re able to analyze picayune details and make sure all the infrastructure hums smoothly, changing traffic flow during rush hour as needed, watering parks and market gardens, or promptly removing garbage (which is sucked down through subterranean warrens to a processing center where it’s sorted, deodorized, and recycled). Toiling invisibly in the background, the council of computers can organize massive subway repairs, or send you a personal cell phone alert if your bus is running late.

It’s a little odd thinking of computers taking meetings on the fly and gabbing together in an alien argot. But naming it the Internet of Things domesticates an idea that might otherwise frighten us. We know and enjoy the Internet, already older than many of its users, and familiar now as a pet. In an age where even orangutans Skype on iPads, what could be more humdrum than the all-purpose, nondescript word “things”? The Internet of Things reassures us that this isn’t a revolutionary idea—though, in truth, it is—just an everyday technology linked to something vague and harmless sounding. It doesn’t suggest brachiating from one reality to another; it just expands the idea of last century’s cozy new technology, and animates the idea of home.

In J. G. Ballard’s sci-fi short story “The Thousand Dreams of Stellavista,” there are psycho-sensitive houses that can be driven to hysteria by their owners’ neuroses. Picture sentient walls sweating with anxiety, a staircase keening when an occupant dies, roof seams fraying from a mild sense of neglect. Some days I swear I’m living in that house right now.

Printing a Rocking Horse on Mars

For centuries, the world’s manufacturing has been a subtractive art, in which we created artifacts by cutting, drilling, chiseling, chopping, scraping, carving. As a technology, it’s been both mind-blowing and life-changing, launching the Industrial Revolution, spawning the rise of great cities, spreading the market for farm-raised goods, and wowing us with everything from ballpoint pens to moonwalkers. It’s still a wildly useful method, if sloppy; it creates heaps of waste and leftovers, which means extracting even more raw materials from the earth. Also, mass-produced items, whether clothing or electronics, require a predicament of cheap labor to add the final touches.

In contrast, there’s “additive manufacturing,” also known as 3D printing, a new way of making objects in which a special printer, given the digital blueprint for a physical item, can produce it in three dimensions. Solidly, in precise detail, many times, and with minimal overhead. The stuff of Star Trek “replicators” or wish-granting genies.

3D printing doesn’t cut or remove anything. Following an electronic blueprint as if it were a musical score, a nozzle glides back and forth over a platform, depositing one microscopic drop after another in a molten fugue, layer upon layer until the desired object rises like a sphinx from the sands of disbelief. Aluminum, nylon, plastic, chocolate, carbon nanotubes, soot, polyester—the raw material doesn’t matter, provided it’s fluid, powder, or paste.

Hobbyists share their favorite digital blueprints via the Internet, and some designs are licensed by private companies. Like many other technologies, 3D printing does have a potential dark side. People have already printed out handguns, brass knuckles, and skeleton keys that can open most police handcuffs. Future laws will undoubtedly restrict access to illegal and patented blueprints, and also to dangerous metals and gases, explosives, weapons, and maybe the fixings for street drugs.

Imagine being able to press the print button whenever you want a candelabra, toothbrush, matching spoon, necklace, dog toy, keyboard, bike helmet, engagement ring, car rack, hostess gift, stealth aircraft rivets—or whatever else need or whim dictates. The Obama administration announced that it had seen the future and was investing $1 billion in 3D printing “to help revitalize American manufacturing.” According to scientists and financial analysts alike, within a decade household 3D printers will be as common as TVs, microwaves, and laptops. However, people will still need to buy supplies and copyrighted blueprints for home printing, and many will order 3D objects ready-made from cottage industries.

In the future, even in the Mars colony Olivine calls home, she could fabricate a rocking horse of exactly the right height and dappled pattern on the morning of her daughter’s birthday. Or she might print an urgently needed pump, and then a set of demitasse spoons with Art Deco stems. Or paint shades that don’t yet exist in tubes. Artifacts that can’t be created in any other way, such as a ball within a ball within a ball within a ball. Or an item with a hundred moving parts that’s printed as a single piece. From this strange new forge, who knows what artworks and breakthroughs will emerge. The creative opportunities are legion.

We may ignore all the traditional limits set by conventional manufacturing. With micrometer-scale precision, we can seal materials within materials, and weave them into stuff with bizarre new structural behaviors, like substances that expand laterally when you pull them longitudinally. A brave new world of objects.

What is an object if you can grow it in your living room drop by drop or molten coil upon coil? How will we value it? Today, because 3D printing is still a novelty for many people, we value its products highly, in wonderment. But when cheap home 3D printers become commonplace (today's models cost anywhere from $400 to $10,000), and factory 3D printing replaces the assembly lines and warehouses, and even body parts and organs can be made to order, we'll live in an even more improbable world, where some objects continue to exist as tangible things, as merchandise, but a great many will exist concretely but in nonmaterial form, in a cloud or in a cartridge of fluid or powder, the way e-books do, as quickly accessible potential.

As cars, rockets, furniture, food, medicine, musical instruments, and much more become readily printable (some of those already are), it’s bound to temporarily unnerve the world’s economies. After all, we value things according to their scarcity. When gold is plentiful, it’s cheaper. But if objects lurk as software codes, inside computers, and are abundantly available at the push of a button, they’ll exist as another class of being. How will that change our idea of matter and the physical reality of all that surrounds us? Will it lead to an even more wasteful world? Will handcrafted objects become all the dearer? Will the Buddhist doctrine of nonattachment to worldly things flourish? Will we become more reckless?

This may all seem far-fetched, but not so long ago the Xerox machine was a leap of faith from carbon paper. When I first worked as a professor, making a carbon copy—what the “cc” on e-mail stands for—was a part of daily life. It’s still somewhat astonishing to me that we can now print images in color, from home machines that can connect to our computers through the air.

Many companies won’t look the same, because they won’t need to hire scores of workers, buy raw materials, ship or stock or produce anything. Industry, as we know it, may end. Financial advisers, business magazines, and online investment sites such as the Motley Fool believe 3D printing companies will clean up big-time, because their overhead will be so much lower, and they’ll sell only the clever designs or raw materials.

Not right away. Most people will probably still find it more convenient to buy ready-made things. But soon enough, in the next fifteen years, 3D printing will revolutionize life from manufacturing to art, and practical visionaries like Lipson feel certain it will usher in the next great cultural and psychological revolution. For some, that future is the obvious sequel to the digital revolution. For others, it’s as magical as a picture painted on water.

“Just like the Industrial Revolution, the assembly line, the advent of the internet and the Social Media phenomenon,” Forbes magazine forecasts, “3D Printing will be a game changer.”

How close are we to that day? It’s already dawned. 3D printers are whipping up such diverse marvels as drone aircraft, designer chocolates, and the parts to build a moon outpost from lunar soil. Already, the TV host Jay Leno uses his personal 3D printer to mint hard-to-find parts for his collection of classic cars. The Smithsonian uses its 3D printer to build dinosaur bones. Cornell archaeologists used a 3D printer to reproduce ancient cuneiform tablets from Mesopotamia. Restorers at Harvard’s Semitic Museum used their 3D printer to fill in the gaps of a lion artifact that was smashed three thousand years ago. In China’s Forbidden City, researchers use a 3D printer to inexpensively restore damaged buildings and artworks. NASA used 3D printing to build a prototype of a two-man Space Exploration Vehicle (an oversized SUV astronauts can live in while they explore Mars). A USC professor, Behrokh Khoshnevis, has devised a method known as Contour Crafting for printing out an entire house, layer by layer—including the plumbing, wiring, and other infrastructure—in twenty hours. When 3D printers are linked to geological maps, houses can be made to fit their terrain perfectly. Khoshnevis is designing both single houses and colonies for urban planning, or for use after hurricanes, tornadoes, and other natural disasters when fully functional emergency houses will be 3D-printed from the ground up.

Boeing is 3D-printing seven hundred parts for its fleet of 747s; it’s already installed twenty thousand such parts on military aircraft. The military’s innovative design branch, DARPA, which began funding 3D printers two decades ago, finds them invaluable for repairing fighter jets in combat or supporting ground troops on the front lines. They’re superb at coining parts instantly, remotely, to exact specifications, without having to wait for urgently needed supplies, or risk lives to ferry them through hostile terrain. Companies like Mercedes, Honda, Audi, and Lockheed Martin have been fashioning prototypes and creating numerous parts inside 3D printers for years. Audi plans on selling its first 3D-printed cars (modules printed then robot-assembled) in 2015.

The Swiss architect Michael Hansmeyer has 3D-printed the world's most complex architecture: nine-foot-tall Doric columns of breathtakingly intricate swirling organic laces, crystals, grilles, pyramids, webs, beehives, and ornaments, madly rippling around, fainting through, vaulting from, and embedded into each other as layers of exquisitely organized chaos that began as a mirage in the mind and hardened. Containing sixteen million individual facets and weighing a ton, each column looks like a roller-coaster ride down a scanning electron microscope into the crystalline spikes of amino acids. It's easy to imagine a cathedral by Antoni Gaudí with such columns in Barcelona. Or the labyrinthine short stories the Argentine fabulist Jorge Luis Borges might unleash among them.

“Twenty-five-year-olds today aren’t burdened with traditional methods and rules,” says Scott Summit, who heads Bespoke Innovations, a San Francisco–based firm that uses 3D printing to create elegant, tailor-made prosthetic devices. “There are guys who have been doing 3D modeling since they were eleven and are caffeinated and ready to go. They can start a product company in a week and, in general, have a whole new take on what manufacturing can be.”

Since anything that can be designed on a computer and squirted through a nozzle is 3D-printable, people overwintering in Antarctica or other remote outposts will soon print their own cleaning products, medicines, and hydroponic greenhouses.

This blossoming technology widens the dream horizon of research, paving the way for new pharmaceuticals and new forms of matter. At the University of Glasgow, Lee Cronin and his team are perfecting a “chemputer,” as well as a portable medicine cabinet so that NATO can disperse drugs to remote villages, especially simpler drugs such as ibuprofen. Despite unleashing an inner circus, most drugs are little more than combinations of carbon, hydrogen, and oxygen. With those simple inks and a supply of recipes, a 3D printer could concoct a sea of remedies. Flasks, tubes, or unique implements might also be printed on the spot. Creating new substances with 3D printers, researchers will be able to mix molecules together like a basket of ferrets and see how they interact. Then, as drug companies patent the recipes, those recipes (not the drugs) will hold value, just as apps do.

With 3D printers, complexity is free.[29] For the first time, making something complicated with crisp details and ornate features is no harder than making a spoon or a paperweight. After the design stage, it requires the same amount of resources and skill. That’s a first in manufacturing, and a first in human history. If one person, regardless of skill or strength, can replace an entire factory, then identity and sense of volition are bound to shift. Will we all feel like kingpins of industry? No more so than most people do today, I imagine. But we should.

In research labs and medical centers all over the world, bioengineers are printing living tissue and body parts. That, too, is a first in human history, and a radical departure in how we relate to our bodies—not as fragile sacks of chemicals and irreplaceable organs, but as vehicles whose worn or damaged parts may be rebuilt.

In 2002, the bioengineer Makoto Nakamura noticed that the ink droplets deposited by his inkjet printer were about the same size as human cells. By 2008, he had adapted the technology to use living cells as ink. A regular 3D printer extrudes melted plastic, glass, powder, or metal and deposits the droplets in minuscule layers. More droplets follow, carefully placed on top of the previous ones in a specific pattern. The same is true for bioprinting, but using the patient’s own cells reduces the chance of rejection. Each drop of ink contains a cluster of tens of thousands of cells, which fuse into a shared purpose. Although one can’t control the details, one doesn’t need to, because living cells by their fundamental nature organize themselves into more complex tissue structures. The hope is to be able to repair any damaged organ in the body. No more worrying about size or rejection, no more waiting for a kidney or liver to become available.

Today, in university and corporate labs around the world, bioengineers are busily printing ersatz blood vessels, nerves, muscles, bladders, heart valves and other cardiac tissues, corneas, jaws, hip implants, nose implants, vertebrae, skin that can be printed directly onto burns and wounds, windpipes, capillaries (made elastic by pulses from high-energy lasers), and mini-organs for drug testing (bypassing the need for animal trials). An Italian surgeon recently transplanted a bespoke windpipe into a patient. Washington State University researchers have printed tailor-made human bones for use in orthopedic procedures. An eighty-three-year-old woman, suffering from a chronic infection in her entire lower jaw, had it replaced with a custom-built 3D titanium jaw, complete with all the right grooves and dimples to speed nerve and muscle attachment. Already speaking with it in post-op, she went home four days later.

A team of European scientists has even grown a miniature brain for drug tests (though, fortunately, it’s not capable of thought). Organovo, a leading biotech company in San Diego, has 3D-printed working blood vessels and brain tissue, and successfully transplanted them into rats. Human trials begin soon. After that, Organovo plans to provide 3D-printed tissues for heart bypass surgery. Meanwhile, a kidney is the first whole organ they’re working on—because it’s a relatively simple structure.

Thin body parts like these are the easiest to design. Thicker organs, such as hearts and livers, require a stronger frame. For that, a lattice of sugar—like the haute cuisine sugar cages some chefs confect for desserts—is often used to provide a firm scaffolding, and then cells are layered over it. Sugar is nontoxic and melts in water, so when the organ is finished, the sugar scaffold is rinsed away, leaving hollow vessels for blood flow where they’re needed. The goal isn’t to create an exact replica of a human heart, lung, or kidney—which after all took millions of years to evolve—nor does it need to be. A kidney cleans the toxins from the blood, but it doesn’t have to look like a kidney bean or a kidney-shaped swimming pool. So it could become body art, a sort of interior tattoo: a heart-shaped kidney for a romantic, a football-shaped one for a sports fan. Or would that alter the brain’s mental atlas of the body, a landscape we know by heart, even in the dark? Suppose you have a suitcase. You replace the handle, you replace the lock, you replace the panels. Is it the same suitcase? If we replace enough body parts, or don’t choose exact replicas, will our brain still recognize us as the same self?
