CAN THEY READ MY BRAIN?

STANISLAS DEHAENE

Neuroscientist, experimental cognitive psychologist, Collège de France, Paris; author, Reading in the Brain: The Science and Evolution of a Human Invention


Like many other neuroscientists, I receive my weekly dose of bizarre e-mails. My correspondents seem to have a good reason to worry, though: They think their brain is being tapped. Thanks to new “neurophonic” technologies, someone is monitoring their mind. They can’t think a thought without its being immediately broadcast to Google, the CIA, news agencies worldwide… or their spouses.

This is a paranoid worry, to be sure. Or is it? Neuroscience is making giant strides, and you don’t have to be schizophrenic to wonder whether it will ever crack the lockbox of your mind. Will there be a time, perhaps in the near future, when your innermost feelings and intimate memories will be laid bare for others to scroll through? I believe that the answer is a cautious no—at least for a while.

Brain-imaging technologies are no doubt powerful. More than fifteen years ago, at the dawn of functional magnetic resonance imaging, I was already marveling at the fact that we could detect a single motor action: Any time a person clicked a button with the left or right hand, we could see the corresponding motor cortex being activated, and we could tell with more than 98-percent accuracy which hand the person had used. We could also tell which language the scanned person spoke. In response to spoken sentences in French, English, Hindi, or Japanese, brain activation would either invade a large swath of the left hemisphere, including Broca’s area, or stay within the confines of the auditory cortex—a sure sign that the person did or did not understand what was being said. Recently we also managed to tell whether someone had learned to read a given script simply by monitoring the activation of the “visual word form area,” a brain region that holds our knowledge of legal letter strings.

Whenever I lectured on this research, I insisted on our methods’ limitations. Action and language are macrocodes of the brain, I explained. They mobilize gigantic cortical networks that lie centimeters apart and are therefore easily resolved by our coarse brain-imagers. Most of our fine-grained thoughts, however, are encrypted in a microcode of submillimeter neuronal-activity patterns. The neural configurations that distinguish my thought of a giraffe from my thought of an elephant are minuscule, unique to my brain, and intermingled in the same brain regions. Therefore they will forever escape decoding, at least by noninvasive imaging methods.

In 2008, Tom Mitchell’s beautiful Science paper proved me partly wrong.[i] His research showed that snapshots of state-of-the-art functional MRI contained a lot of information about specific thoughts. When a person thought of different words or pictures, the brain-activity patterns they evoked differed so much that a machine-learning algorithm could tell them apart much better than would be expected by chance. Strikingly, many of these patterns were macroscopic, and they were even similar in different people’s brains. This is because when we think of a word, we do not merely activate a small set of neurons in the temporal lobes that serves as an internal pointer to its meaning. The activation also spreads to distant sensory and motor cortices that encode each word’s concrete network of associations. In all of us, the verb “kick” activates the foot region of the motor cortex, “banana” evokes a smell and a color, and so on. These associations and their cortical patterns are so predictable that even new, untrained words can be identified by their brain signature.
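To give a concrete sense of what such pattern decoding involves, here is a minimal sketch in Python on entirely synthetic data. It is not Mitchell's actual pipeline; the voxel and trial counts are invented, and the point is only that a standard classifier can separate the noisy activity patterns evoked by two concepts far better than chance.

```python
# Minimal multivoxel pattern classification on synthetic data (illustration only).
# All numbers (voxel counts, trial counts, noise levels) are assumptions, not
# values from the 2008 study.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels = 500          # hypothetical region of interest
n_trials = 40           # trials per concept ("kick" vs. "banana", say)

# Assume each concept evokes a fixed spatial pattern plus scan-to-scan noise.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
X = np.vstack([pattern_a + rng.normal(0, 3, (n_trials, n_voxels)),
               pattern_b + rng.normal(0, 3, (n_trials, n_voxels))])
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated accuracy well above the 50% chance level is the kind of
# evidence that the evoked patterns carry information about the thought.
scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```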

Why is such brain decoding an interesting challenge for neuroscientists? It is, above all, a proof that we understand enough about the brain to partially decrypt it. For instance, we now know enough about number sense to tell exactly where in the brain the knowledge of a number is encrypted. And, sure enough, when Evelyn Eger, in my lab, captured high-resolution MRI images of this parietal-lobe region, she could tell whether the scanned person had viewed two, four, six, or eight dots, or even the corresponding Arabic digits.[j]

Similarly, in 2006, with Bertrand Thirion, we tested the theory that the visual areas of the cortex act as an internal visual blackboard where mental images get projected. Indeed, by measuring their activity, we managed to decode the rough shape of what a person had seen, and even of what she had imagined in her mind’s eye, in full darkness.[k] Jack Gallant, at Berkeley, later improved this technique to the point of decoding entire movies from the traces they evoke in the cortex. His reconstruction of the coarse contents of a film, as deduced by monitoring the spectator’s brain, was an instant YouTube hit.
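A toy sketch, again with made-up numbers rather than real retinotopic measurements, can convey the "internal blackboard" intuition: if each early-visual voxel roughly reports one location of the visual field, projecting voxel activity back onto that grid recovers the coarse shape of what was seen or imagined.

```python
# Toy illustration of decoding from a retinotopic "blackboard" (synthetic data).
# Real decoders fit receptive-field models to measured fMRI responses; here one
# noisy "voxel" per grid location is assumed as a crude stand-in.
import numpy as np

grid = 16
stimulus = np.zeros((grid, grid))
stimulus[4:12, 7:9] = 1.0                    # a simple vertical bar

rng = np.random.default_rng(1)
voxel_signals = stimulus + rng.normal(0, 0.3, stimulus.shape)

# "Reconstruction" = projecting voxel activity back onto the visual-field grid
# and thresholding; the rough shape of the bar survives the noise.
reconstruction = (voxel_signals > 0.5).astype(int)
for row in reconstruction:
    print("".join("#" if v else "." for v in row))
```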

Why, then, do I refuse to worry that the CIA could harness these techniques to monitor my thoughts? Because many limitations still hamper their practical application in everyday circumstances. First of all, they require a ten-ton superconducting MR magnet filled with liquid helium—an unlikely addition to airport security portals. Furthermore, functional MRI works only with a cooperative volunteer who stays perfectly still and attends to the protocol; total immobility is a must. Even a millimeter of head motion, especially if it occurs in tight correlation with the scanning protocol, can ruin a brain scan. In the unlikely event that you are scanned against your will, rolling your eyes rhythmically or moving your head ever so slightly in sync with the stimuli may suffice to prevent detection. In the case of an electroencephalogram, clenching your teeth will go a long way. And systematically thinking of something else will, of course, disrupt the decoding.

Finally, there are limitations arising from the nature of the neural code. MRI samples brain activity on a coarse spatial scale and in an indirect manner. Every millimeter-sized pixel in a brain scan averages over the activity of hundreds of thousands of neurons. Yet the precise neural code that contains our detailed thoughts presumably lies in the fast timing of individual spikes from thousands of intermingled neurons—microscopic events we cannot see without opening the skull. In truth, even if we did, the exact way in which thoughts are encoded still escapes us. Crucially, neuroscience lacks even an inkling of a theory as to how the complex combinatorial ideas afforded by the syntax of language are encrypted in neural networks. Until we have one, we have very little chance of decoding nested thoughts such as “I think that X…,” “My neighbor thinks that X…,” “I used to believe that X…,” “He thinks that I think that X…,” “It is not true that X…,” and so on.
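A back-of-the-envelope simulation (purely illustrative; the neuron counts are assumptions) shows why this averaging matters: two thoughts carried by different subsets of neurons can yield almost identical signals once a voxel pools hundreds of thousands of them.

```python
# Why coarse voxels can hide the fine neural code: two thoughts that drive
# different subsets of neurons give nearly the same averaged signal.
# The figure of ~200,000 neurons per voxel is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 200_000
baseline = rng.random(n_neurons)         # each neuron's baseline firing rate

# Thought A and thought B each strongly drive a different 1% of the neurons.
thought_a = baseline.copy()
thought_b = baseline.copy()
thought_a[rng.choice(n_neurons, n_neurons // 100, replace=False)] += 1.0
thought_b[rng.choice(n_neurons, n_neurons // 100, replace=False)] += 1.0

# At the single-neuron level the two patterns are clearly different...
print("neurons responding differently:", np.sum(thought_a != thought_b))
# ...but a voxel only reports the pooled mean, which is nearly identical.
print(f"voxel signal A: {thought_a.mean():.4f}, voxel signal B: {thought_b.mean():.4f}")
```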

There’s no guarantee, of course, that these problems will not be solved—next week or next century, perhaps using electronic implants or miniaturized electromagnetic recording devices. Should we worry then? Millions of people will rejoice instead. They are the many patients with brain lesions whose lives may soon change thanks to brain technologies. In a motivated patient, decoding the intention to move an arm is far from impossible, and it may allow a quadriplegic to regain his or her autonomy, for instance by controlling a computer mouse or a robotic arm. My laboratory is currently working on an EEG-based device that decrypts the residual brain activity of patients in a coma or vegetative state and helps doctors decide whether consciousness is present or will soon return. Such valuable medical applications are the future of brain imaging, not the devilish sci-fi devices that we wrongly worry about.
