THINKING

THE REAL STORY

IN 1996, I CAME ACROSS an article in The New York Times Magazine about the explosion of personal memoirs in the publishing business and read the following: “If Proust were writing today about his penchant for observing handsome young men stick hatpins in live rats, he wouldn’t hide behind the narrator of his novel. A la Recherche du Temps Perdu would be a memoir.”1 I found the comment highly irritating and have never forgotten it. The implication is that while fiction once served as a convenient screen for taboo personal material, it has now outlived its usefulness in a confessional culture that permits, even welcomes, every revelation, no matter how sordid. It is doubtful that the author, James Atlas, was serious about his claim; the sentences have the ironic, condescending tone we have come to expect from a good deal of cultural journalism, but it is interesting, nevertheless, to entertain how the memoir is different from the novel and see what comes of these musings. It is indisputable that Proust and many other novelists have borrowed events, feelings, thoughts, and people from life, and, in one way or another, transported them into their works. Scholars have diligently picked over Proust’s biography for every morsel of his “real” experience, just as they have analyzed and reanalyzed the seven volumes of his masterwork. But the two Marcels, the one in life and the one in fiction, are not identical. Even when there is a close familial resemblance between an author and his fictional character, the two remain distinct. What about an author and her persona in an autobiographical work? The question touches on the puzzling boundary between what we regard as the real and the imaginary.

Writing fiction takes place in a mental zone of free invention that memoir does not (or should not), for the simple reason that when a person picks up a book labeled “memoir,” she expects that the writer of the volume has told the truth. The implied contract between writer and reader is simple: the author is not prevaricating. The contract holds even though the explicit, conscious memories we retain are only a fraction of what we remember implicitly, unconsciously, and the autobiographical memories we keep are not stable but subject to change, as Freud repeatedly observed. Memories are revised over time, and their meanings change as we age, something now recognized by neuroscience and referred to as the reconsolidation of memory. The act of remembering is not retrieving some original fact stored in the brain’s “hard drive.” What we recall is the last version of a given memory. The writer of a memoir is not asked to occupy a third-person perspective, but rather to inhabit his or her first-person position fully and write what he or she remembers. That said, my husband and I, who have now been living together for almost thirty years, often recall the same event differently. He argues that the moment our daughter declared she wanted to be a performer the three of us were in the subway; I say it happened in a cab. Sophie, the object of the dispute, does not remember where it occurred or exactly what she said. Even more dramatically, a memory I am convinced belongs to me alone, is, according to my husband, his private mental property. He remembers it perfectly and is sure I must be mistaken. One of us is in error. What this anecdote clarifies about memory is that when we listen to a person tell a story, perhaps especially a person with whom we are intimate, that tale can spawn a mental image so vivid, it enters the mind as a subjective experience that originated outside the mind, not within it. The I adopts the recollection of the you. Memory, like perception, is not passive retrieval but an active and creative process that involves the imagination. We are all always reinventing our pasts, but we are not doing it on purpose. Delusion, however great, is not the same as mendacity. We know when we are lying. Lying is a form of double consciousness. There are two utterances: the one spoken or written and the unsaid, unrecorded one. The public outrage over memoirs that are actually fictions suggests that the contract implied by a work of nonfiction is still in effect, despite the fact that many memoirists seem to be equipped with supernatural abilities for recalling the past.

Several times over the years, I have heard one novelist or another refer to him or herself as “a professional liar.” The words signal the fact that fiction writers can make up anything, that they, unlike their comrades writing nonfiction, are not tied to describing what actually happened. This is indubitably true, and yet I have always balked at the idea of the novel as a form of falsehood, which is to say, I believe that some novels do lie, but the good ones do not. How can a novel be fallacious? It cannot be held to any absolute standard. If I want my novel’s narrator to have been born underwater from the eye of a giant octopus, who is going to stand in my way? If I begin my autobiography in this manner, however, there are those who will object. A birth certificate is on record that states otherwise. I might begin a memoir with an eight-legged aquatic mother had she been an early fantasy of mine, however. When my niece Ava was three, she repeatedly told her parents she had been born from a Chinese egg. Neither her mother nor her father is able to trace the origin of this personal myth. And no ambitious journalist can document the veracity of my inner imaginative life. The lies that have gotten memoirists into trouble are inevitably whoppers that can easily be identified by searching public records. The controversy over James Frey’s A Million Little Pieces centered on the fact that the author had exaggerated or fabricated his crimes and arrests, had made them worse than they actually were. Like a novelist, he created a storytelling persona, one he apparently preferred to his own more benign and possibly more bathetic self. Frey hid behind the narrator of his memoir, who served as a novelistic vehicle of disguise, but the deeper issues of masks and revelations in nonfiction and fiction are multiple and go far beyond an addict rewriting his own pathology for more dramatic reading.

Many successful memoirs read like novels. They borrow the established conventions, indeed clichés, of the form to use for autobiographical writing. I have read elaborate descriptions of people’s physiognomies, their clothing, of rooms and landscapes, and page after page of continuous dialogue in “memoirs.” Frankly, I regard most of these passages as improbable, if not impossible. Although I remember the rooms, for example, in the house where I grew up in some detail, the particularity of the interiors I occupied only briefly — a hostel in London, say, when I was seventeen — has nearly vanished from my mind or has been supplanted by some vague but workable fictionalized space. I remember a single sentence or two uttered by people important to me over the years and the gist of significant conversations, but I could never reproduce them verbatim, nor would I attempt to. Even the face of my mother as she was in my childhood cannot be reproduced in my mind’s eye, and that is why I sometimes take out photographs to remind myself of her youthful image. And I have a fairly good visual memory. The popular memoir has little to do with the peculiar realities of human memory. It has become a successful literary form, often fashioned on the journeyman novel, and, as with every hardened genre, it arrives with a set of expectations. A number of the memoirs exposed as frauds (to greater and lesser degrees) in recent years share a single quality: they are all stories of inspired survival. Frey’s book, Herman Rosenblat’s death camp love story, Angel at the Fence, and Margaret B. Jones’s story of growing up among violent gangs, Love and Consequences, follow the same essential narrative line. Against all odds, the hero or heroine of the tale triumphs in the end. In each there is an obstacle — drug addiction, Nazi horror, and the Bloods. Although these three can hardly be called parallel afflictions, the broader narrative in which they find themselves is the same, and it exploits a deep human wish: to conquer (whatever it is) and stay alive, not as a broken, traumatized, weak bit of human wreckage, but as a strong, reborn noble figure.

The narrative machinery of such tales is as old as literature itself. The trickster who outwits death is a figure in many tribal and folk cultures. Odysseus finally comes home. The seven voyages of that inimitable sailor, Sinbad, are survival tales par excellence. Over and over again he is saved by chance or by his own wiles, often literally from the jaws of death — snakes, sea monsters, gigantic birds, cannibals. The fairy-tale child of multiple traditions suffers adversity but overcomes evil in the end. Moll Flanders, Defoe’s resilient heroine, endures multiple assaults and the startling twists and turns of fortune to die of old age, and a penitent at that. These are characters of irresistible Darwinian appeal. Like Wile E. Coyote from the Looney Tunes of my girlhood, they have the wonderful gift of popping back into shape. There are true stories, as well, of people who defy the odds, people who despite grotesque experiences do not end up in hospitals, who, with far more than Beckettian resignation, go on.

In his book Abnormalities of Personality, Michael H. Stone, a professor of clinical psychiatry, presents the reader with two briefly summarized cases of men who had what appear to be equally horrible childhoods. Both men were extremely introverted when they were young. Both had parents who mistreated them, and both were the victims of violence and sexual molestation. One man was Stone’s patient, “a person of a decidedly paranoid cast” who after years of inertia and therapy was able to resume his graduate studies. The other was Jeffrey Dahmer, the notorious serial killer. “The point of these stories,” writes Dr. Stone, “is that if either one had been identified in advance as the serial killer of young men, most people would have said, ‘Well, with a background like that!’”2 But true stories cannot be told “in advance,” only in hindsight. Whether Stone’s patient or others like him have a particularly robust genetic temperament or whether there is some person in his story, a teacher or aunt, a grandmother or sibling, who helped to keep him from disintegration, I do not know. But Stone’s comment is relevant to memoir: there is no formula for predicting the evolution of a particular human story. And yet, we cling to our standard narratives, although they change over time. Think of all the stories of seduced and fallen women in eighteenth- and nineteenth-century novels. Child sexual abuse and subsequent ruination is a more contemporary narrative, but even this explosive category, which includes everything from a grope in the locker room to brutal rape, cannot stand in as an explanation for an entire life. While it is certainly true that years of research on attachment phenomena — the psychobiological dynamics of emotional bonds between infant and caretaker — have shown how vital early interactions are to a person’s development and that many people who suffer neglect and/or violence as children grow up to have psychiatric problems, we must be careful about making simple equations. Fraudulent or otherwise, many memoir narratives partake of the broader culture’s need for crude reductions of complex human realities into a salable package of victimology. In this, they are no different from many popular novels that employ precisely the same formula but lack the stamp of reality.

Fake or partly faked autobiographical works would not exist if they were not valued more highly than fiction in the contemporary American marketplace. When Frey’s book was first submitted as a novel, it met with rejection. Were publishers equally attracted to fiction, we might be flooded with countless versions of the roman à clef. True crime, true sex, true abjection, reality TV, movie stars who debase themselves in private or public are daily media fare. Our many technologies give us access to high doses of Schadenfreude or inspiration, depending on one’s point of view. But this, too, is nothing new. Since the expansion of literacy, people have greedily consumed stories, both fictional and nonfictional, that titillate and shock them. As the reading public grew in England in the late seventeenth and through the eighteenth century, so did the materials to satisfy their needs. Accounts of the lives of criminals were especially popular and were often published as inexpensive chapbooks or broadsides. A typical title: News from Newgate: or an exact and true account of the most remarkable tryals of several notorious malefactors. The words true and authentic recur continually in this literature. There was also a hankering for “Last Dying Speeches” and verbatim reports from the trials at Old Bailey, all written with an eye to entertainment. Newspapers competed as well with their accounts of crimes, arrests, and trials:

17 September 1734. Yesterday Mary Freeman, alias Frisky Nan, but commonly called by the Name of Diving Moll, was committed to the Gatehouse, Westminster, by Justice Cotton, for picking a gentleman’s pocket of 15 guineas, a silver snuff box, and two gold rings of considerable value. (Daily Journal)

The same “notorious Moll,” with another alias, Talboy, gets much fuller treatment in the Grub Street Journal the same day. The reader is told that this “creature,” declared an “idle and disorderly person” by the justices, was supported by “several noted gamesters and sharpers about Covent Garden” and was wont to “draw in young cullies in order to make a prey of them.” The writer of the account indulges in a bit of pathos as well: “… and poor Moll, to her great mortification, was remanded back again to perform her task of beating hemp till the next Quarter session.” The fledgling novel soon got into the act, and fictional accounts of crime, debauchery, and seduction competed with “true” stories for the attention of readers.

The world is so taken up of late with novels and romances that it will be hard for a private history to be taken for genuine where the names and other circumstances of the person are concealed; and on this account we must be content to leave the reader to pass his own opinion upon the ensuing sheets and take it just as he pleases.3

So reads the ambiguous first sentence of the preface to another Moll story, Daniel Defoe’s Moll Flanders. The fictitious editor informs the reader that an original manuscript exists, but he has rewritten it to cleanse it of what might be morally contaminating. What the reader has in his hands is a “new dressing up of the story.” Moll, whose first-person saga relates her adventures of extreme poverty; thievery; prostitution; five marriages, including bigamy; multiple children, all, except one, dead or cast off; imprisonment; deportation; and eventual reformation, is a creature born of the popular genre of criminal biography mingled with another form ascendant in the seventeenth century: the Protestant spiritual autobiography in which the sinner finds his way to God and redemption. Many recent memoirs partake of this very same movement from a state of damnation — abuse, addiction, handicap, or potentially fatal illness — which is then overcome by an act of will or some form of personal enlightenment. The conventional forms for relating true-life stories — memoirs, letters, trial reports, rogue and whore biographies — infected eighteenth-century fictions because they were deeply concerned with the idea that they depicted ordinary human beings as they really were.

The ideas of authenticity and realism were essential to the raison d’être of early novels, even when they didn’t include prefaces by editors claiming to have cleaned up a genuine confession for polite consumption. In Tom Jones, Fielding’s narrator regularly justifies his story as truthful to nature, albeit with heavy doses of irony: “… it is our business to relate the facts as they are; which when we have done it, it is the part of the learned sagacious reader to consult that original book of nature; whence every passage in our work is transcribed, though we quote not always the particular page for its authority.”4 In John Cleland’s famous and infamous Memoirs of a Woman of Pleasure or Fanny Hill, the narrator tells her correspondent on the very first page that hers will be a true story: “Truth! stark naked truth, is the word, and I will not so much as take pains to bestow the strip of a gauze-wrapper on it, but paint situations as they actually rose to me in nature…”5 Part of the author’s seduction of the reader is the promise of unvarnished realism, in this case, undressing rather than “dressing up” the story.

Novels were regularly decried as a cause of mental pollution during the eighteenth century. In his 1778 essay “On Novel Reading,” Vicesimus Knox articulated a commonly held view that resonates nicely with more recent anxieties about, for example, the dangers of violent computer games: “If it be true, that the present age is more corrupt than the preceding, the great multiplication of Novels has probably contributed to its degeneracy.” Reading novels weakened the mind and made it vulnerable to “the slightest impulse of libidinous passion.” Knox preferred romances because “their pictures of human nature were not exact.”6 Of course, what an exact picture of human nature might look like is a bewildering question. For Knox, novelistic exactitude seems to have meant the exposure of the seamier side of human beings — their appetites and frailties — an image of life that was more “real” than in earlier literary forms.

Jean-Jacques Rousseau promises to “tell all” or rather “tell everything” in his Confessions. He begins with this declaration to the reader: “I have begun on a work which is without precedent, whose accomplishment will have no imitator. I propose to set before my fellow-mortals a man in all the truth of nature and this man shall be myself.”7 Of course there were many memoirs before Rousseau’s, and there were many memoir novels. Among other fictional autobiographies, the philosopher from Geneva especially loved Defoe’s Robinson Crusoe, and this invented narrative influenced the telling of his own life. Both St. Augustine and Michel de Montaigne famously preceded Rousseau as writers who unveiled their personal lives. The difference was that for both of them self-revelation came in service of ideas beyond themselves. Rousseau believed in the validity of telling for the sake of telling. He believed in a transparency that would allow the reader to peer into his very soul. In this, he was modern. He is not confessing to God. He is unburdening himself in the sight of all humanity, warts and all.

I have to admit that when I first read Rousseau’s life history at twenty, I felt variously amazed, delighted, appalled, and embarrassed. I had not expected him to reveal the sexual pleasure he had taken in being spanked as a boy, one he claims shaped his lifelong “affection for acts of submission.”8 I was horrified by the fact that he had abandoned his children, something which he attempts to justify, and amazed by his candor about any number of other shameful acts — both petty and more serious — of disloyalty, self-deception, and meanness. Writing about his own life, Rousseau is intent on not prettifying himself. The appetites and frailties depicted in the novels that so worried Knox are on full display in Rousseau’s autobiography. At the same time, the narrator of The Confessions echoes the English critic across the Channel. The philosopher admits that his imagination has a quixotic side and claims that reading novels as a child has addled him permanently. These fictions are responsible for “bizarre and romantic notions of human life, which experience and reflection have never really been able to cure me of.”9 Or, to put it another way: the novels he read became part of who he was.

Was Rousseau truthful? A number of details and dates he cites have been proven wrong. Moreover, he has been accused of fabricating or softening some scenes in the book. The busy scholar, like the journalist sniffing out memoir fraud, can cite inaccuracies and obfuscations in Rousseau’s account of himself, a narrative that also periodically falls into the trap I mentioned earlier of including dialogue and speeches too long for any normal human being to remember, which undermine the urgent, autobiographical tone of the book. At the same time, when I read him I never feel that Rousseau is an out-and-out liar. He is at junctures brilliant, wise, tender, hyperbolic, paranoid, totally convincing, and less so because I feel he is working so hard to justify himself that he moves into the terrain of self-deception. Then again, he openly declares that while he may stumble over facts, what he cannot be mistaken about are his own feelings. His appeal is to the truth of sentiment, to emotional truth. Although it is doubtful that any one of us can fully recover feelings and sensations from the past any more than we can perfectly reproduce events, it is clear that memory is consolidated by emotion, that the fragments of the past we recall best are those colored by feeling, whether it is joy or grief or guilt. Also, it is fair to allow that the writing self looking back on a former self can at least be acutely aware of feelings in the present about that earlier incarnation.

The art of autobiography, as much as the art of fiction, calls on the writer to shape himself as a character in a story, and that shaping requires a form mediated by language. What scientists call episodic or autobiographical memory is essential for creating a coherent narrative sense of a self over time. It is part of our consciousness, but that consciousness is also shaped by unconsciousness. What it means to be a thinking subject is an enormously complex philosophical and neurobiological issue, which remains unsolved. But if you ask yourself how you would tell the story of your life or tell a particularly dramatic part of your life, you will soon discover the quandaries involved. My own memories can only be called hodgepodge. The images and words retained in my brain-mind are not sequential; they come and go in my reveries. They are triggered by the words of others, by my own associative thoughts, by smells and sounds and sights. As William James stated in The Principles of Psychology (1890): “There is no such thing as mental retention, the persistence of an idea from month to month or year to year in some mental pigeon-hole from which it can be drawn when wanted. What persists is a tendency to connection.”10 Memory is flux.

Moreover, the first two years of my life are lost to amnesia. In order to report on them, I would have to rely on the stories my mother and father told me, not my own memory. I know half a dozen people who grew up deceived about their parentage. We are not truly present at our own births and, although learning and development are rapid and crucial during those initial months of our lives, the self-reflective recollections of the autobiographical “I” have not yet begun. In human beings, that “I” has tremendous flexibility. It is dependent on the fact that we recognize ourselves in a mirror, and so begin to imagine ourselves through the eyes of another person. A human being can become a character to herself, if you will, a being seen from the outside, a personage we can place in the past and imagine in the future.

After about the age of six, I begin to have what some have called a continuous autobiographical memory, but what does this mean? It does not mean that I can recall every day of my life and its incidents. As David Hume writes in A Treatise of Human Nature: “For how few of our past actions are there, of which we have any memory? Who can tell me, for instance, what were his thoughts and actions on the first of January 1715, the eleventh of March 1719, and the third of August 1733?”11 A continuous memory means only that I can locate the memories I do have in places I have known: my family house; Longfellow School; Way Park; an apartment in Bergen, Norway; Butler Library at Columbia University. By summoning these mentally familiar spaces, I put my young self within them and construct a rhythmic formula of a routine reality, punctuated by events significant to me. Between those important events are fogs and lapses. I forget. I forget. I forget. And I sometimes displace, condense, project, and generally get things wrong about my life as well. I have stolen “memories” from photographs, unwittingly, it is true, but when confronted with snapshots of my early childhood, I have been forced to accept that what I imagined was a mental picture of my own is, in fact, an image borrowed from an album. My errors are hardly unique. No doubt there are innumerable others I will never discover, but I accept the imaginative dimension of my remembering. The writing of memoir, then, is not about my “real” life in some documentary sense. Rousseau’s optimism about recovering the past, even its feelings, is unwarranted. Writing a memoir is a question of organizing remembrances I believe to be true and not invented into a verbal narrative. And that belief is a matter of inner conviction; what feels true now.

When I’m writing a novel, it is very much like dredging up a memory, trying hard to find the “real” story that is buried somewhere in my being, and when I find it, it feels true. But I have also written passages that are wrong, that feel like lies, and then I must get rid of them and start again. I am measuring the truth of my fictional story against some inner emotional reality that is connected to my memories. That is why I rebel against the idea of novelists as “professional liars.”12 It demeans an enterprise that for me is exactly the opposite. The link between recollection and creativity has long been acknowledged in philosophy, as well as disavowed. In The New Science (1725), Giambattista Vico equated the two. “Hence memory is the same as imagination … Memory has three aspects: memory when it remembers things, memory when it alters or imitates them, and invention when it gives them a new turn and puts them in a proper arrangement and relationship.”13 Wilhelm Wundt, the German researcher who is credited with establishing psychology as a distinct field of study, blurs the two entirely in his Outlines of Psychology (1897). “It is obvious that practically no clear demarcation can be drawn between images of imagination and those of memory … All our memories are therefore made up of ‘fancy and truth’ (Wahrheit und Dichtung). Memory changes under the influence of our feelings and volition to images of imagination and we generally deceive ourselves with their resemblance to real experiences.”14 Invention is part of human experience, whether voluntary or involuntary, and everyone has fantasies and daydreams made possible by the projected “I.” We can move ourselves mentally into real and unreal spaces:

I can see myself as a famous singer performing in Yankee Stadium.

What if my beloved died, and I were left alone?

I have sprouted wings and am soaring happily over New York City.

Young people are particularly prone to spending hours inside their private fantasies. In “Imagination and Creativity of the Adolescent” (1931), the Russian psychologist Lev Vygotsky beautifully articulates the double experience of fiction:

When with the help of fantasy, we construct some sorts of unreal images, the latter are not real, but the feeling which they evoke is experienced as being real. When a poet says: “I will dissolve in tears over this fiction,” he realizes that this figment is something unreal, but his tears belong to the realm of reality. In this way an adolescent finds a means of expressing his rich inner emotional life and his impulses in fantasy.15

There is mounting neurobiological evidence that the same regions or systems of the brain are at work in both episodic memory and the imaginative act of projecting oneself into the future. The neuroscientists Randy Buckner and Daniel Carroll put it this way in their paper “Self-Projection and the Brain,” published in Trends in Cognitive Sciences:

Accumulating data suggest that envisioning the future (prospection), remembering the past, conceiving the viewpoint of others (theory of mind), and possibly some forms of navigation reflect the workings of the same brain network. These abilities emerge at a similar age and share a common functional anatomy that includes frontal and medial temporal systems that are traditionally associated with planning, episodic memory, and default (passive) cognitive abilities.16

The “default system” is the rather ugly name scientists have given to what happens in our brains when we are not busy with some specific task, when we are at rest and not concentrating on stimuli outside ourselves — reverie mode, fantasy mode. It turns out that the brain is very active when we are doing nothing but hanging out inside ourselves. Note the scientists’ list of connected activities. Envisioning the future is an out-and-out fictional act, a projection of the present self elsewhere into a time that has not yet arrived, but then so is autobiographical memory to an important degree. We must reimagine a former self and move it backward in time. All animals remember. Eric Kandel’s groundbreaking work on the snail, Aplysia, has shown that even that simple animal learns and remembers what it has learned. Living creatures all have motor-sensory memories that are implicit, and these mostly unconscious learned abilities underlie much of our habitual movement in the world. But a sea snail does not have episodic autobiographical memories, nor does it fantasize or imagine itself as another Aplysia. Without the ability to conceive the viewpoint of others — to imagine being that other person — we would not be self-conscious, and without self-consciousness we could not construct the labile self we all have, the one that can be cut off from the present and navigate in other realms, both real and unreal. The authors of the paper do not mention philosophy or psychology, nor do they note that the increasing mobility of this projected self is connected to mirror recognition and our later acquisition of language and abstract thought. They do not extend their argument to fantasy, creativity, or the imagination in general. In their very cautious way, however, they step into the memory versus fiction question.

… this explanation helps us to understand why memory is constructive and why it is prone to errors and alterations. Perhaps a feature of this core network that is involved in self-projection is its flexibility in simulating multiple alternatives that only approximate real situations. The flexibility of the core network might be its adaptive function, rather than the accuracy of the network to represent specific and exact configurations of past events.17

It seems to me that we have come to a cultural moment in the United States that is inherently suspicious of fiction and attached to an idea of “real memory” or “the true story” that is in itself a fantasy. Why write a novel when you can tell the real story? Isn’t this what James Atlas was proposing in his reference to Proust? Not long ago, I received a novel in the mail with a letter asking me for a quote. In the form letter, the editor explained that the book is about a rape and that the author herself had been raped. Moreover, the author was willing to talk about her experience of the real rape while doing publicity for the novel about a fictional rape. I have not read the book; it might be good. It might be subtle. What interests me is the inference that the fiction I was being asked to read was more genuine because life and fiction had crossed, as it were. What are we to make of this? The book in question is not a memoir but a novel, and yet it is being marketed as a form of hybrid. The factual rape is cited to give credence to the fictional rape. There is nothing new about novelists using their own experiences to make a fiction. What seemed new was the need for the publisher to declare what those experiences were.

Let us return to Proust. Proust’s biographers have written extensively about his sadomasochism. Apparently, he did enjoy torturing rats. His housekeeper Celeste Albaret relates that after one of his visits to a male brothel, he returned and told her that he had peered through a small window and witnessed a man being whipped “till the blood spurts all over everything. And it is only then that the unfortunate creature experiences the heights of pleasure.”18 Proust then talked to the shocked Celeste for hours about the flagellation he had seen. In her book on Proust, Time and Sense, Julia Kristeva argues that “Proust uses the good woman’s participation-indignation to help him reconstruct the scene from a distance. In this way, he can create a quasi-comedy that will allow him to detach himself from his sensations…”19 In Time Regained, Proust’s narrator also sees a flagellation, but before he sees it, he hears groans of pain, then two voices, one begging for pity, the other menacing, “and if you yell and drag yourself about on your knees like that, you’ll be tied to the bed, no mercy for you,” then the noise of a cracking whip. Moments later the narrator discovers “a small oval window” and peers through it.

And there in the room, chained to a bed like Prometheus to his rock, receiving the blows that Maurice rained upon him with a whip which was in fact studded with nails, I saw, with blood already flowing from him and covered with bruises, which proved that the chastisement was not taking place for the first time — I saw before me M. de Charlus.20

There is nothing “quasi-comic” about this passage in isolation, but in the context of the novel, it is, in fact, kind of funny. After his bout in the theater of cruelty, the baron, weak and tottering to be sure, but, remarkably, up on his pins, jokes with the hotel’s “boys.” We discover that Maurice plays the torturer for a few francs and that his heart is decidedly not in it. But this comes later, after I, as a reader, have participated in the fascinated horror of that other “I” who creeps “stealthily in the darkness” toward the peephole and then sees that the abject person on the bed is someone I know. This is not the nameless “unfortunate creature” Proust described to Celeste Albaret; it is the Baron de Charlus, the depraved but also generous dandy, the gallant, absurd survivor of his own perversion for pain, the same man, who, recovering from a stroke in his dotage, delights in reeling off the names of dead friends as World War I rages not far away and men howl and die in the trenches. Does it matter that Proust stole some of the baron’s other character traits from Count Robert de Montesquiou-Fezensac, who was privately mortified by the novel because he recognized parts of himself? The transpositions from real experience to art are like the strange mingling that happens in dreams. I dream of a friend, who does not look like my friend, but like someone else. The name seems to be attached to the wrong person. One person, one thing, blends into another, or two stories from my waking life collapse into one. A minor character in my novel What I Loved (2003), Lazlo Finkelman, gained his first name and his hairdo from a friend’s baby boy; his last name from my daughter’s pediatrician; his hip, laconic patter from yet another friend; and his penchant for stealing food at parties and stuffing it into his raincoat from a story I once heard about the young, impoverished Henry Miller.

Would we prefer that Proust had catalogued his voyeuristic experiences as his very own, that he had stuck to the facts and not taken the horrific spectacle of an unknown man being whipped and turned it into an image of the baron? Would Proust’s autobiographical memory of real events, in his vast book about memory and time, have served him better than his memories and fantasies, that Janus face of a single human capacity? There is a distance in writing fiction, in writing as another person, in allowing the slippage of remembering and imagining, always in the service of emotional truths, but from another perspective than one’s own, even when the narrator is a kind of self-double, as Proust’s narrator is. Comedy and irony both rest on this step backward. I often see more clearly from somewhere else, as someone else. And in that imagined other, I sometimes find what I may have been hiding from myself. In the free play of the imagination, in the words that rise from unconscious sources, as well as in the bodily rhythms that accompany the act of writing a novel, I am able to discover more than I can when I simply try to remember. This is not a method for disguising reality but for revealing the truth of experience in language.

In my first novel, The Blindfold (1992), I took my own first name and reversed it for my narrator: Iris. She was given my mother’s maiden name as her last name: Vegan. I placed her in the apartment where I once lived: 309 West 109th Street. I robbed some close friends of mine of their various characteristics and mixed them together with fictional ones, and I used parts of my own experience as a patient in the neurology ward of Mount Sinai Medical Center. But Iris’s adventures are not mine. I invented them. They came to me as necessary, as true, but they are fictions. And yet, there were those, including a close friend of my parents, who were certain that everything had happened exactly as it was related in the book. I have been told that inhabitants of the town where I grew up have amused themselves by identifying every single character in my second novel, The Enchantment of Lily Dahl (1996), with real people in the town. Although the book’s Webster is a fictional version of Northfield, Minnesota, and there are several minor characters based on actual persons, many, including all the main characters, were born of my imagination. Throughout my fiction I have borrowed bits and pieces from my life and the lives of others and reimagined, combined, condensed, and reconstituted them, but this is a far cry from telling my life story. In the autobiographical essays I have written and in the single memoir (which is less about me than about medicine, diagnosis, and what illnesses of the mind-body might mean), I have honored the pact of nonfiction with the reader — which is simply not to lie knowingly.

And, however unreliable our memories may be, we can tell the difference between our present given reality — the chair on which I am sitting now before my computer in my study as I write this essay — and the world of my fantasies, which I inhabit when I write a novel and am sitting in the very same chair. As Maurice Merleau-Ponty argues in The Phenomenology of Perception, “the normal subject” does not confuse the phenomenal present with the potential space of the imagination.21 I would say it like this: unless we are mad, we can recognize the difference between the here and now and the mental there and elsewhere of remembering and fantasizing.

But robbing one’s own memory for fiction can have a peculiar effect on the recollection itself. In his memoir, Speak, Memory, Vladimir Nabokov addresses this change.

I have often noticed that after I had bestowed on the characters of my novels some treasured item of my past, it would pine away in the artificial world where I had so abruptly placed it. Although it lingered on in my mind, its personal warmth, its retrospective appeal had gone, and presently, it became more closely identified with my novel than with my former self, where it had seemed to be so safe from the intrusion of the artist.22

I have noticed this myself. Once a person or place or even just parts of those persons and places from the past have been transplanted into a novel, the fictional transformation can at times subsume the memory, but it happens only when the person or place is no longer a part of your daily experiences. Once, years after I had written my first novel, I met the original model for one of my characters in a restaurant. We chatted amiably, and when we parted and I was about to say good-bye to him, the name of the character appeared in my mind before his real name, and I had to suppress the former. I almost said good-bye to a figment. This slip of the tongue, which I fortunately censored in time, revealed not only that the book had, to one degree or another, supplanted my memory of him, but that my emotional connection to the novel, to the writing of it, and to the character I had made, had become in some way more real to me than the man himself. As Kristeva puts it when she is writing about Proust, the beaten man, and the flagellant’s reappearance with another face in fiction: “what is experienced gradually becomes what is represented.”23 And word representations are different from both mental images and sensory experiences. Don’t we all have memories that have hardened into stories? We remember how to tell the story, even though the sensations and pictures that accompanied it have begun to fade. The words master a dimming past.

Proust’s dream was to bring back the past, not to cheat time so much as reincarnate it and its vicissitudes as it surged back into the real present, and he wrote and wrote and wrote, and the writing was an active, aching search for bringing then into now. Johannes the Seducer, a character in Søren Kierkegaard’s Either/Or, has a similar wish:

It would be of real interest to me if it were possible to reproduce very accurately the conversations I have had with Cordelia. But I easily perceive that it is an impossibility, for even if I managed to recollect every single word exchanged between us, it is nevertheless out of the question to reproduce the element of contemporaneity, which actually is the nerve in conversation, the surprise in the outburst, the passionateness, which is the life principle in conversation.24

To long for the immediacy and presence of what we have lost is human. What we retain from that former time in words, images, and feelings is not stable. Only rarely can we measure our memories against documentary evidence and, even then, our phenomenological perspective is missing — the “I” who is experiencing the family gathering, the sailboat, the dinner party. For memory itself exists only in the present. I remember Arne Krohn Nilsen. He was my history teacher during the year I spent in Norway when I was twelve and turned thirteen. I am calling him forth now. He had remarkably long eyebrows that reminded me of sprouting plants, and he moved in a jerky, awkward way, and he could be very irritable with the children, who poked fun at him in his absence. But I loved him. I cannot summon him whole, not in an inviolable present, not as he was in 1967. And yet, what if he returns as a character in a work of fiction? Then he might find a new reality, an immediacy and presence born of a recollection that moves forward rather than backward. That is the magic of fiction. Great memoirs also partake of a vivid re-experiencing, a re-seeing of the past that is also a fantasy, but it is nevertheless true to the present self, the one who recalls hoarfrost on a window long ago and, with that image, experiences an intense feeling of melancholy.

Just as memoirs may lie by borrowing hackneyed forms and spouting nothing but received knowledge, novels can do the same. And both genres can reveal small or large truths about what it means to be human. Perhaps my most gratifying moment as a novelist occurred after I had published What I Loved. I read from the book in Iowa City, and when the reading was over, a woman came up to me and said she had a verbal message for me from her father. She explained that like the novel’s narrator, Leo Hertzberg, her father was a Jew who had been born in Berlin. He had fled the Nazis with his parents, first to London, but had eventually ended up in the United States. He was now living in Florida. The message was: “Tell her I am Leo.”

Of course, this cannot be true in any literal way, despite the similarities between the fictional and the true story. Leo recounts his early life only briefly, and it takes up very little of the book. The woman’s father was speaking to some other reality, one of feeling, perhaps a feeling of exile and grief. I don’t know. What I do know is that in my own life as a reader, I, too, have felt I was the narrator of a novel. I also know that, like Rousseau, I have taken those people and their stories into myself, and they have changed who I am. Fictions are remembered, too, and they are not stored any differently in the mind from other experiences. They are experience.

2010

EXCURSIONS TO THE ISLANDS OF THE HAPPY FEW

Both the semiotic model of the index and the linguistic model of performativity (and often their combination) become central to the aesthetic of Conceptual art and they also define the specific visuality and textuality of Lamelas’s filmic and photographic work of the late sixties. If, in the former, depiction and figuration are displaced by the mere trace and pure record that the photograph or the film or video recording supply when reduced to pure information, then we encounter in the latter a model of textuality where rhetoric, narrative plot and fiction, agency and psychobiography, are all dismissed as integrally participating in the conditions of the ideological and of myth (in Barthes’s definition).

Benjamin H. D. Buchloh1

More fundamentally, however, an examination by pharmacological means, of the mechanism by which the granularity of activation is engendered (results not shown) indicates that the areas of silence between patches of activity at 40 Hz are generated by active inhibition. Thus, in the presence of GABA A blockers, the spatial filtering of cortical activity described above disappears. These results are clearly in accordance with the findings that cortical inhibitory neurons are capable of high-frequency oscillation (Llinás et al. 1991) and with the view that, if such neurons are synaptically coupled and fire in synchrony, they might be formative in generating cortical gamma-band activity.

R. Llinás, U. Ribary, D. Contreras and C. Pedroarena2

Primary narcissism, however, is not in the focus of the ensuing developmental considerations. Although there remains throughout life an important direct residue of the original position — a basic narcissistic tonus which suffuses all aspects of the personality — I shall turn our attention to two other forms into which it becomes differentiated: the narcissistic self and the idealized parent imago.

Heinz Kohut3

I DIDN’T GO FAR TO find the passages quoted above. Buchloh’s book, the scientific paper on consciousness, and Kohut’s essay on narcissism are all in my study, on bookshelves only steps away from my desk. I have read all of them because in one way or another they have been part of my research. Countless other books I’ve read could have served my purpose just as well, which was simple: I wanted to show that without considerable reading in the fields represented above — theoretical art history, neuroscience, and psychoanalysis — it would be difficult to understand what these people are talking about. What they share is more important to me than what distinguishes them. They are rarefied texts that rely on words known to “the happy few” who are reading them. They all require a reader who is steeped in the subject at hand. He or she already knows what Roland Barthes thought about myth and ideology, can readily identify GABA as gamma-aminobutyric acid, an important inhibitory neurotransmitter in the central nervous system, can distinguish GABA A from GABA B, and has some idea what a “basic narcissistic tonus” might be.

We live in a culture of hyperfocus and expertise. “Experts” on this or that particular subject are continually consulted and cited in newspapers and magazines, on television and the radio. Just think how many times we’ve heard the term “Middle East expert” in recent years, perhaps with some further qualification: “Dr. F. is the leading expert on Syrian and Iranian relations at Carbuncle University.” Each field carves out a domain and pursues it relentlessly, accumulating vast amounts of highly specific knowledge. Except when brought in to make declarations to the culture at large, these people inhabit disciplinary islands of the like-educated and the like-minded. As a roaming novelist and essayist with an academic background in literature, I’ve found myself swimming toward various islands for some time. I’ve reached the shores of a few of them and even stayed on for a while to check out the natives. What I’ve discovered is both exciting and dismaying. Despite the hardships I’ve had penetrating abstruse texts and learning foreign vocabularies (not to speak of innumerable acronyms and abbreviations), I’ve been forever altered by my excursions. My thoughts about what it means to be a human being have been changed, expanded, and reconfigured by my adventures in art theory, neuroscience, and psychoanalysis. At the same time, I’ve been saddened by the lack of shared knowledge. It can be very hard to talk to people, have them understand you, and for you to understand them. Dialogue itself is often at risk.

Some years ago, I did extensive research on psychopathy, also called sociopathy and antisocial personality disorder, for a novel I was writing. I read everything I could get my hands on without discrimination. I read psychoanalytic and psychiatric books. I read statistical analyses of psychopaths in inmate populations. I read science papers that measured serotonin levels in criminal sociopaths, and I read neurological cases of people with frontal lobe damage who shared traits with classic psychopaths. It turned out that bibliographies tell all. You see whom the authors quote or refer to, and you know where they live intellectually. Even people researching the same subject are often wholly isolated from one another. For example, a statistician doesn’t care a jot about what Winnicott had to say about sociopathy in his book Deprivation and Delinquency,4 and neurologists don’t bother to investigate what John Bowlby wrote about the effects of early separation on both children and primates in his masterwork, Attachment and Loss.5 Ours is a world of intellectual fragmentation, in which exchanges between and among fields have become increasingly difficult.

Thomas Kuhn, in his book The Structure of Scientific Revolutions, identified these circles in science as “disciplinary matrixes.”6 These groups share a set of methods, standards, and basic assumptions — a paradigm of values. In other words, the people in these groups all speak the same language. In a lecture, the German philosopher Jürgen Habermas addressed these isolates, which are by no means limited to science: “The formation of expert cultures, within which carefully articulated spheres of validity help the claims to propositional truth, normative rightness, and authenticity, attain their own logic (as well, of course, as their own life, esoteric in character and endangered in being split off from ordinary communicative practice)…”7 The parenthetical comment is crucial. It has become increasingly difficult to decipher the logic of these expert cultures because their articulations are so highly refined, so remote from ordinary language that the layperson is left thoroughly confused. Indeed, when reading some of these specialized texts, I can’t help but think of Lucky’s tirade in Waiting for Godot, during which his creator, the erudite Samuel Beckett, made inspired poetic nonsense from the overwrought articulations of academe: “Given the existence as uttered forth in the public works of Puncher and Wattman of a personal God quaquaquaqua with white beard quaquaquaqua outside time without extension who from the heights of divine apathia divine athymbia divine aphasia loves us dearly with some exceptions for reasons unknown but time will tell … that as a result of the labors left unfinished crowned by the Acacacacademy of Anthropopopometry of Essy-in-Possy of Testew and Cunard it is established beyond all doubt all other doubt than that which clings to the labors of men…”8

About a year ago, I was on a flight from Lisbon to New York and beside me was a man reading a neurology paper. Although I usually refrain from speaking to people on airplanes, my abiding curiosity about neurology was too great, and I asked him about his work. He was, as I had expected, a neurologist, an expert on Alzheimer’s disease, it turned out, who ran a large research center in the United States and worked indefatigably with both patients and their families. He was bright, open, affable, and obviously knew as much about Alzheimer’s disease as any human being in the world, an esoteric knowledge I could never hope to penetrate. After we had talked for a while, he looked down at the book I had with me and asked what it was. I told him I was rereading Kierkegaard’s Either/Or. He gave me a blank look. I then explained that Kierkegaard was a Danish philosopher and refrained from using the word famous because I was no longer sure what fame was. I don’t think everyone in the world should have read Kierkegaard. I don’t even believe that everyone should know who Kierkegaard is. My point is that I, too, often find myself in a closed world, one in which I make assumptions about common knowledge only to discover it isn’t common at all. Somewhat later in the conversation I asked him what he thought about “mirror neurons.” Mirror neurons, first discovered by Giacomo Rizzolatti, Vittorio Gallese, and their colleagues in 1995, made a splash in neuroscience and beyond. The researchers discovered that there were neurons in the premotor cortex of macaque monkeys that fired in animals who were performing a task, such as grasping, and also fired in those who were merely watching others perform that same task. A similar neural system has been found in human beings. The implications of the discovery seemed enormous and a great deal of speculation on their meaning began. My Alzheimer’s companion didn’t know about mirror neurons. No doubt they had never been crucial to his research, and I had made another presumptuous gaffe.

The truth is that being an expert in any field, whether it’s Alzheimer’s or seventeenth-century English poetry, takes up most of your time, and even with heroic efforts, it’s impossible to read everything on a given topic. There was an era before the Second World War when philosophy, literature, and science were all considered crucial for the truly educated person. The Holocaust in Europe, the expansion of education beyond elites, the postwar explosion of information, and the death of the Western canon (no more necessity for Greek and Latin) killed the idea that any human being could master a common body of knowledge that traversed many disciplines. That world is gone forever, and mourning it may well be misplaced for all kinds of reasons, but its loss is felt, and a change is in the air, at least in some circles. In his introduction to Autopoiesis and Cognition, a book written by Humberto Maturana and Francisco Varela, Sir Stafford Beer praises the authors for their ability to create “a higher synthesis of disciplines” and assails the character of modern scholarship. “A man who can lay claim to knowledge about some categorized bit of the world, however tiny, which is greater than anyone else’s knowledge of that bit, is safe for life: reputation grows, paranoia deepens. The number of papers increase exponentially, knowledge grows by infinitesimals, but understanding of the world actually recedes, because the world really is an interacting system.”9 Anyone who has even a passing acquaintance with academic life must recognize that Beer has a point.

I remember a conversation I had with a young woman at Columbia University when I was a student there. She told me she was writing her dissertation on horse heads in Spenser. Of course it’s entirely possible that examining those heads led to staggering insights about Spenser’s work, but I recall that I nodded politely and felt a little sad when she mentioned her topic. Years of work on that particular subject did seem incommensurate with my fantasies of an impassioned intellectual labor that uncovered something essential about a work of literature. When I did research for my own dissertation on language and identity in Charles Dickens’s novels, I plodded dutifully through the endless volumes written about his work and realized in the end that only a handful of books had meant anything to me. Linguists, philosophers, psychoanalysts were far more helpful and, had I known then what I know now, I would have turned to neurobiology as well to explicate the endlessly fascinating world of Dickens. The theories I drew from then often came from the same pool as my fellow students in arms. In the late seventies and early eighties, French theory was the intellectual rage in the humanities, and we eagerly digested Derrida, Foucault, Barthes, Deleuze, Kristeva, Guattari, Lacan, and various others who were called upon to explicate not just literature but the textual world at large. Hegel, Marx, Freud, Husserl, and Heidegger also lurked in the wings of every discussion, but science was never part of a single conversation I had during those years. Wasn’t science part of the ideological superstructure that determined our dubious cultural truths?

While I was doing research for an essay I was writing on the artist Louise Bourgeois, I read a book called Fantastic Reality by Mignon Nixon.10 The author uses psychoanalytic concepts, Melanie Klein’s idea of part objects, in particular, to elucidate Bourgeois’s work. Nixon’s analysis is intelligent and often persuasive. Along the way she mentions Klein’s famous patient, Little Dick, whose behavior, she says, resembles Bruno Bettelheim’s description of autism in The Empty Fortress. After discussing Bettelheim’s machinelike patient, the autistic Joey, she moves on to Deleuze and Guattari’s description of him in Anti-Oedipus: Capitalism and Schizophrenia. They also use Joey to further their particular argument. My purpose is not to critique Nixon’s analysis of Bourgeois or Anti-Oedipus, a book I read years ago with interest, but rather to suggest that at every turn Nixon’s sources are predictable. They indicate a theoretical education rather like the one I acquired during my years as a graduate student. She follows a preestablished line of thinkers worth mentioning, moving from one to another, but never steps beyond a particular geography of shared references. An investigation of contemporary ideas about Theory of Mind or the current science on autism, which is entirely different from and at odds with Bettelheim’s ideas, doesn’t enter her discussion. She is not alone. Islands are everywhere, even within a single discipline. I’ve noticed, for example, that continental and Anglo-American analytical philosophers often don’t acknowledge that the other exists, much less do they deign to read each other.

The realization that the strict borders drawn between one field and another, or between one wing and another within a field, are at best figments may well be behind a new desire for communication among people with varying specialties. Philosophers have turned to neuroscientists and cognitive researchers to help ground their theories of consciousness. Their various musings are printed regularly in the Journal of Consciousness Studies, which has even published a literature professor or two. The philosopher Daniel Dennett draws from both neuroscience and artificial intelligence to propose a working metaphor for the mind — multiple drafts — in his book Consciousness Explained.11 The neurologists Antonio Damasio12 and V. S. Ramachandran13 evoke both philosophy and art in their investigations about the neural foundations of what we call “the self.” The art historian David Freedberg, author of The Power of Images, has leapt into neuroscience research to explore the effects of images on the mind-brain.14 He was among the organizers of a conference I attended at Columbia University that hoped to establish a dialogue between neuroscientists and working visual artists. And yet, it isn’t easy to make forays out of one’s own discipline. The experts lie in wait and often attack the interlopers who dare move onto their sacred ground. This defensiveness is also understandable. Specialists in one field can make reductive hash of another they know less well. To my mind, conversations among people working in different areas can only benefit everyone involved, but the intellectual windows that belong to one discipline do not necessarily belong to another. The result is a scrambling of terms and beliefs, and often a mess is made. The optimistic view is that out of the chaos come interesting questions, if not answers.

In the last couple of years, I've attended the monthly neuroscience lectures at the New York Psychoanalytic Institute, and through Mark Solms, a psychoanalyst and brain researcher who has spearheaded a dialogue between neuroscience and psychoanalysis, I became a member of a study group, led by the late psychoanalyst Mortimer Ostow and the neuroscientist Jaak Panksepp, that took place after those lectures. The group has since been reconfigured, but during the year I regularly attended meetings of the first group, I found myself in a unique position. I was the lone artist among analysts, psychiatrists, and neuroscience researchers and was able to witness the dissonant vocabularies of the various disciplines, which nevertheless addressed the same mystery from different perspectives: how does the mind-brain actually work? Although the language of psychoanalysis had long been familiar to me, I knew next to nothing about the physiology of the brain. During the first meeting, I worked so hard to understand what was being said and felt so mentally exhausted afterward that I fell asleep at a dinner party that same evening. I ordered a rubber brain to try to learn its many parts, and began to read. It took innumerable books and many more papers before I was able to penetrate, even superficially, the neuroscientific aspects of the discussion, but as the fog lifted somewhat, I felt I was in a position to make a few observations. Unsurprisingly, our conversations were riddled with language problems. For example, was what Freud meant by primary process equivalent to a similar idea in neuroscience used by Jaak Panksepp? Both sides agreed that most of what the brain-mind does is unconscious, but is the unconscious of neuroscience harmonious with Freud's idea of the same thing? Or, for example, what do scientists mean when they use the term neural representations to talk about brain function? Exactly how do neurons represent things, and what are those things they're representing? Is this a form of mental translation — the internal reconfigurations of perception? The term neural correlates is also interesting. I have a better sense of this. I'm feeling angry, and when I'm feeling angry, there are neuronal networks in parts of my brain that are actively firing. In order to avoid saying that those excited neurons are anger, scientists speak of correlates. Language counts. Not always, but often, I listened as people talked right past each other, each one speaking his own language.

Words quickly become thing-like. It has often fascinated me how a psychoanalytic concept such as internal object, for example, can be treated as if it weren’t a metaphor for our own inner plurality that necessarily includes the psychic presence of others, but as if it were something concrete that could be manipulated — like a shovel. I’ve also listened in amazement to analysts talk about the ego almost as if it were an internal organ — a liver, say, or a spleen. (I can’t help feeling that Freud’s Ich, our English I, with its pronominal association, might have been a better translation choice than ego.) A pharmacologist I met in the group referred to this phenomenon as a “hardening of the categories,” a phrase that struck me as both funny and wise. Names divide, and those divisions can easily come to look inevitable. Neurons, of course, aren’t like internal objects, egos, or ids. They are material in a way that psychic categories aren’t, but making sense of them calls for interpretation, nevertheless. As Jaak Panksepp likes to say, scientific research doesn’t make reality pop up like magic. He is not a victim of a naïve realism that reaches out for “the thing in itself” and grabs it. Hard science is a plodding business of findings and refindings and findings again. It is incremental, often contradictory, and dependent on the creativity of the mind doing the research, a mind that can grasp what the research means at the time, a meaning that may well change. At the end of many science papers there is a section called Discussion, where the researchers tell the reader how their study might be understood or how it could be followed up. Results rarely speak for themselves.

In the group, the problem of the self came up several times. What constitutes a self ? True to my education, I had often located the self in self-consciousness, a dialogical mirroring relation between an I and a you, and I believed that we create ourselves through others in narratives that are made over time. From this point of view, the self is a convenient if necessary fiction we construct and reconstruct as we go about the business of life. It was always clear to me that most of what we are is unconscious, but that unconscious reality has never seemed singular to me, but plural — a murmuring multitude. I have since modified my views. Language is important to forms of symbolic self-consciousness, an ability to see oneself as another, to hurl oneself forward into the future and remember oneself in the past, but not to consciousness or a feeling of selfhood. Surely animals, even snails, have some form of a self and are awake, alive, and aware. Does that awakeness, with its desires and survival instincts, its aggressions and attachments, constitute a core or primordial self, as some would have it, or is a self simply the sum of all our conflicted and fragmented parts, conscious and unconscious? People with damage to left language areas of their brains often hold on to a strong sense of themselves and can be profoundly aware of what they have lost. Some people with devastating lesions nevertheless retain an inner sense of who they are. Luria recorded a famous case of just such a person, Zazetsky, in The Man with a Shattered World.15 After suffering terrible head injuries in the Second World War, he spent his days trying to recover the bits and pieces of his ruined memory in a journal he kept until his death. Other forms of neurological injury, in the right hemisphere especially, can cause far greater disruption to a person’s sense of being, which suggests that while language is important, it doesn’t determine our identities. But for this discussion what interests me is not my own evolving view of what a self might be, but that for neurobiologists and analysts alike this global question of what we are is tortuous and necessarily calls upon a philosophical orientation, an ability to loosen the categories, juggle the frames, and be free enough to question even those ideas one has held most dear. Human inquiry requires making borders and categories, and dissecting them, and yet these divisions belong to a shared, articulated vision of how things are that is not arbitrary, but neither is it absolute. Unless one believes in an ultimate view, a supreme, disembodied scientific observer, we cannot find a perfect objective image of things as they are. We are neither angels nor brains in vats, but active embodied inhabitants of a world we internalize in ways we don’t fully understand. Merleau-Ponty makes a distinction between philosophy and science that is valuable: “Philosophy is not science, because science believes it can soar over its object and holds the correlation of knowledge with being as established, whereas philosophy is the set of questions wherein he who questions himself is himself implicated by the question.”16 Whether one agrees with Merleau-Ponty’s phenomenology or not, it seems clear that in science, as well as in philosophy, the observer’s relation to the object must be considered.

Every discipline needs its philosophy, or at least its ground rules. Another one of my island excursions has been to a hospital. I have a volunteer job as a writing teacher for psychiatric inpatients at the Payne Whitney Clinic. For its inmates, the clinic on the eleventh floor of New York Hospital is a world unto itself. The patients live in locked wards, are under continual supervision, and many of them don't know when they will be able to leave. Some are eager to get out. Others are afraid to return to their lives. Some of those who are released come back before too long, and I wonder how warmly I should greet them when they walk through the door of the classroom, even when I've missed them. Each person has been diagnosed with one or several of the many disorders that are found in the Diagnostic and Statistical Manual of Mental Disorders, now in its fourth edition. The authors of the manual state in their introduction: "In the DSM IV, there is no assumption that each category of mental disorder is a completely discrete entity with absolute boundaries dividing it from other mental disorders or from no mental disorder."17 Despite this caveat, I've discovered that at least for patients these diagnoses can become surprisingly rigid. The diagnosis and the patient are identified to such a degree that there is no escape, no blur, nothing left over for the person to hold on to once he's been designated as bipolar, say, or schizophrenic. In many ways, this is not strange. A mental disorder isn't a virus or a bacterium that attacks the body from the outside. If I am continually hearing voices from another dimension or am so depressed that I lie in my bed without moving day after day or am churning out fifty pages of poetry during a period of a few hours in a flight of manic joy or am reliving an assault in horrifying uncontrollable flashbacks, who is to say these experiences are not of me?

One afternoon, several students in my class began to talk about their diagnoses. A young woman turned to me and said in a pained voice, “To call someone a borderline is the same as saying ‘I hate you.’” It wasn’t a stupid comment. She had understood that the characteristics that define borderline personality disorder in the DSM, “inappropriate, intense anger” among them, could well be construed as unflattering traits, and ironically and perhaps true to her diagnosis, she had interpreted the clinical name as a blow. Another patient then helpfully volunteered: “My doctor says she treats symptoms, not diagnoses.” This more flexible position is also complicated. I have noticed that patients are often dealt with as a jumble of symptoms — sleeplessness, anxiety, thought disorder, auditory hallucinations — almost as if these traits were disembodied and easily distinguishable problems, each of which calls for a pharmacological solution. I am not an antidrug person. I have seen people who over a period of a couple of weeks improve remarkably when they change their medicines. I also witnessed a wild, incoherent, out-of-control patient, who after ECT, once called electroshock, the nightmare treatment of popular imagination, seemed so much better, so normal, that it took my breath away. I know the effects don’t always last. I know there are risks. A psychiatrist I met who works with inpatients said she was continually aware, not only of the potential dangers of treatment, but that fixing one thing may mean the loss of another. “How are you?” I asked a talented writer who had been in my class for over a month. She seemed quieter than I had ever seen her and far less delusional. “Not so good,” she answered, tapping her temple. “Lithium head. I’ve gone dead in there.” Is the dead self the well self? Would a little less lithium help to create a more normal self? As a person who has what psychiatrists would call hypomanic phases, which manifest themselves in excessive or perhaps obsessive reading and writing, often followed by crashes into migraine, I was full of sympathy for the patient. The difference between us is one of degree. Somehow I manage, and she doesn’t.

No one would argue that a person is his diagnosis, and yet no one would argue that the characteristics that define an illness aren’t part of the person. Even I, layperson that I am, have found myself silently diagnosing students in my classes, especially those with florid symptoms. Once you are immersed in the jargon and have read enough, it seems to come naturally. And yet, I know that in another era, or even a few years ago, the names and boundaries for the various illnesses were different and, to my mind, not necessarily worse than the ones that exist now. With each new edition, the DSM shifts its descriptive categories. New illnesses are announced. Old ones drop out. What interests me is that my perception of patients’ disorders, colored by doubt as it is, has been shaped by the givens of current psychiatric expert culture. I have begun to see what I’m looking for. The discursive frames orient my vision. My familiarity with them certainly makes it possible for me to talk to people in that world, to ask informed questions, to find out more, to continue my life as a curious adventurer. But one may well ask: just because I like hanging out on islands and chatting up the inhabitants, is there anything really wrong with specialization, with knowing as much as one can about Alzheimer’s or borderline personality or horse heads in Spenser? Would reading philosophy, history, art theory, linguistics, neuroscience, literature, or even psychoanalysis (now that it’s marginal to the profession) be beneficial, for example, to the doctors on the wards where I go to teach every week? Do we really want psychiatrists deep in Immanuel Kant, Hippolyte Taine, Erwin Panofsky, Roman Jakobson, D. O. Hebb, Fyodor Dostoyevsky, and the inimitable but still controversial Sigmund Freud? What’s the point? Nobody can know everything. Not even in the lost world of the thinkers who shared a vision of the educated man (it was a man, I’m sorry to say) did people know all. Even then there were too many books, too many fields, too many ideas to keep track of.

In The Man Without Qualities, Robert Musil’s character, General Stumm, hopes to search out “the finest idea in the world.” Chapter 100 of Musil’s huge, unfinished novel is Stumm’s account of his visit to the library. He is too embarrassed to use the phrase “the finest idea in the world” when he asks for a book, confessing that the phrase sounds like a “fairy tale,”18 but he hopes that the librarian, familiar with the millions of books to be found in that palace of knowledge, will guide him toward it. He can’t specify what he wants — the book in his mind isn’t about a single subject.

My eyes must have been blazing with such a thirst for knowledge that the man suddenly took fright, as if I were about to suck him dry altogether. I went on a little longer about needing a kind of timetable that would enable me to make connections among all sorts of ideas in every direction — at which he turns so polite it’s absolutely unholy, and offers to take me into the catalog room and let me do my own searching, even though it’s against the rules, because it’s only for the use of the librarians. So I actually found myself inside the holy of holies. It felt like being inside an enormous brain.19

The endless volumes, the bibliographies, the lists, categories, compartments for this subject and the other have a traumatizing effect on poor Stumm, who becomes only more disoriented when the librarian confesses that he never actually reads the books in the collection, only their titles and tables of contents. “Anyone who lets himself go and starts reading a book is lost as a librarian,” he tells Stumm. “He’s bound to lose perspective.”20

Musil’s comedy summons a truth. Losing perspective is an intellectual virtue because it requires mourning, confusion, reorientation, and new thoughts. Without it, knowledge slogs along in its various narrow grooves, but there will be no leaps, because the thinner my perspective, the more likely it is for me to accept the preordained codes of a discipline as inviolable truths. Doubt is the engine of ideas. A willingness to lose perspective means an openness to others who are guided by a set of unfamiliar propositions. It means entertaining a confounding, even frightening and radical form of intersubjectivity. It also means that however happy you are among the few residents of your particular island, that little island is not the whole world.

2007

ON READING

READING IS PERCEPTION AS TRANSLATION. The inert signs of an alphabet become living meanings in the mind. It is all very strange when you think about it, and perhaps not surprising that written language came late in our evolutionary history, long after speech. Literacy, like all learned activities, appears to alter our brain organization. Studies have shown that literate people process phonemes differently from illiterate people. Their knowledge of an alphabet seems to strengthen their ability to understand speech as a series of discrete segments. Before my daughter learned to read, she once asked me a question I found hard to answer. She pointed at a blank space between two words on the page of a book we were reading and said, “Mommy, what does the nothing mean?” Articulating the significance of that emptiness did not come easily. My illiterate three-year-old did not understand the sequencing and divisions that are inherent to language, ones that are more apparent on the page than in speech.

There are any number of theories about how reading works, none of which is complete because not enough is known about the neurophysiology of interpreting signs, but it is safe to say that reading is a particular human experience of collaboration with the words of another person, the writer, and that books are literally animated by the people who read them because reading is an embodied act. The text of Madame Bovary may be fixed forever in French, but the text is dead and meaningless until it is absorbed by a living, breathing human being.

The act of reading takes place in human time, in the time of the body, and it partakes of the body's rhythms, of heartbeat and breath, of the movement of our eyes, and of our fingers that turn the pages, but we do not pay particular attention to any of this. When I read, I engage my capacity for inner speech. I assume the written words of the writer who, for the time being, becomes my own internal narrator, the voice in my head. This new voice has its own rhythms and pauses that I sense and adopt as I read. The text is both outside me and inside me. If I am reading critically, my own words will intervene. I will ask, doubt, and wonder, but I cannot occupy both positions at once. I am either reading the book or pausing to reflect on it. Reading is intersubjective — the writer is absent, but his words become part of my inner dialogue.

It happens that I find myself half-reading. My eyes follow the sentences on the page and I take in the words, but my thoughts are elsewhere, and I realize that I have read two pages but haven’t understood them. Sometimes I speed-read abstracts of science papers, zooming through the text to glean whether I want to read the whole article. I read poems slowly, allowing the music of the words to reverberate inside me. Sometimes I read a sentence by a philosopher again and again because I do not grasp its meaning. I recognize each word in the sentence, but how they all fit together requires all of my concentration and repeated reading. Various texts call for different strategies, all of which have become automatic.

I have vivid memories of some books that last in my consciousness. Novels often take pictorial form in my recollection; I see Emma Bovary running down a grassy hill on her way to the chemist’s shop, her cheeks flushed, her hair loosened by the wind. The grass, the cheeks, the hair, the wind are not in the text. I provided them. Philosophy usually does not stay with me in pictures, but in words, although I have formed images for Kierkegaard, for example, because he is a philosopher-novelist, a thinker-storyteller. I see Victor Eremita, the pseudonymous editor of Either/Or, with his hatchet as he smashes the piece of furniture in which two manuscripts have been hidden. Other books have vanished almost entirely from my conscious mind. I recall reading Danilo Kis’s A Tomb for Boris Davidovich and liking it very much, but I cannot bring back a single aspect of the story.1 Where did it go? Could an association prompt it back to mind? I clearly recall the title, the author, and my feeling — admiration for the book — but that is all that has remained.

And yet, explicit memories, however dim, are only part of memory. There are implicit memories, too, which can’t be retrieved on demand but are nevertheless part of our ongoing knowledge of the world. A simple example is reading itself — a learned skill I do, but I can’t remember how I do it. The rigors of long ago, puzzling over letters and sounding out words, have disappeared as conscious processes. Another example of subliminal recollection is looking for a particular passage in a book. I take the volume off the shelf, often without any sense of where the passage is among the many hundreds of pages. I certainly have no number in my mind, but once I have the object in hand, I am able to go directly to the paragraph I want. My fingers seem to remember. This is a proprioceptive ability. Proprioception is our motor-sensory capacity to orient ourselves in space — our ability to negotiate our way into chairs, dodge obstacles, pick up cups, and remember unconsciously where the crucial passage is to be found.

Cognitive scientists often talk about encoding, storage, and retrieval in relation to memory. These are computer metaphors that only approximate the actual experience of remembering and, I would argue, distort it as well. There are no warehouses in our brains where material is stored and waits to be retrieved in original form. Memories aren’t photographs or documentary films. They shift over time, are creatively and actively perceived, and this applies to the books we remember as well. They dim in time and may mutate. Others seem to imprint themselves deeply. Books, of course, are made only of words, but they may be recalled in images, feelings, or in other words. And sometimes we remember without knowing we remember. This insight is hardly new. In The Passions of the Soul (1649), Descartes claimed that a terrible episode in childhood could live on in a person despite his amnesia for the event.2 In his Monadology (1714), Leibniz developed an idea of unconscious or insensible perceptions that we aren’t aware of but which can nevertheless influence us.3 In the nineteenth and into the early twentieth century, William Carpenter, Pierre Janet, William James, and Sigmund Freud all investigated unconscious memories, although memories of reading did not figure in their contemplations.

I recently reread George Eliot's Middlemarch. I had read it three times before, but many years had passed since my last encounter with the book. I had not forgotten the broad sweep of the novel or its characters, but I could not have reproduced each one of the multiple plots in detail. And yet, the act of reading the book again triggered specific memories of what would come later in the same text. Rereading became a form of anticipatory memory, of remembering what I had forgotten before I reached the next passage. This suggests, of course, that reacquaintance unearths what has been buried. The implicit becomes explicit. Cognitive scientists have a term called "repetition priming." They don't give their subjects eight-hundred-page novels but various verbal tasks, and then much later quiz them on a list that includes, for example, words they saw earlier and others not previously seen. It is clear that even without any conscious awareness of their earlier exposure to the words, participants in the studies have unconscious recall and perform better than if they had not been primed.

But no reading experience, even of an identical text, is the same. I discovered ironies in Middlemarch I had not fully appreciated before, no doubt the product of my advancing age, which has been paralleled by the internal accumulation of more and more books that have altered my thoughts and created a broader context for my reading. The text is the same, but I am not. And this is crucial. Books are either unleashed or occluded by the reader. We bring our life stories, our prejudices, our grudges, our expectations, and our limitations with us to books. I did not understand Kafka’s humor when I first read him as a teenager. I had to get older to laugh at “The Metamorphosis.” Anxiety can also block access to books. The first time I tried to read Joyce’s Ulysses at eighteen, I worried so much about my ignorance that I couldn’t get through it. A couple of years later, I told myself to relax, to take in what I could, and the novel has become a vivid jumble of visual, sensory, and emotional memories that are dear to me.

Some readers read a book and wish it had been another, one closer to their own lives and concerns. Writers have the fortune (and sometimes misfortune) to encounter their readers, either in person or through reviews and academic articles. A reviewer of one of my books, The Shaking Woman or A History of My Nerves (which is about the ambiguities of diagnosis and how illnesses are framed in various disciplines) was annoyed because I did not address the sufferings of those who care for sick people. This particular subject was so entirely outside the book’s argument, I couldn’t help wondering if there wasn’t some personal reason for the reviewer’s irritation: she wanted a book about caretakers, not about the people stricken by an illness. Sometimes books get scrambled in memory. Not long ago, a friend of mine told me she had reread Catch-22, expectantly waiting for her favorite scene. It never came. Her guess was that two books had mingled in her mind. And the beloved passage? To what novel did that belong? She couldn’t remember.

Openness to a book is vital, and openness is simply a willingness to be changed by what we read. This is not as easy as it sounds. Many people read to solidify their own views. They read only in their own fields. They believe they know what a book is before they have opened it or they have rules they imagine should be followed and react with dismay if their predictions are dashed. To some degree, this is the nature of perception itself. Repetitive experience creates expectation, which shapes how we perceive the world, books included. In recent years a lot of work has been done on something called “change blindness.”4 Most of these experiments involve visual scenes in which large numbers of people fail to notice significant changes. For example, in a film: after a cut, two cowboys switch heads. Buñuel’s That Obscure Object of Desire makes use of this form of ingrained expectation: two very different-looking actresses play the same role, but many viewers failed to notice for quite some time that one woman was not the other.

Reading has its own forms of change blindness. We necessarily come to books of a particular literary genre — detective fiction, romance novels, memoirs, and autobiography — with prejudgments about what the work will be. And if we don’t pay attention, we may miss essential departures from the form and fail to recognize what is there. Similarly, ideas of greatness or badges of honor, such as prizes attached to a book, predispose readers to think well of what they are reading. As a high school student I remember reading and disliking Archibald MacLeish’s poetry. I remember thinking that something was wrong with me because the man had won every literary prize it was possible to win in America. I now believe my early opinion was right. I also no longer feel alone. MacLeish’s star has plummeted.

It also happens, however, that I recognize a writer’s intelligence or her fluid and elegant style, but I am left feeling very little else. Such books seem to evaporate almost immediately after I have read them. Experiences of powerful emotions linger in the mind; experiences of tepid ones don’t. Great books, it seems to me, are distinguished by an urgency in the telling, a need that one can feel viscerally. Reading is not a purely cognitive act of deciphering signs; it is taking in a dance of meanings that has resonance far beyond the merely intellectual. Dostoyevsky is important to me, and I can place him in Russian intellectual history. I can talk about his biography, his ideas, his epilepsy, but that is not why I feel so close to his works. My intimacy is a product of my reading experiences. Every time I remember Crime and Punishment, I relive my feelings of pity, horror, despair, and redemption. The novel is alive in me.

But books may also return from thought's underground into the daylight without our knowing where they have come from. I know that when I write, the books I have read are implicated in what I am composing. Even novels I have forgotten may come to play a role in the unconscious generation of my own texts. Exactly how books are carried inside us after we have read them is not well understood and varies from person to person. Most of us are not savants. Except for poems or passages we have actively memorized, the books we read are not fixed in recollection as complete texts that we can turn to as we would a volume on the shelf. Books are made between the words and spaces left by the writer on the page and the reader who reinvents them through her own embodied reality, for better and for worse. The more I read, the more I change. The more varied my reading, the more able I am to perceive the world from myriad perspectives. I am inhabited by the voices of others, many of them long dead. The dead speak, and they speak in shouts and whispers and in the music of their poetry and prose. Reading is creative listening that alters the reader. Some books are remembered consciously in pictures and words, but they are also present in the strange, shifting rooms of our unconsciousness. Others, which for some reason have no power to rearrange our lives, are often forgotten entirely. The ones that stay with us, however, become us, part of the mysterious workings of the human mind that translates little symbols on a page into a lived reality.

2011

STIG DAGERMAN

The snakes are loose.

Robert Mitchum in Crossfire (1947), directed by Edward Dmytryk

THE SNAKE IS A NOVEL of hallucinatory urgency written by a person barely out of his adolescence.1 Stig Dagerman was twenty-two years old when the book was published in 1945, and this fact alone makes it a rare work in the history of the novel. Poets, musicians, mathematicians, and visual artists often bloom young, but great novels have usually been produced by somewhat older people. Like F. Scott Fitzgerald, who published This Side of Paradise when he was twenty-four, Dagerman hit it big fast, and his early genius has become part of his identity as a writer.

But this is received knowledge. What is the power of The Snake? When I opened the book for the first time and began reading, I confess I felt hammered by the metaphors and similes that came down on me hard and fast, one after the other. I asked myself if I had run into a hopped-up European version of Raymond Chandler and his tough-guy prose, but as I read on, it became clear that something altogether different was happening. This is a text in which the metaphorical and the literal mingle to such a degree that by its end, the two have merged entirely. The process begins in the introductory paragraph of the novel. Dagerman’s narrator views the train station of a “sleepy hamlet” on a burning-hot day, a town which is personified when it gets a “dig in its flank,” a dig that presumably wakes it up. The theme of sleep, dreams, and stupors is under way from this sentence on and will return again and again before the novel ends.

In the paragraphs that immediately follow, the reader is introduced to further tropes that will dominate the book and from time to time metamorphose into actual creatures and objects: the old woman at the station “with her quick rat-like eyes” who becomes “Rat-Eyes”; her grotesque companion (later identified as Irene’s mother), has a tongue that sways in her mouth “like the head of a snake”; and the train that cuts the silence “like a razor blade.” Rodents, snakes, and cutting instruments will reappear relentlessly in multiple guises and incarnations, as will images of mouths and throats, of suffocation and strangulation, of biting and the fear of being bitten, of suppressed or actual screams, of silence and speech.

The novel is structured as a concatenation of stories told from varying perspectives. It begins with its longest section, “Irene,” a third-person account, closely identified with the heroine but which also enters the mind of the sadistic soldier Bill and, briefly, Sergeant Bowman, who is terrified of the snake Bill shows him. The following section, “We Can’t Sleep,” about half the length of the first, is a collective narrative, told in the first person plural, but which incorporates the stories of individual men, as the soldiers lie awake on their bunks in a state of shared insomnia and dread, after which come five additional third-person tales, all titled, and none of which is more than half the length of the novel’s second section. The five stories revisit individual conscripts whom we have met earlier and do not follow a chronological timetable. In this way, the structure of the novel’s second half can reasonably be called snakelike; it moves, not straight ahead, but slithers in and out and around the characters’ narratives.

The two halves of the novel are bound by place and by a literal snake that appears and disappears in the camp, terrifying the men who feel it is lurking somewhere near, ready to strike. And even after it is found dead, one of the soldiers, Gideon, is startled to realize that his fear is not extinguished; it lives on. The snake’s metaphorical aptness for Sweden’s precarious neutrality during the Second World War is clear, but what interests me is Dagerman’s evocation of embodied dread, a human state that is notoriously difficult to pin down because, as Gideon notices, it needn’t have an object. Anxiety’s object can be nameless.

In Inhibitions, Symptoms, and Anxiety (Hemmung, Symptom und Angst) (1926), Sigmund Freud writes about the ambiguity of angst: "What we clearly want is something to tell us what anxiety really is, some criterion that will enable us to distinguish true statements from false. Anxiety is not so simple a matter."2 Freud changed his theory of angst in this book, but the central thought that applies here is that the person suffering from anxiety has been blocked from releasing a necessary discharge of energy, without which he can't find satisfaction. The feeling of anxiety is a signal of danger, a danger, Freud argues, we save ourselves from through repression.

The characters in The Snake have a bellyful of seething anxiety. Both their emotions and their words are smothered by all manner of restraints. Eventually, however, the internal pressure mounts, becomes unbearable, and the characters give in to volcanic outbursts. Besieged by jealous thoughts that Bill is with her rival Vera, Irene "holds a [figurative] knife to her throat to prevent her from saying anything…" Later, after she has pushed her mother off the train, she fears "being suffocated, of not getting enough air to her voice…" After being raped or nearly raped by the butcher boy, she breaks out in hysterical laughter. The fear and guilt she suppresses are compared to a little rodent inside her, gnawing away at her insides, but finally it must be let out. "Her terror is the little animal that nothing can keep shut in any longer. All of a sudden, she screams, in fact it's the little animal that screams." Bill presses down on his opponent during a fight "in order to loosen the scream" that will clinch his victory, but instead he sees the man's strangled tongue, one that resembles the earlier snakelike tongue of Irene's mother and is unmistakably phallic: "He could see his tongue being sort of pressed out of his mouth, stretching out like a neck." The soft-spoken, well-mannered girl in "The Mirror" reaches her limit and howls at the dense Lucky, "Don't you see, you idiot, I'm blind! Blind! Blind!" In "The Rag Doll," the boy whom Sorensen could have saved from sexual molestation by a predator returns home and, in a scene of genuine horror, vomits his guts out on the man who should have acted to help him. Gideon, the outsider, tormented by his fellow conscripts, suppresses the scream of his mounting anxiety. After he is brutally attacked by them, he lies paralyzed on the floor long after his cohorts have done with him. "Then he screams."

The sexual danger that runs through the stories turns on a need for and terror of orgasmic release. Irene’s vacillations between inhibition and freedom, between soporific boredom and arousal, between her dread of and masochistic attraction to Bill are subtly evoked in a kind of Dagermanian phenomenology of embodied emotion and sensation that tracks her shifting internal realities in an ongoing present. These fluctuations in the characters between lassitude and longing, silence and speech, are continually explored through the lens of their immediate experience. And for many of them this dialectical tug of feeling crescendos in an act of sudden violence. When the reader first encounters Irene, she is lying naked in bed in a state of sensual torpor and moral ambivalence, which continues as she talks to Bill through the window. Her lax mood ends abruptly when Bill hurls his bayonet into the wall that separates them, bites her mouth when they kiss, and then she cuts her wrist on the bayonet’s blade. He also nips his other conquest, “quickly as a razor blade, he bites Vera’s ear.” The sexual violence of “The Rag Doll” is never seen, but it is ominously portended when the pederast bribes his victim with a knife in a sheath. The myriad associations prompted by the knives in the novel, both metaphorical and real, of cutting, butchery, wounds, sexual sadism, and running blood, permeate the text. At one moment the earth itself appears to have been gored: “Then he comes into the dense, suffocating spruce forest, where clusters of berries have been ripped out of the ground, which seems to be bleeding with deep black wounds.”

The novel’s insistent tropes move seamlessly from the subjectivity of a single character to personifications of a town or a landscape as asleep or bleeding. Accompanying this metaphorical motion is a contagion of feeling that moves from one character into another and blurs the boundaries between them. The fear is individual and collective: it bleeds. Without Dagerman’s psychological acumen, however, I don’t think the roiling metaphors and linked stories would have the power they do. He is a master of the momentary, of the fleeting emotional and ethical muddles we all find ourselves in, and then wonder what on earth has happened. He is also sensitive to the ongoing delusions between self and other and how often we confound the two. In “The Mirror,” Lucky’s attraction to the girl is a projection and so, ironically, a form of blindness:

Lucky suddenly felt sorry for her, because she was so alone. It was his own self-pity that spread to her. He’d worked himself into one of those moods now — everyone around him seemed to have a companion they could exchange ideas with; he was the only one who was so wretchedly lonely.

Lucky’s painful isolation infects his perception, and his identification with the pitiable position of the blind girl culminates in a vicious blast of self-hatred, and he smashes a mirror in the apartment, after which he is summarily beaten by his cohorts.

In The Snake the furious repression of confused and confusing sexual desire and its sadistic aspects epitomized in the narrative of Irene, Bill, and Vera merges with the more overt political messages delivered in the book’s second half and the collective failure to speak out and act, perhaps best articulated by Edmund through another image of restraint: “I feel pressurized … I feel there’s an iron band pressing into my skull when I find there are laws nobody’s asked me to accept that make me practically defenseless.” This message is further complicated by an additional statement. “You’re not wearing it because you deserve it, but because of so many people’s cowardice and your own inadequacy.” In the “Iron Band” chapter, it is words that go “to sleep in their sleeping bags” and must be shaken and woken up. Edmund finds his voice and his words; he speaks loudly, too loudly, addressing the angst he feels directly, claiming that no one’s is greater than his. As Joker listens to his comrade, he feels a sudden desire to speak, to achieve “clarity,” but the words are “strangled” and instead of coherent sentences, he emits gibberish, which is followed by a hallucinatory dream of two adjacent rooms, which are contained in Joker himself — a hiccoughing, chortling, jiggling living room and a second threatening angst-room with chairs that speak and fear that hovers near the ceiling. The double rooms clearly address the political present: one is full of drunks, furniture, a radio; the other is an impoverished place rife with paralyzing dread, but the fantasy is too bizarre to be agitprop or to be read only in terms of the war.

In The Concept of Anxiety, that paradoxical, ironic, parodic, difficult text by Søren Kierkegaard, the pseudonymous Vigilius Haufniensis famously yokes anxiety and dizziness.

He whose eye happens to look down into the yawning abyss becomes dizzy. But what is the reason for this? It is just as much his own eye as the abyss, for suppose he had not looked down. Hence anxiety is the dizziness of freedom, which emerges when the spirit wants to posit the synthesis and freedom looks down into its own possibility, laying hold of finiteness to support itself. Freedom succumbs in this dizziness. Further than this psychology cannot and will not go. In that very moment everything is changed, and freedom when it again rises sees that it is guilty. Between these two moments lies the leap, which no science has explained and which no science can explain.3

The Concept of Anxiety is a maddeningly complex examination of original sin and the effects of the fall: the alienation from nature fundamental to human beings, and the entangled questions of guilt, innocence, and freedom. The spirit is that which is linked with God and the infinite; the body and the psyche are tied to the finite. For Kierkegaard, as for Freud, anxiety serves as an inward subjective signal and needs no object. There is something ambiguous about dread itself, something that can't be pinned down. Science has no access to this reality because it posits only a third-person objective view, and the "leap" is not rationally explicable.

Dagerman's snake necessarily evokes the story of the fall with its serpent, Eve, Adam, the original loss of innocence, and the insistent question of free will. In fact, the book may be read as a meditation on what it means to act freely. Joker's hallucination of anxiety is the dizziness of freedom. The two rooms do not remain separate but merge in a terrifying image of collapsing walls, wild laughter, and blurring borders, in the monstrous confusion of unbearable dread. When Joker wakes from his horror dream, in which he feels he will burst apart, he again longs to speak his mind but doesn't. He does not embrace his freedom or address his guilt. He does not act but remains silent in the world of his fantasy, only imagining that he has spoken and that his comrades have answered him by assuaging his sense of culpability with the words "You don't need to have a guilty conscience." The ethical dilemma, the parsing of innocence and guilt, is not simple, and, as a reader, I have immense sympathy for Joker, for his confused wish to be aided by his companions and somehow to articulate the complexity of his feelings, and for his inability to get the words out. Guilt spreads, too. It seeps into the whole culture and stains everyone.

It does not matter whether Dagerman read Kierkegaard or not, because in the early forties when he wrote The Snake, Kierkegaard was in the air, reread through the lens of Existentialism, a name mostly rejected by those who were labeled with it, but which nevertheless serves a useful purpose here because I am talking about ideas that seem to circulate like winds and blow into people’s ears as if no reading were necessary. When I was at Columbia University in the late seventies and early eighties, I sometimes felt that French theory had so permeated the humanities departments that one could simply stand on campus and inhale it. Such was existentialism in the early forties.

Although Sartre remained in denial about his debt to Kierkegaard, the Dane is all over Being and Nothingness (1943) in Sartre's references to "vertigo" and "anguish" and in his insistence on freedom as an unavoidable human fate.4 But it is not only existentialism that infected the author of The Snake. Eras have moods, too, and in light of the monstrosities of Nazism, it is hardly strange that a fateful pessimism hung over the ideas and the art of the day and, in the years that followed, over American film noir, for example, which borrowed from European cinema to create its own dark stories of human brutality. Dagerman's interest in American prose writers is well known, but their influence is reinvented in The Snake. Hard-boiled metaphors become a philosophical device, one that emphasizes, rather than limits, ambiguity.

Dagerman's novel is a cry for individual responsibility and freedom, as well as a spirited work of resistance to the conventions of bourgeois life, which restrain and stupefy people. And it is a call for free thought and speech to clarify what should be done. It is acutely aware of the unspeakable horrors of the war, the sadism, the blood, and the destruction, but none of these explains the book's power. The allegory, the symbolism, the bleeding metaphors work because they are embodied in characters and scenes of genuine psychological force and nuance, because the visual world of the novel is astutely observed and refuses banal conventions, and because the narration has an intensity and drive that is irresistible. Also, the book is shot through with ironies and humor. The author even makes fun of his own metaphorical indulgences through his alter ego in the novel, Scriber, the scribe, the author character. In the second chapter, we are told that Edmund has poked fun at Scriber as a guy who compares a fire extinguisher to a bottle of India ink and a bottle of India ink to a fire extinguisher. "But just think if he needs both the India ink bottle and the fire extinguisher in the same sentence. How's he going to manage it without mixing them up, without the firemen starting to spray ink on a fire and the artist drawing his sketches with carbon dioxide?" A welcome moment of self-referential levity.

In the last chapter of The Snake, “The Flight That Didn’t Come Off,” Scriber falls. After an unknown quantity of beer and an extended argument with two interlocutors, a cultural critic and a poet, during which Scriber has insisted that “the tragedy of modern man is that he no longer dares to be afraid,” he acts on an impulse to prove a point. He wants to “carry his reasoning to its logical conclusion.” Scriber climbs out the window and moves along the parapet. When the poet calls for him to come back because he might fall, Scriber says, “No bloody fear.” Surely this is a moment of hubris, not the dizziness of freedom for the one who looks into the abyss. There is something wrong with his fearlessness. Has he not been arguing for the merits of fear and dread? He loses his footing, and the last thing he hears is not his own scream but the cry of the prostitute who is standing in the doorway below.

The snake in the garden does not cause the Fall. He is only the tempter. But I don’t believe that Scriber’s fall can be read in any single way, nor do I think that the end evokes despair. It is, above all, ambiguous. Scriber’s fall is also a stupid accident. An energetic, argumentative, tipsy young man finds himself out on a ledge and falls to his death for no good reason. That is life, too — the sudden slip into the abyss. And there is irony in Scriber’s fickle fearlessness at the end of a novel that treats anxiety as its theme. Only a few pages before, Scriber has insisted that his own fear “is the greatest in the world.” We are not rational creatures. In the best art something always escapes us and bewilders us. If it didn’t, we would never return to it.

2010

THE ANALYST IN FICTION: Reflections on a More or Less Hidden Being

THE FOLLOWING PASSAGE IS FROM an early draft of my most recently published novel, The Sorrows of an American. The narrator, Erik Davidsen, is a psychiatrist/psychoanalyst who lives in New York City. I have rescued the deleted passage from my closet, home to dozens of manuscript boxes stuffed with rejected material, because although it never found its way into the finished book, it speaks to the uneasy position the psychoanalyst occupies in contemporary American society.

The story of psychiatry has been bedeviled by the problem of naming from the beginning, a tortured puzzle of herding a diffuse cluster of symptoms under a single designation. The wounded psyche is not a broken leg. An X-ray won’t reveal the fracture, and the brain images from PET scans and fMRIs cannot show us thoughts, only neuronal pathways. What invades or grows within a mind and causes people to suffer is not, as in a case of the measles, a single pathogen. Despite its earnestness and longing for precision, psychiatry’s Bible, the DSM, now in its fourth edition, is a muddle. “Disorder” is the word of choice these days. Mental illness is a state of chaos and the job of mental health professionals is to restore order by all means at their disposal. New disorders are added with each edition of the DSM; others fall away; their presence or absence isn’t necessarily founded on new science, but on consensus and, for lack of a better word, fashion. Half of Sonya’s classmates have been diagnosed with ADHD. The DSM begins its description like this: “Attention Deficit/Hyperactivity Disorder is a frequent pattern of inattention and/or hyperactivity-impulsivity that is more frequently displayed and more severe than is typically displayed in individuals at a comparable level of development.” Typically is the word to notice. Exactly what is typical? I am not alone in thinking that thousands of American boys have been fed stimulants for no reason. I do believe that the disorder is real and that medicines can sometimes help, but its presence as an epidemic is a cultural phenomenon, the product of an evolving idea of what a normal child is supposed to be.

I have prescribed many drugs in my day and have seen their undeniable benefits. When screaming inner voices fall silent, a depression lifts or panics subside, the relief can be incalculable. I've also seen what the profession politely calls adverse effects: ataxia, blackouts, seizures, incontinence, renal crises, akathisia — the restless, wiggling sensations that make it impossible to sit still — and tardive dyskinesia — the tongue wagging, jaw rotating, hand and foot jerking caused by neuroleptics. The inability to achieve orgasm is such a common "side" effect of SSRIs, the drug of choice for the masses, that few doctors even bother to mention it to their patients. Insurance companies will pay for only short-term care, which means that after a brief interview or during a short hospital stay, a physician must assign a name to an array of often murky symptoms and prescribe a drug. Most American psychiatrists have become little more than prescription-writing machines, who leave psychotherapy to social workers. What has been forgotten in all this is how we draw the lines between one thing and another, that the word is not the thing. The problem is not a lack of good will among physicians. It is, as Erwin Schrödinger once mourned, "the grotesque phenomenon of scientifically trained, highly competent minds with an unbelievably childlike — undeveloped and atrophied — philosophical outlook."

I am also a psychoanalyst, a member of that beleaguered group of cultural outcasts who are only now regaining respect with the revelations of neuroscience. Psychoanalysis, too, has suffered from “hardening of the categories,” as a colleague of mine once put it, of treating metaphorical concepts as if they were chairs or forks, and yet, it is at its best a discipline that values patience and tolerates ambiguity. What happens between two people in the analytic room cannot be easily quantified or measured. Sometimes it cannot even be understood, but after years of practice I have become a man changed by the stories of others, a human vault of words and silences, of speechless sorrows and shrouded fears.

Erik’s use of the word outcast may be strong, but his view that for years psychoanalysis has been losing ground to drug-oriented psychiatry is a fact, and our culture’s representations of the analyst have suffered as a result. To see this clearly, one needs only to ask the question: How many absurd or demeaning caricatures of neuroscientists have you encountered in the media lately? Surely it would be easy to poke fun at some of their widely reported studies: a “God spot” discovered in the brain, for example, or fMRI results of Republicans and Democrats in the throes of “partisanship,” as if religion and American politics can be found in the temporal lobe or the amygdala, wholly isolated from language and culture. Neuroscientists often ridicule such research as examples of “a new phrenology.” But the doubts articulated by people inside the field do not reach the hordes of journalists eager to record the explorations of “the last frontier” and embrace the newest brain discoveries as if they were absolute truths handed down from some divine source.

I am deeply interested in the neurobiology of mental processes, in brain plasticity and its role in the development of every human being over time. But I do not believe that the subtle character of human subjectivity and intersubjectivity can be reduced to neurons. As Freud wrote in On Aphasia: A Critical Study, “The psychic is therefore a process parallel to the physiological, a dependent concomitant.”1 The conundrum of the brain/mind relationship is as mysterious now as it was when Freud wrote those words in 1892. Erik’s observation that the insights of neuroscience, some of which appear to confirm long-held psychoanalytic ideas about the unconscious, repression, identification, as well as the effectiveness of the talking cure, have helped redeem psychoanalysis is, I think, accurate. But it also reflects a truism: if you can locate an illness in some body part, it’s more real than if you can’t. Although this belief is philosophically naïve, it is nevertheless held by multitudes of people, including any number of doctors who have spent little time examining the taxonomies that shape their perceptions of illness and health.

Mass culture is often crude. The portraits of the analyst as a bearded, tight-lipped, aging character with a Viennese accent, a sly seducer hopping into bed with his clients, an egghead spouting jargon, a deranged monster, or merely an innocuous buffoon reflect various clichéd, and often hostile, views of psychoanalysis that have become familiar to many of us. But silly as these images are, they may also unearth a genuine suspicion of a discipline that, despite its enormous influence on popular thought, remains fundamentally misunderstood.

Priests, physicians, and psychoanalysts are repositories for, among other things, secrets, and the need for trust and the fear of betrayal are always present when a secret is told. Like the priest, the analyst inhabits a realm outside the ordinary social world. He or she is neither friend nor family member but nevertheless becomes the container of another person’s intimate thoughts, fantasies, fears, and wishes — precious materials that must be handled carefully. There are forbidden behaviors in the psychoanalyst’s office, but no subjects that cannot be spoken about.

The patient’s rare freedom of speech in a sacrosanct space has provided a number of writers with the perfect frame for the fictional confession. The very first psychoanalytic novel, Italo Svevo’s Zeno’s Conscience (1923), opens with a preface written by our hero’s analyst: “I am the doctor occasionally mentioned in this story, in unflattering terms. Anyone familiar with psychoanalysis knows how to assess the patient’s obvious hostility toward me.”2 Holden Caulfield of Salinger’s Catcher in the Rye (1951) unburdens himself to a hidden psychiatrist. In Lolita (1955), Nabokov, like Svevo, includes a “Foreword,” written by one John Ray, Jr., Ph.D., who offers Humbert Humbert’s story as a case study, and while acknowledging its author’s literary gifts also excoriates him as “a shining example of moral leprosy.”3 Philip Roth’s Portnoy’s Complaint (1969) also includes a brief introduction in the form of a dictionary entry, which defines the term “Portnoy’s complaint” and refers to the doctor who has coined the name for this particular “disorder,” O. Spielvogel, author of an article, “The Puzzled Penis.” After this little parody, the reader meets the garrulous narrator, who for 270 pages prattles, expounds, and fulminates at his analyst, who then famously utters a single line at the end of the book: “So (said the doctor) Now vee may perhaps to begin. Yes?”4

These books are essentially bracketed monologues. There is no back-and-forth, no dialogue, no world made between therapist and patient. They are not fictional versions of therapeutic practice but narratives that employ psychoanalysis as a literary device to unleash an uncensored first-person confession. The analyst or psychologist remains mostly outside the narrative. Svevo’s doctor, as he himself points out, plays only a small role in the pages to come. He also proclaims that he is publishing the memoirs “in revenge” for his patient’s untimely departure from treatment, and adds the vituperative quip “I hope he is displeased.” Nabokov’s condescension to American academics displays itself, not only in the text of his foreword, but in the addition of Jr. after his psychologist’s name. In Salinger and Roth, the analyst is a remote, hidden being, not a you for the narrative I. Salinger’s psychiatrist never speaks, and Roth’s is never answered. They are objects, not interlocutors. The image of a distant, implacable doctor who nods, says “Ah” or “Vell,” and only occasionally offers an abstruse comment, usually involving complexes or fixations, has become a stereotype, but it is one rooted in the history of psychoanalysis.

The analyst as a neutral figure has long struck me as a flawed idea, but then so does the notion of objectivity in the sciences. Is it possible to drain any person of subjectivity, whether she is an analyst or a researcher in a laboratory? Even in the lab, human beings must interpret results, and those interpretations cannot be expunged of the interpreter’s thought, language, and culture. There is no third-person or bird’s-eye view detached from a breathing bodily presence. Despite the fact that they are not free from human prejudice, the experiments of the hard sciences can be controlled and repeated over and over again. This is not true of the nuanced atmosphere of the analytic environment. From its early days, psychoanalysis has had to defend itself against the accusation that mutual suggestions passing between analyst and patient would contaminate the process and destroy its legitimacy. As George Makari points out in Revolution in Mind, “In the hopes of containing the analyst’s subjectivity, Freud created the ideal of an analyst whose desires and biases were held back. But there was a hitch. The imagined analyst floating in evenly suspended attention must be without resistances, without blind spots.”5 In other words, the ideal demands that the analyst be superhuman, that his or her first-person reality be transformed into the disembodied third-person view heralded by science. It is not hard to see how this perfectly neutral floating personage might be employed for comic or satirical purposes, or how that same withdrawn and mostly silent figure might vanish from a story altogether.

Although some psychoanalytic theorists, such as Kernberg,6 continue to champion an ideal neutrality, many have let it go for a more attainable posture, which recognizes that therapy is an intersubjective process, but not one between equals. The effective analyst holds back, maintains distance through her role, her professional attitude, her considered interventions. An analysis is necessarily hierarchical. The patient puts himself into the hands of an expert, but the substance of analysis is the patient and his inner life. The analyst’s thoughts become apparent only in moments, and only in relation to the patient. The analyst’s family, her joys, pains, and anxieties, remain hidden unless she chooses to share information for a particular purpose. If intimacy becomes truly two-sided, the treatment has failed. Alex Portnoy is free to rave, but his analyst isn’t. In some fundamental way, the psychoanalyst must be a mystery, a mystery filled by the patient’s loves and hates, emotions that can turn very quickly from one to the other. The most vulgar depictions of the psychoanalyst in our culture may be a form of splitting. The idol falls, and an evil demon takes his place. A truly human portrait of a working therapist, therefore, depends on a point of view that can accommodate ambivalence. It must also address the problem of the between, the charged space that is neither analyst nor analysand, but a mutual creation. This is not an easy territory to articulate. It is not subject and object, but two subjects who necessarily mingle. This is a human reality, which analysis magnifies, and which the history of the discipline has tried to find words for: transference and countertransference, Bion’s container and contained, Winnicott’s transitional object all touch on this bewildering area of the middle. Novels use many languages that slip and slide. Their diction moves from high to low and in and out of the voices of different characters. As a patient does in analysis, the writer searches for the words that will have a true meaning, not ultimately true, but emotionally true.

Works of fiction that depict the back-and-forth of psychoanalysis are quite rare. As Lisa Appignanesi pointed out in an article in The Guardian: “Shrinks in novels, if they appear at all, are largely devoid of that very inner life which is meant to be their trade; they often strut the fictional stage as grotesques.”7 She mentions the appearance of psychiatrists and analysts in a number of novelists’ works and finds the portraits largely hostile until the arrival of a couple of recent novels: Hanif Kureishi’s Something to Tell You and Salley Vickers’s The Other Side of You. Although she mentions Virginia Woolf, Vladimir Nabokov, Doris Lessing, Iris Murdoch, Philip Roth, D. M. Thomas, Sylvia Plath, Simone de Beauvoir, and Erica Jong, she does not write about F. Scott Fitzgerald’s Tender Is the Night, published in 1934. Fitzgerald’s psychoanalyst, Dick Diver, is not a figure of ridicule, an empty sounding board, or an authority introducing a “case.” Of all the novels I have read that treat analysis, Fitzgerald’s is the one that most deeply enters the land of the between. The novelist’s knowledge of psychiatry came mostly through the many physicians who treated his wife, Zelda, including Eugen Bleuler, who diagnosed her with schizophrenia. The novel’s strengths do not come from a mastery of psychoanalytic theory, however, although the ideas of transference and countertransference clearly caught Fitzgerald’s attention. Dr. Diver marries a rich patient, Nicole Warren, whose illness is central to the story, and the two are caught in an unsettling tug-of-war. Roles and personalities — doctor/husband/Dick, patient/wife/Nicole — merge, dissolve, and disentangle themselves over the course of the book, and for a time, Dick and Nicole refer to themselves by a single name that suggests a borderless psychosis: Dicole.

For me, the most wrenching passage in the book, however, takes place between Diver and another patient in the Swiss clinic where he works. This nameless woman is described as “particularly his patient.” She is an American painter who suffers from an agonizing skin affliction, which has been “unsatisfactorily catalogued as nervous eczema.”

Yet in the awful majesty of her pain he went out to her unreservedly, almost sexually. He wanted to gather her up in his arms, as he had so often done with Nicole, and cherish even her mistakes, so deeply were they part of her. The orange light through the drawn blind, the sarcophagus of her figure on the bed, the spot of face, the voice searching the vacuity of her illness and finding only remote abstractions.

As he arose the tears fled lava-like into her bandages.

“That is for something,” she whispered. “Something must come out of it.”

He stooped and kissed her forehead.

“We must all try to be good,” he said.8

The goodness is all Fitzgerald. Throughout his work, there are repeated strains of longing for the moral verities of his Midwestern childhood, a paradise of lost goodness that bears little resemblance to the founding Oedipal myth of Sigmund Freud. But Fitzgerald’s description of Diver’s “almost sexual” inner motion toward his patient, his aching compassion, and his understanding that a wide chasm lies between her speech and her suffering articulate truths about psychoanalytic work. Words are often circling the wordless, seeking an explanation for pain that will bring sense to what feels like nonsense. In art, as in psychoanalysis, what feels right must always have resonance, even when it is impossible to fully explain why a passage has taken on that strong emotional echo. It is no accident that the woman in Diver’s care is a painter. Just before he feels the urge to take her into his arms, he meditates on her fate, one he feels can never include her work.

The frontiers that artists must explore were not for her, ever. She was fine-spun, inbred — eventually she might find rest in some quiet mysticism. Exploration was for those with a measure of peasant blood, those with big thighs and thick ankles who take punishment as they took bread and salt, on every inch of flesh and spirit.9

While even a cursory survey of the lives of innumerable artists could easily serve as a disclaimer to this statement, its truth value is not what makes it compelling. Fitzgerald’s creatures are generated from the dreamlike action that produces fiction out of lived experience. As is the case with many writers, he robbed his own life and transfigured it. Fitzgerald was adamantly opposed to his wife’s ventures into the arts, to her writing, dancing, and painting, and one obvious way to read this passage is to turn it into a fictionalized explanation of his resistance: She wasn’t strong enough. Fitzgerald may even have been thinking about Zelda when he wrote the paragraph. And yet, I believe that the scabrous, bandaged woman on the bed, whom Diver feels for so intensely, is also an image of himself, and by extension his creator. Dr. Diver’s narrative wends its way toward alcoholism and failure. Fitzgerald’s drinking was legendary. After Tender Is the Night, he never finished another novel. He wasn’t strong either. And although he sometimes feared his femininity (he was homophobic), Fitzgerald, like Henry James, had an imagination as feminine as it was masculine. The miserable spectacle of artistic failure finds itself in the body of a woman too weak for work. Of course, if my little interpretation demonstrates anything, it is how quickly the reader of any literary text becomes like the analyst, and how much we writers of fiction are often unconscious of when we write.

In 1933, when Fitzgerald was writing his book with an analyst as hero, the image of the psychoanalyst had not yet hardened. The Second World War and its devastation lay ahead, and the field was still in the midst of an often messy and fractured creativity. A serious, knowledgeable portrait of a postwar analyst can be found in Simone de Beauvoir’s character Anne in The Mandarins (1956). She is given extended first-person narrations inside the novel, and her patients are all people who have been traumatized by the war in some way.

The white-haired young woman was now sleeping without nightmares; she had joined the Communist Party, had taken lovers, too many lovers, and had been drinking immoderately. True, it wasn’t a miracle of adjustment, but at least she was able to sleep. And I was happy that afternoon, because little Fernand had drawn a house with windows and doors; for the first time, no iron fence.10

De Beauvoir was well versed in psychoanalysis, and Anne’s descriptions of her patients ring with authenticity. The passage above makes it clear that although she does not expect miracles, she takes pleasure in small successes. Her attitude is strictly professional. And yet, perhaps because the novel is a roman à clef, modeled closely on de Beauvoir’s life with Sartre, their circle, and her love affair with the American writer Nelson Algren, Anne thinks about her patients too little and leaves them too easily. There is nothing in The Mandarins about the psychoanalytic encounter that comes close to the depth of feeling, the soaring moment of identification Diver feels when he stands beside his ailing patient.

When I began writing as Erik Davidsen, I was not thinking of literary precedents for his character. I thought of him as my imaginary brother, a man who worked at a job I can imagine having had in another life. What if I had grown up with a brother, I wondered, born to parents much like mine? What if, rather than four daughters, there had been one son and one daughter? And because I was writing the novel after my father’s death or rather out of his death, a character like my father and grief like my grief, but also not like it, became part of the narrative. I transformed my experience, changed sex, wrote in a different voice, found a doctor self and several patient selves. Being Erik meant having a fictional practice. Writing the sessions between my narrator and the people he treats came from places in me both known and unknown.

I have been reading about psychoanalysis since I was in high school, but being Erik also meant immersing myself in psychiatric diagnoses, pharmacology, and innumerable neuroscience papers. I also read countless memoirs of mental illness, some good, some poor; interviewed several psychiatrists and analysts in New York City; joined a discussion group about neuropsychoanalysis led by a psychoanalyst; and began teaching weekly writing classes to psychiatric patients at Payne Whitney. That is the known part. Books, conversations, and perceptions enter us and become us.

The unknown part is far more diffuse and difficult to reach. I cut the passage I reproduced in this paper because it was a bit of contemporary sociology that did not advance the narrative, but also because I wanted the novel to take place mostly on the terrain of a man’s inner life, a psychic landscape inhabited by both the living and the dead. Erik knows he is not neutral, knows that psychotherapy happens in the land of Between, that wilderness between you and me. Although the patient’s narration must dominate, the analyst can steer, probe, wonder, and interpret, while he or she maintains a thoughtful, sympathetic professional distance. A holding environment is not just a space for confession; it is where truths can be discovered and narratives remade.

The sense of hearing was crucial to the novel. The analyst listens, and as I wrote, I realized that Erik was extremely sensitive to sounds, not only to words spoken, but to the intonations and cadences of the human voice, as well as to pauses and silences, and that his auditory acumen extended to the myriad nonhuman sounds of the city. His patients are part of his inner world, and he thinks about them. They variously hurt, arouse, bore, move, and gratify him. During sessions, he has sudden mental images, associates to words his patients use, and examines his own emotional response to what he hears and sees. His experience with his patients is not exclusively intellectual. Unarticulated tensions bristle in the air. Meanings are confused. Ghosts enter the room. Erik loses his balance with a borderline patient, Ms. L., and seeks advice from Magda, his training analyst. He breaks through with another patient after a long period of stasis. I suspect that it is the multifaceted reality of being a psychoanalyst that is so seldom caught in fictional portraits. The analyst as purely cerebral or as convenient deposit box leaves out the substance of psychoanalysis: the unconscious. Discussing the dynamics of transference in a symposium talk called “Counter-Transference,” D. W. Winnicott comments on transference through a hypothetical statement, “You remind me of my mother”:

In analysis the analyst will be given clues so that he can interpret not only the transference of feelings from mother to analyst but also the unconscious instinctual elements that underlie this, and the conflicts that are aroused, and the defenses that organize. In this way the unconscious begins to have a conscious equivalent and to become a living process involving people, and to be a phenomenon that is acceptable to the patient.11

Obviously, writing fictional versions of psychoanalytic sessions is not the same as being in analysis. There is no real other in a novel, only imagined others. But writing novels is nevertheless a form of open listening to those imagined others, one that draws on memories, transmuted by both fantasies and fears. And it is an embodied act, not an intellectualization. Unconscious processes struggling toward articulation are at work in both psychoanalysis and art. It is impossible to fully understand how a book comes about, because the words are born elsewhere. In fact, when a work of fiction is going well, it seems to write itself. It is no longer a question of authorship, but midwifery — allowing a birth to take place.

Writing as Erik, I felt an underground music that determined the rhythms of the book’s form. I knew I was writing a verbal fugue, point and counterpoint, themes chasing themes, and variations on them that kept returning: telling and not telling, listening and deafness, parents and children, the past in the present, one generation’s sorrows living on in the generation that follows it. And so, I have come to understand that it wasn’t only the parts of the novel that explicitly explored Erik’s relations with his patients that were about psychoanalysis, but that the book as a whole was generated from the discipline’s particular form of dialogue and search for a story that feels right and makes sense.

2010

CRITICAL NOTES ON THE VERBAL CLIMATE

EVERY POLITICAL MOMENT HAS A particular rhetorical climate. Language matters, not only because it expresses the dominant ideologies of a period, but because it creates, alters, and determines our perceptions of the world. We have been living for some time now in what I think of as bad linguistic weather, a verbal fog that is both contagious and damaging to political discourse in the United States. Manipulation of language for ideological purposes is nothing new. When it is effective, it inevitably creates an emotional response in the listener, a rush in the limbic system that calls on the deepest feelings we have as human beings. These appeals fall essentially into two categories: a call to empathy or a call to fear.

I remember listening to Martin Luther King as a child and being moved to tears by his words. Although I didn’t understand everything he was saying, I grasped what was essential. The minister was talking about human rights, arguing that freedom and justice for all were legal and moral imperatives. Racism and its shameful policies were a stain on the United States that had to be eradicated in order that all of its citizens might be free. Unsophisticated as I was at nine and ten, I found in King’s rhetoric a door to my empathic imagination. What would it be like if I were not a white girl but a black girl subject to indignities and cruelty for no reason? I knew I would be furious. I would fight for my rights. My emotional identification created a primitive but genuine political position, one that has developed but, at the same time, has not changed in any fundamental way.

The word freedom, as King used it, with its call for sympathy and imagination, has receded from public discourse. It has been replaced by its double, a word with connotations that inevitably evoke fear. George W. Bush has repeatedly used the word freedom in his speeches to signal alarm. We are protecting our “freedom” from “the enemies of freedom,” fighting the “evildoers” who “hate freedom.” It is well known by now that the rationale for the invasion of Iraq was bogus, that the administration continued to argue that Saddam Hussein was somehow mixed up with September 11, although this wasn’t true. Applying the well-worn propaganda technique of repeating falsehoods over and over, the Bush administration succeeded in convincing significant numbers of people in the country that these prevarications were true. More interesting to me, however, is why this worked. It’s much too easy to argue that Americans are just ignorant and will swallow anything. There are intellectuals, ideologues, and informed people on both sides.

When the president speaks to Americans about freedom, he is using a word that sounds good to just about everybody. The entry in Webster’s under freedom is long, but the first definition reads: “state of being at liberty rather than in confinement or under physical restraint.” Surely this is the most common sense of the word. I’m free, not in bondage, enslaved, or in jail. The word also resonates strongly with the founding principles of American government born of the Enlightenment, the freedom Immanuel Kant evoked in his 1784 essay, “What Is Enlightenment?”: “Enlightenment is the freedom to make public use of one’s reason at every point.” Before the philosopher arrives at this, however, he cautions that accepting such freedom is not a given.

Laziness and cowardice are the reasons why so great a portion of mankind, after nature has long since discharged them from external direction, nevertheless remains under lifelong tutelage, and why it is so easy for others to set themselves up as their guardians. It is so easy not to be of age. If I have a book which understands for me, a pastor who has a conscience for me, a physician who decides my diet, and so forth, I need not trouble myself. I need not think, if I can only pay — others will readily undertake this irksome work for me.1

This passage has lost none of its pertinence. Knee-jerk patriotism, calls to support the president simply because he is the president, literal interpretations of the Bible, are all versions of Kant’s “tutelage,” a state of mind that evokes feudalism, not democratic republics. And yet, we were all children once. Desires for parental protection, for letting someone else decide, for safety in a frightening world, aren’t bizarre. They are familiar, and I believe that many Americans after September 11 sought comfort in the paternalism that George W. Bush seemed to represent.

But the word freedom also connotes a deep-rooted American myth — that Wild West, out-of-my-face, doin’-as-I-please, gun-toting, anarchic fantasy that has little in common with Kant’s sober argument about maturity and the solitary work of an enlightened individual: independent thinking. The administration has been savvy enough to be unbothered by definitions of exactly what kind of freedom they’re talking about or for whom, but they created an idea attractive to many: a swaggering Father Knows Best shod in a pair of cowboy boots wielding a smart weapon. Accompanied by simple slogans in which freedom was incessantly linked to protection from the monsters out there, this image packed a visceral punch. But beneath the administration’s incantation for freedom lay wildly contradictory and irrational messages: we have brought freedom to Iraq, but Iraqis cannot walk in their own streets without fear, and it isn’t only Islamic extremists and former Baathists they must look out for but strained and confused American soldiers who let bullets fly in fits of fatal, if understandable, paranoia. The United States trumpets freedom, but in the name of that freedom curtails civil liberties at home, defies the Geneva Conventions for prisoners of war, and through ugly legalisms, sets in motion a justification for torture already taking place. The freedom to worship or not worship as one pleases means bringing the precepts of one religion into every aspect of public life. Freedom means suppression: if the conclusions of scientific research on global warming are at odds with the reigning ideology in Washington, alter them. If evolution, a universally accepted scientific theory, offends biblical literalists, throw the weight of power and legitimacy behind its faux alternative: intelligent design. All of these outrages were reported in the newspapers, and yet the president won an election and remained reasonably popular until the floods came roaring into the Gulf.

Bush’s endless repetition of the word freedom has swayed people who take comfort, not from an abstract political notion of protecting universal individual liberties under the law, but from a tribal mentality much older than the Enlightenment and the institutions in this country that came from it, one that reemerges when people sense danger from the Other, when the barbarians are howling at the gates. Fear is a powerful and mobilizing emotion, one that leaves little room for Kantian cogitations. No one in New York who saw the devastation wrought on September 11 would argue that those who planned and executed that horror aren’t dangerous, but the Bush administration has manipulated the terror we all felt that day to ostracize many of the people it is supposed to represent. The Other in Republican rhetoric isn’t limited to the radicals who murdered three thousand people in this city. Bush dehumanized them from the beginning. Over the years, the language of the Right has repeatedly exploited the real divisions in this country between Americans who live in a secular, urban world and those who live in a religious, often rural one. By casting liberals as effete and feminized (Schwarzenegger’s “girlie men” as well as the absurd denigration of John Kerry’s ability to speak French), unpatriotic (any criticism of administration policies), and godless (Republicans circulated a flyer during the last campaign claiming that liberals would ban the Bible), the Other has effectively become not only the rest of the world, but more than half the country.

We’ve been here many times before. A single example will suffice: the anti-immigrant movement of the 1840s and ’50s. In April 1844, an election circular was printed in the New York Daily Plebeian that began like this: “Look at the hordes of Dutch and Irish thieves and vagabonds, roaming about our streets, picking up rags and bones, pilfering sugar and coffee along our wharves and slips, and whatever our native citizens happen to leave in their way.” This little gem of fear-mongering goes on to mention “English and Scotch pickpockets and burglars,” “Italian and French mountebanks,” and last, but not least, “wandering Jews.”2 Although both comic and grotesque in hindsight, this text is a reminder of how, at every moment, arbitrary and entirely fictional thresholds are drawn among people for political gain. The Nativists, or “Know-Nothings,” as they were called, came from families who had all emigrated from somewhere at some point. The borders between us and them are continually sliding.

The need to divide is old, but the lines of those divisions keep changing. I’m not alone in feeling some amazement at the fact that a party that consistently supports giant corporations and tax relief for the very rich has been able to sell itself as populist. An entire book has been written on the subject: What’s the Matter with Kansas?3 Who are those people who voted against their own self-interest? Are they nuts or what? The Norwegian-American farming community where my father grew up and which I remember from my girlhood was decidedly left wing. Many of those second- and third-generation Norwegians suffered or were ruined entirely by the Depression. My grandparents lost much of their land to the bank when it foreclosed on their mortgage, and they had to stop farming altogether. The DFL, the Democratic-Farmer-Labor Party, was strong in Minnesota then, and my father, for whom the Depression remained a painful memory all his life and whose sympathies were inevitably with the downtrodden, was a wholehearted supporter of that coalition between rural and urban workers.

But there were prejudices in that world, too, a deep-seated anxiety about strangers, especially “city slickers,” who were suspected of feeling superior to “ordinary folks.” I remember conversations among some of the old farmers of my grandfather’s generation that cast bankers, rich people, and most urbanites in a half-light of moral doubt. Their “fanciness” wasn’t just foreign, it smacked of crookedness, of gambling, dancing, and all manner of corruption. But even when prejudice wasn’t operating at full tilt, isolating difference was vital to that community’s mode of being. The old people, all of Norwegian descent and all still Norwegian-speaking, habitually used qualifiers to signal that a person wasn’t of them: “Sven, the Swede,” they’d say, or “Fredrik, the Dane.” It wasn’t malicious, but it underscored the smallness of the world in which they lived. When my father was dying in a nursing home, he had frequent visitors, among them a Lutheran minister, who liked my father and was interested in his spiritual well-being. I was present at one of their conversations, during which the reverend mentioned a Bible study group he attended and brought up the name of one of its members. “A Methodist, you know,” he declared as an aside, a hint of disapproval in his voice. I couldn’t help smiling. Having spent twenty-seven of my fifty years in New York City with its inhabitants from all over the world, who speak dozens upon dozens of languages and subscribe to any number of religions, the distinction between Lutherans and Methodists struck me as rather fine.

The old people never lost their faith in the politics of the Left. They died believers, but many of their children and grandchildren, who inherited their prejudices, have moved to the Right. Playing on the age-old fear of malignant outsiders and foreigners, both those residing on American soil and elsewhere, has become the exclusive property of the Republican Party, and I understand how it happened. Egalitarian, anti-intellectual, paranoid feelings have long been tapped in this country; only the objects of those emotions change. “Evildoers,” “enemies of freedom,” and the exhortation that “If you’re not with us, you’re with the terrorists” are forms of political speech that make dialogue impossible. There is no legitimate response to such speech because anyone who counters with another thought has already been lumped with an inhuman enemy. In psychiatric patients, absolute polarities like those the president habitually makes are regarded as pathological, a form of dichotomous thinking often seen in patients with borderline personality disorder. The ill person is unable to tolerate ambiguity and insists on viewing the people in his life through an “all good” or an “all bad” lens. George W. Bush and his cohorts have been masters of angel/devil discourse, employing a language that gives no room for dialogue and necessarily distorts reality, which unfortunately is usually murky. This kind of speech that doesn’t recognize an interlocutor, a real human other, is speech without empathy, and it is startlingly similar to the rhetoric of the Muslim radicals who spew venom at the West and “the enemies of Islam.” Had the men who boarded those planes on September 11 been able to imagine their victims as people like themselves, they wouldn’t have been able to complete their mission.

No doubt there is something all too human about this phenomenon of splitting. The need for simple juxtapositions of good and evil, heroes and villains, is ubiquitous. It is the stuff of most Hollywood movies and many popular books. Nuance is discarded for easy clarity. It is certainly possible that George W. Bush actually views the world in these black-and-white terms, that his mind is as blunt and unrefined as his impromptu sentences. His insistence on “loyalty” from those who work for him may be an indication of an “all for” or “all against” way of thinking. I don’t know. I do know that a hurricane destroyed at least for a short time the seductiveness of the president’s rhetorical polarities. Nature is amoral, after all, and its ravages are without guilt. In a vain attempt to keep the country’s collective eyes on foreign devils, the president made a statement to the effect that were the terrorists to view the spectacular destruction, they would wish they had done it. I find this sad. I also find it sad that the images on television of drowned bodies and people stranded turned the media’s attention to “poverty,” as if it had suddenly been revealed overnight that huge numbers of people in this country, both white and black, are terribly poor, and that poor black people suffer the double burden of poverty and racism. It is never mentioned, of course, that race is yet another divisive category, another cultural fiction made real because it continually shapes our perception of the world around us. One need only ask where and how the arbitrary line between races is drawn to understand this clearly. Nevertheless, the public discourse shifted. It moved away from us and them to just us. The dead are our dead. The evacuees are ours, too. We had nobody to blame but ourselves and the indifferent government that supposedly represents us.

We will always feel more for those who belong to us, for our families, our neighbors, our fellow citizens. No one can completely escape tribal sentiments. As a resident of New York City, I was more affected by September 11 than by Hurricane Katrina. That said, I resist the idea of turning my private emotional attachments into public policy, because I don’t want others to do the same. Any discourse that demonizes other people, near or far, is a betrayal of the idea of freedom. In a free society, political liberties belong to everyone, and when curbed are lost to everyone. In a free society, nobody owns the truth. It is strange to articulate the obvious, to feel that it’s necessary to argue for principles long enshrined in our political system. But we have strayed from our Enlightenment heritage in these last years, not for the first time, it is true, and I am convinced that we must reexamine what we are saying and begin to choose our public words judiciously and imaginatively. If we don’t, we are in danger of going blind in the lowering fog of Newspeak that has enveloped us.

2005

THREE EMOTIONAL STORIES

IN A 1995 ESSAY ON memory, “Yonder,” I wrote the following sentence: “Writing fiction is like remembering what never happened.”1 It seemed to me fifteen years ago, and still seems to me today, that the mental activity we call memory and what we call the imagination partake of the same mental processes. They are both bound up with emotion and, when conscious, they often take the form of stories. Emotion, memory, imagination, story — all vital to our subjective mental landscapes, central to literature and psychoanalysis and, much more recently, hot topics in the neurosciences.

Ever since Plato banned poets from his Republic, philosophers have debated the role of imagination and its link to memory. Traditionally, imagination referred to the mental pictures we conjure in our minds, as opposed to direct perception. For thinkers as diverse as Aristotle, Descartes, Kant, and Hegel, the imagination occupied a middle zone between the bodily senses and intellect. Augustine connected imagination to both emotion and will. The will directs internal vision toward memories, but it also transforms and recombines them to create something new. “My memory,” he writes in the Confessions, “also contains my feelings, not in the same way as they are present to the mind when it experiences them, but in a quite different way that is in keeping with the special powers of the memory.” The emotions are all there, Augustine tells us — desire, joy, fear, and sorrow — and they can all be called to mind, but “if we had to experience sorrow or fear every time that we mentioned these emotions, no one would be willing to speak of them. Yet we could not speak of them at all unless we could find in our memory not only the sounds of their names, which we retain as images imprinted on the memory by the senses of the body, but also the idea of the emotions themselves.”2

Surely, Augustine’s thoughts remain cogent: remembering is not the same as perceiving. We remember what we have perceived, although we need ideas or concepts and names, language, to recognize and organize the material we have brought to mind. The eighteenth-century Italian philosopher and historian Giambattista Vico regarded memory and imagination as part of the same faculty rooted in sense perceptions. The imagination, he wrote, is “expanded or compounded memory,” and memory, sensation, and imagination are skills of the body: “It is true,” he insisted, “that these faculties appertain to the mind, but they have their roots in the body and draw their strength from it.”3

Vico’s comment is startlingly like the phenomenology of the twentieth-century French philosopher Maurice Merleau-Ponty (1908–1961), who understood imagination as an embodied reality, dependent on sensory perceptions, yet one that allows us entrance into possible, even fictive spaces, l’espace potentiel, “potential space.” D. W. Winnicott used the same term in relation to his thoughts on play and culture. Unlike other animals, human beings are able to inhabit fictional worlds, to turn away from the phenomenal present and imagine ourselves elsewhere. Rats, even sea snails, remember, but they don’t actively recollect themselves as characters in the past or hurl themselves into imaginary futures.

How do we conceive of the imagination now? From Galen in the second century to Descartes and his pineal gland where he located phantasie or the imagination in the seventeenth, to phrenology in the nineteenth and Freud’s abandoned Project near the dawn of the twentieth, thinkers have sought the anatomical sites of mental functions. We have not solved the mystery of the mind’s eye, or what is now framed as a brain/mind problem. Terms such as neural representations, correlates, and substrates for psychological phenomena do not close the explanatory gap; they reveal it. There is a vast literature on this, and the debates are ferocious. A solution does not seem imminent. Suffice it to say that our inner subjective experience of mental images, thoughts, memories, and fantasies bears no resemblance to the objective realities of brain regions, synaptic connections, neurochemicals, and hormones, however closely they are connected.

I am not going to solve the psyche/soma problem here, but I can put some pressure on that old sentence of mine: Writing fiction is like remembering what never happened, or to rephrase it: How are remembering and imagining the same and how are they different?

The novelist, the psychoanalyst, and the neuroscientist inevitably regard memory and imagination from different perspectives. For the novelist, the story does all the work. When I am writing fiction, I am concerned with what feels right and feels wrong. I see images in my mind as I work, just as I do when I remember. Often I use landscapes, rooms, and streets that actually exist as backdrops for the actions of my fictional characters. I am directed by the story, by the creation of a narrative that resonates for me as emotionally, rather than literally, true. The novel develops an internal logic of its own, guided by my feelings.

For the analyst, a patient’s personal memories are crucial, but so are fantasies and dreams. They exist within the dialogical atmosphere of the analytic room and the abstract conceptual framework the psychoanalyst brings to his work. When listening to a patient’s memory, a psychoanalyst would keep in mind Freud’s idea of Nachträglichkeit, what James Strachey translated as “deferred action.” The adult patient may have memories from when he was five, but those memories have been reconfigured over time. The analyst would be alert to repetitive themes and defenses in his patient’s speech, but also voice cadences, hesitations, and, if his patient is looking at him, the motions and gestures of a body. What is created between analyst and patient is not necessarily a story that represents historical fact, but one that reconstructs a past into a narrative that makes sense of troubling emotions and neuroses. For patient and doctor, as for the novelist, the narrative must also be felt; it must resonate bodily as emotionally true.

The neuroscientist is trained to conceive of subjective memory and creative acts through objective categories, which she hopes will unveil the neurobiological realities of a self that both remembers and imagines. Following Endel Tulving and others, she will probably divide memory into three categories: 1) episodic memories, conscious personal recollections that can be localized to a specific place and time; 2) semantic memories, impersonal recall of information — cats have fur, Kierkegaard wrote under pseudonyms — and 3) procedural memories, unconscious learned abilities — riding a bike, reaching for a glass, typing.4 As a memory researcher, she would be aware of Joseph LeDoux’s work on the enduring synaptic connections formed by emotion in memory, fear in particular,5 and she would know that memories are not only consolidated in our brains, they are reconsolidated. Although it is unlikely that our neuroscientist has read Freud carefully, she would unwittingly agree with him that there is no original “true” memory; autobiographical memories are subject to change. Finally, theoretically, at least, her subjective feelings are irrelevant to her work.

In these three practices, we find the two modes of human thought which William James in his essay “Brute and Human Intellect” called narrative thinking and reasoning.6 Jerome Bruner, using a philosophical term, has called them two natural kinds, that is, essentially different.7 Novelists think in stories. Analysts use both narrative thought and the categorical thinking of reasoning. Scientists may employ a case history as an illustration, but their work proceeds without story. Reasoning is sequential but not dependent on a representation of time. Narrative is embedded in the temporal. Unlike the flux that characterizes narrative, scientific categories are static. Memory and imagination have to be approached from a third-person perspective and placed in a broader taxonomy. In the reasoning mode, definitions become all-important, and therefore a frequent battleground. First-person experience is vital to narrative because there is always an agent whose subjectivity and intentionality are part of the story’s movement, narrated from one perspective or another. In science the subject is nameless and normative.

What is the third-person point of view? Scientists cannot jump out of themselves and become God any more than the rest of us can. Through a largely unexamined agreement about what Thomas Kuhn in The Structure of Scientific Revolutions calls a paradigm — the bottom line of accepted theory, which changes over time, often in great convulsions — and an explicit consensus about methodology, scientists aim to avoid subjective bias.8 Although Freud never used the word neutral himself, the idea of the neutral analyst is a direct importation from the natural sciences. The omniscient narrator in some novels plays this role as he looks down from on high at his characters and their follies, but we readers know that clever as he may be, Henry Fielding’s narrator in Tom Jones is not God. Indeed, the cool “I-less” voice of nearly all academic writing adopts the pose of the third person. There is an author or authors, but they vanish as persons from the text. Still, as Kuhn argues in his book, there is no such thing as perceptual neutrality. The histories of science, psychoanalysis, and, of course, the novel, make this abundantly clear. This truth does not impede either discovery or innovation; it merely qualifies epistemology.

As Augustine points out, if we didn’t have names and ideas for things, we couldn’t speak of them at all. Both the reasoning and narrative modes of thought create linguistic representations. They exist on what the linguist Emile Benveniste refers to as a pronominal axis of discourse.9 The I implies a you, even if that I is just listening to its own inner self. The language wars are as fierce as the brain/mind wars. Is there a universal grammar, as Noam Chomsky argued? Wasn’t Wittgenstein right that there is no such thing as a private language? How is language acquired, and exactly what does it have to do with our memories and imaginations? There is no consensus. I am sympathetic to A. R. Luria’s position that the advent of language reorders the mental landscape.10 I do not subscribe to the postmodern notion that it is the mental landscape. Nevertheless, whatever innate abilities we may have to learn it, language, which is both outside and inside the subject, plays a crucial role in our reflective self-consciousness, in how we become creatures of not only “I remember,” but “What if…?”

We codify perceptual experiences in conscious memory through both placement — where and when it happened — and interpretation, what it means in the larger context of my life. In our autobiographical memories, as in a mirror, we become others to ourselves. Even if we don’t see ourselves in the third person, we have projected the self elsewhere in time. As Merleau-Ponty notes in The Phenomenology of Perception: “Between the self which analyzes perception and the self which perceives, there is always a distance.”11 There is a difference, he argues, using Hegel’s distinction, between the “in itself” and the “for itself” (für sich).12 When I actively recall something from my past, what Augustine called “will” is involved. This is exactly how Aristotle distinguished human from animal memory. Only we people will ourselves backward in time.

And the episodic memories we recall have mostly been turned into stories. If narrative is, as Paul Ricoeur argues in Time and Narrative, a “grasping together” of various temporal actions or episodes13 in both life and fiction into a whole that has meaning, I believe that meaning is crucially grounded in emotion. It makes sense that narrative, a ubiquitous form of human thought, would, mimicking memory itself, focus on the meaningful and leave out the meaningless. What I am indifferent to, I mostly forget. The stories of memory and fiction are also made by absences — all the material that is left out.

As early as 1895, the psychologists Alfred Binet and Victor Henri tested children’s memories for lists of unrelated words as opposed to meaningful paragraphs.14 The children remembered the meaningful passages far better, but they reported them back to their examiners in their own words. They retained, to borrow a term from the Russian formalists, the fabula. Cinderella can be told in many different ways, and the details may vary, but the fabula, the bones of the story, remain the same. The narrative mode contextualizes the meaning or valence inherent in every emotion. It pulls together and makes sense of disparate sensory and affective elements.

Augustine’s insight that emotion dims in memory, however, is overwhelmingly true of our episodic memories. The cooling of the emotions that belongs to such recollections is built into the nature of this kind of memory, because it is quickly turned into narrative. The raw affective material of memories is restructured and then told as stories from a remove. Much, if not all, of this restructuring takes place unconsciously. When I remember, for example, that in 1982 I was hospitalized for eight days with an excruciating migraine that had lasted for many months, I do not reexperience either my pain or my emotional distress, although the pictures in my mind are colored gray for sadness.

I no longer remember what happened every day in the hospital, only a few highlights — a nurse who seemed to believe migraineurs were either neurotics or malingerers, the interns who asked me over and over who the president was, and my doctor who seemed exasperated that I didn’t get well. (All of their facial features are now visually dim.) I remember lying in the hospital bed, but I no longer see the room clearly. Still, I have a mental image that probably combines several hospital rooms I’ve visited or seen in the movies. We deposit memorable emotional events into a visual setting that makes sense, but what we see in our minds may bear little resemblance to what actually was. What I’ve retained is the story and a few serviceable mental pictures, but much is missing from that verbal account.

The fact that I used that hospital stay in my first novel, The Blindfold, further complicates matters because I turned an episode from my life into fiction, an episode I had already, no doubt, fictionalized in memory. Both the memory story and the novel were created unconsciously. Furthermore, I don’t truly recall my twenty-seven-year-old self. Too much time has intervened. I can easily shift the scene and see myself in the third person, a wan, blond young woman pumped full of Thorazine, staring at the ceiling. The hospital chapter of The Blindfold was turned into a movie, La Chambre des Magiciennes, by the French filmmaker Claude Miller. My experiences in Mount Sinai in 1982 generated three stories with the same fabula: my own narrative memory of an actual event, my character’s story in the novel based on that event, and my character’s story in the film, embodied by the actress Anne Brochet. Each one is different, and each one is constructed as a narrative, which partakes of the imaginary, the fictionalizing processes inherent to memories that are reflectively self-conscious.

Neuroscience research on the imagination is limited. However, a 2007 paper on patients with bilateral hippocampal lesions found they suffered from impaired imaginations as well as memory. A paper that same year by the same team, D. Hassabis et al., was published in The Journal of Neuroscience. This paper, “Using Imagination to Understand the Neural Basis of Episodic Memory,” based on fMRI scans, concludes, “… we have demonstrated that a distributed brain network, including the hippocampus, is recruited during both episodic memory recall and the visualization of fictitious experiences.”15 The activated part of the brain is a large cortical network, which has been implicated in “high level” cognitive functions, not only episodic memory, but future thinking, spatial navigation, theory of mind, and the default network. The participants were given three tasks that fell under the rubrics recall, re-create, and imagine. Notably, they were asked to keep all of these scenarios emotionally neutral.

The authors divided episodic memory into what they call “conceptual components,” among them a sense of subjective time, narrative structure, and self-processing. Although I enthusiastically endorse such research and believe episodic memory and imagination are fundamentally connected, I would like to focus on just one of their components: self-processing. The authors hypothesize that there will be less self-processing in imaginary other-oriented scenarios than in autobiographical ones, a reasonable thought until one asks oneself exactly what self-processing is. How exactly does an imaginary story I am generating about you, or her or him, not involve me? Aren’t all of these narratives — recalled, re-created, or imagined — related to my self, a part of my subjective experience? Furthermore, aren’t these narratives represented, at least in part, in language, and so necessarily located on the axis of discourse? There is no pronominal I without a you. When I think of you, are you not a part of me? What is being processed here? Shouldn’t neuroscience look to other disciplines to refine this vague idea: self-processing? Isn’t phenomenology’s concept of an embodied self useful in this regard? And what about psychoanalytic theory, with its internal objects, transference, and countertransference? Even with a cooling effect, can episodic memory and imagination really be entirely divorced from emotion?

In a 2009 comprehensive review of neuroimaging studies on self-processing versus other-processing in the Psychological Review, Dorothée Legrand and Perrine Ruby state: “The authors of the aforementioned studies … hypothesized that a given cerebral substrate should be systematically more activated for the self than for the nonself. Our review demonstrates that the cerebral network they identified does not exhibit such a functional profile.”16 I politely suggest that many of the researchers reviewed by Legrand and Ruby have lost themselves in the philosophical wilderness of selfhood. At the explicit representational level of episodic and imaginative narration, a distinction between self- and other-processing strikes me as entirely artificial.

In a fascinating 2011 paper “Bodily Self: An Implicit Knowledge of What Is Explicitly Unknown,” Frassinetti, Ferri, Maini, and Gallese conducted two experiments to untangle the following question: “We directly compared implicit and explicit knowledge of bodily self to test the hypothesis that bodily self-advantage, i.e., facilitation in discriminating self compared to other people’s body effectors, is the expression of an implicit body-knowledge.”17 In the first experiment, subjects were confronted with three photographs, one on top of the other, of their own and other people’s hands and feet, as well as objects belonging to them or others — mobile phones and shoes. They were asked to match the lower or upper image to the center “target” photograph. In this task a distinct self-advantage showed itself. In other words, people were considerably better at matching their own body parts than matching other people’s. No such advantage was present with the objects. In the second experiment, there was no target image, just an empty white box in the center. This time, the subject was asked which of the two remaining images was his or her own hand, foot, mobile phone, or shoe. Not only was there no self-advantage in this case; there was a self-disadvantage in recognizing one’s own body parts, one that was not seen in the recognition of one’s own objects.

The hypothesis is that an unconscious motor representation of our bodies is at work in the first task, while what the authors call “body identity” must be summoned for the explicit task. Body identity (or what Shaun Gallagher in How the Body Shapes the Mind calls body image) is a conscious, not an unconscious idea.18 It is the self perceived as an other. In the explicit task, the response is not automatic; the person has to think about it, and thinking often involves a linguistic construction as well as a visual one. Is that my foot? Is it someone else’s? The authors’ conclusion is worth quoting: “Taken together, our results show for the first time that the representation of our body-effectors is not only different from the way we represent inanimate objects, but — more importantly — it is accessible in at least two different ways: one way is implicit, while the other is detached, third-person like.”19 This unconscious/conscious distinction is paramount to understanding what neuroimagers call “self-processing.”

William James said that all personal memories have a “warmth and intimacy,” a quality of one’s own.20 To use the Latin word for selfhood or identity, my memories have ipseity. But so do my fantasies, the vicarious experiences I have while reading, my thoughts about others, my feelings about my fictional characters, and my dreams. James’s “warmth and intimacy,” that sense of ownership, is not emotionally neutral. And, as Freud stressed in The Interpretation of Dreams, however irrational or bizarre our dream plots may be, the emotions we feel are not fictional. He quotes the dream researcher Stricker: “If I am afraid of robbers in a dream, the robbers, it is true, are imaginary — but the fear is real.”21

While Proust’s tea-soaked bit of cake has become a facile reference for just about everybody, what is interesting is not that the petite madeleine opens the narrator to memories of his childhood, but rather that at first, the taste produces only feeling. “… this new sensation having had on me the effect which love has of filling me with a precious essence; or rather this precious essence was not in me, it was me.” It is only after the swell of high feeling has passed that Proust’s narrator asks himself this: “What did it mean?”22 That meaning, explored in the seven-volume first-person narrative of Remembrance of Things Past, lies in the fluctuations of subjective experience — of an emotional self in space and time. First, the narrator perceives and feels. He is immersed in the prereflective consciousness of a sensual reality that is also somehow remembering. Only later does he reflect on it, and that reflection requires that he conceive of himself as an object to himself in the same way he conceives of others. Isn’t it reasonable, then, that “self-processing” cannot be distinguished from “other-processing” at the explicit conscious level of storytelling?

The narrative self is the self in time. We are immersed in time, not clock time necessarily, although we adults refer to it, and certainly not the time of physics. We live in subjective time, the sequential time of our consciousness, and what happens before becomes the template for what we expect to happen later. Through repetition, past perceptions create future ones. In one of his 1925 lectures on phenomenological psychology, Edmund Husserl writes that “each … momentary perception is the nuclear phase of a continuity, a continuity of momentary gradated retentions on one side, and a horizon of what is coming on the other side: a horizon of protention, which is disclosed to be characterized as a constantly graded coming.”23 We are continually retaining and projecting, and the present always carries in it the thickness of before and after. Husserl, who was influenced by William James, argues that the experience of time, this perceptual stream, is always pregiven from a first-person perspective. When he was five, my nephew Ty sat in the family car and made a startling discovery. Looking at the road behind him, he cried out, “That’s the past!” Turning to the road ahead, he crowed, “That’s the future!” The locus of that streaming reality of time and space was, of course, Ty himself.

We now know that a form of time, or rather, timing is also part of infancy. Psychoanalysis, attachment studies, and infant researchers, such as Daniel Stern, have been vital to our notation of what might be called the intersubjective music of early life, the preverbal melodies of the first human interactions. As John Bowlby postulated, these rhythms of attachment are crucial to affect regulation later in life. In an empirical study of adult-infant vocal interactions, Rhythms of Dialogue in Infancy, the authors proceed from a dyadic view of early communications. Their nuanced analysis shows how the rhythmic dialectic between mother and child provides a foundation for the child’s ongoing social and cognitive experiences by forming, as they put it, “temporal expectancies.”24 These bodily, emotional expectations form the ground for the axis of discourse and the narrative self. An infant’s prereflective conscious perceptions are not yet for herself in an articulated story. Nevertheless, these deeply established corporeal metrics, the motor-sensory beats of self and other, merge with genetic temperament in the dynamic synaptic growth that accompanies early emotional learning. In his book Affective Neuroscience, Jaak Panksepp writes,

From a psychological perspective, I would say that the main thing that develops [in a child’s interactions with his world] in emotional development is the linking of internal affective values to new life experiences. However, in addition to the epigenetic processes related to each individual’s personal emotional experience leading to unique emotional habits and traits, there is also a spontaneous neurobiological unfolding of emotional and behavioral systems during childhood and adolescence.25

We are creatures of a subjective time, founded in the wordless dialogues of infancy and further developed in language and its natural consequence, story. As important as the narrative self is, however, I am in complete concordance with Dan Zahavi, who asks in his book Subjectivity and Selfhood, “Is it legitimate to reduce our selfhood to that which can be narrated?” He adds, “The storyteller will inevitably impose an order on the life events that they did not possess while they were lived.”26 Proust’s “precious essence,” which he claims is himself, resonates with Panksepp’s revision of Descartes’ famous cogito ergo sum to I feel therefore I am. In “The Neural Nature of the Core SELF,” Panksepp locates his core SELF in the brain stem: “… the ability to experience raw affect,” he argues, “may be an essential antecedent to foresight, planning, and thereby willful intentionality.”27 I add narrative to that list. Antonio Damasio, in his book Self Comes to Mind, discusses his protoself, which produces “primordial feelings” reflecting the body’s homeostatic reality, “along the scale that ranges from pleasure to pain, and they originate at the level of the brain stem rather than the cerebral cortex. All feelings of emotion,” he continues, “are complex musical variations on primordial feelings.”28

In Instincts and Their Vicissitudes (1915), Freud proposed his own homeostatic model of primitive selfness in an organism that can discriminate between outside and inside through “the efficacy of its muscular activity.” For Freud, the regulation of internal drives and external stimuli is the origin of all feelings. “Even the most highly evolved mental apparatus,” he writes, “is automatically regulated by feelings belonging to the pleasure pain series.”29 It is out of this core feeling self that a reflectively conscious, remembering, imagining narrative self develops.

Shaun Gallagher also posits a minimal self or a “primary embodied self” already present in the motor-sensory corporeal reality of an organism that is aware of its own boundaries.30 Infant studies on imitation and deferred imitation give credence to the idea that a newborn has a greater awareness of his separateness from others and the environment than was thought earlier.31 Exactly how memory develops in babies is controversial. What effects do an immature hippocampus and forebrain and incomplete myelination have on that development? What exactly do implicit and explicit memory mean in a preverbal infant? What roles do imitation, mirroring, and language play? How do we frame the reality of infant consciousness? How is it related to a minimal or core self? When does in itself become for itself? What is the neurophysiology of time perception and how does it develop? All of these questions remain unanswered.

Narratives from the Crib, edited by Katherine Nelson, focuses on the monologues of Emily Oster taped before she went to sleep between the ages of twenty-one and thirty-six months. These soliloquies are remarkable illustrations of what Vygotsky called private speech,32 the stage before inner speech takes over. We witness the chattering play-by-play announcer who has not yet gone underground. Here is a monologue from when Emily was twenty-one months old. She is talking to her doll. I have truncated it slightly.

Baby no in night

Cause baby crying

Baby no eat supper in in in this

No eat broccoli no

So my baby have dinner

Then baby get sick

Baby eat no dinner …

Broccoli carrots cause rice

Emmy eat no dinner

Broccoli soup cause

No baby sleeping

Baby sleeping all night33

There are no fixed tenses here that situate past, present, and future, no pronominal “I.” There is a third-person baby and a third-person Emmy, characters that mingle in what might be called a protonarrative. Emily verbally represents herself as an agent to herself, and describes a series of actions in order to make sense of her emotion: the memory of not feeling well, not eating, and not being able to sleep. The third-person “Emmy” precedes the first-person “I” because reflective self-consciousness, “for-itself” reality, emerges from seeing herself as others see her, those vital others who recognize Emmy as an agent and actor in the world. In a later monologue, at twenty-eight months, the little girl imagines herself in a fictional place, the future, to master her anxiety about what lies ahead. “We are gonna at at the ocean/ ocean is a little far away … /I think it is a couple of blocks away.”34 After an associative stream that includes a fridge submerged in water and a river, the child imagines sharks biting her. The fantasy is driven by emotion, but her speech allows the flowering of creative speculation while she is still safely in her crib, away from the sharks in her mind. Emily’s monologues are heavily analyzed in the book, but two points go unmentioned, perhaps because they are too obvious. First, having a narrator, external and voiced or internal and silent, is a way of keeping company with one’s self. Second, in language, the self is always touched by otherness, if only because it is represented.

Some memories have no narrator and no time except the present. In 1961, when my cousin Nette was one year old, she traveled to Africa with her parents and sister. Her father, my uncle, was a doctor who practiced in Bambuli in what was then Tanganyika. Nette learned Swahili, a language she later forgot. When she was three, she returned home to Norway with her family. Nette retained no conscious memories of Africa, but in 2007, she and her husband Mads visited Tanzania. As soon as she set foot in Bambuli, she was overwhelmed by sensations of familiarity. The smells, the colors, the sounds all contributed to a heady feeling that she had come home. One afternoon, Nette and Mads met some schoolgirls on the road, and although the two groups shared no common language, they communicated with smiles, laughter, and gestures. Mads suggested Nette hum a melody she remembered from childhood, a song the family had once sung together, the words to which had disappeared. When the girls heard the tune, they began to sing and, to her own amazement, Nette joined them. One after another, the lost Swahili lyrics returned to her, verse after verse, and Nette sang loudly and joyfully. In that moment of exuberant recall, forty-one years seemed to collapse. The forty-four-year-old woman and the small child met.

This memory is not episodic and, although I have told it as a story, what my cousin experienced, the recovery of the lyrics and the flood of joy, is not a narrative but a form of involuntary memory. The nineteenth-century neurologist John Hughlings Jackson called this kind of repetitive, learned knowledge automatisms. The automatism is proprioceptive, related to my bodily orientation in space, what Merleau-Ponty called a body schema, and it engages my motor-sensory capacities. The perceptual context — visual, auditory, and olfactory — acted as cues, and the once-learned but lost Swahili words came back automatically. Nette’s eruption of memory, accompanied by a flood of joy, has meaning in itself. Affect marks experience with valence, positive or negative, part of the pleasure-pain series. It is purely phenomenal and prereflective until we ask ourselves: What did it mean?

By far the most dramatic form of bodily prereflective, involuntary memory is the flashback. After a car accident, I had flashbacks four nights in a row that shocked me out of my sleep. Rigid, repetitious, horrifying, this memory was a visuo-motor-sensory reexperiencing of the crash. As the psychoanalysts Françoise Davoine and Jean-Max Gaudillière argue in their book History Beyond Trauma, this form of traumatic memory is outside time and language.35 It is not in the past. It is the kind of memory Augustine said nobody would want to have. In a 1993 paper, the neurobiologists van der Kolk and Saporta make the same argument. “These experiences may then be encoded on a sensorimotor level without proper localization in space and time. They therefore cannot be easily translated into symbolic language necessary for linguistic retrieval.”36 Translation into words means location in space and time; it also means distancing and, perhaps ironically, greater mutability in memory. This very mutability, however, serves the cooling and creative aspects of narration, whether in memory or in fiction.

In Beyond the Pleasure Principle, Freud cites Kant, for whom “time and space are ‘necessary forms of thought,’” and then goes on to say, “We have learnt that unconscious mental processes are in themselves ‘timeless.’ This means in the first place that they are not ordered temporally, that time does not change them in any way and that the idea of time cannot be applied to them.”37 Unlike secondary process, what Freud called primary process does not distinguish past, present, and future. We glimpse this form of archaic thought in dreams, which are more concrete, emotional, and associative than waking thought, and in Emily’s early monologues in which subjective time is not yet fully codified in language.

That creativity is mostly unconscious is hardly surprising. Psychoanalysis has long known that we are strangers to ourselves, and the idea of unconscious perception has been with us at least since Leibniz in the seventeenth century. All creativity in both modes of thought — reasoning and narrative — can be traced to this timeless dimension of human experience or, I would say, a dimension with motor-sensory timing, but not self-reflective time. In a letter to Jacques Hadamard, Albert Einstein wrote that neither language nor “any other kinds of signs which can be communicated to others” were important features of his thought. His work, he said, was the result of “associative play,” was “visual and motor” in character, and had an “emotional basis.”38 Henri Poincaré, the great mathematician, also pointed to the unconscious origins of his own work:

The subliminal self plays an important role in mathematical creation … we have seen that mathematical work is not simply mechanical, that it could not be done by a machine, however perfect. It is not merely a question of applying rules, of making the most combinations possible according to fixed laws. The combinations so obtained would be exceedingly numerous, useless and cumbersome.39

Every once in a while a formula, a poem, an essay, a novel bursts forth as in a waking dream. The poet Czeslaw Milosz once said: “Frankly all my life I have been in the power of a daimonion, and how the poems dictated by him came into being, I do not quite understand.”40 William Blake said his poem “Milton” “was written from immediate dictation … without premeditation and sometimes against my will.”41 Nietzsche described thoughts that came to him like bolts of lightning. “I never had any choice about it.”42 The last pages of my novel The Sorrows of an American were written in a trance. They seemed to write themselves. Such revelations may well be based on years of laborious living, reading, learning, and cogitating, but they come as revelations nevertheless.

A retreat to nineteenth-century science is needed to frame this creative phenomenon. F. W. H. Myers, now mostly forgotten, was a renowned psychical researcher and a friend of William James. His magnum opus was called Human Personality and Its Survival of Bodily Death,43 a title which no doubt hastened his oblivion. Still, he was a sophisticated thinker who applied the idea of automatisms to creativity. In contrast to Jackson’s habitual automatisms, the pathological dissociations of hysteria studied by Pierre Janet, and Freud’s idea of sublimation, Myers argued that subliminally generated material could suddenly find its way into consciousness, and that this eruption was not necessarily the product of hysteria, neurosis, or any other mental illness.

The definition of creativity in neuroscience research I have stumbled over again and again is: “the production of something novel and useful within a given social context.”44 Useful? Was Emily Dickinson’s work considered useful? Within her given social context, her radical, blazingly innovative poems had no place. Are they useful now? This research definition must be creativity understood in the corporate terms of late capitalism. Another component of creativity featured in these studies is divergent thinking, or DT. In one study, subjects’ brains were scanned as they “produced multiple solutions to target problems.” The assumption is that the more solutions a subject produces, the more creative she is, but this is obtuse, as Poincaré pointed out so succinctly. We are not machines or computers but embodied beings guided by a vast unconscious and felt emotions.

I have often asked myself, Why tell one fictional story and not another? Theoretically, a novelist can write about anything, but she doesn’t. It is as if the fabula is already there waiting and must be laboriously unearthed or suddenly unleashed from memory. That process is not exclusively the result of so-called higher cognition; it is not purely cognitive or linguistic. When I write, I see images in my mind, and I feel the rhythms of my sentences, embodied temporal expectancies, and I am guided by gut feelings of rightness and wrongness, feelings not unlike what has happened to me in psychotherapy as a patient. After my analyst’s interpretation, I have felt a jolt of recognition, which is never merely an intellectualization but always has a felt meaning: Oh my God, that’s true, and if it’s true, I have to rewrite my story.

Fictions are born of the same faculty that transmutes experience into the narratives we remember explicitly but which are formed unconsciously. Like episodic memories and dreams, fiction reinvents deeply emotional material into meaningful stories, even though in the novel, characters and plots aren’t necessarily anchored in actual events. And we do not have to be Cartesian dualists to think of imagination as a bridge between a timeless core sensorimotor affective self and the fully self-conscious, reasoning and/or narrating linguistic cultural self, rooted in the subjective-intersubjective realities of time and space. Writing fiction, creating an imaginary world, is, it seems, rather like remembering what never happened.

2010

FREUD’S PLAYGROUND

WHEN I WAS IN MY adolescence, I used to think that in every relation between two people, there was also a third entity — an imaginary creature the players made between them — and that this invisible thing was so important, it deserved to be given a proper name, as if it were a new baby. The insight arrived, I believe, because I had begun to notice that two people were able to create both fairies and monsters between them, especially if love was involved. As I got older and read more, I realized that this zone between people had not gone unnoticed. It had been given different names and conceived of through various metaphors, but the notion that one plus one makes three or, better, that one plus one makes one out of two was an important aspect of the philosophical thinking I found most compelling. Questions about self and other have been central to psychoanalysis, but they also rage beyond its borders in analytical and continental philosophy, in other disciplines in the humanities, in psychiatry, and, more recently, in the neurosciences. Subjectivity, intersubjectivity, mirroring, dialogue, and theory of mind are all terms directed at the problem of the between.

A couple of years ago, I reread one of Freud’s papers on technique, Remembering, Repeating, and Working Through (1914), and found myself fascinated by the following famous passage:

The main instrument, however, for curbing the patient’s compulsion to repeat and for turning it into a motive for remembering lies in the handling of the transference. We render the compulsion harmless, and indeed useful, by giving it the right to assert itself in a definite field. We admit it into the transference as a playground in which it is allowed to expand in almost complete freedom and in which it is expected to display to us everything in the way of pathogenic instincts that is hidden in the patient’s mind.

This “creates an intermediate region between illness and real life,”1 a geographical metaphor—the between is a road to wellness and realism.

James Strachey translated Tummelplatz as “playground.” It’s a sound choice as it evokes children romping at play, but the German carries additional connotations of hurry and commotion among adults, not just children, as well as a figurative possibility — a hotbed of action. Elsewhere, Freud characterized the transference as a field of “struggle” or, more dramatically, as “a battlefield” between doctor and patient. Whether a site of play or bloodshed, whether inhabited by benign characters or frightening ones, this intermediate region is where analysis happens. Just as dreams speak in archaic but overdetermined ways that must be interpreted, the expressive and resistant character of the transference is a language of repetitive actions, driven by primal bodily needs and affects that have become relational patterns, reenactments of early loves and hates that the patient does not consciously remember or understand.

The analyst sees in the patient what Anna Freud called defensive styles, individual modes of being with others in the world.2 Our need for other people is an essential drive, and that need is of us — body and soul. In his New Introductory Lectures on Psychoanalysis (1932–33), Freud introduces the theory of drives, Triebe (translated as “instincts”), by saying openly that it is “our mythology,” and that drives are “magnificent in their indefiniteness.” Nevertheless, he continues, human beings are in the grip of “the two great needs — hunger and love.”3 We know from Civilization and Its Discontents that he is quoting Schiller: “hunger and love are what moves the world.”4 Freud makes it plain that he is articulating a “biological psychology”—“the psychical accompaniments of biological processes.”5 Our own age of neurobiology, which has mythologies of its own, returns us to the question of these “psychical accompaniments.” What are they? Freud was most interested in love, as am I. In Affective Neuroscience (1998), Jaak Panksepp writes, “It is now widely accepted that all mammals inherit psychobehavioral systems to mediate social bonding as well as various other social emotions, ranging from intense attraction to separation-induced despair.”6 Mammals do not stay alive by just satisfying their hunger for food. Like rats, we have social and sexual drives — a pleasure principle — that animates us and makes us feel good or bad.

Although the emotional systems of our brains have much in common with those of rats, we are born into a world in which people speak to one another. We represent ourselves to ourselves in language, and this complicates matters considerably. It influences how and whom we love. Freud writes, “The relations of an instinct to its aim and object are also open to alterations; both can be exchanged for other ones … A certain kind of modification of the aim and change of the object, in which our social valuation is taken into account, is described by us as sublimation.”7 Rats don’t sublimate. Freud’s admittedly murky idea of sublimation is one of transformation — primal erotic instincts or drives are redirected into creative work, both intellectual and artistic. Speechless affective forces find symbolic realization in the ornaments of culture. Love can be reconfigured.

The repetitive style of the patient’s relation to the analyst is not articulated, but it has an object, and the analyst may well find herself playing somebody else — mother, father, sister, brother — which is why from the very beginning transference love was riddled with problems of illusion and reality. In Studies on Hysteria, Freud attributes his patient’s desire to kiss him to “a false connection,” a case of mistaken identity. The woman’s memories of the true object have vanished.8 But in his postscript to the Dora debacle (1905), Freud argues that along with simple substitutions, transference may also create a sublimated version of the old love object, which borrows some “real” quality from the analyst.9 In Remembering, Repeating, and Working Through, the ghost of Charcot rises when we are told that transference “represents an artificial illness” that is nevertheless “a piece of real experience” with the character of “genuine love.”10 In a 1906 letter to Jung, however, Freud had stated the issue far more simply: psychoanalysis, he wrote, “is a cure effected by love.”11 But is it one-sided or two-sided?

Sándor Ferenczi first addressed the complexities of countertransference. His acknowledgement that psychoanalysis is “an intimate human practice,” his experiments in mutual analysis, and his rebellion against the unnatural, indeed artificial pose of the analyst, have taken on greater and greater significance in contemporary psychoanalysis.12 Fond as Freud was of some of his patients, he believed countertransference was something the analyst should rise above. The truth is that transference is human, and it moves in both directions.

People fall in love on the playground, and falling in love there or anywhere else is often riddled with the imaginary, steered by phantom powers from the past we can’t consciously remember. I think Freud’s ambivalence about what is real and unreal in transference is resolved with an insight he provided in The Interpretation of Dreams, “Our feeling tells us that an affect experienced in a dream is in no way inferior to one of equal intensity experienced in waking life; and dreams insist with greater energy upon their right to be included among our real mental experiences in respect to their affective than in respect to their ideational content.”13 Emotions are not fictive either in dreams or in transference. Transference love is real, even if it comes about under “special” circumstances.

Freud’s Tummelplatz, Strachey’s playground, became the catalyst for D. W. Winnicott’s “potential or transitional space”—his arena of play, playing, creativity, and culture, surely one of his most important contributions to psychoanalytic theory. It was only through play that people could, as he put it, begin to “feel real.” I have been unable to find in Winnicott a single mention of Remembering, Repeating, and Working Through. If it’s there, I have missed it, but the inspiration, acknowledged or not, is obvious. Freud’s language, via Strachey, suffuses Winnicott’s prose. For Winnicott, not just neurotics in therapy, but normal infants require an “intermediate state between [their] inability and growing ability to recognize and accept reality.”14 Strachey’s translation is intermediate region between illness and real life. Like Freud’s, Winnicott’s intermediate area is crowded with illusions generated by play, but it cannot be situated only inside the person. Winnicott’s “potential space” is “not inner psychic reality. It is outside the individual, but it is not the external world.”15 The transitional object — that bear or bit of blanket — is a real object in the world, but also a “symbol” radiant with the infant’s fantasies of union with his mother that helps ease his separation from her. It is at once “a piece of real experience” and a fiction.

For Freud, the word illusion, from the Latin illudere, “to be in play,” had pathological implications. For Winnicott, we never entirely give up our fictions for the so-called real world: “There is a direct development from transitional phenomena to playing, from playing to shared playing, and from this to cultural experiences.”16 Music, painting, dance, and the alternative worlds of novels are all generated by play. The connection between play and culture was hardly new. Thirteen years before Winnicott’s paper “Transitional Objects and Transitional Phenomena,” Johan Huizinga published Homo Ludens (1938), in which he argued that all culture is a form of play.17 For Lev Vygotsky, play was also a developmental phenomenon, but he dated its advent later in a child’s life than Winnicott did. For him, it began with the imaginary situation. When the child pretends, he “operates with alienated meaning in a real situation.”18 The symbol or word for a thing has shifted. The refrigerator box becomes a house or a cave deep in the woods or a monster that eats children, but in real life, the box is still a box.

It is easy to see how theoretically complex this intermediate region can be. How does one frame the subjects involved? Are they monads each in a private and often delusional psychic space sending messages to each other? Where is the border between them? Can they form a single unit of free-flowing mutuality? Can the interaction be seen as a third entity between them? How much of it is conscious and how much unconscious? What is real about it and what is imaginary? I have been borrowing Martin Buber’s term “the Between.” For the philosopher, “the Between” was an ontological reality that could not be reduced to either person involved and was more than both. The ideal relation between human beings resulted in “a change from communication to communion, that is, in the embodiment of the word dialogue” (my italics). It was not a relation of immersion or loss in the other person, not a schizophrenic confusion of I and you. It was a third reality. Buber writes, “Neither needs give up his point of view … they enter a realm where the point of view no longer holds.”19 The key word in Buber’s communion is embodied, an embodied dialogue — this is not an engagement of two intellects but of two whole beings. For Buber, who was interested in psychotherapy, psychic illness happened in this between. Buber’s I-Thou dialectic made deep inroads into psychoanalysis — in the work of Ludwig Binswanger and Carl Rogers, among others.

Buber and Freud were well aware of each other, but the odor of religious mysticism wafting through Buber must have been repellent to Freud, whose philosophical orientation was eminently rational. He regarded religion as illusion number one. Kant’s idea that play is the antithesis of reason, that reason must triumph over imagination, and philosophy and science over art, is a continual ghostly presence in Freud. The Kantian hierarchy fit well with Freud’s scientific temperament.

Science is ideally a discipline of remove. In fact, the scientist is supposed to vanish in the third-person objectivity of the observation or experiment. The I-You dialectic may be an object of study, but it is not part of the epistemological drama. When the cure in question is one enacted through a form of dialogue (two people alone together in a room), the remove, what came to be called neutrality, is an insistence that dispassionate reason guide the analyst, no matter how messy and wild the doings on the Tummelplatz become. “We admit it into the transference as a playground in which it is allowed to expand in almost complete freedom…” And yet, there are echoes of Schiller here, too, and his play-drive, Spieltrieb. For Schiller, Spieltrieb generates the imagination and the arts, which serve as intermediaries between Stofftrieb, the sense drive (sensual, bodily reality), and Formtrieb, the form drive (the rational vehicle of conceptual and moral order), between feeling and willing, between philosophical-scientific discourse and unknowable external reality, das Ding an sich of Kant. Schiller’s location of the imagination is classical. The idea of imagination, phantasie, or mental images as a meeting ground between body and intellect goes back to the Greeks. But for Schiller, the integration of passive feeling and active willing, the senses and reason, results in human freedom.20 Of course, Freud was neither an Idealist nor a Romantic. Even the repetitions of transference expanding on his playground are not entirely free — his words are almost complete freedom. Freud was a scientist, a doctor, and a Jew. The phrase complete freedom was not in his vocabulary.

Freud’s drive theory also seeks integration, a way to account for the inherited givens of the animal body and the labile character of the human psyche. He described a drive as “lying on the frontier between the mental and the physical.”21 Through yet another intermediate zone, Freud attempts to solve a problem that still baffles scientists and philosophers. How do ideas relate to neural networks? What are the psychical accompaniments to biological processes? Freud answers by arguing that during a person’s life, his essential need for love begins with a bodily excitement or tension, which attaches itself to a psychic representative, an idea or Vorstellung, which, when all goes well, finds its satisfaction in the love object. A drive pushes outward into the world, and its journey is always accompanied by a quota of affect, which results in our conscious subjective feelings — in the best cases, ones of relief. Our first experiences with others lay down unconscious memory traces — traces which become a link between affects and ideas. In Inhibitions, Symptoms, and Anxiety, Freud states that affects are “reproductions of very early, perhaps even pre-individual experiences of vital importance.”22 If this is true, we feel again what we have felt before.

Freud’s “biologism,” long derided in many circles, is enjoying a resurrection. Intellectual climates change, and the winds are blowing in a new direction. For people in the humanities, the disembodied, culturally constructed, linguistic subject of late twentieth-century theory began to look both tired and incomplete. And in the hard sciences, the sweeping influence of behaviorism, with its contempt for inner life and subjective experience, grew equally dull. These are broad generalizations. Everywhere, always, there have been individuals pursuing their intellectual concerns, despite the atmospheric pressure, but we find ourselves at a moment of change, a change that has reinvigorated Freud’s thinking about psychobiological processes and sheds light on the anatomical as well as the psychic reality of the playground.

But how do we read other people? Perhaps no discovery in neuroscience has created as much uproar, both in the field and outside it, as the finding by Gallese, Fadiga, Fogassi, and Rizzolatti, published in 1996, that in the macaque monkey there are neurons in the premotor cortex that fire when the animal performs an action, such as grasping, and that also fire when it merely observes another animal perform the same action.23 These aptly named mirror neurons have become part of a physiological explanation for human intersubjectivity, although it is vital to state that nobody knows whether this mirroring response is innate or developed. What has become clear is that the neurons are involved not only in mimicry but in the comprehension of intentionality — the why of an action.

Research into biological mimesis of various kinds and into what is now called primary intersubjectivity, which begins from the first day of life, has exploded. However, if the idea of mirror neurons hadn’t been intuitively attractive and hadn’t resonated powerfully with thought outside of neuroscience, in psychoanalysis, phenomenology, linguistics, attachment studies, and infant research, the fate of the discovery might have been different. As it was, the prestigious journal Science rejected the paper by Gallese et al. at the time because it was deemed lacking in general interest.

Both as literal experience and as metaphor, the mirror has a long history in Western thought. The tragic triangular love story of Echo and Narcissus continues to move us. In that myth, the third person in the drama is nobody — an illusory presence in the water. One of the first to research a child’s relation to the mirror was William Preyer (1841–1897), who in The Mind of the Child meticulously charted his own son’s reaction to his reflection over many months. His experiments led to the conclusion that mirror recognition marks the emergence of the ego (Ich), which allows the child to distinguish both himself and others from his and their images in the looking glass.24 Jacques Lacan codified the mirror stage, influenced by his teacher Alexandre Kojève’s reading of Hegel, the master-slave chapter of the Phenomenology in particular. Lacan also borrowed Charlotte Buhler’s concept transitivism25 (a term originally coined by Carl Wernicke) for the confusion of self and other in early childhood, a phenomenon that has been demonstrated over and over on real playgrounds: one small child takes a tumble and starts to cry. Another, who is only watching the fall, begins to cry as well. Winnicott, after reading Lacan, rethinks mirroring as a relation between mother and child. A contemporary example of this line of thought can be found in the psychoanalyst Jessica Benjamin. Through her critical reading of Hegel and Freud from a feminist, dialogical perspective, as well as her use of both Winnicott’s transitional theory and the burgeoning infant research on reciprocity and attunement, she repositions the human and psychoanalytic playground. In The Bonds of Love (1988), she writes, “… intersubjective theory sees the relationship between self and the other, with its tension between sameness and difference as a continual exchange of influence.”26

In one of his papers on analytic technique, Freud used, among other metaphors, the image of a mirror for the analyst: “The doctor should be opaque to his patients and, like a mirror, show them nothing but what is shown to him.”27 From the patient’s perspective we might put it this way: In you, I am able to see myself, or through you I become able to see myself. One research study has shown that in successful psychotherapy sessions, therapist and patient engage in bodily mirroring.28 Freud had nothing quite so literal in mind, but he did understand psychoanalysis as a conversation between two unconsciousnesses that communicated in ways well beyond purely cognitive operations. I suspect that it was Freud’s clinical experience that pricked his interest in the idea of thought transference, despite the fact that it was laden with occult connotations.

The mirror metaphor for the other confirms a phenomenal reality: when I look at you, the symmetrical likeness of our two bodies is felt by me, although I am invisible to myself. Most of this mirroring is unconscious, but in some fundamental way, when I look at you, your face supplants mine. The now classic studies by Meltzoff and Moore and Kugiumutzakis that documented infants (some less than an hour old) imitating the facial expressions of adults amazed many in the scientific community,29 but I must add here that the finding merely confirmed what every attentive parent has always known. In the first weeks of my daughter’s life, she imitated my facial expressions, and she also engaged in what is now called protoconversation. I would talk, wait a little, and she answered me. There were scientists, too, who truly looked at infants, among them, Preyer, who documented the fact that when he stuck out his tongue, his seventeen-week-old son did the same. But their observations didn’t stick. They drowned under a wave of opposing consensual theories.

The image of the newborn as egocentric, asocial, solipsistic, and autistic has been replaced by that of an innately convivial being. In other words, as infant researchers point out regularly, the newborn that has come to light in the last thirty years is neither Freud’s nor Piaget’s nor Skinner’s nor Mahler’s. Colwyn Trevarthen writes, “The idea of infant intersubjectivity is no less than a theory of how human minds, in human bodies, can recognize one another’s impulses intuitively with or without cognitive or symbolic elaboration.”30 The word intuitively is a slap in the face, not only to earlier child research, but to the cognitive theorists and analytical philosophers, who have addressed the problem of other minds by postulating convoluted, fully conscious, intellectual gymnastics based on a paradigm of the human mind as a locked room and the other as robotic alien.

Trevarthen, Daniel Stern, Stein Bråten, Gallese, and others articulate a theoretical position that is remarkably similar to the embodied intersubjectivity of Maurice Merleau-Ponty. In Phenomenology of Perception (1945), the philosopher argues forcefully against the understanding of others by analogy. He writes:

A baby of fifteen months opens its mouth if I playfully take one of its fingers between my teeth and pretend to bite it. And yet it has scarcely looked at its face in a glass, and its teeth are not in any case like mine. The fact is that its own mouth and teeth, as it feels them from the inside, are immediately, for it, the apparatus to bite with, and my jaw, as the baby sees it from the outside, is immediately capable of the same intentions. Biting has immediately, for it, an intersubjective significance. It perceives its intentions in its own body, and my body with its own, and thereby my intentions in its own body.31

None of the later researchers has said it better. The infant’s grasp of the other is not reflectively self-conscious; it is an embodied, subliminal relation. Mother and infant make a dyad, the two-in-one unit proposed by John Bowlby. They engage in a Buberian “we space.” Their mutual rhythms of gazing, vocal intonations, touch, and gesture are “attuned” to each other in a Winnicottian dialectic. They make music. They dance. They play. In this dynamic, the classical subject/object distinction begins to blur.

And through these repetitions, this shared bodily music, this emotional signaling back and forth, the immature brain of the infant develops enormously. By the end of the first year, there is a conspicuous enlargement of the brain’s frontal region. In those first twelve months, it is particularly the right orbitofrontal cortex that develops, an area of the brain crucial to affect regulation — how we manage our emotional responses. Allan Schore has written extensively on right hemisphere interactions between child and caretaker and their importance for psychopathology, and there is growing evidence that attachment and failures in attachment between mother and child affect autonomic, neurochemical, and hormonal functions in a growing brain.32 In a 2006 paper in Nature Neuroscience, Mirella Dapretto et al. articulate the mutual reflections this way: “Typically developing children can rely on a right hemisphere mirroring neural mechanism — interfacing with the limbic system via the insula — whereby the meaning of the imitated (or observed) emotion is directly felt and hence understood.”33 Without dwelling excessively on the neurobiology, which is still in a state of flux, it is fair to say that what has emerged is a psychophysiology of the Between, which involves neither nature nor nurture, but both at once, merging without demarcation — genetic temperament and a specific human story become personality over time, a personality shaped by its affective story: the “reproductions of very early … experiences of vital importance.”

Genetic research, the decoding of the genome in particular, which some scientists hoped would serve as a fixed map for all human traits, has proved disappointing and given way to epigenetics. There is little evidence for simple, consistent effects of mutations in specific genes. The human story is far more complex. What does seem clear is that emotional styles or patterns of response, the repetitious forms of our relations to others and their primal meanings, are cocreated and probably begin even before birth. Brain plasticity implies dynamism, but also neural repetitions. There is no meaning without repetition, as Kierkegaard pointed out, and all recollection — implicit and explicit — is a form of repetition.

The early templates of our social relations are encoded prelinguistically in motor-sensory, emotional systems of our bodies, without symbolic representation, but they lie at the bottom of all higher meaning — as part of Freud’s primal pleasure-pain series. They create temporal bodily expectations that become part of unconscious memory and are essential to our relations with others. They become a dialogical code of behavior that appears on the playground. As Rhawn Joseph puts it, “… early emotional learning” may occur “in the right hemisphere completely unbeknownst to the left; learning and associated emotional responding may later be completely inaccessible to the language centers of the brain.…”34 Whether he knows it or not, he is echoing Freud. This is how what is not consciously remembered becomes repetition, an automatic affective response. We may intuitively perceive the actions of others through biological mechanisms, but those perceptions are also determined by past experiences. And so, illusions and delusions enter the playground. The person who was neglected as a child will read the wandering eyes of the other differently from the one who was not. Simply seeing his interlocutor or analyst look away from him may generate fear, sadness, or a feeling of persecution in the man who has felt unloved, and he may then produce countless verbal justifications for why it is perfectly rational for him to feel that way, none of which touch on the actual wound.

What we don’t explicitly remember, we repeat, and these reenactments of learned emotional responses, mechanical in their appearance, are like a musical phrase repeated, sometimes ad nauseam, between patient and analyst in an echoing drama that often goes back to our earliest, unsymbolized relations. It is not easy to bring these dissonant phrases to consciousness, and when we do, they are translations only. Relational pathologies appear as adaptations, after all, defenses against a cacophonous Between. Buber was right: psychological illnesses do grow up between people. And it seems that as much as milk, the neonate craves recognition, the eyes of the other on her, through which she finds her own eyes and mouth and tongue, arms and legs and torso, long before she can identify those various body parts in the mirror as herself.

The melodies of interaction become unconscious memories or, in Freud’s terminology, mnestic traces, the proprioceptive and emotional underground of each one of us, and this music must be distinguished from what Endel Tulving called episodic memories, which can be rooted in a time and place in a narrated personal autobiography.35 As Freud said in The Ego and the Id (1923), things become available to consciousness through corresponding “word presentations.”36 I am convinced that Freud was right about words: self-conscious narrative memories gain their flexibility — motion in time — and their mutability — they are not reliable but continually reconstructed over a lifetime — in language. They depend on our ability to see ourselves as others see us — that’s me in the mirror — a character in my family, a player in the social world, and this otherness becomes most highly articulated in words, which allow us to gather up ourselves in the past, project ourselves into the future, and create fictional worlds that are always in essence dialogical.

Reflective self-consciousness is a form of alienation that brings with it Vygotsky’s imaginary situation, “Let’s pretend the box is a dragon,” but what he did not say is that this form of imagining is predicated on the affective mirroring relations that came before it. Words are the ultimate emissaries because they travel from inside the body to the outside, and they are shared, not privately owned. My mental images, my dream pictures can reach you because I tell you about them. And when you respond, we have a dialogue in the real world or in the special circumstances of the analytic space. And, as M. M. Bakhtin, the Russian theorist, argues, the word is dialogic in itself, charged with personal intentionality in relation to the other and to whole histories of use and context by others, and it cannot be reified, given some fixed, final meaning.37 Even when I am alone, the rhythmical realities of the back and forth between self and other are embodied in me, and their meanings are both implicit and explicit.

This brings us to Freud’s Beyond the Pleasure Principle, and to his grandson, the “good” little one-and-a-half-year-old boy, who does not cry when his mother leaves him or disturb his parents at night, but plays his fort-da game instead, reenacting and mastering his mother’s absence by throwing her away and bringing her back again.38 The “symbolic” significance of this game, the role of the words fort and da as vehicles to cover absence, has, of course, been heavily glossed by Lacan,39 but the game is also rhythmic and physical. The spool attached to a string is thrown and retrieved, a corporeal enactment of a relation in its back-and-forth motion. Freud’s grandson plays the Between game. He is too young to turn his spool and string into a true imaginary situation, which means, according to Vygotsky, “separating the field of meaning from the visual field.”40 The boy cannot say, “Let’s pretend the spool is Mommy, and I have the power.” But there is potent emotional meaning in the fort-da game, as Freud noticed, a meaning which is not yet fully articulated or alienated in Vygotsky’s terms. It is an illusion of mastery, heard in the game’s musical repetition of the words and felt in the physical rhythm of throwing out and reeling in, which brings the boy pleasure. The two levels of motor-sensory action and speech are not separate but integrated.

All mammals play, especially young ones. They cavort and gambol and tumble about. Even pretense is not limited to people. Anyone who has had a dog has witnessed play fights, and great apes and dolphins, who recognize their own images in a mirror, exhibit signs of more sophisticated make-believe. Jaak Panksepp has his own version of Spieltrieb, an instinctual PLAY system in the brain that is fundamentally emotional and social, enhanced by certain synaptic chemistries and depressed by others. Panksepp writes, “In most primates, prior social isolation has a devastating effect on the urge to play. After several days of isolation, young monkeys and chimps become despondent and are likely to exhibit relatively little play when reunited. Apparently their basic needs for social warmth, support and affiliation must be fulfilled first; only when confidence is restored does carefree playfulness return.”41 As Winnicott noticed, some of his patients had to learn how to play in therapy. Before going out onto the Tummelplatz, a person (and some animals) must feel safe and recognized.

Compulsive, involuntary repetitions that have no ludic character, the ones Freud also addresses in Beyond the Pleasure Principle, are often traumatic in origin. And I think they can be characterized without any reference to a death instinct. They are fully somatized, motor-sensory, and sometimes visual memories without representation in language. Flashbacks are the most dramatic illustration of this. But the symptoms of hysteria or conversion disorder are also conditions unarticulated by linguistic symbols. As Freud wrote in 1895, these are moments when “the body join[s] in the conversation.”42 There is a lot of psychoanalytic and neurobiological work on memory and trauma, which I have addressed in greater detail elsewhere, both in my book The Shaking Woman43 and in a paper forthcoming in the journal Neuropsychoanalysis. Let me simply say this: a transference battleground can become a playground under the right emotional circumstances. The holding and containing environments of Winnicott and Bion seem highly relevant here. And, as Freud knew, it is not only profoundly traumatized people who are subject to pathological repetitions in relation to the other or to the analyst, but plain old neurotics, too. My friend Mark Solms, the psychoanalyst and brain researcher who gave this lecture once, says that psychoanalysis is learning to face what you’d rather not know. In my own experience, this is exactly right. It can be painful to look in the mirror, because we are never alone in that image. We are there with our beloveds and our struggles with them. The journey of a therapy is to replace rigid, unrepresented patterns of repressed meaning with freer, more creative and playful meanings that can be articulated in words — to live in repetitions of greater variety.

The fort-da game is a beautiful example of the inherently social and dialogic character of even solitary play, an illustration of the two in one. At eighteen months, the boy is just at the moment of mirror self-recognition. The playground of the child’s repetitions is indeed an intermediate space between his desire and the real outside world of others. The illusionary game is transitional because it will vanish from his life, to be replaced by new games and fantasies in further bids for mastery. But these later forms of play are nevertheless built on the oldest cadences and bodily rhythms of preverbal dialogue.

Perhaps the most striking characters of a developing intermediate region are imaginary friends, those transitional beings that rise up in childhood, spend time as important, sometimes difficult members of a household, and then disappear. The phenomenon is poorly studied, because psychologists endlessly stumble over definitions. Do stuffed animals with personalities count? What about the child who becomes the imaginary companion herself, the little girl who, when asked how often she played with Applejack, answered, “I am Applejack!”44 I will give you two examples of imaginary companions I dredged from a tome called Annual Progress in Child Development, 2000–2001: “Herd of cows: cows of many colors and varying sizes who were often fed and diapered like infants. Discovered when the father accidentally stepped on one.” “Maybe: A human of varying gender whom the child routinely summoned by shouting out the front door of the family house.”45

The cows and Maybe are characters in fictional worlds that serve some emotional purpose in relation to the real world out there. The psychological richness of these figments is apparent even from these brief, completely unelaborated descriptions: the cows wear diapers, and the father was so thoughtless, he trod on one! Maybe regularly changes sex; the make-believe friend isn’t stuck in one or the other, how nice! My sister Liv and I had an enemy, Mrs. Klinch Klonch, a hideous woman who tormented children. The sadistic exploits of this convenient projection gave us hours of delight and a safe reservoir for our own nasty impulses and whatever anger we harbored against our mother. Like Freud’s grandson, we were very good children, and I think we needed our sublimated Mrs. Klinch Klonch, the female ogre who gave us a ready supply of hair-raising, but safe, stories.

Even very young children know that their friends and play figments aren’t “real.” They inhabit Winnicott’s and Merleau-Ponty’s “potential space,” which is not phenomenal reality, the here and now, but an illusory narrative terrain alongside it. Elaborate story worlds are always the product of double consciousness, of the simultaneity of here and there, I and you. In a paper on impaired play in autism, Somogy Varga says it well: “Play is an inherently intersubjective phenomenon and requires the resonative presence of others.”46 We may not literally see the ghosts of those others, but they are there, and their resonance is emotional. The feelings are real. And so, children can frighten themselves with their own pretend stories. Imaginary companions can be mean and tormenting as well as sweet. Once a character has been born on the playground, he or she or Maybe is not always obedient or under our conscious control.

Again we run up against illusion and delusion, health and illness, in life and in analysis. Where are the borders? Winnicott was suspicious of imaginary companions, calling them “other selves of a highly primitive type,”47 but there is little evidence, as Marjorie Taylor points out in her book on the subject, for any generalized conception of these figments as pathological.48 For most children, Freud’s reality principle remains an active constraint. More revealing are the imaginary friends and enemies of neurological patients, who mistake their mirror images for benign or persecutory others, who personify dolls and toys, invent imaginary children or strangers haunting their attics, who believe their loved ones are doubles or identify their own paralyzed arms as the limbs of other people. As the neurologist Todd Feinberg points out, these delusions are positive symptoms, not negative ones, and they occur almost always in connection with right hemisphere damage. But not all right hemisphere lesions result in delusions. No simple locationist theory can account for them, and they cannot be explained away as a simple filling in of absent memories by the intact language areas of the left brain.49

Feinberg articulates the connections among these confabulations, the imaginary companions of childhood, and Freud’s idea of unconscious defenses. In stark opposition to many of his colleagues, Feinberg, like the psychoanalyst, is interested in the subjective content of his patients’ curious narratives. Without their personal narratives, their illnesses remain elusive.

After surgery to drain a brain abscess in his right hemisphere, one of Feinberg’s patients, JK, claimed that his paralyzed left arm belonged to his brother. The members of the young man’s family lived overseas, and he missed them sorely. He told Dr. Feinberg that he had found his brother’s arm in a coffin, and its presence on his body brought him comfort. After the delusion remitted, JK explained as he sobbed that his brother’s arm had made him feel stronger. When Dr. Feinberg asked him how he regarded his good right arm, the patient answered simply, “OK.” Then he explained, “This one [his right arm] doesn’t have a story like this one [he then pointed to his left hand] … it has a story. Like the one I told you … this hand has a background that makes me feel closer to my brother.”50

It seems to me that in this man’s case, his brother’s arm occupied a fictional territory between his illness and real life, and that it served him as a transitional object, a metaphorical part for the whole beloved brother. It is not lost on Feinberg that his patients’ confabulations bear a strong resemblance to myth, dreams, and literature. But I would like to examine the case with a somewhat different emphasis. JK’s right hemisphere injury causes a dysfunction in Freud’s reality principle, the regulatory faculty involved in controlling instinctual discharge, delaying gratification, and facing unpleasant truths, such as: I am paralyzed on my left side, and I can’t possibly have my brother’s arm on my body. The right hemisphere is heavily implicated both in reading affect in other people and in regulating and inhibiting it from within. JK and patients like him appear to be in the grip of wakeful dreams. In my own dreams I have found myself with new or radically altered body parts without thinking for an instant: this is impossible. Although there is forebrain activity in dreaming, with higher cognitive processes at work, the bilateral prefrontal cortex is quiescent.

Freud maintained that there was “a sleeping state of mind”51 and that the dream work goes on during the day, too, though it is not available to consciousness. JK’s delusion is reminiscent of dream condensation. “The direction in which condensations in dreams proceed,” Freud writes, “is determined on the one hand by the rational preconscious relations of the dream thoughts, and on the other by the attraction exercised by visual memories in the unconscious.”52 Abstract thinking becomes concrete and visual in dreams, and their content is borrowed from memory images. I am alone and paralyzed; I remember and want my brother. I want his strength and comfort, his love and companionship. JK’s desire is neatly condensed or compressed into a transformation of his conscious body image, which is also a case of mistaken identity: My brother is here with me now. I have his arm, his embracing arm. There may be a dark side, too: I wish my brother had a dead arm, not I. The brother’s arm is an imaginary internal construct of an intersubjective relation, and the form this fiction takes is not arbitrary. The “dead” arm first appears to him in a coffin, after all, the perfect container for a useless, unresponsive limb. The feeling that appears in this Between region of illusion, however, is terrifyingly real. We are looking at grief.

It is a mistake to isolate neurology from psychiatry, the biological from the psychological, and a further grievous error to exclude subjective experience from illness in general. JK’s brain lesion may explain the disinhibition that allows the hallucinatory process of the dream work into daylight, but it explains neither why the delusion erupts in JK and not in others with similar injuries nor the personal content of that eruption. Many forms of dream consciousness occur in people when they’re not asleep. The logical sequences of rational thought loosen in reverie, in the hypnagogic visions that precede sleep, in the free associations of analysis, and in the making of art.

Artists and art-making always made Freud a little nervous, but in The Interpretation of Dreams, he turns to Schiller, as he did from time to time. He quotes a letter written by the poet and philosopher to a friend: “… where there is a creative mind, Reason — so it seems to me — relaxes its watch upon the gates, and the ideas rush in pell-mell, and only then does it look them through and examine them in a mass.”53 When Reason — and, I must add, the whole body — relaxes, ideas rush in and may attach themselves to paralyzed arms or mirror reflections, to the analyst in the room, or to the story or poem being written. And the imagined others or doubles of these fictions are not always good brother fairies. They may be monsters like Mrs. Klinch Klonch. The fairies and monsters take many forms, and they rise up as ghosts of the forgotten past in the transference and the countertransference and in the play of imaginative fiction. They haunt the intermediate region between me and you.

This is where stories bloom, where the confabulations of literature flower like dreams over our wounds and losses in a bid for mastery. And as Winnicott believed, these illusions can paradoxically make us feel real. After Sigmund Freud died, his daughter Anna dreamed of him. In her analysis of her resurrection dreams, she wrote, “The main scene, in the dreams are always of his tenderness to me, which always takes the form of my own earlier tenderness. In reality he never showed either with the exception of one or two times, which always remained in my memory.”54

Making fictions is something like dreaming while awake. Vague or vivid memories are reconfigured as the artist plays. Imaginary companions appear from unknown regions to keep one company. Like every child, I know that the worlds of my novels are “pretend.” My reality principle is intact, but every fiction I write must be emotionally true to sustain its making, to keep my play in motion. The choices are never, never arbitrary, because beneath the words I write, I feel the old music, tuneful and dissonant, my own melodies and rhythms and ruptures that direct the story I am telling, that propel it forward. And every work of fiction is written for the other, an imagined other, to be sure. The novel is dialogical, created by many voices chattering within and rarely agreeing with one another. In psychotherapy, I must also bump up against a real other, my analyst, on the playground where we wrestle with fairies and monsters in the Land of Between, and where I sometimes find myself in the mirror, transformed — neither monster nor fairy — but a person who can play harder and more freely.

I met Sigmund Freud and Anna Freud not so long ago — in a dream. I was starstruck, filled with happiness. I shook the old man’s hand, and then Miss Freud said to me, “I hope you won’t be like Oscar Wilde and overstay your welcome.” There is no time for my analysis of this dream, which goes very deep, I can assure you, but I do hope I have neither been too wild nor overstayed my welcome.

2011