TWO. The Strange History of Altruism

The great breach that separates the modern Western world from its dominant traditions of religion and metaphysics is the prestige of opinion that throws into question the scale of the reality in which the mind participates. Does it open on ultimate truth, at least potentially or in momentary glimpses, or is it an extravagance of nature, brilliantly complex yet created and radically constrained by its biology and by cultural influence? Prior to any statement about the mind is an assumption about the nature of the reality of which it is part, and which is in some degree accessible to it as experience or as knowledge.

Whoever controls the definition of mind controls the definition of humankind itself, and culture, and history. There is something uniquely human in the fact that we can pose questions to ourselves about ourselves, and questions that actually matter, that actually change reality. What we are, what human beings are as individuals and in the categories we assign to them — our assumptions and conclusions on these subjects have had enormous consequences, which were by no means reliably good.

I should declare at the outset my own bias. I believe it is only prudent to make a very high estimate of human nature, first of all in order to contain the worst impulses of human nature, and then to liberate its best impulses. I do not wish to imply malice or calculation on the part of those who insist on a definition of the mind, therefore the human person, which tends to lower us all in our own estimation. It must be obvious, however, that I consider this tendency in modern and contemporary thinking significant and also regrettable.

There is a characteristic certainty that is present structurally in the kind of thought and writing to which I wish to draw attention, a boldness that diminishes its subject. I will refer to this as parascientific literature. By this phrase I mean a robust, and surprisingly conventional, genre of social or political theory or anthropology that makes its case by proceeding, using the science of its moment, from a genesis of human nature in primordial life to a set of general conclusions about what our nature is and must be, together with the ethical, political, economic and/or philosophic implications to be drawn from these conclusions. Its author may or may not be a scientist himself. One of the characterizing traits of this large and burgeoning literature is its confidence that science has given us knowledge sufficient to allow us to answer certain essential questions about the nature of reality, if only by dismissing them. This confidence was already firmly asserted by Auguste Comte, the father of positivism, in 1848. He saw his age prepared for the social regeneration of mankind: “For three centuries men of science have been unconsciously co-operating in the work. They have left no gap of any importance, except in the region of Moral and Social phenomena. And now that man’s history has been for the first time systematically considered as a whole, and has been found to be, like all other phenomena, subject to invariable laws, the preparatory labours of modern Science are ended.”1 I seriously doubt that any scientist active today, if pressed, would speak of the sufficiency of our present state of knowledge with equal assurance. Yet in literature of this genre, of which Comte is also an ancestor, that tone of certainty persists, an atavistic trait that defies the evolution of its notional subject.

It is and may always be premature to attempt, let alone to assert, a closed ontology, to say we know all we need to know in order to assess and define human nature and circumstance. The voices that have said, “There is something more, knowledge to be had beyond and other than this knowledge,” have always been right. If there is one great truth contained in the Gilgamesh epic and every other epic venture of human thought, scientific or philosophical or religious, it is that the human mind itself yields the only evidence we can have of the scale of human reality. We have had a place in the universe since it occurred to the first of our species to ask what our place might be. If the answer is that we are an interesting accidental outcome of the workings of physical laws which are themselves accidental, this is as much a statement about ultimate reality as if we were to find that we are indeed a little lower than the angels. To say there is no aspect of being that metaphysics can meaningfully address is a metaphysical statement. To say that metaphysics is a cultural phase or misapprehension that can be put aside is also a metaphysical statement. The notion of accident does nothing to dispel mystery, nothing to diminish scale.

I consider the common account of the sense of emptiness in the modern world to be a faulty diagnosis. If there is in fact an emptiness peculiar to our age it is not because of “the death of God” in the non-Lutheran sense in which that phrase is usually understood. It is not because an ebbing away of faith before the advance of science has impoverished modern experience. Assuming that there is indeed a modern malaise, one contributing factor might be the exclusion of the felt life of the mind from the accounts of reality proposed by the oddly authoritative and deeply influential parascientific literature that has long associated itself with intellectual progress, and the exclusion of felt life from the varieties of thought and art that reflect the influence of these accounts. To some extent even theology has embraced impoverishment, often under the name of secularism, in order to blend more thoroughly into a disheartened cultural landscape. To the great degree that theology has accommodated the parascientific world view, it too has tended to forget the beauty and strangeness of the individual soul, that is, of the world as perceived in the course of a human life, of the mind as it exists in time. But the beauty and strangeness persist just the same. And theology persists, even when it has absorbed as truth theories and interpretations that could reasonably be expected to kill it off. This suggests that its real life is elsewhere, in a place not reached by these doubts and assaults. Subjectivity is the ancient haunt of piety and reverence and long, long thoughts. And the literatures that would dispel such things refuse to acknowledge subjectivity, perhaps because inability has evolved into principle and method.

The advance of science as such need not and should not preclude acknowledgment of so indubitable a feature of reality as human subjectivity. Quantum physics has raised very radical questions about the legitimacy of the distinction between subjectivity and objectivity. Indeed, there is now a suggestion of the pervasive importance to the deep structures of reality of something of a kind with consciousness. The elusiveness of the mind is a consequence of its centrality, which is both its potency and its limitation. The difficulty with which objectivity can be achieved, to the extent that it ever is achieved, only demonstrates the pervasive importance of subjectivity. I would argue that the absence of mind and subjectivity from parascientific literature is in some part a consequence of the fact that the literature arose and took its form in part as a polemic against religion. And it has persisted, consciously or not, in a strategy for excluding thought of the kind hospitable to religion from the possibility of speaking in its own terms, making its own case. Metaphysics in general has been excluded at the same time, even from philosophy, which since Comte has been associated with this same project of exclusion. The arts have been radically marginalized. In its treatment of human nature the diversity of cultures is left out of account, perhaps to facilitate the making of analogies between our living selves and our hypothetical primitive ancestors, so central to their argument, who can only have been culturally very remote from us indeed. When history is mentioned, it is usually to point to its follies and errors, which persist to the degree that the light of science has not yet fallen over the whole of human affairs.

There is an odd, undeniable power in this defining of humankind by the exclusion of the things that in fact distinguish us as a species. For this exclusion Comte is not to blame. He famously proposed an elaborate ritualized religion of Humanity, the Grand Being in his parlance. His theory of man and society has no heirs and was in fact shuffled out of positivist thought so promptly and thoroughly that no trace of it can be seen. Comte said that in his new social order, cooperation among people “must be sought in their own inherent tendency to universal love. No calculations of self-interest can rival this social instinct, whether in promptitude and breadth of intuition, or in boldness and tenacity of purpose. True it is that the benevolent emotions have in most cases less intrinsic energy than the selfish. But they have this beautiful quality, that social life not only permits their growth, but stimulates it to an almost unlimited extent, while it holds their antagonists in constant check.”2 To build a grand humanism on the foundation of the sciences was the dream and object of his philosophy.

No theory contemporary with us or influential among us would suggest that humankind is characterized by an “inherent tendency to universal love.” Comte wrote in the bloody period of European revolutions and counterrevolutions, and still he believed in the unrivaled power of the “benevolent emotions.” Our positivist writers on human nature assume that only self-interest can account for individual behavior. Selfish behavior is assumed to be merely reflexive, though it can be deceptive in its forms, for example when the reward toward which it is directed is social approval. And the deep and persisting acceptance of this vision as indisputable truth has had an epochal significance for the way we think. Comte has had his revenge for the decapitation of his philosophic system in leaving behind a word and concept — altruism, selfless devotion to the good of others — that has bedeviled parascientific thought ever afterward.

There are inevitable problems with parascientific argument. At best, arguments based on science, no matter what their source, are vulnerable over the medium term, at least, on account of the very commendable tendency of science to change and advance. At this point, the parascientific genre feels like a rear-guard action, a nostalgia for the lost certitudes of positivism. The physical universe, as it is known to us now, is not accessible to the strategies of comprehension that once seemed so exhaustively useful to us. Nevertheless, that it is accessible to these strategies is the core faith that continues to animate the writers in the parascientific tradition.

Comte, in the words of the eleventh edition of the Encyclopaedia Britannica, foresaw the evolution of human consciousness beyond its theological and metaphysical stages and into positivism. The article says, “When that stage has been reached, not merely the greater part, but the whole, of our knowledge will be impressed with one character, the character, namely, of positivity or scientificalness, and all our conceptions in every part of knowledge will be thoroughly homogeneous.” The impulse to impress all thought with one character is mighty in the literature of parascience, perhaps because it shared its cradle with philosophic monisms like positivism. This is true despite the fact that the traditions of modern thought, however rigorously self-consistent, are not consistent with one another — except in their shared impulse to nullify individual experience, which is perhaps as much a motive as a consequence of their rigor. William James, in an essay on Hegel, says he fears lest that philosopher’s monism, “like all religions of the ‘one thing needful,’ end by sterilizing and occluding the minds of its believers.” Perhaps there is something about a sterilized and occluded mind that is strongly associated with missionary zeal, an impatient need to enlist believers, to bring others into the fold. This zeal is another characteristic of the literature I have called parascientific. It has found in the object and glory of Comte’s system, altruism, an irresolvable anomaly and an irritant.3

If I were a practitioner of the hermeneutics of suspicion, I would note here that, despite their pedagogical tone, these preachments are often intended for those who are in the fold already, meant to reassure them as to the wisdom and actual virtue of their being there. Malthus’s Essay on the Principle of Population took its authority from a formula expressing a supposed ratio of the growth of population to the increase of arable land. His contemporaries saw clearly enough what the implications must be for social policy, that the impulse to intervene in the suffering of the poor, an impulse that was under formidable control among them in any case, could, if acted upon, yield only greater suffering among the poor, given the inevitable limits to population size Malthus had seemed to express so objectively. Darwin, famously influenced by Malthus, made the competition for limited resources an elemental, universal principle of life, and, in The Descent of Man, folded tribal warfare into the processes of evolution, a notion which meshed nicely with colonialism and with the high esteem in which Europeans of the period held themselves. To proceed from Joseph Townsend’s observations of overpopulation and starvation among dogs stranded on an island stocked with goats to the observed fact of starvation among the lower classes in Britain to a formula that makes starvation seem inevitable, as Malthus did — setting aside very practical questions about the distribution of resources, raised by Adam Smith and others — is an instance of parascientific reasoning. To proceed from biological evidence of our origins among the primates and the primitives to an argument for European supremacy is no less an instance of it. Then there are the writings of Sigmund Freud, by far the greatest and the most interesting contribution to parascientific thought and literature ever made. Freud will be the subject of the next chapter. Recent contributors to the genre include Richard Dawkins and Daniel Dennett, who have given their ideas the effective authority that comes with successful popularization.
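The “formula” referred to above is Malthus’s famous opening claim that population, when unchecked, increases geometrically (he supposed a doubling every twenty-five years), while subsistence can increase at best arithmetically. In modern notation — the symbols are mine, a paraphrase rather than Malthus’s own expression — the claim amounts to:

```latex
% A paraphrase of Malthus's ratio, not his own notation:
% unchecked population doubles each period (geometric growth),
% while subsistence gains at most a fixed increment per period
% (arithmetic growth), so their ratio grows without bound.
\begin{align*}
  P_n &= P_0 \cdot 2^{\,n} && \text{population after } n \text{ periods} \\
  S_n &= S_0 + c\,n        && \text{subsistence after } n \text{ periods} \\
  \frac{P_n}{S_n} &\longrightarrow \infty \qquad \text{as } n \to \infty
\end{align*}
```

It was this apparently objective divergence that allowed his contemporaries to read relief of the poor as futile or even harmful.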

However starry-eyed Comte’s vision of humanity may have been, there is something in experience that relates, however inexactly, to benevolence and also altruism. There is something in the nature of most of us that takes pleasure in the thought of a humane and benign social order. The tendency of Malthus, and of Darwin in The Descent of Man, to counter the humane and also the religious objections to warfare and gross poverty puts compassion or conscience out of play — two of the most potent and engrossing individual experiences, both factors in anyone’s sense of right and wrong. This is a suppression of, and an assault on the legitimacy of, an aspect of mind without which the world is indeed impoverished. It is done in the course of proposing an objective, amoral force to which every choice and act is subject. In light of this fact our own sense of things is shown to be delusional, insofar as it might persuade us that our behavior is not essentially self-interested in a narrow sense of that term. By the word “altruism,” altruisme in French, Comte intended a selfless devotion to the welfare of others which was to fill the place of belief in God left empty by the triumph of scientific positivism. In parascientific literature, the word always appears in a context that questions whether altruism is possible or desirable, or whether apparent instances are real, or what survival benefit might be conferred by it that would account for its undeniable persistence among certain insect colonies.

Herbert Spencer, an important earlier contributor to parascientific literature, is in some degree an exception. In his Data of Ethics, published in 1879, he takes up the issue framed by Comte, defending egoism in one chapter and altruism in the next. His argument for egoism is Darwinian: “The law that each creature shall take the benefits and evils of its own nature, be they those derived from ancestry or those due to self-produced modifications, has been the law under which life has evolved thus far; and it must continue to be the law however much farther life may evolve. Whatever qualifications this natural course of action may now or hereafter undergo, are qualifications that cannot, without fatal results, essentially change it. Any arrangements which in a considerable degree prevent superiority from profiting by the rewards of superiority, or shield inferiority from the evils it entails — any arrangements which tend to make it as well to be inferior as to be superior; are arrangements diametrically opposed to the progress of organization and the reaching of a higher life.” He goes on to make a case for altruism based on his understanding of reproduction among “the simplest beings,” which, he says, “habitually multiply by spontaneous fission.” He notes that “though the individuality of the parent infusorium or other protozoon is lost in ceasing to be single, yet the old individual continues to exist in each of the new individuals. When, however, as happens generally with these smallest animals, an interval of quiescence ends in the breaking up of the whole body into minute parts, each of which is the germ of a young one, we see the parent entirely sacrificed in forming progeny.”4

Spencer is using two modes of scientific thought available to him in the late nineteenth century, Darwinian evolution and the observed division of single-cell animals, to explain the origins of two apparently conflicting ethical impulses or values. Having in a sense legitimized them both by means of these etiologies, he expounds on the ethical, social, and intellectual benefits and difficulties associated with each one, proceeding in the way parascientific argument typically proceeds. Some allusion to the science of the moment is used as the foundation for extrapolations and conclusions that fall far outside the broadest definitions of science. It is to Spencer’s credit nevertheless that he acknowledges complexity in this instance. Altruism is a classic problem in the tradition of Darwinist thinking, and Spencer is unusual in granting it reality and a legitimate place in human behavior. It is to be noted, however, that in his considerations of both egoism and altruism, the question might be rephrased in terms of justice or humanity, both of which do from time to time entail some cost to oneself. Justice worth the name tends to exact advantage from anyone who might otherwise enjoy the benefits of relative power. This is a cost which most would be ashamed to notice, and for which they might feel they were fully compensated in the assurance that equity is an active principle. But parascience excludes such subjective considerations.

One might think the insufficiency of any explanatory model in describing essential elements of experience might raise doubts about the model itself, but when the problem of altruism is acknowledged, it is generally addressed by a redefinition of altruism which makes it much more conformable to neo-Darwinist theory. Yet altruism as an idea has not been passive in all this. If I may borrow the language of this genre, it has in some cases parasitized other concepts. By the extremely parsimonious standards of neo-Darwinism, it is the proverbial bad penny, liable to show up anywhere. Michael Gazzaniga reports a question raised by Geoffrey Miller, another evolutionary psychologist. “Most speech appears to transfer useful information from the speaker to the listener, and it costs time and energy. It seems to be altruistic. What fitness benefit can be attained by giving another individual good information? Reviewing the original argument of Richard Dawkins and John Krebs, Miller states, ‘Evolution cannot favor altruistic information-sharing any more than it can favor altruistic food-sharing. Therefore, most animals’ signals must have evolved to manipulate the behavior of another animal for the signaler’s own benefit.’ And other animals have evolved to ignore them, because it didn’t pay to listen to manipulators.” Ergo, it seems, we, alone among the animals, have language. Why the complexity of language and our adeptness in the use of it? Gazzaniga says, “Considering this conundrum, Miller proposes that language’s complexities evolved for verbal courtship. This solves the altruism problem by providing a sexual payoff for eloquent speaking by the male and the female.” So informative speech is at peril of presenting the theorist with an instance in which a speaker confers benefit to another at cost to himself. But wait! There is manipulation! There is sexual payoff! Does this answer the question about the cost of sharing information? No. Nevertheless, our nature is defined as if determined by the nature of hypothetical primitives, humanlike in their ability to have and give information, but finding neither use nor pleasure in doing so.5

This is one instance of the fact that possible altruism can be detected in many kinds of human behavior, and that where it is even apparently detected it is obviated by elaborations of theory that would have consequences for the understanding of important evolutionary issues — pair bonding, for example, or the early history of the animal brain — since animals supposedly had a capacity for manipulation until it was selected against. Charming as the notion is that our proto-verbal ancestors found mates through eloquent proto-speech — oh, to have been a fly on the wall! — it has very rarely been the case that people have had a pool of eligible others to select among on the basis of some pleasing trait. Endogamy or restricted exogamy among small groups, the bartering of daughters, and status considerations all come into play. It often seems that American anthropologists forget how fluid our culture is and how exceptional our marriage customs are, globally and historically. Pyramus and Thisbe, Héloïse and Abelard, Romeo and Juliet, even if they had lived and were able to reproduce, would have been far too exceptional to have influenced the gene pool. And consider those animals who were capable of manipulation and then capable of indifference to it, so that the capacity for it faded away. How did this initial complexity arise? Do animals now have any comparable insight into the motives of others? These neuroscientists tend to say no, though such insight would seem to confer a marked survival advantage. There is more than a little of the just-so story in this theoretical patch on the cost-benefit problem supposedly posed by the phenomenon of human speech. In this way, the specter of altruism, like a lancet fluke in the brain of an ant, distorts Darwinian argument and carries it far beyond the conceptual simplicity for which it is justly famous.

*

I am indebted to Daniel Dennett for the ant and the lancet fluke, a metaphor that comes to mind often as I read in his genre. For example, consider poor Phineas Gage, the railroad worker famous for the accident he suffered and survived more than 150 years ago, an explosion that sent a large iron rod through his skull. Wilson, Pinker, Gazzaniga, and Antonio Damasio all tell this tale to illustrate the point that aspects of behavior we might think of as character or personality are localized in a specific region of the brain, a fact that, by their lights, somehow compromises the idea of individual character and undermines the notion that our amiable traits are intrinsic to our nature.

Very little is really known about Phineas Gage. The lore that surrounds him in parascientific contexts is based on a few anecdotes of uncertain provenance, to the effect that he recovered without significant damage — except to his social skills. Gazzaniga says, “He was reported the next day by the local paper to be pain free.” Now, considering that his upper jaw was shattered and he had lost an eye, and that it was 1848, if he was indeed pain free, this should surely suggest damage to the brain. But, together with his rational and coherent speech minutes after the accident, it is taken to suggest instead that somehow his brain escaped injury, except to those parts of the cerebral cortex that had, till then, kept him from being “‘fitful, irreverent, and grossly profane.’” He was twenty-five at the time of the accident. Did he have dependents? Did he have hopes? These questions seem to me of more than novelistic interest in understanding the rage and confusion that emerged in him as he recovered.6

How oddly stereotyped this anecdote is through any number of tellings. It is as if there were a Mr. Hyde in us all that would emerge sputtering expletives if our frontal lobes weren’t there to restrain him. If any kind of language is human and cultural, it is surely gross profanity, and, after that, irreverence, which must have reverence as a foil to mean anything at all. If to Victorians this behavior seemed like the emergence of the inner savage, this is understandable enough. But from our vantage, the fact that Gage was suddenly disfigured and half blind, that he suffered a prolonged infection of the brain, and that “it took much longer to recover his stamina,” according to Gazzaniga, might account for some of the profanity, which, after all, culture and language have prepared for such occasions. But the part of Gage’s brain where damage is assumed by modern writers to have been localized is believed to be the seat of the emotions. Therefore — the logic here is unclear to me — his swearing and reviling the heavens could not mean what it means when the rest of us do it. Damasio gives extensive attention to Gage, offering the standard interpretation of the reported change in his character. He cites at some length the case of a “modern Phineas Gage,” a patient who, while intellectually undamaged, lost “his ability to choose the most advantageous course of action.” Gage himself behaved “dismally” in his compromised ability “to plan for the future, to conduct himself according to the social rules he previously had learned, and to decide on the course of action that ultimately would be most advantageous to his survival.” The same could certainly be said as well of Captain Ahab. So perhaps Melville meant to propose that the organ of veneration was located in the leg. My point being that another proper context for the interpretation of Phineas Gage might be others who have suffered gross insult to the body, especially those who have been disfigured by it. And in justice to Gage, the touching fact is that he was employed continually until his final illness. No one considers what might have been the reaction of other people to him when his moving from job to job — his only sin besides cursing and irritability — attracts learned disapprobation.7

I trouble the dust of poor Phineas Gage only to make the point that in these recountings of his afflictions there is no sense at all that he was a human being who thought and felt, a man with a singular and terrible fate. In the absence of an acknowledgment of his subjectivity, his reaction to this disaster is treated as indicating damage to the cerebral machinery, not to his prospects, or his faith, or his self-love. It is as if in telling the tale the writers participate in the absence of compassionate imagination, of benevolence, that they posit for their kind. And there is another point as well. This anecdote is far too important to these statements about the mind, and about human nature. It ought not to be the center of any argument about so important a question as the basis of human nature. It is too remote in time, too phrenological in its initial descriptions, too likely to be contaminated by sensationalism, to have any weight as evidence. Are we really to believe that Gage was not in pain during those thirteen years until his death? How did that terrible exit wound in his skull resolve? No conclusion can be drawn, except that in 1848 a man reacted to severe physical trauma more or less as a man living in 2009 might be expected to do. The stereotyped appearance of this anecdote, the particulars it includes and those whose absence it passes over, and the conclusion that is drawn from it are a perfect demonstration of the difference between parascientific thinking and actual science.

So complete a triumph of one mode of thought as the neo-Darwinists envision has the look of desolation to some writers in the field, the same desolation that Comte foresaw. He feared that a wholly rational and scientifical understanding would exclude from the world much that is best in it, and much that is essential to a humane understanding of it. As Comte did before him, E. O. Wilson, a well-respected exemplar of this genre, has proposed a new “consilience” that will enrich both science and the arts and humanities by integrating them, a treaty he proposes in the course of asserting a theory of the human mind that is notably unfriendly to his project. He says, “All that has been learned empirically about evolution in general and mental process in particular suggests that the brain is a machine assembled not to understand itself, but to survive. Because these two ends are basically different, the mind unaided by factual knowledge from science sees the world only in little pieces. It throws a spotlight on those portions of the world it must know in order to live to the next day, and surrenders the rest to darkness. For thousands of generations people lived and reproduced with no need to know how the machinery of the brain works. Myth and self-deception, tribal identity and ritual, more than objective truth, gave them the adaptive edge.”8

When exactly did the mind begin to be aided by “factual knowledge from science”? Where is the evidence that prescientific people see the world “only in little pieces”? Is he speaking of Herodotus? Dante? Michelangelo? Shakespeare? Does knowing “how the machinery of the brain works”—and, in fact, we still do not know how it works — have any implication for the effective use of the mind? Unlike science, the arts and humanities have a deep, strong root in human culture, and have had for millennia. Granting the brilliance of science, there are no grounds for the notion that in its brief history it has transformed human consciousness in the way Wilson describes. The narrowness of Wilson’s view of human history seems rather to suggest a parochialism that follows from a belief in science as a kind of magic, as if it existed apart from history and culture, rather than being, in objective truth and inevitably, their product.

*

For this reason there is in his proposal the implicit assumption that science in its present state is less deeply under unacknowledged cultural influences than it has been historically, as if there were not a history behind his own world view, one that deeply informs his writing. Granting that Wilson’s qualifications vastly exceed Spencer’s and those of many writers in this genre, the stretch from entomology to human nature is long enough, and his faithfulness to parascientific conventions is close enough, that I feel no hesitation in placing On Human Nature and Consilience in the same company with The Data of Ethics and The Descent of Man, rather than with, say, Discourse on Method or The Origin of Species. The cultural contamination to which science is most vulnerable is the kind that seems to the writer not to be cultural at all, to be instead commonsensical, for instance the very Western, very modern exclusion of subjectivity from the account to be made of human nature.

William James proposed an open epistemology, using the kind of language available to psychology before the positivist purge, appealing to experience, to subjectivity. He said,

Whoso partakes of a thing enjoys his share, and comes into contact with the thing and its other partakers. But he claims no more. His share in no wise negates the thing or their share; nor does it preclude his possession of reserved and private powers with which they have nothing to do, and which are not all absorbed in the mere function of sharing. Why may not the world be a sort of republican banquet of this sort, where all the qualities of being respect one another’s personal sacredness, yet sit at the common table of space and time? … Things cohere, but the act of cohesion itself implies but few conditions, and leaves the rest of their qualifications indeterminate…. The parts actually known of the universe may comport many ideally possible complements. But as the facts are not the complements, so the knowledge of one is not the knowledge of the other in anything but the few necessary elements of which all must partake in order to be together at all.9

This is consilient language, too, and aware that it is. Explicitly religious and political language of a kind that would be familiar to a nineteenth-century American audience is a weight-bearing element in the architecture of experience he proposes. He says we know anything in the way and to the degree that we encounter it, and not otherwise. To claim more is to trench upon a deeper identity that is unknowable by us, a system of contingencies that inheres in the object of encounter and cannot be excluded from its reality, and which will not be reached by extrapolation from what we know about it through our experience. Nor may the observer himself be absorbed into this universe, as if in accepting definition it must necessarily define him. This is language that accords uncannily well with the idea of indeterminacy in modern physics, in integrating what we know about reality with the awareness that unknowability is the first thing about reality that must be acknowledged. James published the essay in which it appears in 1882.

*

In his book On Human Nature, published in 1978, E. O. Wilson does at one point acknowledge the great complexity of human behavior. He says, “Only techniques beyond our present imagining could hope to achieve even the short-term prediction of the detailed behavior of an individual human being, and such an accomplishment might be beyond the capacity of any conceivable intelligence.”10 Fair enough. These comments on complexity have the smack of actual science about them because they acknowledge the impact of strategies of measurement and of the interests as well as the mere presence of an observer. He is in error when he associates these things with the Heisenberg uncertainty principle, but for one paragraph he does acknowledge the world of scientific awareness we have lived in for the last century.

Still, here is how he interprets a specific kind of behavior he calls “soft-core” altruism, that is, the kind whose benefits redound to the altruist and near kin rather than to his tribe or nation. That he chooses to give this subject a chapter in a book on human nature is itself a cultural choice, one made by Spencer before him, since the possibility of truly selfless behavior has been a point of dispute in this genre since well before Auguste Comte. Wilson says, “Soft-core altruism … is ultimately selfish. The ‘altruist’ expects reciprocation from society for himself or his closest relatives. His good behavior is calculating, often in a wholly conscious way, and his maneuvers are orchestrated by the excruciatingly intricate sanctions and demands of society. The capacity for soft-core altruism can be expected to have evolved primarily by selection of individuals and to be deeply influenced by the vagaries of cultural evolution. Its psychological vehicles are lying, pretense, and deceit, including self-deceit, because the actor is more convincing who believes that his performance is real.” Michael Gazzaniga has translated this insight into sophomore-speak: “Everyone (except for me, of course) is a hypocrite. It apparently is just easier to see from the outside than the inside. As we just learned, to pull this off, it helps not to consciously know that you are pulling a fast one, because then you will have less anxiety and thus less chance of getting busted.” Steven Pinker takes a different view. There is a book, he says, that “complains that if altruism according to biologists is just helping kin or exchanging favors, both of which serve the interests of one’s genes, it would not really be altruism after all, but some kind of hypocrisy. This too is a mixup…. Genes are a play within a play, not the interior monologue of the players.” So for him our conscious motives are entirely distinct from the biological reality that actually prompts behavior.
This is a high price to pay for exculpation, in its way the ultimate statement of the modernist impulse to discredit the witness of the mind.11

For Wilson, despite his mention of maneuvers and excruciatingly intricate sanctions and vagaries of cultural evolution, complexity is all forgotten. It seems a sociobiologist can bring his perspective to bear on hypothetical actions of a particular kind, without reference to the circumstances in which they might occur, and without what in the circumstances must be called the observer prejudicing the results of his hypothetical observations. No point inquiring of an altruist, should some individual instance of the general phenomenon be found. Should he report other motives than the sociobiologist observed in him, we have already been cautioned against the lying, pretense, deceit, and self-deceit to which his kind — the world over, apparently — are prone. Every seemingly selfless act is really a matter of quid pro quo, whether it occurs in ancient Mesopotamia or modern Japan. We must all know this, since according to Wilson we all use strategies of deception to conceal our true motives from one another. But if we do all know it, how can it be that we expect to deceive one another? What accounts for the impulse to conceal a calculus of fair exchange — the generous act and its socially determined reward — assuming this is what altruism really amounts to?

Herbert Spencer had arrived at the conclusion a century earlier that altruism has its rewards. Yet he concedes the possibility of truly selfless behavior — which, he says, is attended by more than mere reciprocity. “Those [actions] which bring more than equivalents are those not prompted by any thoughts of equivalents. For obviously it is the spontaneous outflow of good nature, not in the larger acts of life only but in all its details, which generates in those around the attachments prompting unstinted benevolence.” Spencer’s posture is every bit as secular as Wilson’s. He is every bit as capable of understanding that altruism brings its returns — public health reforms keep cholera at bay — and yet he can also allow for true generosity. His little portrait of good nature seems almost Dickensian in the context, frank notice of the fact of human community and the pleasures of it, a consideration reliably missing from the sociobiological reckoning of motive and behavior. This may simply be a consequence of his writing more than a century before William Hamilton made his cost-benefit analysis — r × b > c — purporting to show that kinship altruism could be brought under the aegis of self-interest by the understanding that it enhanced the likelihood of survival of one’s genes, the formula by which true monism was achieved. Over the years old altruism, the capstone of the Comtean positivist system, had evolved into an insubordinate datum in the grand scheme of rational self-interest, daring to trouble even Darwin himself, who found it among bees. Finally, by means of a mathematical formula, the truth was revealed and the sutures of the system closed.12

I find it hard to believe that kinship altruism was where the real mystery lay, however, since the wish to live on in one’s descendants is not unusual, even if the words in which it is expressed have lacked imprimatur. Hamilton’s formula may have made the generosity of families toward their members comprehensible to the Darwinian mind, but it only sharpens the problem of stranger altruism, which does often appear when a need accessible to help is made known. Most of us have engaged at some time in the imaginative act of identification with the imperiled or suffering. We rehearse it often enough in ballads and novels and films, presumably refining our capacity for self-deception. I should note that later researchers applied game theory to the problem of stranger altruism and worked through the problem to their own satisfaction. They used the “prisoner’s dilemma,” which, to this poor humanist, seems liable to have prejudiced the outcome, since the given of that game is that each player tries to find a solution least harmful or most beneficial to himself.13
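
The structural objection raised here can be made concrete. In the standard prisoner's dilemma, self-interest is built into the rules themselves: each player ranks outcomes solely by his own payoff, so defection dominates whatever the other player does. The payoff values below are the conventional textbook ones, used purely for illustration, not drawn from any particular study of altruism.

```python
# Conventional prisoner's dilemma payoffs (T > R > P > S). The game's
# premise is that each player evaluates outcomes by his own payoff
# alone — which is exactly the "given" objected to above.
PAYOFF = {  # (my move, other's move) -> my payoff
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def best_reply(others_move):
    """A purely self-interested player's best response to the other's move."""
    return max("CD", key=lambda mine: PAYOFF[(mine, others_move)])

# Defection is the best reply whether the other cooperates or defects,
# so a game theorist who starts here has assumed self-interest, not found it.
print(best_reply("C"), best_reply("D"))  # → D D
```

Because defection strictly dominates, any "solution" to altruism derived within this frame presupposes the self-interested calculus it claims to discover, which is the prejudice of outcome suggested above.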

Wilson’s use of lying, pretense, deceit, and, crucially, self-deceit to explain the reality behind manifest behavior is an important respect in which Wilson has taken on an inflection of the modern that is not yet apparent in Spencer. A central tenet of the modern world view is that we do not know our own minds, our own motives, our own desires. And — an important corollary — certain well-qualified others do know them. I have spoken of the suppression of the testimony of individual consciousness and experience among us, and this is one reason it has fallen silent. We have been persuaded that it is a perjured witness. This is that rare point of convergence among the very diverse schools, Freudianism and behaviorism, for example, that have been called modern, and its consequences have been very great. If I seem to contradict myself, saying in the first place that subjective experience is excluded from this literature and then that it is impugned in it, this contradiction is itself a feature of the genre. Wilson finds in the experience of the altruist “lying, pretense and deceit.” Granting that he has said one thing three times — for emphasis, I suppose — he has nevertheless described the intense and calculating interior state of one who ventures a generous act, a state which, since it includes even self-deceit, disqualifies her or him from reporting another set of intentions. What evidence does Wilson offer for the truth of what he says? None at all. He only impugns contrary evidence, the persisting delusion among us that we ourselves do sometimes act from generous motives, and believe that we see others act from them. This is also typical of parascientific argument.

Altruism has been and still is an issue because Darwinist evolutionary theory has considered it to be one. Why would altruism persist as a trait, when evolution would necessarily select against the conferring of benefit to another at cost to oneself? Hamilton’s rule is thought to have resolved the issue by the power of cost-benefit analysis. A scenario involving the rescue of a drowning child demonstrates, mathematically, without the slightest reference to anything that has happened or might happen in the real world, that a parent would be likely to rescue a child of his own, since that child is presumably the bearer of half his parent’s genetic inheritance — possibly including the genetic predisposition to altruism. To quote Lee Alan Dugatkin, “If grandchildren are in need of rescue, the net benefit received by the altruist is cut in half,” and so on as the degree of consanguinity diminishes.14 Note the impossibly narrow set of factors in play here. The potential cost (c) is not the value — even genetic value — invested in the child by the rescuer and potentially lost by him but only the risk to the rescuer’s own physical well-being. Nor is the potential benefit (b) the emotional one of recovering the child, or even of feeling adequate to a critical situation, but only of enhancing the likelihood that a gene will survive into another generation.
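
The arithmetic behind Dugatkin's remark can be sketched as follows. The relatedness coefficients (r = 0.5 for a child, 0.25 for a grandchild, 0.125 for a first cousin) are the standard genetic ones; the benefit and cost figures are purely illustrative assumptions, since, as noted above, the rule is never tested against real values.

```python
# Hamilton's rule: an altruistic act is favored when r * b > c, where
# r is the coefficient of relatedness, b the benefit to the recipient,
# and c the cost to the altruist, both in units of reproductive fitness.

def hamilton_favors(r, b, c):
    """Return True when kin selection is predicted to favor the act."""
    return r * b > c

# Illustrative (not empirical) fitness values for a rescue scenario.
benefit, cost = 1.0, 0.4

for kin, r in [("child", 0.5), ("grandchild", 0.25), ("first cousin", 0.125)]:
    # The "net benefit" r * b halves with each halving of relatedness,
    # which is Dugatkin's point about grandchildren.
    print(f"{kin}: r*b = {r * benefit:.3f}, "
          f"favored = {hamilton_favors(r, benefit, cost)}")
```

Note how narrow the model is even in sketch form: nothing in it represents grief, attachment, or the rescuer's sense of adequacy to the moment, only the three fitness terms.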

All this is plausible if the experience and testimony of humankind is not to be credited, if reflection and emotion are only the means by which the genes that have colonized us manipulate us for their purposes. How are “we” to be located in all this? What are “we” if we must be bribed and seduced by illusory sensations we call love or courage or benevolence? Why need our genes conjure these better angels, when, presumably, the species of toads and butterflies whose ways are said to demonstrate the power of Hamilton’s rule flourish without them? What are “we” if our hopes of ourselves are higher than, or contrary to, the reality by which we are in fact governed? If these feelings are so strong for us that our true motives awaited the coming of sociobiology in order to be revealed to us, might not the hope of these illusory rewards have begun at some point to function as our true motive, one that would tend to express itself (given the nature of the deception) in ways that were altruistic in the ordinary sense of the word? And, assuming that termites are without illusion, does this possibility not create a problem for Hamilton’s rule, insofar as it is taken to be a description of both termite and human behavior? If these ingratiating deceits and delusions were called by kinder names, they might seem to argue for the kind of thing theology calls ensoulment. The so-called illusions, delusions, deceptions, and self-deceptions about which parascience as a project is so inclined to fret make up a great part of the margin between ourselves and the other creatures that we call our humanity. And, I will argue, they are the implicit subject of that project. So, clearly, they have an important reality. They are, whatever else, the workings of our species’ remarkable brain. To exclude them from consideration in an account of human nature makes no sense at all.

The Hamilton equation describes a circumstance that is entirely theoretical, and inevitably so. Instances of this ideal test, the drowning child imagined by J. B. S. Haldane, no less, the child who is to be rescued or not by kin or strangers, would be far too rare among possibly altruistic events to support generalization. Do elderly mothers go unrescued, being past their childbearing years? Do firefighters run into burning houses looking for kith and kin? In how many instances would those disposed to altruism die in the rescue of strangers whose genetic proclivities were entirely unknown to them? Then how likely would it be that a gene for altruism would persist in a population, given Hamilton’s account of it? Whether the formula can be applied to bees and termites and naked mole rats is a judgment that can be left only to specialists, though the observer effect must be assumed to be in play among specialists, too. And a reader in this literature has no more chance of testing the validity of their observations than she has of splitting a photon.

We have been told to disallow the intense and emotional subjective considerations a human altruist is likely to ponder, and to do so in deference to a mathematical formula that can never be made subject to any test in a human population. It is consistent with the genre of parascience, however, that this formula is applied with great confidence to the nature of our species. Hamilton himself said he “realized from common experience that university people sometimes don’t react well to common sense, and in any case most of them listened to it harder if you first intimidate them with equations.”15 If one may judge from the impact of his equation on his field, this is certainly true. Hamilton’s rule is really the transmogrification of a statement Thomas Huxley had made a century before him. If his formula is taken seriously, it precludes any other conclusion than that altruism, where it occurs at all, occurs within families, on account of the “selfishness” of a gene. That is, it occurs only in circumstances that reduce as far as possible the degree to which the behavior can be called altruistic, not in order to refine the definition of the word but in order to make the phenomenon seem assimilable to a theory.

Spencer’s mention of the “parent infusorium,” Freud’s mention of “the stores of libido by means of which the cells of the soma are attached to one another,” for that matter Auguste Comte’s pondering the physiology of the brain — such things have lent authority to philosophies that in turn deeply influence the thought of subsequent generations.16 And by dint of sheer historical importance they have legitimized a style of argument — the use of fragments of what in the writer’s moment is taken to be scientific truth — to leverage the broadest statements on the largest subjects.

Thinkers like Richard Dawkins and Daniel Dennett attribute the universe in all its complexity to accident. In this view, accident defines over time the range of the possible because circumstances develop which create an effect of optimization, an enhanced suitability of life forms for survival, individual and genetic, in whatever conditions pertain. Not surprisingly, Dennett likens this process to an algorithm. The inevitable iterations of variation on one hand and selection on the other have yielded all that exists or has ever existed. The human mind is one more, very splendid, product of these iterations. Of course Dennett assumes that the human mind was and is profoundly wrong about its origins and nature. This can be true despite the unsentimental workings of natural selection because a new layer has been superadded to reality by Dennett, Dawkins, and others to allow for the anomalous character of the brain/mind. This entity or phenomenon is called the meme, by analogy to the gene. It is a selfish, brain-colonizing personal or cultural concept, idea, or memory that survives by proliferating, implanting itself in other brains. Dawkins says, “Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.” He quotes his colleague N. K. Humphrey: “Memes should be regarded as living structures, not just metaphorically but technically. When you plant a fertile meme in my mind, you literally parasitize my brain, turning it into a vehicle for the meme’s propagation in just the way that a virus may parasitize the genetic mechanism of a host cell. 
And this isn’t just a way of talking — the meme for, say, ‘belief in life after death’ is actually realized physically, millions of times over, as a structure in the nervous systems of individual men the world over.”17

The meme is not a notion I can dismiss out of hand. It seems to me to describe as well as anything does the obdurate persistence and influence of the genre of writing I have called parascientific. This piece of evidence for its reality might not please its originators, who always seem to assume their own immunity from the illusions and distractions that plague the rest of us. Still, aware as I am that Einstein’s cosmological constant was first of all a sort of fudge, in his view a blunder, I am willing to concede that this idea cannot be wholly discredited by its obvious usefulness to those who have proposed it. It does raise questions within the terms of their conceptual universe, however. For example, let us say altruism is a meme, inexplicably persistent, as other traits associated with religion are also. Then is there any need to make a genetic or sociobiological account of it? If its purpose is to have a part in sustaining related memes by which it would also be sustained, such as “family” or “religious community,” would it be dependent on the process of Darwinian selection represented in the theoretical rescue/non-rescue of the drowning child?

To put the question in more general terms: the role of the meme in this school of thought is to account for the human mind and the promiscuous melange of truth and error, science and mythology, that abides in it and governs it, sometimes promoting and sometimes thwarting the best interests of the organism and the species. Then why assume a genetic basis for any human behavior? Memes would appear to have sprung free from direct dependency on our genes, and to be able to do so potentially where they have not yet done so in fact. And assuming that Homo sapiens are unique in this experience of meme colonization, does this theory not set apart something that might be called human nature, that is, certain qualities of humankind that are unique to us, and not to be accounted for by analogies between ourselves and the hymenoptera? Sociobiology, with its dependency on gradualist neo-Darwinism, is difficult to reconcile with these incorporeal, free-floating, highly contagious memes which, in theory, have somehow managed to grow our physical brains to accommodate their own survival and propagation. Only consider the physiological and societal consequences of those big heads of ours in terms of maternal and infant mortality, the helplessness of infants, and the importance to us of culture, among other things. Does not this theory implicitly marginalize gene-based accounts of human behavior?

Memes and Hamiltonian genes do resemble each other, though only as a stone resembles an oyster. They differ in that the first has a status that is something less than hypothetical, while, of course, genes are actual and are thoroughly mapped and studied. The traits of this notional meme align nicely with the Hamiltonian idea of “selfishness,” that is, the idea that, like the gene, the meme impacts the organism’s function and behavior to perpetuate its own existence through generations. Granting that such an entity as a meme would have an interest in the survival of the one species that can serve as the vehicle of its spread and perpetuation, in individual cases this is clearly at odds with the personal survival of human beings. To choose an illustration of the point at random — the Horst Wessel Lied, a song written in celebration of fallen comrades by a young man who was himself assassinated, was, so to speak, an important modern carrier of that ancient meme and killer of young men, dulce et decorum est pro patria mori. I think it is generally believed that the martyrdoms of early Christians did much to anchor their religion in the culture of the Mediterranean world. The best case to be made for the correctness of the notion that there are indeed memes, and that they do indeed perpetuate themselves in human culture over time, would be the potency they acquire in the very fact of the destruction of the young and strong. When factions or nations turn on each other, those who win lose from the point of view of the species, in destroying the genetic wealth of their adversaries, and no “selfishness,” however leveraged by equations, intervenes to limit the losses we as a species suffer.

My point is that, despite a superficial resemblance between the hypothetical meme and the hypothetically “selfish” gene, owed no doubt to their shared intellectual paternity, each theory obviates the other, or at best creates any number of disputed boundaries between them. This would be interesting and nothing more than interesting if the neo-Darwinism of Hamilton, Dawkins, Dennett, and others did not offer itself as a monism, as the one thing needful, the one sufficient account for literally everything. If altruism has seemed to be the ragged edge of Darwinism, a worry to T. H. Huxley, finally tucked out of sight by Hamilton’s formula, why should they be so unperturbed by the fact that these mighty memes, granting their existence here for the purposes of argument, provide an alternative account for the whole of human behavior? Why war? Dulce et decorum est. Why altruism? It is more blessed to give than to receive. Whence the bonds of family? I love all the dear silver that shines in her hair, and the brow that is wrinkled and furrowed with care.

Ah, but what is the origin of these memes? Once a shaman was right about where game was to be found, and religion was up and running. But a good many human behaviors and cultural patterns run counter to religion or have no clear source in it. In any case, a stickler might wonder whether some crude metaphysics would not have lurked behind the role of shaman and the idea of consulting him, if shamanism itself ought not to be called a meme. For that matter, one might wonder if some unacknowledged metaphysics lurks behind the parascientific positing of these immortal, incorporeal destinies that possess us to their own inscrutable ends, rather in the manner of the gods of Greek mythology. The question of origins bears a certain similarity to the questions raised by E. O. Wilson’s remarks on altruism. What is the nature of the reality we inhabit if we have to conceal self-interested motives? If nature runs on self-interest to its own ultimate enhancement and ours, where is the shame in it? Isn’t shame as extraneous to the workings of the world, understood from a Hamiltonian perspective, as generosity itself would be? We might be tempted to patch in a meme here — I was hungry and you fed me, I was naked and you clothed me — but if we did, then we would have proposed a sufficient account of altruism, making Hamilton’s equation entirely unnecessary. And, since the benefactor would have been acting purely at the behest of the meme, we would also have excluded deception and self-deception as factors in the altruistic act.

The neo-Darwinism of Hamilton and others shares one consequence with meme theory: both of them represent the mind as a passive conduit of other purposes than those the mind ascribes to itself. It reiterates that essential modernist position, that our minds are not our own. The conviction so generally shared among us, that we think in some ordinary sense of that word, that we reason and learn and choose as individuals in response to our circumstances and capacities, is simply — the one, crucial point of agreement between these otherwise incompatible theories — a persisting illusion serving a force or a process that is essentially unknown and indifferent to us.

*

The comparison that is salient here is between the accidental and the intentional in terms of their consequences for the interpretation of anything. In the course of my reading, I have come to the conclusion that the random, the accidental, have a strong attraction for many writers because they simplify by delimiting. Why is there something rather than nothing? Accident. Accident narrows the range of appropriate strategies of interpretation, while intention very much broadens it. Accident closes on itself, while intention implies that, in and beyond any particular fact or circumstance, there is vastly more to be understood. Intention is implicitly communicative, because an actor is described in any intentional act. Why is the human brain the most complex object known to exist in the universe? Because the elaborations of the mammalian brain that promoted the survival of the organism overshot the mark in our case. Or because it is intrinsic to our role in the universe as thinkers and perceivers, participants in a singular capacity for wonder as well as for comprehension.

The anomalies that plague accident as an explanatory model — the human mind, most notably — are no problem at all if it is assumed that accident does not explain us, that we are meant to be human, that is, to be aware and capable in the ways the mind — and how else to describe the mind? — makes us aware and capable. And what are those ways? Every poem, theory, philanthropy, invention, scandal, hoax, and crime of violence tells us more. No aspect of reality, from this point of view, need be simplified or limited to fit an explanatory model. One would think that the inadequacy of any model to deal with the complexity of its subject would make its proponents a bit tentative, but in fact the tendency of the kind of thought I wish to draw attention to is to deny the reality of phenomena it cannot accommodate, or to scold them for their irksome, atavistic persistence.

This is surely an odd way to proceed, especially in light of the fact that these schools of thought regard themselves as scientific, or as accepting of certain scientific insights that must lead any honest and enlightened person to embrace their view of things. The Berkeley philosopher John Searle objects to the commonly held conception that “suggests that science names a specific kind of ontology, as if there were a scientific reality that is different from, for example, the reality of common sense.” He says, “I think that is profoundly mistaken.” And he says, “There is no such thing as the scientific world. There is, rather, just the world, and what we are trying to do is describe how it works and describe our situation in it.”18 This seems to me so true that I would consider the statement obvious, or, as the philosophers say, trivial, if it did not make a claim, necessary in the circumstances, for the relevance to the study of mind of the fullness of mental experience.

John Searle is no transcendentalist. I do not wish to seem to recruit him in support of the religious position I have just declared. I do, however, take comfort in the fact that his objections to contemporary philosophic thinking about consciousness and mental phenomena are very like mine. He says of certain arguments offered by philosophers of the materialist school, “What they suggest is that these people are determined to try to show that our ordinary common-sense notions of the mental do not name anything in the real world, and they are willing to advance any argument that they can think of for this conclusion.”19 This is not a new state of affairs, nor one limited to Searle’s colleagues or to writers in fields related to his. The subject that interests me is in fact the persistence, through the very long period we still call “modern” and into the present, of something like a polemic against the mind — not mind as misnomer, nor as the construct of an untenable dualism, but mind in more or less the fullest sense of the word.

The resourcefulness Searle speaks of, the recourse to “any argument they can think of,” seems to me sometimes to be the unifying principle behind an apparent diversity of important schools and theories. Anthropology, positivism, Nietzscheanism in its various forms, Freudian and behaviorist psychology have all brought their insights to bear on this subject.

The word “modern” is itself a problem, since it implies a Promethean rescue from whatever it was that went before, a rupture so complete as to make context irrelevant. Yet if one were to imagine a row of schoolroom modernists hanging beside the schoolroom poets, Marx, Nietzsche, and Wellhausen beside Bryant, Longfellow, and Whittier, one would notice a marked similarity among them of pince-nez and cravat. The modern has been modern for a very long time. As a consequence of its iconic status, the contemporary remains very much in its shadow. Little that is contemporary is not also modern, and little that is modern departs as cleanly from its precursors as myth would have us believe. In one important particular, however, there seems to have been an authentic modern schism whose consequences are persistent and profound. Our conception of the significance of humankind in and for the universe has shrunk to the point that the very idea we ever imagined we might be significant on this scale now seems preposterous. These assumptions about what we are and are not preclude not only religion but also the whole enterprise of metaphysical thought. That the debate about the nature of the mind has tended to center on religion is a distraction which has nevertheless exerted a profound influence on the more central issue. While it may not have been true necessarily, it has been true in fact that the renunciation of religion in the name of reason and progress has been strongly associated with a curtailment of the assumed capacities of the mind.
