Conclusion


The Electron, the Elements and the Elusive Self


The Cavendish Laboratory, in the University of Cambridge, England, is arguably the most distinguished scientific institution in the world. Since it was established in the late nineteenth century, it has produced some of the most consequential and innovative advances of all time. These include the discovery of the electron (1897), the discovery of the isotopes of the light elements (1919), the splitting of the atom (also in 1919), the discovery of the proton (1920) and of the neutron (1932), the unravelling of the structure of DNA (1953), and the discovery of pulsars (1967). Since the Nobel Prize was instituted in 1901, more than twenty Cavendish and Cavendish-trained physicists have won the prize for either physics or chemistry.1

Established in 1871, the laboratory opened its doors three years later. It was housed in a mock-Gothic building in Free School Lane, boasting a façade of six stone gables and a warren of small rooms connected, in Steven Weinberg’s words, ‘by an incomprehensible network of staircases and corridors’.2 In the late nineteenth century, few people knew, exactly, what ‘physicists’ did. The term itself was relatively new. There was no such thing as a publicly funded physics laboratory – indeed, the idea of a physics laboratory at all was unheard-of. What is more, the state of physics was primitive by today’s standards. The discipline was taught at Cambridge as part of the mathematical tripos, which was intended to equip young men for high office in Britain and the British empire. In this system there was no place for research: physics was in effect a branch of mathematics and students were taught how to solve problems, so as to equip them to become clergymen, lawyers, schoolteachers or civil servants (i.e., not physicists).3 During the 1870s, however, as the four-way economic competition between Germany, France, the United States and Britain turned fiercer – mainly as a result of the unification of Germany, and the advances of the United States in the wake of the Civil War – the universities expanded and, with a new experimental physics laboratory being built in Berlin, Cambridge was reorganised. William Cavendish, the seventh duke of Devonshire, a landowner and an industrialist, whose ancestor Henry Cavendish had been an early authority on gravity, agreed to fund a laboratory provided the university promised to found a chair in experimental physics. When it was opened, the Duke was presented with a letter informing him, in elegant Latin, that the laboratory was to be named in his honour.4

The new laboratory became a success only after a few false starts. Having tried – and failed – to attract first William Thomson, later Lord Kelvin, from Glasgow (he was the man who, among other things, conceived the idea of absolute zero and contributed to the second law of thermodynamics), and second Hermann von Helmholtz, from Germany (who had scores of discoveries and insights to his credit, including an early notion of the quantum), Cambridge finally offered the directorship to James Clerk Maxwell, a Scot and a Cambridge graduate. This proved a fortunate choice: Maxwell turned out to be what is generally regarded as ‘the greatest physicist between Newton and Einstein’.5 Above all, Maxwell finalised the mathematical equations which provided a fundamental understanding of both electricity and magnetism. These explained the nature of light but also led the German physicist Heinrich Hertz at Karlsruhe in 1887 to identify electromagnetic waves, now known as radio.

Maxwell also established a research programme at the Cavendish, designed to devise an accurate standard of electrical measurement, in particular the unit of electrical resistance, the ohm. Because of the huge expansion of telegraphy in the 1850s and 1860s, this was a matter of international importance, and Maxwell’s initiative boosted Britain to the head of this field and at the same time established the Cavendish as pre-eminent in dealing with practical problems and devising new forms of instrumentation. It was this latter fact, as much as anything, that helped the laboratory play such a crucial role in the golden age of physics, between 1897 and 1933. Cavendish scientists were said to have ‘their brains in their fingertips’.6

Maxwell died in 1879 and was succeeded by Lord Rayleigh, who built on his work, but retired after five years to his estates in Essex. The directorship then passed, somewhat unexpectedly, to a twenty-eight-year-old, Joseph John Thomson, who had, despite his youth, already made a reputation in Cambridge as a mathematical physicist. Universally known as ‘J. J.’, Thomson, it can be said, kick-started the second scientific revolution, which created the world we have now. The first scientific revolution, it will be recalled from Chapter 23, occurred – roughly speaking – between the astronomical discoveries of Copernicus, published in 1543, and those of Isaac Newton, centring on gravity and published in 1687 as Principia Mathematica. The second scientific revolution would revolve around new findings in physics, biology, and psychology.

But physics led the way. It had been in flux for some time, due mainly to a discrepancy in the understanding of the atom. As an idea, the atom – an elemental, invisible and indivisible substance – went back to ancient Greece, as we have seen. It was built on in the seventeenth century, when Newton conceived it as rather like a minuscule billiard ball, ‘hard and impenetrable’. In the early decades of the nineteenth century, chemists such as John Dalton had been forced to accept the theory of atoms as the smallest units of elements, in order to explain chemical reactions – how, for example, two colourless liquids, when mixed together, immediately formed a white solid or precipitate. Similarly, it was these chemical properties, and the systematic way they varied, combined with their atomic weights, that suggested to the Russian Dimitri Mendeleyev, playing ‘chemical patience’ with sixty-three cards at Tver, his estate 200 miles from Moscow, the layout of the periodic table of elements. This has been called ‘the alphabet out of which the language of the universe is composed’ and suggested, among other things, that there were elements still to be discovered. Mendeleyev’s table of elements would dovetail neatly with the discoveries of the particle physicists, linking physics and chemistry in a rational way and providing the first step in the unification of the sciences that would be such a feature of the twentieth century.

Newton’s idea of the atom was further refined by Maxwell, when he took over at the Cavendish. In 1873 Maxwell introduced into Newton’s mechanical world of colliding miniature billiard balls the idea of an electro-magnetic field. This field, Maxwell argued, ‘permeated the void’ – electric and magnetic energy ‘propagated through it’ at the speed of light.7 Despite these advances, Maxwell still thought of atoms as solid and hard and essentially mechanical.

The problem was that atoms, if they existed, were too small to observe with the technology then available. Things only began to change with Max Planck, the German physicist. As part of the research for his PhD, Planck had studied the conduction of heat and the second law of thermodynamics. This law was initially identified by Rudolf Clausius, a German physicist who had been born in Poland, though Lord Kelvin had also had some input. Clausius had first presented his law in 1850, and it stipulates what anyone can observe: that energy dissipates as heat when work is done and, moreover, that heat cannot be reorganised into a useful form. This otherwise common-sense observation has very important consequences. One is that since the heat produced – energy – can never be collected up again, can never be useful or organised, the universe must gradually run down into complete randomness: a decayed house never puts itself back together, a broken bottle never reassembles of its own accord. Clausius’ word for this irreversible, increasing disorder was ‘entropy’, and he concluded that the universe would eventually die. In his PhD, Planck grasped the significance of this. The second law shows in effect that time is a fundamental part of the universe, or physics. This book began, in the Prologue, with the discovery of deep time, and Planck brings us full circle. Whatever else it may be, time is a basic element of the world about us, and is related to matter in ways we do not yet fully understand. Time means that the universe is one-way only, and that therefore the Newtonian, mechanical, billiard-ball picture must be wrong, or at best incomplete, for it allows the universe to operate equally in either direction, backwards and forwards.8

But if atoms were not billiard balls, what were they?

The new physics came into view one step at a time, and emerged from an old problem and a new instrument. The old problem was electricity – what, exactly, was it?21 Benjamin Franklin had been close to the mark when he had likened it to a ‘subtile fluid’ but it was hard to go further because the main naturally-occurring form of electricity, lightning, was not exactly easy to bring into the laboratory. An advance was made when it was noticed that flashes of ‘light’ sometimes occurred in the partial vacuums that existed in barometers. This brought about the invention of a new – and as it turned out all-important – instrument: glass vessels with metal electrodes at either end. Air was pumped out of these vessels, creating a vacuum, before gases were introduced, and an electrical current passed through the electrodes (a bit like lightning) to see what happened, how the gases might be affected. In the course of these experiments, it was noticed that if an electric current were passed through a vacuum, a strange glow could be observed. The exact nature of this glow was not understood at first, but because the rays emanated from the cathode end of the electrical circuit, and were absorbed into the anode, Eugen Goldstein called them Kathodenstrahlen, or cathode rays. It was not until the 1890s that three experiments stemming from cathode-ray tubes finally made everything clear and set modern physics on its triumphant course.

In the first place, in November 1895, Wilhelm Röntgen, at Würzburg, observed that when the cathode rays hit the glass wall of a cathode-ray tube, highly penetrating rays were emitted, which he called X-rays (because x, for a mathematician, signified the unknown). The X-rays caused various metals to fluoresce and, most amazingly, were found to pass through the soft tissue of his hand, to reveal the bones within. A year later, Henri Becquerel, intrigued by the fluorescing that Röntgen had observed, decided to see whether naturally-fluorescing elements had the same effect. In a famous but accidental experiment, he put some uranium salt on a number of photographic plates, and left them in a closed (light-tight) drawer. Four days later, he found images on the plates, given off by what we now know was a radio-active source. Becquerel had discovered that ‘fluorescing’ was naturally occurring radio-activity.9

But it was Thomson’s 1897 discovery which capped everything, produced the first of the Cavendish’s great successes and gave modern physics its lift-off, into arguably the most exciting and important intellectual adventure of the modern world. In a series of experiments J. J. pumped different gases into the glass tubes, passed an electric current, and then surrounded them either with electrical fields or with magnets. As a result of this systematic manipulation of conditions, Thomson convincingly demonstrated that cathode ‘rays’ were in fact infinitesimally minute particles erupting from the cathode and drawn to the anode. Thomson further found that the particles’ trajectory could be altered by an electric field and that a magnetic field shaped them into a curve.10 More important still, he found that the particles were lighter than hydrogen atoms, the smallest known unit of matter, and exactly the same whatever the gas through which the discharge passed. Thomson had clearly identified something fundamental – this was in fact the first experimental establishment of the particulate theory of matter.

The ‘corpuscles’, as Thomson called these particles at first, are today known as electrons. It was the discovery of the electron, and Thomson’s systematic examination of its properties, that led directly to Ernest Rutherford’s further breakthrough, a decade later, in conceiving the configuration of the atom as a miniature ‘solar system’, with the tiny electrons orbiting the massive nucleus like planets around the sun. In doing this, Rutherford demonstrated experimentally what Einstein discovered inside his head and revealed in his famous calculation, E = mc² (1905), that matter and energy are essentially the same.11 The consequences of these insights and experimental results – which included thermonuclear weapons, and the ensuing political stand-off known as the Cold War – fall outside the time-frame of this book.22 But Thomson’s work is important for another reason that does concern us here.

He achieved the advances that he did by systematic experimentation. At the beginning of this book, in the Introduction, it was asserted that the three most influential ideas in history have been the soul, the idea of Europe, and the experiment. It is now time to support this claim. It is most convincingly done by taking these ideas in reverse order.

It is surely beyond reasonable doubt that, at the present time, and for some considerable time in the past, the countries that make up what we call the West – traditionally western Europe and North America in particular, but with outposts such as Australia – have been the most successful and prosperous societies on earth, in terms of both the material advantages enjoyed by their citizens and the political and therefore moral freedoms they have. (This situation is changing now but these sentiments are true as far as they go.) These advantages are linked, intertwined, in so far as many material advances – medical innovations, printing and other media, travel technology, industrial processes – bring with them social and political freedoms in a general process of democratisation. And these are the fruit, almost without exception, of scientific innovations based on observation, experimentation, and deduction. Experimentation is all-important here as an independent, rational (and therefore democratic) form of authority. And it is this, the authority of the experiment, the authority of the scientific method, independent of the status of the individual scientist, his proximity to God or to his king, and as revealed and reinforced via myriad technologies, which we can all share, that underlies the modern world. The cumulative nature of science also makes it a far less fragile form of knowledge. This is what makes the experiment such an important idea. The scientific method, apart from its other attractions, is probably the purest form of democracy there is.

But the question immediately arises: why did the experiment occur first and most productively in what we call the West? The answer to this shows why the idea of Europe, the set of changes that came about between, roughly speaking, AD 1050 and 1250, was so important. These changes were covered in detail in Chapter 15 but to recap the main points here, we may say that Europe was fortunate in not being devastated to the same extent as Asia was by the plague; that it was the first landmass that was ‘full’ with people, bringing about the idea of efficiency as a value, because resources were limited; and that individuality emerged out of this, and out of developments in the Christian religion, which created a unified culture, which in turn helped germinate the universities where independent thought could flourish and amid which the ideas of the secular and of the experiment were conceived.

One of the most poignant moments in the history of ideas surely came in the middle of the eleventh century. In 1065 or 1067 the Nizamiyah was founded in Baghdad. This was a theological seminary and its establishment brought to an end the great intellectual openness in Arabic/Islamic scholarship, which had flourished for two to three hundred years. Barely twenty years later, in 1087, Irnerius began teaching law at Bologna and the great European scholarship movement was begun. As one culture ran down, another began to find its feet. The fashioning of Europe was the greatest turning-point in the history of ideas.

It may seem odd to some readers that the ‘soul’ should be a candidate as the third of the most influential ideas in history. Surely the idea of God is more powerful, more universal, and in any case isn’t there a heavy overlap? Certainly, God has been a very powerful idea throughout history, and indeed continues to be so across many parts of the globe. At the same time, there are two good reasons why the soul has been – and still is – a more influential and fecund idea than the Deity itself.

One is that, with the invention of the afterlife (which not all religions have embraced, and without which any entity such as the soul would have far less meaning), the way was open for organised religions the better to control men’s minds. During late antiquity and the Middle Ages, the technology of the soul, its relation with the afterlife, with the Deity, and most importantly with the clergy, enabled the religious authorities to exercise extraordinary power. It is surely the idea of the soul which, though it enriched men’s minds immeasurably over many centuries, nevertheless kept thought and freedom back during those same centuries, hindering and delaying progress, keeping the (largely) ignorant laity in thrall to an educated clerisy. Think of Friar Tetzel’s assurance that one could buy indulgences for souls in purgatory, that they would fly to heaven as soon as the coin dropped in the plate. The abuses of what we might call ‘soul technology’ were one of the main factors leading to the Reformation which, despite John Calvin in Geneva, took faith overall away from the control of the clergy, and hastened doubt and non-belief (as was discussed in Chapter 22). The various transformations of the soul (from being contained in semen, in Aristotle’s Greece, the tripartite soul of the Timaeus, the medieval and Renaissance conception of Homo duplex, the soul as a woman, a form of bird, Marvell’s dialogue between the soul and the body, Leibniz’s ‘monads’) may strike us as quaint now, but they were serious issues at the time and important stages on the way to the modern idea of the self. The seventeenth-century transformation – from the humours, to the belly and bowels, to the brain as the locus of the essential self – together with Hobbes’ argument that no ‘spirit’ or soul existed, was another important step, as was Descartes’ reconfiguration of the soul as a philosophical as opposed to a religious notion.12 The transition from the world of the soul (including the afterlife) to the world of the experiment (here and now), which occurred first and most thoroughly in Europe, describes the fundamental difference between the ancient world and the modern world, and still represents the most important change in intellectual authority in history.

But there is another – quite different – reason why, in the West at least, the soul is important, and arguably more important and more fertile than the idea of God. To put it plainly, the idea of the soul has outlived the idea of God; one might even say it has evolved beyond God, beyond religion, in that even people without faith – perhaps especially people without faith – are concerned with the inner life.

We can see the enduring power of the soul, and at the same time its evolving nature, at various critical junctures throughout history. It has revealed this power through one particular pattern that has repeated itself every so often, albeit each time in a somewhat different form. This may be characterised as a repeated ‘turning inwards’ on the part of mankind, a continual and recurrent effort to seek the truth by looking ‘deep’ within oneself, what Dror Wahrman calls our ‘interiority complex’. The first time this ‘turning in’ took place (that we know about) was in the so-called Axial Age (see Chapter 5), very roughly speaking around the seventh to fourth centuries BC. At that time, more or less simultaneously in Palestine, in India, in China, in Greece and very possibly in Persia, something similar was occurring. In each case, established religion had become showy and highly ritualistic. In particular a priesthood had everywhere arisen and had arrogated to itself a highly privileged position: the clerisy had become an inherited caste which governed access to God or the gods, and which profited – in both a material and sacred sense – from its exalted position. In all of the above countries, however, prophets (in Israel) or wise men (the Buddha and the writers of the Upanishads in India, Confucius in China) arose, denounced the priesthood and advocated a turning inward, arguing that the way to genuine holiness was by some form of self-denial and private study. Plato famously thought that mind was superior to matter.13

These men led the way by personal example. Much the same message was preached by Jesus and by St Augustine. Jesus, for example, emphasised God’s mercy, and insisted on an inner conviction on the part of believers rather than the outward observance of ritual (Chapter 7). St Augustine (354–430) was very concerned with free will and said that humans have within themselves the capacity to evaluate the moral order of events or people and can exercise judgement, to decide their priorities. According to St Augustine, to look deep inside ourselves and to choose God was to know God (Chapter 10). In the twelfth century, as was discussed in Chapter 16, there was another great turning inward in the universal Roman Catholic Church. There was a growing awareness that inner repentance was what God wanted, not external penance. This was when confession was ordered to be made regularly by the Fourth Lateran Council. The Black Death, in the fourteenth century, had a similar impact. The very great number of deaths made people pessimistic and drove them inwards towards a more private faith (many more private chapels and charities were founded in the wake of the plague, and there was a rise in mysticism). The rise of autobiography in the Renaissance, what Jacob Burckhardt called the ‘abundance of pictures of the inmost soul’, was yet another turning in. In Florence, at the end of the fifteenth century, Fra Girolamo Savonarola, convinced that he had been sent by God ‘to aid the inward reform of the Italian people’, sought the regeneration of the church in a series of jeremiads, terrible warnings of the evil to come unless this inward reform was immediate and total. And of course the Protestant Reformation of the sixteenth century (Chapter 22) was conceivably the greatest ‘turning in’ of all time. In response to the Pope’s claim that the faithful could buy relief for their relatives’ souls ‘suffering in purgatory’, Martin Luther finally exploded and argued that men did not need the intervention of the clergy to receive the grace of God, that the great pomp of the Catholic Church, and its theological stance as ‘intercessor’ between man and his maker, was a nonsense and nowhere supported by the scriptures. He urged a return to ‘true inward penitence’ and said that above all inner contrition was needed for the proper remission of sins: an individual’s inner conscience was what mattered most. In the seventeenth century, Descartes famously turned in, arguing that the only thing man could be certain of was his inner life, in particular his doubt. Late-eighteenth-century/early-nineteenth-century romanticism was likewise a turning-in, a reaction against the Enlightenment, the eighteenth-century attitude/idea that the world could best be understood by science. On the contrary, said the romantics, the one unassailable fact of human experience is inward human experience itself. Following Vico, both Rousseau (1712–1778) and Kant (1724–1804) argued that, in order to discover what we ought to do, we should listen to an inner voice.14 The romantics built on this, to say that everything we value in life, morality above all, comes from within. The growth of the novel and the other arts reflected this view.

The romantics in particular show very clearly the evolution of the idea of the soul. As J. W. Burrow has observed, the essence of romanticism, and one might say of all the other ‘turnings in’ throughout history, is the notion of Homo duplex, of a ‘second self’, a different – and very often a higher or better – self, whom one is trying to discover, or release. Arnold Hauser put it another way: ‘We live on two different levels, in two different spheres . . . these regions of being penetrate one another so thoroughly that the one can neither be subordinated to nor set against the other as its antithesis. The dualism of being is certainly no new conception, and the idea of the coincidentia oppositorum is quite familiar to us . . . but the double meaning and duplicity of existence . . . had never been experienced so intensively as now [i.e., in romantic times].’15

Romanticism, and its sense of a ‘second self’ was – as we have seen – one of the factors which Henri Ellenberger included in The Discovery of the Unconscious, his massive work on the royal road that led to depth psychology and culminated with the ideas of Sigmund Freud, Alfred Adler and Carl Jung. The unconscious is the last great turning in, an attempt, as discussed in the previous chapter, to be scientific about our inner life. But the fact that it failed is important in a wider sense than its inadequacy as treatment, as we shall now see.

Romanticism, the will, Bildung, Weber’s sense of vocation, the Volksgeist, the discovery of the unconscious, Innerlichkeit . . . the theme of the inner life, the second, inner, or, as Kant called it, the higher self, runs as strongly through nineteenth-century thought as it does throughout history, if not more strongly. A predominantly German concern with the irrational, it has been seen by some as forming the ‘deep background’ to the horrors of Nazism in the twentieth century (with the creation of the superior human being – the individual who has overcome his limitations by the exercise of will – as the goal of human history). That is not a trivial matter but it is not the main concern here. Instead, we are more interested in what this helps us conclude about the history of ideas. It surely confirms the pattern discussed above, of man’s recurring attempts to look deep inside himself in search of . . . God, fulfilment, catharsis, his ‘true’ motives, his ‘real’ self.

Alfred North Whitehead famously once remarked that the history of Western thought consisted of a series of footnotes to Plato. At the end of our long journey, we can now see that, whether Whitehead was being rhetorical or ironical, he was at best half right. In the realm of ideas, history has consisted of two main streams (I am oversimplifying here, but this is the Conclusion). There has been the history of ‘out there’, of the world outside man, the Aristotelian world of observation, exploration, travel, discovery, measurement, experiment and manipulation of the environment, in short the materialistic world of what we now call science. While this adventure has hardly been a straight line – advances have been piecemeal at times, and even held up or hindered for centuries on end, mainly by fundamentalist religions – it must be counted a success overall. Few would doubt that the material progress of the world, or much of it, is there for all to see. This advance continued, in accelerated mode, in the twentieth century.

The other main stream in the history of ideas has been the exploration of man’s inner life, his soul and/or second self, what we might label (with Whitehead) Platonic – as opposed to Aristotelian – concerns. This stream may itself be divided into two. In the first place, there has been the story of man’s moral life, his social and political life, his development of ways to live together, and this must be counted a qualified success, or at least as having a predominantly positive outcome. The broad transition in history from autocratic monarchies, whether temporal or papal, through feudalism, to democracy, and from theocratic to secular circumstances, has certainly brought greater freedoms and greater fulfilment to greater numbers of people (generally speaking, of course – there are always exceptions). The various stages in this unfolding process have been described in the pages above. Although political and legal arrangements vary around the world, all peoples have a politics and a legal system. They have concepts of justice that extend well beyond what we may call for simplicity’s sake the law of the jungle. In an institution such as the competitive examination, for example, we see the concept of justice extending beyond the purely criminal/legal area, to education. Even the development of statistics, a form of mathematics, was at times spurred by the interests of justice, as we saw in Chapter 32. Though the achievements of the formal social sciences have been limited in comparison with those of physics, astronomy, chemistry or medicine, say, their very evolution was intended as a more just improvement on the partisan nature of politics. All this must be accounted a (perhaps qualified) success.

The final theme – man’s understanding of himself, of his inner life – has proved the most disappointing. Some, perhaps many, will take issue with this, arguing that the better part of the history of art and creation is the history of man’s inner life. While this is undoubtedly true in a sense, it is also true that the arts don’t explain the self. Often enough, they attempt to describe the self or, more accurately, a myriad selves under a myriad different circumstances. But the very popularity in the contemporary world of Freudianism and other ‘depth’ psychologies, concerned mainly with the ‘inner self’ and self-esteem (however misguidedly), surely confirms this assessment. If the arts were truly successful, would there be a need for these psychologies, these new ways of looking-in?

It is a remarkable conclusion to arrive at, that, despite the great growth in individuality, the vast corpus of art, the rise of the novel, the many ways that men and women have devised to express themselves, man’s study of himself is his biggest intellectual failure in history, his least successful area of inquiry. But it is undoubtedly true, as the constant ‘turnings-in’, over the centuries, have underlined. These ‘turnings-in’ do not build on one another, in a cumulative way, like science; they replace one another, as the previous variant runs down, or fails. Plato has misled us, and Whitehead was wrong: the great success stories in the history of ideas have been in the main the fulfilment of Aristotle’s legacy, not Plato’s. This is confirmed above all by the latest developments in historiography – which underline that the early modern period, as it is now called, has replaced the Renaissance as the most significant transition in history. As R. W. Southern has said, the period between 1050 and 1250, the rediscovery of Aristotle, was the greatest and most important transformation in human life, leading to modernity, and not the (Platonic) Renaissance of two centuries later.

For many years – for hundreds of years – man had little doubt that he had a soul, that whether or not there was some ‘soul substance’ deep inside the body, this soul represented the essence of man, an essence that was immortal, indestructible. Ideas about the soul changed in the sixteenth and seventeenth centuries and, as the loss of belief in God started to gather pace, other notions were conceived. Beginning with Hobbes and then Vico, talk about the self and the mind began to replace talk about the soul, and this view triumphed in the nineteenth century, especially in Germany with its development of romanticism, of the human or social sciences, Innerlichkeit and the unconscious. The growth of mass society, of the new vast metropolises, played a part here too, provoking a sense of the loss of self.16

Set against this background, the advent of Freud was a curious business. Coming after Schopenhauer, von Hartmann, Charcot, Janet, the dipsychism of Max Dessoir and the Urphänomene of von Schubert, or Bachofen’s Law of Mothers, Freud’s ideas were not as startlingly original as they are sometimes represented. Yet, after a shaky start, they became immensely influential, what Paul Robinson described in the mid-1990s as ‘the dominant intellectual presence of the [twentieth] century’.17 One reason for this was that Freud, as a doctor, thought of himself as a biologist, a scientist in the tradition of Copernicus and Darwin. The Freudian unconscious was therefore a sophisticated attempt to be scientific about the self. In this sense, it promised the greatest convergence of the two main streams in the history of ideas, what we might call an Aristotelian understanding of Platonic concerns. Had it worked, it would surely have comprised the greatest intellectual achievement in history, the greatest synthesis of ideas of all time.

Today, many people remain convinced that Freud’s efforts succeeded, which is one reason why the whole area of ‘depth psychology’ is so popular. At the same time, among the psychiatric profession and in the wider world of science, Freud is more generally vilified, his ideas dismissed as fanciful and unscientific. In 1972 Sir Peter Medawar, a Nobel Prize-winning doctor, described psychoanalysis as ‘one of the saddest and strangest of all landmarks in the history of twentieth-century thought’.18 23 Many studies have been published which appear to show that psychoanalysis does not work as treatment, and several of Freud’s ideas in his other books (Totem and Taboo, for example, or Moses and Monotheism) have been thoroughly discredited, as misguided, using evidence that can no longer be substantiated. The recent scholarship, considered in the previous chapter, which has so discredited Freud, only underlines this and underlines it emphatically.

But if most educated people accept now that psychoanalysis has failed, it also has to be said that the concept of consciousness, which is the term biologists and neurologists now use to describe our contemporary sense of self, has not fared much better. If, by way of conclusion, we ‘fast-forward’ from the end of the nineteenth century to the end of the twentieth, we encounter the ‘Decade of the Brain’, which was adopted by the US Congress in 1990. During the ten-year period that followed, many books on consciousness were published, ‘consciousness studies’ proliferated as an academic discipline, and there were three international symposia on consciousness. The result? It depends who you talk to. John Maddox, a former editor of Nature – which, with Science, is among the foremost scientific journals in the world – wrote that ‘No amount of introspection can enable a person to discover just which set of neurons in which part of his or her head is executing some thought-process. Such information seems to be hidden from the human user.’ Colin McGinn, a British philosopher at Rutgers University, New Jersey, argues that consciousness is resistant to explanation, in principle and for all time.19 Other philosophers, such as Thomas Nagel and Harvard’s Hilary Putnam, argue that at present (and maybe for all time) science cannot account for ‘qualia’, the first-person phenomenal experience that we understand as consciousness – why, in Simon Blackburn’s words, the grey matter of the brain can provide us with the experience of, for example, yellowness. Benjamin Libet, in a series of controversial experiments, has claimed that it takes about half a second for consciousness itself to happen (‘Libet’s delay’). Whether this (if true) is an advance is not yet clear. John Gray, professor of European thought at the London School of Economics, is one of those who have identified such phenomena as the ‘hard problem’ in consciousness studies.20

On the other hand, John Searle, Mills Professor of philosophy at the University of California, Berkeley, says there is nothing much to explain, that consciousness is an ‘emergent property’ that automatically arises when you put ‘a bag of neurons’ together. He explains, or tries to, by analogy: the behaviour of H2O molecules ‘explains’ liquidity, but the individual molecules are not liquid – this is another emergent property.21 (Such arguments are reminiscent of the ‘pragmatic’ philosophy of William James and Charles Peirce, discussed in Chapter 34, where the sense of self emerges from behaviour, not the other way round.) Roger Penrose, a physicist from London University, believes that a new kind of dualism is needed, that in effect a whole new set of physical laws may apply inside the brain, which account for consciousness. Penrose’s particular contribution is to argue that quantum physics operates inside tiny structures, known as microtubules, within the nerve cells of the brain to produce – in some as yet unspecified way – the phenomena we recognise as consciousness.22 Penrose actually thinks that we live in three worlds – the physical, the mental and the mathematical: ‘The physical world grounds the mental world, which in turn grounds the mathematical world and the mathematical world is the ground of the physical world and so on around the circle.’23 Many people who find this tantalising nonetheless don’t feel Penrose has proved anything. His speculation is enticing and original, but it is still speculation.

Instead, it is two forms of reductionism that, in the present climate, attract most support. For people like Daniel Dennett, a biologically inclined philosopher from Tufts University near Boston in Massachusetts, human consciousness and identity arise from the narrative of our lives, and this can be related to specific brain states. For example, there is growing evidence that the ability to ‘apply intentional predicates to other people’ is a human universal and is associated with a specific area of the brain (the orbitofrontal cortex), an ability which in certain states of autism is defective. There is also evidence that the blood supply to the orbitofrontal cortex increases when people ‘process’ intentional verbs as opposed to non-intentional ones and that damage to this area of the brain can lead to a failure to introspect. Other experiments have shown that activity in the area of the brain known as the amygdala is associated with the experience of fear, that the decisions of individual monkeys in certain games could be predicted by the firing patterns of individual neurons in the orbitofrontal-striatal circuits of the brain, that the drug propranolol and the neurotransmitter serotonin affect decision-making, and that the ventral putamen within the striatum is activated when people experience pleasure.24 Suggestive as this is, it is also the case that the micro-anatomy of the brain varies quite considerably from individual to individual, and that a particular phenomenal experience is represented at several different points in the brain, which clearly require integration. Any ‘deep’ patterns relating experience to brain activity have yet to be discovered, and seem to be a long way off, though this is still the most likely way forward.

A related approach – and this is perhaps to be expected, given other developments in recent years – is to look at the brain and consciousness in a Darwinian light. In what sense is consciousness adaptive? This approach has produced two views – one, that the brain was in effect ‘jerry-built’ in evolution to accomplish very many and very different tasks. On this account, the brain is at base three organs, a reptilian core (the seat of our basic drives), a palaeomammalian layer, which produces such things as affection for offspring, and a neomammalian brain, the seat of reasoning, language and other ‘higher functions’.25 The second approach is to argue that throughout evolution (and throughout our bodies) there have been emergent properties: for example, there is always a biochemical explanation underlying a physiological or medical phenomenon – sodium/potassium flux across a membrane can also be described as ‘nerve action potential’.26 In this sense, then, consciousness is nothing new in principle even if, for now, we don’t fully understand it.

Studies of nerve action throughout the animal kingdom have also shown that nerves work by either ‘firing’ or not firing; intensity is represented by the rate of firing – the more intense the stimulation the faster the turning on and off of any particular nerve. This is of course very similar to the way computers work, in ‘bits’ of information, where everything is represented by a configuration of either 0s or 1s. The arrival of the concept of parallel processing in computing led Daniel Dennett to consider whether an analogous procedure might happen in the brain between different evolutionary levels, giving rise to consciousness. Again, though tantalising, such reasoning has not gone much further than preliminary exploration. At the moment, no one seems able to think of the next step.

So, despite all the research into consciousness in recent years, and despite the probability that the ‘hard’ sciences still offer the most likely way forward, the self remains as elusive as ever. Science has proved an enormous success in regard to the world ‘out there’ but has so far failed in the one area that arguably interests us the most – ourselves. Despite the general view that the self arises in some way from brain activity – from the action of electrons and the elements, if you will – it is hard to escape the conclusion that, after all these years, we still don’t know even how to talk about consciousness, about the self.

Here, therefore, and arising from this book, is one last idea for the scientists to build on. Given the Aristotelian successes of both the remote and the immediate past, is it not time to face the possibility – even the probability – that the essential Platonic notion of the ‘inner self’ is misconceived? There is no inner self. Looking ‘in’, we have found nothing – nothing stable anyway, nothing enduring, nothing we can all agree upon, nothing conclusive – because there is nothing to find. We human beings are part of nature and therefore we are more likely to find out about our ‘inner’ nature, to understand ourselves, by looking outside ourselves, at our role and place as animals. In John Gray’s words, ‘A zoo is a better window from which to look out of the human world than a monastery.’27 This is not paradoxical, and without some such realignment of approach, the modern incoherence will continue.