34
The American Mind and the Modern University
The high point of empire in the Old World coincided more or less with the American Civil War. In a way, therefore, each continent faced a similar predicament – how different peoples, different races, should live together. The Civil War was a watershed in all ways for America. Although not many people realised it at the time, her dilemma over slavery had kept the country back and the war at last allowed the full forces of capitalism and industrialism to flex their muscles. Only after the war was the country fully free to fulfil its early promise.
The population in 1865 was upwards of 31 million, and therefore, relatively speaking, still small compared with the major European states. Intellectual life was – like everything else – still in the process of formation and expansion.1 After the triumphs of 1776, and the glories of the Constitution, which many Europeans had found so stimulating, Americans did not want for confidence. But there was, even so, much uncertainty: the frontier was continuing to open up (raising questions about how to deal with the Plains Indians), and the pattern of immigration was changing. Louisiana had been purchased from the French in 1803. On all sides, therefore, questions of race, tribe, nationality, religious affiliation and ethnic identity were ever-present. In this context, America had to fashion itself, devising new ideas where they were needed, and using ideas from the Old World where they were available and relevant.2
The gradual assimilation of European ideas into an American context has been chronicled both by Richard Hofstadter and, more recently and more fully, by Louis Menand, professor of English at Harvard, by means of biographical accounts of a small number of nineteenth-century individuals, all New Englanders, who knew each other and who between them invented what we may call the characteristically American tradition of modern thought, the American mind. The first part of this chapter relies heavily on Menand’s work.3 The specialities of these few individuals included philosophy, jurisprudence, psychology, biology, geology, mathematics, economics and religion. In particular we are talking of Ralph Waldo Emerson, Oliver Wendell Holmes, William James, Benjamin and Charles Peirce, Louis Agassiz and John Dewey.
‘These people had highly distinctive personalities, and they did not always agree with one another, but their careers intersected at many points, and together they were more responsible than any other group for moving American thought into the modern world . . . Their ideas changed the way Americans thought – and continue to think – about education, democracy, liberty, justice and tolerance. As a consequence, they changed the way Americans live – the way they learn, the way they express their views, the way they understand themselves, and the way they treat people who are different from themselves . . . We can say that what these thinkers had in common was not a group of ideas, but a single idea – an idea about ideas. They all believed that ideas are not “out there” waiting to be discovered, but are tools – like forks and knives and microchips – that people devise to cope with the world in which they find themselves . . . And they believed that since ideas are provisional responses to particular and unreproducible circumstances, their survival depends not on their immutability but on their adaptability . . . They taught a kind of scepticism that helped people cope with life in a heterogeneous, industrialised, mass-market society, a society in which older human bonds of custom and community seemed to have become attenuated . . . There is also, though, implicit in what they wrote, a recognition of the limits of what thought can do in the struggle to increase human happiness.’4 Along the way we shall be concerned with the creation of some major intellectual centres in America – the Universities of Yale, Princeton, Chicago and Johns Hopkins, and of Harvard and MIT in Cambridge, Massachusetts.
One founding father of this American tradition was Dr Oliver Wendell Holmes, Senior. He was well-connected, numbering the Cabots, the Quincys and the Jacksons – old, landowning families – among his friends; but he was himself a professor who had studied medicine in Paris. It was Holmes Sr who invented the term ‘Boston Brahmin’, to include those who were both well-born and scholars at the same time. It was Holmes Sr, in his guise as a doctor, who discovered the causes of puerperal (childbed) fever, demonstrating conclusively that the disease was transmitted from childbirth to childbirth by doctors themselves. This hardly made him popular among his medical colleagues, but it was an important advance in the development of the germ theory of disease and antisepsis.5 His academic career culminated as dean of Harvard Medical School, though he became just as widely known for being what many people regarded as the greatest talker they had ever heard, and for his role in founding the ‘Saturday Club’, where literary matters were discussed over dinner and whose other members included Emerson, Hawthorne, Longfellow, James Russell Lowell, and Charles Eliot Norton. Holmes also helped establish the Atlantic Monthly; he himself conceived the title to reflect the link between the New World and the Old.6
The other founding father of the American intellectual tradition was Emerson. Holmes Sr and he were good friends, mutual influences on one another. Holmes Sr was in the audience when Emerson gave his famous Phi Beta Kappa address on ‘The American Scholar’ at Harvard in 1837. This address was the first of several in which Emerson declared a literary independence for America, urging his fellow citizens to a writing style all their own, away from the familiarities of Europe (although among his ‘great men’ there were no Americans). A year later, in a no less notorious speech, to Harvard Divinity School, Emerson reported how he had been ‘bored to distraction’ by a sermon, and had contrasted its artificiality to the wild snow storm then raging outside the church. This (plus many other musings) had caused him, he said, to renounce his belief in a supernatural Jesus, and organised Christianity, in favour of a more personal revelation. Partly as a result of this, Harvard – then a Calvinist institution – turned its back on Emerson for thirty years.7 Holmes Sr, however, remained true to his friend. Above all, he shared Emerson’s belief in an American literature, which is why he was so involved in the Atlantic Monthly.8
Holmes Junior was as impressed with Emerson as his father had been. As a freshman at Harvard in 1858, he said many years later, Emerson ‘set me on fire’. But Holmes Jr was not in exactly the same mould as his father. Though Holmes Sr had been an abolitionist on religious grounds, he never had much direct involvement with blacks. Holmes Jr, on the other hand, felt the situation rather more keenly. He found The Pickwick Papers distasteful because of its treatment of West Indians and he likewise detested minstrel shows – they were, he said, ‘demeaning’.9 He agreed with Emerson that a scientific world view did not preclude a moral life, and that it was possible to live in a better relation with one’s fellow men outside organised religion than within it.
For a man holding such views, the Civil War, when it broke out in 1861, provided an opportunity to do something practical. True to his word, Holmes accepted a commission ‘in a spirit of moral obligation’.10 His very first engagement, the battle of Ball’s Bluff, on 21 October that year, was far from a success: 1,700 Union soldiers made the advance across the river, but fewer than half returned. Holmes took a bullet near the heart, the first of three injuries he was to suffer in the war, and these wounds, as Menand observes, shaped him. (His handwriting in his letters was less than perfect, he told correspondents, because he had to lie flat on his back.)11 Subsequently, although he might recount his fighting exploits from time to time, he never read histories of the Civil War.12 He knew what he knew and he had no need and no wish to revisit the horror. The Civil War was fought with modern weapons and pre-modern tactics. The close-order infantry charge was designed for use against the musket, a gun with a range of about eighty yards. Nineteenth-century rifles had a range of 400 yards. This accounts for the terrible carnage of the Civil War – still the war in which most American lives have been lost – and explains why it had such an effect on Holmes and others.13
Amid the carnage, he learned one thing that was to remain with him all his life. It was a distrust of absolutes and certainty, a conviction that ‘certitude leads to violence’.14 He looked about him and observed that, although the abolitionists in 1850 appeared to many Northerners as subversives, by the end of the war ‘they were patriots’. He concluded from this that ‘There is no one way that life must be.’15 This guided him and formed him into the wise judge that he became. This wisdom emerged in his great book The Common Law,16 which began life as the Lowell Lectures, delivered in Boston, all twelve given before a full house, where he spoke without notes.17
His biographer Mark DeWolfe Howe says Holmes was the first lawyer, English or American, to subject the common law to the analysis of a philosopher and the explanation of an historian.18 The philosophical brilliance of Holmes was to see that the law has no one overriding aim or idea (this was the insight he brought from the disaster of the Civil War),19 and that it had evolved pragmatically.20 Every case, in terms of facts at least, is unique. When it reaches court, it is swept up in what Menand calls a ‘vortex’ of intentions, assumptions and beliefs. There is, for example, the intention to find the solution that is just in this case. At the same time, there is an intention to arrive at a verdict that is consistent with analogous cases in the past. There is also the intention to arrive at a verdict that will be most beneficial to society as a whole – the result that will deter others.21 Then there are a number of less pressing aims, which also impinge on a verdict, some of which, Holmes conceded, are unvoiced. These may include a wish to redistribute costs from parties who can’t afford them (often victims) to parties who can (often manufacturers or insurance companies). ‘Hovering over this whole weather pattern – all of which is in motion, so to speak, before any case ever arises – is a single meta-imperative: not to let it appear as though any one of these lesser imperatives has decided the case at the blatant expense of the others. A result that seems just intuitively but is admittedly incompatible with legal precedent is taboo; the court does not want to seem to excuse reckless behaviour (like operating a railroad too close to a heavily populated area), but it does not want to raise too high a liability barrier to activities society wants to encourage (like building railroads).’22
Holmes’ genius was to face the fact that there are no hard-and-fast distinctions in any of these areas. This was made plain in a sentence that became famous, near the opening of The Common Law, where he said ‘The life of the law has not been logic; it has been experience.’23 He thought it was his job to speak harsh truths, not give way to historical legends.24 His argument was that, for the most part, common law judges make up their minds first and come up with ‘a plausible account’ of how they got there afterwards. He even allowed that there were ‘unconscious’ influences on a judge, an early and interesting use of the word.25 Holmes wasn’t saying that judges are wayward, random or even idiosyncratic in their pronouncements. He just wasn’t sure that experience is reducible to general abstractions, even though human beings spend so much time trying to do just that. ‘All the pleasure of life is in general ideas,’ he wrote in 1899, ‘but all the use of life is in specific solutions – which cannot be reached through generalities any more than a picture can be painted by knowing some rules of method. They are reached by insight, tact and specific knowledge.’26 He then built on this idea of experience to arrive at his most important contribution to civil law – his invention of the ‘reasonable man’. Holmes thought that the point of experience is that it is ‘collective and consensual’, social not psychological. This goes to the heart of modern liability theory and is one of the main points where the law treats the question: how are we to live together? In the classic case, as Menand puts it, someone is injured as a result of what someone else does, giving rise to the question: what brings about civil liability? Traditionally, three arguments are brought to bear on this. First, it is enough to prove causation. All citizens act on their own responsibility; therefore they are liable for any costs their actions incur, whether they could have foreseen the consequences or not. This is ‘strict liability’. Second, a citizen is liable for injuries he or she intended but not for those never contemplated. Legally this is called mens rea – the doctrine of ‘the guilty mind’. Third, there is the argument of negligence: even if a citizen, in acting in a particular way, never anticipated the possibility of injury to anyone, that person is liable anyway, if the action was careless or imprudent.27
Holmes’ contribution in this area was to replace the traditional legal terms ‘guilt’ and ‘fault’ with words like ‘carelessness’ and ‘recklessness’.28 He thought that doing this would help make clear what we mean by behaviour that counts as reckless or careless. The main question, as he saw it, was to identify what was and what wasn’t the ‘permissible by-product’ of any activity. His answer, he said, was ‘experience’, and his achievement was to define this ‘experience’.29 What he meant by it, in this context, was the experience of ‘an intelligent and prudent member of the community’. Law, he said, was not a ‘brooding omnipresence in the sky’; it had to operate according to the precepts of an ‘average’ member of society, best exemplified by a jury.30 ‘When men live in society,’ Holmes insisted, ‘a certain average of conduct, a sacrifice of individual peculiarities . . . is necessary to general welfare.’ Thus it was the ‘reasonable man’, his beliefs and conduct, that governed Holmes’ understanding of liability. Now this is, as Menand also points out, a statistical fiction and the ‘legal cousin’ of Adolphe Quetelet’s homme moyen. ‘The “reasonable man” knows, because “experience” tells him, that a given behaviour in a given circumstance – say, taking target practice in a populated area – carries the risk of injuring another person.’31
Holmes also said at one point that a judge ‘should not have a politics’. Yet he himself was in favour of capitalists, as risk takers and wealth generators, and there were those who thought that his arguments actually moved the law away from the theory of strict liability towards that of negligence, which made it easier for big businesses to escape their ‘duty’ to workers and customers. ‘Nevertheless, in his theory of torts, Holmes did what Darwin did in his theory of evolution by chance variation and Maxwell did in his kinetic theory of gases: he applied to his own special field the great nineteenth-century discovery that the indeterminacy of individual behaviour can be regularised by considering people statistically at the level of the mass.’32 This was a crucial step forward in the democratisation of law.
Experience, so important to Oliver Wendell Holmes in the realm of the law, would prove no less invaluable to his colleague from the Saturday Club, the philosopher and psychologist William James. Despite his impeccably Welsh name, James was in fact of Irish stock.33
The first William James, the philosopher’s grandfather, was a dry goods millionaire who, but for John Jacob Astor, would have been the richest man in New York state.34 His son Henry liked the bottle too much and was disinherited on William’s death, but contested the will, and won. According to Richard Hofstadter, William James was the first great beneficiary of the scientific education then emerging in the United States during the 1860s and 1870s (and considered later in this chapter). A wag suggested that William was the better writer of the two brothers, and Henry the better psychologist. Like Wendell Holmes, William James was sceptical of certitude. One of his favourite phrases was ‘Damn the Absolute!’35 Instead of a formal education, he had travelled across Europe with his family, and although he had never stayed long at any particular school, this travelling gave him experience. (Somewhere he picked up the ability to draw, too.36) He did finally settle on a career, in science, at Harvard in 1861 and formed part of the circle around Louis Agassiz, the discoverer of the Ice Age and at the time one of the most vociferous critics of Charles Darwin, who based his opposition, he insisted, on science.37 After his early successes, Agassiz’ fortunes had taken a turn for the worse when he lost a quantity of money on a publishing venture. The offer of a lecture series in America promised a way out and indeed, in Boston he was a great success (the Saturday Club was often referred to as Agassiz’ Club). At the time he was in Boston, Harvard was in the process of setting up its school of science (see below, this chapter), and a special chair was founded for him.38
It was Agassiz’ battle with Darwin that interested James the most and, says one of his biographers, it was the example of the Swiss that decided him to become a scientist.39 Agassiz, a deist, described Darwin’s theory as ‘a mistake’; he disputed its facts and considered it ‘mischievous’ rather than serious science.40 James wasn’t so sure. He was particularly sceptical of Agassiz’ dogmatism whereas he thought evolutionary theory sparked all sorts of fresh ideas and, what he liked most, revealed biology as acting on very practical, even pragmatic, principles. Natural selection, for James, was a beautiful idea because it was so simple and down-to-earth, with adaptation being no more than a way to address practical problems wherever they occurred.41 Life, James liked to say, is to be judged by consequences.42
In 1867, after his spell at Harvard, James went to Germany. In the nineteenth century some nine thousand Americans visited Germany to study in the universities there, which, as we have seen, were organised along the lines of the various disciplines, rather than as places to teach priests, doctors and lawyers. James went to study with the leading experimental psychologist of the day, Wilhelm Wundt, who had set up the first psychological laboratory, at Leipzig. Wundt’s speciality – physiological psychology, or ‘psychophysics’ – was then regarded as the most likely area to produce advances. The basic assumption of physiological psychology was that all mind (conscious) processes are linked with brain processes, that every conscious thought or action has an organic, physical basis. One of the effects of this was that experimentation had replaced introspection as the primary means of investigation. In this so-called New Psychology, feelings and thoughts were understood as the result of ‘brain secretions’, organic changes which would in time yield to experimental manipulation. James was disappointed by the New Psychology, and by Wundt, who is little read now (and in fact it has now emerged that Wundt himself was drifting away from a rigid experimental approach to psychology).43 Wundt’s chief legacy was to have raised the standing of psychology through his experimental approach, and some of that standing rubbed off on James.
If Wundt’s influence turned out to be incidental, that of the Peirces was much more consequential. Like the Wendell Holmeses and the Jameses, the Peirces were a formidable father-and-son team. Benjamin Peirce may well have been the first world-class mathematician the United States produced (the Irish mathematician William Rowan Hamilton thought that Peirce was ‘the most massive intellect with which I have ever come into close contact’) and he too was one of the eleven founding members of the Saturday Club.44
His son Charles was equally impressive. A prodigy who wrote a history of chemistry when he was eleven and had his own laboratory at twelve, he could write with both hands at the same time. No wonder, perhaps, that he was bored at Harvard, drank too much, and graduated seventy-ninth in his class of ninety.45 That was the low point. Later, he built on his father’s work and, between them, they conceived the philosophy of pragmatism, which was grounded in mathematics. ‘It is not easy to define pragmatism: the Italian Papini observed that pragmatism was less a philosophy than a method of doing without one.’46 In the first place, Benjamin Peirce became fascinated by the theories and calculations of Pierre-Simon Laplace and Carl Friedrich Gauss (covered in Chapter 32), in particular their ideas about probability.47 Probability, or the laws of error, had a profound impact on the nineteenth century because of the apparent paradox that the accidental fluctuations that make phenomena deviate from their ‘normal’ laws are themselves bound by a (statistical) law. The fact that this law applied even to human beings pointed many towards determinism.48
Charles Peirce was not one of them. He believed that he could see spontaneous life around him at every turn. (And he attacked Laplace in print.) He argued that, by definition, the laws of nature themselves must have evolved.49 He was Darwinian enough to believe in contingency, indeterminacy, and his ultimate philosophy was designed to steer a way through the confusion.50 In 1812, in his Théorie analytique des probabilités, Laplace had said ‘We must . . . imagine the present state of the universe as the effect of its prior state and as the cause of the state that will follow.’ This is Newton’s billiard-ball theory of matter, applied generally, even to human beings, and where chance has no part.51 Against this, in his Theory of Heat, published in 1871, the Scottish physicist James Clerk Maxwell had argued that the behaviour of molecules in a gas could be understood probabilistically. (Peirce met Maxwell on a visit to Cambridge in 1875.)52 The temperature of a gas in a sealed container is a function of the velocity of the molecules – the faster they move, the more they collide and the higher the temperature. But, and most importantly from a theoretical point of view, the temperature is related to the average velocity of the molecules, which vary in their individual speeds. How was this average to be arrived at, how was it to be understood? Maxwell argued that ‘the velocities are distributed among the particles according to the same law as the errors are distributed among the observations in the theory of the “method of least squares”’. (This had first been observed among astronomers.)53 Maxwell’s point, and the deep significance of his argument for the nineteenth century, was that physical laws are not Newtonian, not absolutely precise. Peirce grasped the significance of this in the biological, Darwinian realm. In effect, it created the circumstances where natural selection could operate. Menand asks us to consider birds as an example. In any particular species, of finch say, most individuals will have beaks within the ‘normal’ range, but every so often, a bird with a beak outside the range will be born, and if this confers an evolutionary advantage it will be ‘selected’. To this extent, evolution proceeds by chance, not on an entirely random basis but according to statistical laws.54
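To make the parallel concrete, here, in modern notation, is a brief sketch of the two laws Maxwell had in mind (the symbols σ, m, k and T are standard conventions introduced for illustration only; they do not appear in the passage above). The astronomers’ law of error gives the probability density of an error x as
\[ p(x) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\Big(-\frac{x^{2}}{2\sigma^{2}}\Big), \]
while Maxwell showed that each component of a molecule’s velocity follows the same bell-shaped curve,
\[ p(v_{x}) \;=\; \sqrt{\frac{m}{2\pi kT}}\,\exp\!\Big(-\frac{m v_{x}^{2}}{2kT}\Big), \]
so that molecular speeds are distributed as
\[ f(v) \;=\; 4\pi\Big(\frac{m}{2\pi kT}\Big)^{3/2} v^{2}\,\exp\!\Big(-\frac{m v^{2}}{2kT}\Big). \]
The ‘law of errors’ that astronomers had devised to tame their fallible observations thus reappears, unchanged in form, in the physics of gases – which is precisely the sense of Maxwell’s remark about the method of least squares.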
Peirce was very impressed by such thinking. If even physical events, the smallest and in a sense the most fundamental occurrences, are uncertain, and if even the perception of simple things, like the location of stars, is fallible, how can any single mind ‘mirror’ reality? The awkward truth was: ‘reality doesn’t stand still long enough to be accurately mirrored’. Peirce therefore agreed with Wendell Holmes and William James: experience was what counted and even in science juries were needed. Knowledge was social.55
All this may be regarded as ‘deep background’ to pragmatism (a word that, for some strange reason, Peirce himself rarely used; he later rechristened his own version ‘pragmaticism’, a term he said was ‘ugly enough to be safe from kidnappers’).56 This was, and remains, far more important than it seems at first sight, and more substantial than the everyday use of the word ‘pragmatic’ makes it appear. It was partly the natural corollary of the thinking that had helped create America in the first place, and is discussed in Chapter 28 above. It was partly the effect of the beginnings of indeterminacy in science, which was to be such a feature of twentieth-century thought, and it was partly – even mainly – a further evolution of thought, yet another twist, on the road to individualism.
Here is a classic pragmatic problem, familiar to Holmes, made much use of by James, and highlighted by Menand. Assume that a friend tells you something, but in the strictest confidence. Later, in discussions with a second friend, you discover two things: first, that he isn’t aware of the confidence that has been shared with you; and second, that he is, in your opinion, about to make a bad mistake which could be avoided if he knew what you know. What do you do? Do you stay loyal to your first friend and keep the confidence? Or do you break the confidence to help out the second friend, so that he avoids injury or embarrassment? James said that the outcome might well depend on which friend you actually preferred, and that was part of his point. The romantics had said that the ‘true’ self was to be found within, but James was saying that, even in a simple situation like this, there were several selves within – or none at all. In fact, he preferred to say that, until one chose a particular course of action, until one behaved, one didn’t know which self one was. ‘In the end, you will do what you believe is right but “rightness” will be, in effect, the compliment you give to the outcome of your deliberations.’57 We can only really understand thinking, said James, if we understand its relationship to behaviour. ‘Deciding to order lobster in a restaurant helps us determine that we have a taste for lobster; deciding that the defendant is guilty helps us establish the standard of justice that applies in this case; choosing to keep a confidence helps us make honesty a principle and choosing to betray it helps confirm the value we put on friendship.’58 Self grows out of behaviour, not the other way round. This directly contradicts romanticism.
James was eager to say that this approach didn’t make life arbitrary, or mean that someone’s motivation was always self-serving. ‘Most of us don’t feel that we are always being selfish in our decisions regarding, say, our moral life.’ He thought that what we do carry within us is an imperfect set of assumptions about ourselves and our behaviour in the past, and about others and their behaviour, which informs every judgement we make.59 According to James, truth is circular: ‘There is no noncircular set of criteria for knowing whether a particular belief is true, no appeal to some standard outside the process of coming to the belief itself. For thinking just is a circular process, in which some end, some imagined outcome, is already present at the start of any train of thought . . . Truth happens to an idea, it becomes true, is made true by events.’60
At about the time James was having these ideas, there was a remarkable development in the so-called New [Experimental] Psychology. Edward Thorndike, who had studied under James at Harvard, had placed chickens in a box which had a door that could be opened if the animals pecked at a lever. In this way the chickens could reach a supply of food pellets outside the box. Thorndike observed ‘that although at first many actions were tried, apparently unsystematically (i.e., at random), only successful actions performed by chickens who were hungry were learned’.61 James wasn’t exactly surprised by this, but it confirmed his view, albeit in a mundane way. The chickens had learned that if they pecked at the lever the door would open, leading to food, a reward. James went one step further. To all intents and purposes, he said, the chickens believed that if they pecked at the lever the door would open. As he put it, ‘Their beliefs were rules for action.’ And he thought that such rules applied more generally. ‘If behaving as though we have free will, or as if God exists, gets us the results we want, we will not only come to believe those things; they will be, pragmatically, true . . . “The true” is the name of whatever proves itself to be good in the way of belief.’62 In other words, and most subversively, truth is not ‘out there’, it has nothing to do with ‘the way things really are’. This is not why we have minds, James said. Minds are adaptive in a Darwinian sense: they help us to get by, which involves being consistent between thinking and behaviour.
Most controversially of all, James applied his reasoning to intuition, to innate ideas. Whereas Locke had said that all our ideas stem from sensory experience, Kant had insisted that some fundamental notions – the idea of causation being one – could not arise from sensory experience, since we never ‘see’ causation, but only infer it. Therefore, he concluded, such ideas ‘must be innate, wired in from birth’.63 James took Kant’s line (for the most part), that many ideas are innate, but he didn’t think that there was anything mysterious or divine about this.64 In Darwinian terms, it was clear that ‘innate’ ideas are simply variations that have arisen and been naturally selected. ‘Minds that possessed them were preferred over minds that did not.’ But this wasn’t because those ideas were more ‘true’ in an abstract or theological sense; instead, it was because they helped organisms to adapt.65 The reason we believed in God (when we did believe in God) was that experience showed it paid to believe in God. When people stopped believing in God (as they did in large numbers in the nineteenth century – see next chapter), it was because such belief no longer paid.
America’s third pragmatic philosopher, after Peirce and James, was John Dewey. A professor in Chicago, Dewey boasted a Vermont drawl, rimless eyeglasses and a complete lack of fashion sense. In some ways he was the most successful pragmatist of all. Like James he believed that everyone has their own philosophy, their own set of beliefs, and that such philosophy should help people to lead happier and more productive lives. His own life was particularly productive. Through newspaper articles, popular books, and a number of debates conducted with other philosophers, such as Bertrand Russell or Arthur Lovejoy, author of The Great Chain of Being, Dewey became known to the general public in a way that few philosophers are.66 Like James, Dewey was a convinced Darwinist, someone who believed that science and the scientific approach needed to be incorporated into other areas of life. In particular, he believed that the discoveries of science should be adapted to the education of children. For Dewey, the start of the twentieth century was an age of ‘democracy, science and industrialism’ and this, he argued, had profound consequences for education. At that time, attitudes to children were changing fast. In 1909 the Swedish feminist Ellen Key published her book The Century of the Child, which reflected the general view that the child had been rediscovered – rediscovered in the sense that there was a new joy in the possibilities of childhood and in the realisation that children were different from adults and from one another.67 This seems no more than common sense to us, but in the nineteenth century, before the victory over a heavy rate of child mortality, when families were much larger and many children died, there was not – there could not be – the same investment in children, in time, in education, in emotion, as there was later. Dewey saw that this had significant consequences for teaching. Hitherto, schooling, even in America, which was in general more indulgent to children than Europe, had been dominated by the rigid authority of the teacher, who had a concept of what an educated person should be and whose main aim was to convey to his or her pupils the idea that knowledge was the ‘contemplation of fixed verities’.68 Dewey was one of the leaders of a movement which changed such thinking, and in two directions. The traditional idea of education, he saw, stemmed from a leisured and aristocratic society, the type of society that was disappearing fast in Europe and had never existed in America. Education now had to meet the needs of democracy. Second, and no less important, education had to reflect the fact that children were very different from one another in abilities and interests. In order for children to make the best contribution to society that they were capable of, education should be less about ‘drumming in’ hard facts which the teacher thought necessary, and more about drawing out what the individual child was capable of. In other words, pragmatism applied to education.
The ideas of Dewey, along with those of Freud, were undoubtedly influential in helping attach far more importance to childhood than before. The notion of personal growth and the drawing back of traditional, authoritarian conceptions of what knowledge is, and what education should seek to do, were liberating ideas for many people. (Dewey’s frank aim was to make society, via education, more ‘worthy, lovely and harmonious’.)69 In America, with its many immigrant groups and wide geographical spread, the new education helped to create many individualists. At the same time, the ideas of the ‘growth movement’ always risked being taken too far – with children left to their own devices too much. In some schools where teachers believed that ‘No child should ever know failure . . .’, examinations and grades were abolished.70
Dewey’s view of philosophy agreed very much with that of James and the Peirces. It should be concerned with living in this world, now.71 Thinking and behaviour are two sides of the same coin. Knowledge is part of nature. We all make our way in the world, as best we can, learning as we go what works and what doesn’t: behaviour is not pre-ordained at birth.72 This approach, he felt, should be applied to philosophy where, traditionally, people had been obsessed by the relation between mind and world. Because of this, the celebrated philosophical mystery, How do we know?, was in a sense the wrong question. Dewey illustrated his argument by means of an analogy which Menand highlights: no one has ever been unduly bothered by the no less crucial question of the relation between, for example, the hand and the world. ‘The function of the hand is to help the organism cope with the environment; in situations in which a hand doesn’t work, we try something else, such as a foot or a fishhook, or an editorial.’73 His point was that nobody worries about those situations where the hand doesn’t ‘fit’, doesn’t ‘relate to the world’. We use hands where they are useful, feet where they are useful, tongues where they are useful.
Dewey was of the opinion that ideas are much like hands: they are instruments for dealing with the world. ‘An idea has no greater metaphysical stature than, say, a fork. When your fork proves inadequate to eating soup, you don’t worry about the inherent shortcomings in the nature of forks; you reach for a spoon.’ Ideas are much the same. We have got into difficulty because ‘mind’ and ‘reality’ don’t exist other than as abstractions, with all the shortcomings that we find in any generalisation. ‘It therefore makes as little sense to talk about a “split” between the mind and the world as it does to talk about a split between the hand and the environment, or the fork and the soup.’ ‘Things,’ he wrote, ‘. . . are what they are experienced as.’74 According to Menand, Dewey thought that philosophy had got off on the wrong foot right at the start, and that we have arrived where we are largely as a result of the class structure of classical Greece. Pythagoras, Plato, Socrates, Aristotle and the other Greek philosophers were for the most part a leisured, ‘secure and self-possessed’ class, and it was pragmatically useful for them to exalt reflection and speculation at the expense of making and doing. Since then, he thought, philosophy had been dogged by similar class prejudices, which maintained the same separation of values – stability above change, certainty above contingency, the fine arts above the useful arts, ‘what minds do over what hands do’.75 The result is there for us all to see. ‘While philosophy pondered its artificial puzzles, science, taking a purely instrumental and experimental approach, had transformed the world.’ Pragmatism was a way for philosophy to catch up.
That pragmatism should arise in America is not so surprising, not surprising at all in fact. The mechanical and materialist doctrines of Hegel, Laplace, Malthus, Marx, Darwin and Spencer were essentially deterministic whereas for James and Dewey the universe – very much like America – was still in progress, still in the making, ‘a place where no conclusion is foregone and every problem is amenable to the exercise of what Dewey called intelligent action’. Above all, he felt that – like everything else – ethics evolve. This was a sharp deduction from Darwin, quickly reached and still not often enough appreciated. ‘The care of the sick has taught us how to protect the healthy.’76
William James, as we have seen, was a university man. In one capacity or another, he was linked to Harvard, Johns Hopkins and the University of Chicago. Like some nine thousand other Americans in the nineteenth century, he studied at German universities. At the time that Emerson, Holmes, the Peirces and the Jameses were developing their talents, the American universities were in the process of formation and so, it should be said, were the German and the British. Particularly in Britain, universities are looked upon fondly as ancient institutions, dating from medieval times. So they are, in one sense, but that should not blind us to the fact that universities, as we know them now, are largely the creation of the nineteenth century.
One can see why. Until 1826 there were just two universities in England – Oxford and Cambridge – and they offered a very restricted range of education.77 At Oxford the intake was barely two hundred a year and many of those did not persevere to graduation. The English universities were open only to Anglicans, under a regulation which required acceptance of the Thirty-Nine Articles. Both seats of learning had deteriorated in the eighteenth century, with the only recognised course, at Oxford at least, being a narrow classics curriculum ‘with a smattering of Aristotelian philosophy’, whereas in Cambridge the formal examination was almost entirely mathematical. There was no entrance examination at either place and, moreover, peers could get a degree without examination. Examinations were expanded and refined in the first decades of the nineteenth century but more to the point, in view of what happened later, were the attacks mounted on Oxford and Cambridge by a trio of critics based in Edinburgh – Francis Jeffrey, Henry Brougham and Sydney Smith. Two of these were Oxford graduates and in the journal they founded, the Edinburgh Review, they took Oxford and Cambridge to task for offering an education which, they argued, was far too grounded in the classics and, as a result, very largely useless. ‘The bias given to men’s minds is so strong that it is no uncommon thing to meet with Englishmen, whom, but for their grey hair and wrinkles, we might easily mistake for school-boys. Their talk is of Latin verses; and, it is quite clear, if men’s ages are to be dated from the state of their mental progress, that such men are eighteen years of age and not a day older . . .’78 Sydney Smith, the author of this attack, went on to criticise Oxbridge men for having no knowledge of the sciences, of economics or politics, of Britain’s geographical and commercial relations with Europe. The classics, he said, cultivated the imagination but not the intellect.
There were two responses we may mention. One was the creation of civic universities in Britain, particularly University College, London – established deliberately to accept Nonconformists and based partly on the Scottish universities and their excellent medical schools – and King’s College, London, founded soon afterwards as the Anglican response. One of the men involved in the creation of University College, London, Thomas Campbell, visited the Universities of Berlin (founded 1809) and Bonn (1816), as a result of which he opted for the professorial system of tuition, in use there and in Scotland, rather than Oxford’s tutorial system. Another source of inspiration came from the University of Virginia, founded in 1819 thanks largely to the efforts of Thomas Jefferson. The main ideals of this institution were set out in the report of a State Commission which met at Rockfish Gap in the Blue Ridge in 1818 and which became known as the Rockfish Gap Report. The specific aim of this university, according to the report, was ‘to form the statesmen, legislators and judges, on whom public prosperity and individual happiness are so much to depend . . .’ Politics, law, agriculture, commerce, mathematical and physical sciences, and the arts, were all included. University College, London, followed this more practical vision, and the even more practical – and novel – idea of floating a public company to finance the building of the college was adopted. Non-denominational university education was begun in England.79
This became a bone of contention, which culminated in May 1852 in a series of five lectures given in Dublin by John Henry Newman, later Cardinal Newman, on ‘The Idea of the University’. The immediate spur to Newman’s lectures was the founding of the new universities, like the University of London, and the Queen’s Colleges in Ireland (Belfast, Cork and Galway), in which the study of theology was excluded on principle. Newman’s lectures, which became famous as the classic defence of what is still sometimes called ‘a liberal education’, argued two points. The first was that ‘Christianity, and nothing short of it, must be made the element and principle of all education’.80 Newman argued that all branches of knowledge were connected together and that to exclude theology was to distort wisdom. His second point was that knowledge is an end in itself, that the purpose of a university education was not to be immediately useful but to bear its fruits throughout life. ‘A habit of mind is formed which lasts throughout life, of which the attributes are, freedom, equitableness, calmness, moderation, and wisdom; or what in a former Discourse I have ventured to call a philosophical habit . . . Knowledge is capable of being its own end.’81 Newman’s seminal idea, and the most controversial – a dispute that is still with us – was set out in his seventh lecture (five were given at Dublin, five others published but not delivered). In this, he says: ‘. . . the man who has learned to think and to reason and to compare and to discriminate and to analyse, who has refined his taste, and formed his judgement, and sharpened his mental vision, will not indeed at once be a lawyer, or a pleader, or an orator, or a statesman, or a physician, or a good landlord, or a man of business, or a soldier, or an engineer, or a chemist, or a geologist, or an antiquarian, but he will be placed in that state of intellect in which he can take up any one of the sciences or callings I have referred to . . . with an ease, a grace, a versatility, and a success, to which another is a stranger. In this sense, then, . . . mental culture is emphatically useful.’82
Apart from Newman’s concern with ‘liberal’ education, his emphasis on religion was not as out of place as it may seem, especially in America. As George M. Marsden has shown, in his survey of early American colleges, some five hundred were founded in the pre-Civil War era, of which perhaps two hundred survived into the twentieth century. Two-fifths were either Presbyterian or Congregationalist colleges, down from over a half in Jefferson’s day, the ground being taken by Methodist, Baptist and Catholic establishments, whose growth accelerated after 1830 and especially after 1850.83 In nineteenth-century America, in the educational sphere, there was a widely shared article of faith that science, common sense, morality and true religion ‘were firmly allied’.84
For many years, say the mid-seventeenth to the mid-eighteenth century, Harvard and Yale were almost all there was to American higher education. Only towards the end of that period was an Anglican college established in the South, William and Mary (chartered in 1693, opened in 1707, and only gradually becoming a fully-fledged college). Beyond that, most of the colleges that became well-known universities were founded by New Light clergy – New Jersey (Princeton), 1746, Brown, 1764, Queen’s (Rutgers), 1766, and Dartmouth, 1769. ‘New Light’ was a religious response in America to the Enlightenment. Yale had been founded in 1701 as a response to a perceived decline in theological orthodoxy at Harvard. The new moral philosophy presupposed that ‘virtue’ could be discovered on a rational basis, that God would reveal to man the moral basis of life, based on reason, much as He had revealed to Newton the laws by which the universe operated. This was essentially the basis on which Yale was founded.85 In a short while the new approach developed into what became known as the Great Awakening, which, in the American context, described a shift from the predominantly pessimistic view of human nature to a far more optimistic – positive – outlook, as typified by Anglicanism. This was a far more humanistic cast of mind (unlike Harvard, which remained Calvinist) and led to a much greater appreciation of the achievements of the Enlightenment in those colleges, such as Princeton, which followed Yale.
Such thinking culminated in the famous Yale Report of 1828, which argued that the human personality was made up of various faculties of which reason and conscience were the highest, and that these must be kept in balance. So the goal of education was ‘to maintain such a proportion between the different branches of literature and science, as to form in the student a proper balance of character’.86 The report then went on to argue that the classics should form the core of this balanced character-building.
A large part of the mission of the colleges was to spread Protestant Christianity to the untamed wilds of the west and in 1835, in his Plea for the West, Lyman Beecher urged that education beyond the seaboard could not be achieved simply by sending teachers out from the east – the west must have colleges and seminaries of its own. There was then a fear that Catholics would take over the west, a fear fortified by the growing Catholic immigration into America. The warning was heeded and, by 1847, Presbyterians had built a system of about a hundred schools in twenty-six states.87 The University of Illinois was founded in 1868 and California in 1869. It was about now that the attractions of the German system began to be appreciated, with several professors and university administrators travelling to Prussia, in particular, to study the way things were done there. In this way, religion began to play a smaller role in American university education. The fact that the Germans led the way in history, for example, increasingly implied that theology was itself an historical development, and this encouraged biblical criticism. Germany was also responsible for the idea that education should be the responsibility of the state, not just a private matter. Finally, it was a German idea that the university should be the home of scholars (researchers, writers) and not just of teachers.
This was nowhere more evident than at Harvard. It had begun as a Puritan college in 1636. More than thirty partners of the Massachusetts Bay Colony were graduates of Emmanuel College, Cambridge, and so the college they established near Boston naturally followed the Emmanuel pattern. Equally influential was the Scottish model, in particular Aberdeen. Scottish universities were nonresidential, democratic rather than religious, and governed by local dignitaries – a forerunner of boards of trustees.
The man who first conceived the modern university as we know it was Charles Eliot, a chemistry professor at Massachusetts Institute of Technology who, in 1869, at the age of only thirty-five, was appointed President of Harvard, where he had been an undergraduate. When Eliot arrived, Harvard had 1,050 students and fifty-nine members of the faculty. In 1909, when he retired, there were four times as many students and the faculty had grown ten-fold. But Eliot was concerned with more than size. ‘He killed and buried the limited arts college curriculum which he had inherited. He built up the professional schools and made them an integral part of the university. Finally, he promoted graduate education and thus established a model which practically all other American universities with graduate ambitions have followed.’ Above all, Eliot followed the German system of higher education, the system that gave the world Planck, Weber, Strauss, Freud and Einstein. Intellectually, Johann Fichte, Christian Wolff and Immanuel Kant were the significant figures in German thinking about education, freeing German scholarship from its stultifying reliance on theology. As a result, and as we have seen, German scholars acquired a clear advantage over their European counterparts in philosophy, philology and the physical sciences. It was in Germany, for example, that physics, chemistry and geology were first regarded in universities as equal to the humanities.88 The graduate seminar, the PhD, and student freedom were all German ideas.
From Eliot’s time onwards, the American universities set out to emulate the German system, particularly in the area of research. This German example, though impressive in advancing knowledge and in producing new technological processes for industry, nevertheless sabotaged the ‘collegiate way of living’ and the close personal relations between undergraduates and faculty which had been a major feature of American higher education until the adoption of the German approach. The German system was chiefly responsible for what William James called ‘the PhD octopus’. Yale awarded the first PhD west of the Atlantic in 1861; by 1900 well over three hundred were being granted every year.89
The price for following Germany’s lead was a total break with the British collegiate system. At many universities, housing for students disappeared entirely, as did communal eating. At Harvard in the 1880s the German system was followed so slavishly that attendance at classes was no longer required – all that counted was performance in the examinations. Then a reaction set in. Chicago was first, building seven dormitories by 1900 ‘in spite of the prejudice against them at the time in the [mid-] West on the ground that they were medieval, British and autocratic’. Yale and Princeton soon adopted a similar approach. Harvard reorganised after the English housing model in the 1920s.90
At much the same time that the pragmatists of the Saturday Club were forming their friendship and their views, a very different group of pragmatists was having an effect on American life. Beginning around 1870, in the wake of the Civil War, America produced a generation of the most original inventors that nation – or any other – has seen. Thomas P. Hughes, in his history of American invention, goes so far as to say that the half-century between 1870 and 1918 was an era comparable to Periclean Athens, Renaissance Italy or the Britain of the industrial revolution. Between 1866 and 1896 the number of patents issued annually in the United States more than doubled, and between 1879 and 1890 the annual figure rose from 18,200 to 26,300.91
Richard Hofstadter, in his book Anti-Intellectualism in American Life, has written about the tension in the United States between businessmen and intellectuals, of Herman Melville’s warning, ‘Man disennobled – brutalised / By popular science’, of Van Wyck Brooks chiding Mark Twain because ‘his enthusiasm for literature was as nothing beside his enthusiasm for machinery’, of Henry Ford who famously remarked ‘history is more or less bunk’.92 But America’s first generation of inventors do not seem to have been especially anti-intellectual. Rather, they inhabited a different culture and this was because, as we have seen, scholarship and research were still coming into being in the nineteenth-century universities. The universities were still predominantly religious institutions and would not become universities as we know them until the very end of the nineteenth century.
And likewise, since the industrial research laboratory didn’t come into common use until around 1900, most of these inventors had to construct their own private laboratories. It was in this environment that Thomas Edison invented the electric light and the phonograph, Alexander Graham Bell invented the telephone, the Wright brothers invented their flying machine, and telegraphy and radio came into being.93 It was in this environment that Elmer Sperry pioneered his gyrocompass and automatic control devices for the navy and in which Hiram Stevens Maxim, in 1885, set up for manufacture, and demonstrated, ‘the world’s most destructive machine gun’. By using the recoil from one cartridge to load and fire the next, the Maxim far surpassed the Gatling gun, which had been invented in 1862. It was the Maxim gun that inflicted a great deal of the horror in colonial territories at the high point of empire.94 It was the German Maxim which inflicted 60,000 casualties at the Somme on 1 July 1916. And it was these inventors who, in collaboration with financial entrepreneurs, were to create some of America’s most enduring business and educational institutions, household names to this day – General Electric, AT&T, Bell Telephone Company, Consolidated Edison Company, MIT.
In the context of this book, perhaps the telegraph is worth singling out from these other inventions. The idea of using electricity as a means of signalling had been conceived around 1750 but the first functioning telegraph had been set up by Francis Ronalds, in his garden in Hammersmith in London, in 1816. Charles Wheatstone, professor of experimental philosophy at King’s College, London, and the man who had first measured the speed of electricity (wrongly), was the first to realise that the ohm, a measure of resistance, was an important concept in telegraphy and, together with his colleague William Fothergill Cooke, took out the first patent in 1837. Almost as important as the technical details of telegraphy was Wheatstone and Cooke’s idea to string the wires alongside the newly built railways. This helped ensure the rapid spread of the telegraph, though the much-publicised capture of John Tawell – arrested in London, thanks to the telegraph, after fleeing a murder scene in Slough – hardly did any harm. Samuel Morse’s code played its part, of course, and Morse was one of several Americans pushing for a transatlantic cable. The laying of this cable was an epic adventure that lies outside the scope of this book. While the cables were being laid, many had high hopes that the more speedy communication they would permit would prove an aid to world peace, by keeping statesmen in closer touch with one another. This hope proved vain, but the transatlantic cable, achieved in 1866, made its mark quickly in commercial terms. And, as Gillian Cookson has written in The Cable: The Wire that Changed the World, ‘From this moment began a sense of shared experience, a convergence of cultures, between the two English-speaking nations.’95