1 Revolutions in Science and Perceptions

In 1974 the first world conference on population ever to be held took place in Romania. The unease of an informed few about the demographic outlook had for the first time found a forum in which the human race could consider its numbers. In the quarter-century that followed, unease gave way to alarm; many were asking whether the world would be able to hold an ever-increasing population, which may reach 10,000 million people by 2050. In round numbers, a world population of about 750 million two and a half centuries ago more than doubled in 150 years to reach about 1,600 million in 1900. It then took fifty years to add the next 850 million or so; by 1950 the world had about 2,500 million inhabitants. The next 850 million were added in only twenty years, and now the world’s population is over 7,000 million. This can be set in a still longer timescale. It had taken at least 50,000 years for Homo sapiens to number 1,000 million (a figure reached in 1840 or thereabouts), while the last 1,000 million of the species were added in only twelve years. Until only a few decades ago the total was growing faster and faster, reaching a peak rate of over 2.2 per cent per year in 1963.
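
To put these rates in perspective: a population growing at a steady annual rate r doubles in roughly ln 2 / r years, so the doubling of about 1750–1900 corresponds to an average growth rate of under 0.5 per cent per year (ln 2 / 150 ≈ 0.46 per cent), while the 1963 peak of 2.2 per cent per year implies a doubling time of only some thirty years (ln 2 / 0.022 ≈ 31).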

Such growth made the spectre of Malthusian disaster walk again for some, although as Malthus himself observed, ‘no estimate of future population or depopulation, formed from any existing rate of increase or decrease, can be depended upon’. We still cannot be sure what might further change the pattern. Some societies, for instance, have set out to control their size and shape. Such efforts, strictly speaking, are not entirely new. In some places, infanticide and abortion had long been customary ways of holding down demands on scarce resources. Babies were exposed to die in medieval Japan; female infanticide was widespread in nineteenth-century India and returned (or, perhaps, was acknowledged openly again) in China in the 1980s. What was new was that governments began to put resources and authority behind more humane methods of population control. Their aim was positive social and economic improvement instead of the mere avoidance of personal and family distress.

Only a few governments made such efforts, and economic and social facts did not everywhere produce the same response, even to unquestionable advances in technology and knowledge. A new contraceptive technique spread rapidly in many western countries in the 1960s, with radical impact on behaviour and thinking, while it has yet to be adopted with anything like the same alacrity by women in the non-western world. It was one of many reasons why population growth, though worldwide, did not everywhere take the same form or provoke the same responses. Though many non-European countries have followed nineteenth-century European patterns (in first showing a fall in death rates without a corresponding fall in birth rates), it would be rash to predict that all of them will simply go on to repeat the next phase of the population history of developed countries. The dynamics of population growth are exceedingly complex, reflecting limits set by ignorance and by personal and social attitudes hard to measure, let alone manipulate.

Infant mortality is one helpful rough indicator of potential for future growth. In the century before 1970 this fell from an average of about 225 per thousand live births to under 20 in developed countries; in 2010 the comparative figures for Sierra Leone and Singapore were 135 and 2. Such discrepancies between rich and poor countries are greater than in the past. There are comparable differences of life expectancy at all ages, too. At birth in developed countries it rose from slightly over 40 in 1870 to slightly over 70 a hundred years later. It has shown a remarkable evenness; in 1987, for example, it stood at 76, 75 and 70 years respectively in the United States, the United Kingdom and the USSR (Russia’s has now fallen to 63 years for men). The overall contrast today is even starker. Japan now tops the list with 83 years on average, while in Mozambique it is less than 40, about the same as in France before 1789 (in part because of the AIDS epidemic that has devastated the Mozambican population).

In the immediate future, such disparities will present new problems. For most of history, all societies resembled pyramids, with large numbers of young people at their base, and a few old. Now, developed societies are looking like tapering columns; the proportion of much older people is larger than in the past – in Italy and Japan, for instance, less than 15 per cent of the population is under the age of fifteen. In poorer countries, the reverse is true. About half of Niger’s population is under fifteen and one-third of India’s is. To talk simply of overall population growth therefore obscures important facts. World population goes on growing mightily, but in ways that have very different origins and will have very different historical implications and effects.

Among these are the ways in which population is shared out. In 2010, it was distributed between continents roughly as follows:

Continent                          Millions    % of total
Europe (including Russia)               733         10.61
Asia                                  4,167         60.31
Africa                                1,033         14.95
South America and the Caribbean         587          8.52
North America                           351          5.09
Australasia and Oceania                  36          0.52

The fall from Europe’s mid-nineteenth-century share of world population (a quarter) is striking. So is the ending of four centuries in which emigrants of European stock had left the continent to spread around the world; until the 1920s, Europe was still exporting people overseas, notably to the Americas. That outflow was much cut down by restrictions on entry to the United States in that decade, dwindled further during the Great Depression, and has never since recovered its former importance. On the other hand, immigration to the United States from the Caribbean, Central and South America and Asia surged upwards in the last decades of the twentieth century. Moreover, although some European countries still sent out emigrants (in the early 1970s more Britons still left the country each year than immigrants arrived from abroad), they also began from the 1950s to attract North Africans, Turks, Asians and West Indians, seeking work they could not find at home. Europe is now, overall, an importer of people.

Yet present patterns may not remain unchanged for long. Asia now contains over half of mankind, and China and India together more than 37 per cent of it, but some of the huge growth rates that have produced these numbers are beginning to fall. In Brazil, where population increase ran at more than twice the world rate in the early 1960s, it does so no longer, though Brazilians continue to grow in number. The difference between India (2.8 children per woman of child-bearing age) and China (1.5) is significant, but Niger, which tops the list, has 7.7, and Lithuania and South Korea, which are at the bottom, have 1.2. The general global growth trend is downwards, though: it now stands at roughly half the annual figure for 1963.

Even though all generalization in these matters is dangerous, it does seem that in our time fertility in most societies starts to drop as average incomes increase. Families have many children when children are seen as life insurance for parents in old age, or when having many sons gives prestige and protection. As wealth increases, large families are seen as an unnecessary drain on resources. Women who work outside the household are inclined to have fewer children, at least if they themselves can make the decision (and given their economic independence they mostly do). What is often surprising to historians and demographers alike is how quickly demographic patterns may change; what has been received wisdom for several generations may be overturned in a decade or less. The Roman Catholic Church’s teachings were often seen as responsible for high birth rates in southern Europe or Latin America, but Italy now has only 1.3 children per woman of child-bearing age, and Chile 1.8.

The highest average birth rates in the world today are found in Africa south of the Sahara – probably the region that can least afford a rapid population increase – while some Muslim countries are not far behind (Iraq 3.8 children per woman, Jordan 3.4). This population growth will put considerable pressure on both resources and polities. But the countries that have rapidly declining populations are also in trouble. Many European countries will have to rely on immigration to look after an elderly population if the trend is not reversed, and in some countries the natural birth rate is now so low that it will be very hard to reverse. In China the Communist-enforced one-child policy has gone from a blessing to a curse – even if its population still grows, the demographics are changing very rapidly so that China will be old before it is rich. Worse, it is the part of the Chinese population that still lives in poverty which has the most children, while the urban middle classes have most obeyed official policy.

Urbanization is the other key aspect of population change today. As the twentieth century ended nearly half of us lived in cities. The city is becoming the typical habitat of Homo sapiens. This was a remarkable change from most of human history. It registered the fact that cities have been losing their old killing-power. In the past, the high death rates of city life required constant demographic nourishment by country-born immigrants in order to keep up numbers. In the nineteenth century, city-dwellers in some countries had begun to reproduce themselves in sufficient surviving numbers for cities to grow organically. The results are startling; there are now many cities whose numbers of inhabitants are, for practical purposes, uncountable. Calcutta already had a million in 1900, but now has more than fifteen times as many; Mexico City had only 350,000 inhabitants as the twentieth century began, but ended it with over 20 million. Other impressions can be derived from the longer term. The world had only five cities of more than half a million inhabitants in 1700; in 1900 there were forty-three; and now Brazil alone has seven of more than a million. Sanitary regimes and public health measures, which have made such changes possible, have spread more slowly in some countries than in others, and the tide of urbanization is far from ebbing.

Population and urbanization dynamics both imply a huge growth in world resources. To simplify ruthlessly, though many have starved, many more have lived. Millions may have died in famines, but there has so far been no worldwide Malthusian disaster. If the world had not been able to feed them, human numbers would be smaller. Whether this can continue for long is another question. Experts have concluded that we can for a good while to come provide food for growing numbers. But in such matters we enter the realms of speculation, though the very existence of such hopes interests the historian, for they say something about a present and actual state of the world where what is believed to be possible is important in settling what will happen. In considering that, we have to recognize the major economic fact of modern history, and especially of the last half-century: that it brought about an unprecedented production of wealth.

Readers of this book are probably used to seeing harrowing pictures of famine and deprivation on their television screens. Yet in about half of the world, continuing economic growth has, since 1945, come for the first time to be taken for granted. It has become the ‘norm’, in spite of hiccups and interruptions along the way. Any slowing down in its rate, such as we have seen since 2008, now provokes alarm. What is more, in gross terms real economic growth has been the story in many other places as well, even if inequality or high birth rates still keep most of the population in poverty. Against the background of the way the world still thought, even in 1939, this can be accounted a revolution.

Yet that story does not just begin with the decades since the end of the Second World War, the golden age of unprecedented growth for some. The appropriate historical background for the surge in wealth creation which has successfully carried the burden of soaring world population is much deeper. One way of measuring it is to reflect that the average human being today commands about nine times the wealth of an average human being in 1500. Some economists have calculated that the world’s Gross Domestic Product (GDP) is 185 times bigger today than in 1500. But it is an uncertain estimate, because it is hard to evaluate the ‘value’ of new products, and the GDP has, of course, to be shared between many more people and – at least in some countries – is much more unevenly distributed.

Changes in per capita GDP in US dollars

Country      1900      2010
Brazil        678    10,816
China         545     4,382
UK          4,492    36,000
USA         4,091    46,800
India         599     1,370
Germany     2,895    40,274
Japan       1,135    42,783

Wealth and human numbers, indeed, tended to rise more or less in parallel until the nineteenth century. Then some economies began to display much faster growth than others. Even at the beginning of the twentieth century, a new intensification of wealth creation was already under way which, though badly set back by two world wars and the upheavals caused by the depression of the 1930s, was to be resumed after 1945 and has barely ceased since, in spite of serious challenges and striking contrasts between different economies. For all the huge disparities and setbacks in some countries, economic growth has taken place more widely than ever before.

Selected figures like those in the table above must be interpreted cautiously, and they can change very quickly, but they give a truthful impression of the way in which the world has become richer in a century. Yet some of humanity still remains woefully poor. Even with their recent economic spurts, China and India remain poor countries in terms of average income. But the poorest are those countries that have been racked by war or epidemic disease in addition to a meagre starting point. Burundi had a per capita GDP of 192 dollars in 2010, and Afghanistan one of 362.

If the overriding fact is one of wealth creation, it must have helped that the major powers were at peace with one another for so long. The years since 1945 have, of course, been studded with many bloody smaller-scale or incipient conflicts, and men and women have died in them every day, hundreds of thousands of them in warlike operations or their aftermath. The great powers have had much fighting done for them by surrogates. Yet no such destruction of human and economic capital as that of the two world wars took place. The international rivalry that underlay much notable tension tended, rather, to sustain or provoke economic activity in many countries. It provided much technological spin-off and led to major capital investments and transfers for political motives, some of which did much to increase real wealth.

The first such transfers took place in the later 1940s, when American aid made possible the recovery of Europe. For this to be successful, the American dynamo had to be available to promote recovery, as it had not been after 1918. The enormous wartime expansion of the American economy that had at last brought it out of the pre-war depression, together with the immunity of the American home base from physical damage by war, had ensured that it would be. Explanation for the deployment of American economic strength as aid has to be sought in the prevailing circumstances (of which the Cold War was an important one). International tension made it seem in America’s interest to behave as it did; an imaginative grasp of opportunities was shown by many of its statesmen and businessmen; there was for a long time no alternative source of capital on such a scale; and finally, it helped that men of different nations, even before the end of the war, had already set in place institutions for regulating the international economy in order to avoid any return to the near-fatal economic anarchy of the 1930s. The story of the reshaping of the economic life of the world thus begins before 1945, in the wartime efforts that produced the International Monetary Fund, the World Bank and the General Agreement on Tariffs and Trade (GATT). The economic stability they provided in the non-Communist world after 1945 underpinned two decades of growth in world trade at nearly 7 per cent per annum in real terms, even though it would take the end of the Cold War to get global trade back up to its pre-1914 levels. Still, between 1945 and the 1980s the average level of tariffs on manufactured goods fell from 40 per cent to 5 per cent, and world trade multiplied more than five-fold.

Over a longer term still, scientists and engineers were making their contribution to economic growth in less formal, often less visible, ways. The continued application of scientific knowledge through technology, and the improvement and rationalization of processes and systems in the search for greater efficiency, were all very important before 1939. They came more dramatically to the fore and began to exercise even greater influence after 1945. What they meant in agriculture, where improvement had begun long before industrialization was a recognizable phenomenon, is one of the clearest examples of their effects. For thousands of years farmers edged their returns upwards almost entirely by ancient methods, above all by clearing and breaking in new land. There is still much such land left that, with proper investment, could be made to raise crops (and much has been done in the last twenty-five years to bring it into cultivation, even in a crowded country like India). Yet this does not explain why world agricultural output has recently risen so dramatically. The root explanation is a continuation and acceleration of the agricultural revolution that began in early modern Europe and has been visible at least from the seventeenth century. Two hundred and fifty years later, it was vastly speeded up, thanks, largely, to applied science.

Well before 1939, wheat was being successfully introduced to lands in which, for climatic reasons, it had not been grown hitherto. Plant geneticists had evolved new strains of cereals, one of the first twentieth-century scientific contributions to agriculture on a scale going far beyond the trial-and-error ‘improvement’ of earlier times; only much later did genetic modification of crop species begin to attract adverse criticism. Even greater contributions to world food supplies had by then been made in areas already growing grain by using better chemical fertilizers (of which the first had become available in the nineteenth century). An unprecedented rate of replacement of nitrogen in the soil underlay the larger yields that have now become commonplace in countries with advanced agriculture.

Their costs include huge energy inputs, though, and fears of ecological consequences began to be expressed in the 1960s. By then better fertilizers had been joined by effective herbicides and insecticides, too, while the use of machinery in agriculture had grown enormously in developed countries. England had in 1939 the most mechanized farming in the world in terms of horsepower per acre cultivated; English farmers none the less still did much of their work with horses, while combine harvesters (already familiar in the United States) were rare. But not only were the fields mechanized. The coming of electricity brought automatic milking, grain-drying, threshing and the heating of animal sheds in winter. Now, the computer and automation have begun to reduce dependence on human labour even more; in the developed world the agricultural workforce has continued to fall while production per acre has risen, and genetically modified crops promise even greater yields.

For all that, paradoxically, there are more subsistence farmers in the world today than in 1900, just because there are more people. Their share of cultivated land and of the value of the crops produced, though, has fallen. The 2 per cent of the farmers who live in developed countries now supply about half the world’s food. In Europe the peasant is fast disappearing, as he disappeared in Great Britain 200 years ago. But this change has been unevenly spread and easily disrupted. Russia was traditionally one of the great agricultural economies, but as recently as 1947 suffered famine so severe as to provoke outbreaks of cannibalism once more.

Local dearth is still a danger in countries with large and rapidly growing populations where subsistence agriculture is the norm and productivity remains low. Just before the First World War, the British yield of wheat per acre was already more than two and a half times that of India; by 1968 it was roughly five times. Over the same period the Americans raised their rice yield from 4.25 to nearly 12 tons an acre, while that of Burma, once the ‘rice bowl of Asia’, rose only from 3.8 to 4.2 tons. In 1968, one agricultural worker in Egypt was providing food for slightly more than one family, while in New Zealand each farm employee was producing enough for forty. And even if the yield ratio had narrowed for some countries in the developing world by the early twenty-first century, most African and some South Asian regions still have desperately low yields.

Countries economically advanced in other ways show the greatest agricultural productivity. Countries in greatest need have found it impossible to produce crops more cheaply than can leading industrial economies. Ironic paradoxes result: the Russians, Indians and Chinese, big grain and rice producers, have found themselves buying American and Canadian wheat. Disparities between developed and undeveloped countries have widened in the decades of plenty. Roughly half of mankind now consumes about six-sevenths of the world’s production; the other half shares the rest. The United States has been the most extravagant consumer by far. In 1970 the half-dozen or so Americans in every 100 human beings used about 40 of every 100 barrels of oil produced in the world each year. They each consumed annually roughly a quarter of a ton of paper products; the corresponding figure for China was then about 20 pounds. The electrical energy used by China for all purposes in a year at that time would (it was said) just have sustained the supply of power to the United States’ air conditioners. Electricity production, indeed, is one of the best ways of making comparisons, since relatively little electrical power is traded internationally and most of it is consumed in the country where it is generated. At the end of the 1980s, the United States produced nearly 40 times as much electricity per capita as India and 23 times as much as China, but only 1.3 times as much as Switzerland.

In all parts of the world the disparity between rich and poor nations has grown more and more marked since 1945, not usually because the poor have grown poorer, but because the rich have grown much richer. Almost the only exceptions to this were to be found in the comparatively rich (by poor world standards) economies of the USSR and eastern Europe, where mismanagement and the exigencies of a command economy imposed lower growth rates, or even no growth at all. With these exceptions, even spectacular accelerations of production (some Asian countries, for example, pushed up their agricultural output between 1952 and 1970 proportionately more than Europe and much more than North America) have only occasionally succeeded in improving the position of poor countries in relation to that of the rich, because of inequality and rising populations – and rich countries, in any case, began at a higher level.

Although their rankings in relation to one another may have changed, those countries that enjoyed the highest standards of living in 1950 still, by and large, enjoy them today (even though they have been joined by a number of East Asian countries). These are the major industrial countries. Their economies are today the richest per capita, and their example spurs poorer countries to seek their own salvation in economic growth, which is too often read simply as industrialization. True, major industrial economies today do not much resemble their nineteenth-century predecessors; the old heavy and manufacturing industries, which long provided the backbone of economic strength, are no longer simple and satisfactory measures of it. Once-staple industries in leading countries have declined. Of the three major steel-making countries of 1900, the first two (the United States and Germany) were still among the first five world producers eighty years later, but in third and fifth places respectively; the United Kingdom (third in 1900) came tenth in the same world table – with Spain, Romania and Brazil close on her heels. Nowadays, Poland makes more steel than did the United States a century ago. What is more, newer industries often found a better environment for rapid growth in some developing countries than in the mature economies. Thus the people of Taiwan came by 2010 to enjoy per capita GDP nearly fourteen times that of India, while that of South Korea was fifteen times as big.

Modern economic growth has often been in sectors – electronics and plastics are examples – which barely existed even in 1945, and in new sources of power. Coal replaced running water and wood in the nineteenth century as the major source of industrial energy, but long before 1939 it was joined by hydro-electricity, oil and natural gas; very recently, power generated by nuclear fission was added to these. Industrial growth has raised standards of living as power costs have come down and, with them, those of transport. One particular innovation was of huge importance. In 1885 the first vehicle propelled by internal combustion was made – one, that is to say, in which the fuel was burned inside the cylinder of the engine itself, driving the piston directly, instead of the heat being applied by an external flame to raise steam in a boiler. Nine years later came a four-wheeled contraption made by the French Panhard Company, which is a recognizable ancestor of the modern car. France, with Germany, dominated the production of cars for the next decade or so, and they remained rich men’s toys. This is automobile prehistory. Automobile history began in 1908, when Henry Ford, an American, set up a production line for what became famous as his ‘Model T’. Planned deliberately for a mass market, its price was low. By 1915 a million Ford cars had been made, and by 1926 the Model T cost less than $300 (about £60 in British money at rates then current). An enormous commercial success was underway.

So was a social and economic revolution. Ford changed the world. By giving the masses something previously considered a luxury, and a mobility unavailable even to the millionaire fifty years earlier, he had an impact as great as that of the coming of the railways. This increase in amenity was to spread around the world, too, with enormous consequences. A worldwide car-manufacturing industry was one result, often dominating domestic manufacturing sectors and bringing, eventually, large-scale international integration; in the 1980s eight large producers made three out of four of the world’s cars. The industry stimulated huge investment in other sectors, too; only a few years ago, half the robots employed in the world’s industry were welders in car factories, and another quarter painted their products. Over the same decades, car production enormously stimulated demand for oil. Huge numbers of people came to be employed in supplying fuel and other services to car owners. Investment in road-building became a major concern of governments, as it had not been since the days of the Roman empire.

Ford, like many other great revolutionaries, had brought other men’s ideas to bear on his own purposes. In the process he also transformed the workplace. Stimulated by his example, assembly lines became the characteristic way of making consumer goods. On those set up by Ford, the motor car moved steadily from worker to worker, each one of them carrying out in the minimum necessary time the precisely delimited and, if possible, simple task in which he (or, later, she) was skilled. The psychological effect on the worker was soon deplored; Ford himself recognized that such work was very boring and paid high wages to offset it (thus also making it easier for his workers to buy his cars). This was a contribution to another fundamental social change that had cultural consequences of incalculable significance – the fuelling of economic prosperity by increasing purchasing power and, therefore, demand.

Some assembly lines nowadays are ‘manned’ entirely by robots. The single greatest technological change to affect the major industrial societies since 1945 has come in the huge field of what is comprehensively called information technology, the complex science of devising, building, handling and managing electronically powered machines that process information. Few innovatory waves in the history of technology have rolled in so fast. Applications of work done only during the Second World War were widely diffused in services and industrial processes over a couple of decades. This was most obvious in the spread of ‘computers’, electronic data processors, the first of which appeared only in 1945. Rapid increases in power and speed, reductions in size and improvements in visual display capacity brought a huge increase in the amount of information that could be ordered and processed in a given time.

Quantitative change, though, brought qualitative transformation. Technical operations hitherto unfeasible because of the mass of data involved now became possible. Intellectual activity had never been so suddenly accelerated. Moreover, at the same time as there was revolutionary growth in the power of computers, so there was in their availability, cheapness and portability. Within thirty years a ‘microchip’ the size of a credit card was doing the job that had at first required a machine the size of the average British living room. It was observed in 1965 that the processing power of a ‘chip’ doubled every eighteen months; the 2,000 or so transistors carried by a chip thirty years ago have now multiplied to millions. The transforming effects have been felt exponentially, and in every human activity – from money- and war-making, to scholarship and pornography.

Computers are, of course, only part of another long story of development and innovation in communication of all kinds, beginning with advances in the physical and mechanical movement of solid objects – goods and people. The major nineteenth-century achievements were the application of steam to land and sea communication, and later electricity and the internal combustion engine. In the air, there had been balloons, and the first ‘dirigible’ airships flew before 1900, but it was only in 1903 that the first powered flight was made by a man-carrying ‘heavier than air’ machine (that is, one whose support in the air did not come from bags of a gas lighter than air). This announced a new age of physical transport; a hundred years later, the value of goods moving through London’s biggest airport was greater than that through any British seaport. Millions now regularly travel by air on business and professional concerns, as well as for leisure, and flight has given a command of space to the individual only faintly imaginable as the twentieth century began.

The communication of information had already advanced far into another revolution. The essence of this was the separation of the information flow from any physical connection between source and signal. In the middle of the nineteenth century, poles carrying the wires for the electric telegraph were already a familiar sight beside railway lines, and the process of linking the world together with undersea cables had begun. Physical links were still fundamental. Then Heinrich Hertz demonstrated the existence of electromagnetic waves, and by 1900 scientists were exploiting electromagnetic theory to make possible the sending of the first, literally, ‘wireless’ messages. The transmitter and the receiver no longer needed any physical connection. Appropriately, it was in 1901, the first year of a new century to be profoundly marked by this invention, that Marconi sent the first radio message across the Atlantic. Thirty years later, most of the millions who by then owned wireless receivers had ceased to believe that they needed to open windows for the mysterious ‘waves’ to reach them, and large-scale broadcasting systems existed in all major countries.

A few years before this the first demonstration had been made of the devices on which television was based. In 1936, the BBC opened the first regularly scheduled television broadcasting service; twenty years later the medium was commonplace in leading industrial societies, and now that is true worldwide. Like the coming of print, the new medium had huge implications – incalculable ones, and politically and socially neutral or, rather, double-edged – but for their full measurement they must be placed in the context of the whole modern era of communications development. Telegraphy and radio made information more quickly available, and this could be advantageous both to governments and to their opponents. The ambiguities of television became visible even more rapidly. Its images could expose to the gaze of hundreds of millions things governments wanted to hide, but it was also believed to shape opinion in the interests of those who controlled it.

By the end of the twentieth century, too, it was clear that the Internet, the latest major advance in information technology, also had ambiguous possibilities. From its origins in the Arpanet – developed by the Advanced Research Projects Agency of the American Department of Defense in 1969 – by 2010 the Internet had almost 2,000 million regular users, many of them in developing countries. By then, the ease of communication that it offered had helped revolutionize world markets and strongly influence world politics, both in open political systems and within authoritarian states. It had spawned profound political change and even revolutions. E-commerce – the buying and selling of consumer goods and services through the Internet – became a major part of commerce in the United States in the early 2000s, with companies such as Amazon and eBay among the wealthiest and most influential in the market. By 2005, electronic mail had replaced postal services as the preferred way of communication in North America, Europe and parts of East Asia. But at the same time much of the ever-increasing transfer capacity of the Internet was used for watching pornographic films or playing interactive games. And even as much of this capacity is wasted, the social differences between those who spend much of their day online and those who have no access to the Internet at all are increasing rapidly.

By 1950 modern industry was already dependent on science and scientists, directly or indirectly, obviously or not and acknowledged or not. Moreover, the transformation of fundamental science into end products was by then often very rapid, and has continued to accelerate in most areas of technology. A substantial generalization of the use of the motor car, after the grasping of the principle of the internal combustion engine, took about half a century; in recent times, the microchip made hand-held computers possible in about ten years. Technological progress is still the only way in which large numbers of people become aware of the importance of science. Yet there have been important changes in the way in which it has come to shape their lives. In the nineteenth century, most practical results of science were still often by-products of scientific curiosity. Sometimes they were even accidental. By 1900 a change was under way. Some scientists had seen that consciously directed and focused research was sensible. Twenty years later, large industrial companies were beginning to see research as a proper call on their investment, albeit a small one. Some industrial research departments were in the end to grow into enormous establishments in their own right as petrochemicals, plastics, electronics and biochemical medicine made their appearance.

Nowadays, the ordinary citizen of a developed country cannot lead a life that does not rely on applied science. This all-pervasiveness, coupled with its impressiveness in its most spectacular achievements, was one of the reasons for the ever-growing recognition given to science. Money is one yardstick. The Cavendish Laboratory at the University of Cambridge, for example, in which some of the fundamental experiments of nuclear physics were carried out before 1914, had then a grant from the university of about £300 a year – roughly $1,500 at rates then current. When, during the war of 1939–45, the British and Americans decided that a major effort had to be mounted to produce nuclear weapons, the resulting ‘Manhattan Project’ (as it was called) is estimated to have cost as much as all the scientific research previously conducted by mankind from the beginnings of recorded time.

Such huge sums – and there were to be even larger bills to meet in the Cold War world – mark another momentous change, the new importance of science to government. After being for centuries the object of only occasional patronage by the state, it now became a major political concern. Only governments could provide resources on the scale needed for some of the things done since 1945. One benefit they usually sought was better weapons, which explained much of the huge scientific investment of the United States and the Soviet Union. The increasing interest and participation of governments has not, on the other hand, meant that science has grown more national; indeed, the reverse is true. The tradition of international communication among scientists is one of their most splendid inheritances from the first great age of science in the seventeenth century, but even without it, science would jump national frontiers for purely theoretical and technical reasons.

Once again, the historical context is complex and deep. Already before 1914 it was increasingly clear that boundaries between the individual sciences, some of them intelligible and usefully distinct fields of study since the 1600s, were tending to blur and then to disappear. The full implications of this have only begun to appear very lately, however. For all the achievements of the great chemists and biologists of the eighteenth and nineteenth centuries, it was the physicists who did most to change the scientific map of the twentieth century. James Clerk Maxwell, the first professor of experimental physics at Cambridge, published in the 1870s the work in electromagnetism which first broke effectively into fields and problems left untouched by Newtonian physics. Maxwell’s theoretical work and its experimental investigation profoundly affected the accepted view that the universe obeyed natural, regular and discoverable laws of a somewhat mechanical kind and that it consisted essentially of indestructible matter in various combinations and arrangements. Into this picture had now to be fitted the newly discovered electromagnetic fields, whose technological possibilities quickly fascinated laymen and scientists alike.

The crucial work that followed and that founded modern physical theory was done between 1895 and 1914, by Röntgen, who discovered X-rays, Becquerel, who discovered radioactivity, Thomson, who identified the electron, the Curies, who isolated radium, and Rutherford, who investigated the structure of the atom. They made it possible to see the physical world in a new way. Instead of lumps of matter, the universe began to look more like an aggregate of atoms, which were tiny solar systems of particles held together by electrical forces in different arrangements. These particles seemed to behave in a way that blurred the distinction between matter and electromagnetic fields. Moreover, such arrangements of particles were not fixed, for in nature one arrangement might give way to another and thus elements could change into other elements. Rutherford’s work, in particular, was decisive, for he established that atoms could be ‘split’ because of their structure as a system of particles. This meant that matter, even at this fundamental level, could be manipulated. Two such particles were soon identified: the proton and the electron; others were not isolated until after 1932, when Chadwick discovered the neutron. The scientific world now had an experimentally validated picture of the atom’s structure as a system of particles. But as late as 1935 Rutherford said that nuclear physics would have no practical implications – and no one rushed to contradict him.

What this radically important experimental work did not at once do was supply a new theoretical framework to replace the Newtonian system. This only came with a long revolution in theory, beginning in the last years of the nineteenth century and culminating in the 1920s. It was focused on two different sets of problems, which gave rise to the work designated by the terms ‘relativity’ and ‘quantum theory’. The pioneers were Max Planck and the man who was undoubtedly the greatest scientist of the twentieth century, Albert Einstein. By 1905 they had provided experimental and mathematical demonstration that the Newtonian laws of motion were an inadequate framework for explanation of a fact no longer to be contested: that energy transactions in the material world took place not in an even flow but in discrete jumps – quanta, as they came to be termed. Planck showed that radiant heat (from, for example, the sun) was not, as Newtonian physics required, emitted continuously; he argued that this was true of all energy transactions. Einstein argued that light was propagated not continuously but in particles. Though much important work was to be done in the next twenty or so years, Planck’s contribution had the most profound effect and it was again unsettling. Newton’s views had been found wanting, but there was nothing to put in their place.
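
The quantitative rule behind these ‘jumps’ is Planck’s relation E = hν: a quantum of radiation of frequency ν carries an energy equal to hν, where the constant h is about 6.63 × 10⁻³⁴ joule-seconds. So minute a constant means that the discreteness is imperceptible on the everyday scale, which is why classical physics had seemed perfectly adequate for so long.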

Meanwhile, after his work on quanta, Einstein had published in 1905 the work for which he was to be most widely, if uncomprehendingly, celebrated, his statement of the theory of relativity. This was essentially a demonstration that the traditional distinctions of space and time, and mass and energy, could not be consistently maintained. It therefore constituted a revolution in science, although it took a long time for the implications to be thoroughly absorbed. Instead of Newton’s three-dimensional physics, Einstein directed men’s attention to a ‘space–time continuum’ in which the interplay of space, time and motion could be understood. This was soon to be corroborated by astronomical observation of facts for which Newtonian cosmology could not properly account, but which could find a place in Einstein’s theory. One strange and unanticipated consequence of the work on which relativity theory was based was his demonstration of the relations of mass and energy, which he formulated as E = mc², where E is energy, m is mass and c is the constant speed of light. The importance and accuracy of this theoretical formulation was not to become clear until much more nuclear physics had been done. It would then be apparent that the relationships observed when mass energy was converted into heat energy in the breaking up of nuclei also corresponded to his formula.
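
The scale this implies is worth spelling out: because c is about 300 million metres a second, the complete conversion of a single gram of matter into energy would release 0.001 × (3 × 10⁸)² = 9 × 10¹³ joules (roughly the explosive energy of 20,000 tons of TNT), which is why the later physics of the nucleus, in which a small fraction of mass is so converted, promised and threatened so much.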

While these advances were absorbed, attempts continued to rewrite physics, but they did not get far until a major theoretical breakthrough in 1926 finally provided a mathematical framework for Planck’s observations and, indeed, for nuclear physics. So sweeping was the achievement of Schrödinger and Heisenberg, the two theoretical physicists mainly responsible, that it seemed for a time as if quantum mechanics might be of virtually limitless explanatory power in the sciences. The behaviour of particles in the atom observed by Rutherford and Bohr could now be accounted for. Further development of their work led to predictions of the existence of new nuclear particles, notably the positron, which was duly identified in the 1930s. The discovery of new particles continued. Quantum mechanics seemed to have inaugurated a new age of physics.

By mid-century much more had disappeared in science than just a once-accepted set of general laws (and in any case it remained true that, for most everyday purposes, Newtonian physics was still all that was needed). In physics, and from there in other sciences, the whole notion of a general law was being replaced by the concept of statistical probability as the best that could be hoped for. The idea, as well as the content, of science was changing. Furthermore, the boundaries between sciences collapsed under the onrush of new knowledge made accessible by new theories and instrumentation. Any one of the great traditional divisions of science was soon beyond the grasp of a single mind. The conflations involved in importing physical theory into neurology or mathematics into biology put further barriers in the way of attaining that synthesis of knowledge that had been the dream of the nineteenth century, just as the rate of acquisition of new knowledge (some in such quantities that it could only be handled by the newly available computers) became faster than ever. Such considerations did nothing to diminish either the prestige of the scientists or the faith that they were mankind’s best hope for the better management of its future. Doubts, when they came, arose from other sources than their inability to generate an overarching theory as intelligible to lay understanding as Newton’s had been. Meanwhile, the flow of specific advances in the sciences continued.

In a measure, the baton passed after 1945 from the physical to the biological or ‘life’ sciences. Their current success and promise have, once again, deep roots. The seventeenth-century invention of the microscope had first revealed the organization of tissue into discrete units called cells. In the nineteenth century, investigators already understood that cells could divide and that they developed individually. Cell theory, widely accepted by 1900, suggested that individual cells, being alive themselves, provided a good approach to the study of life, and the application of chemistry to this became one of the main avenues of biological research. Another mainline advance in nineteenth-century biological science was provided by a new discipline, genetics, the study of the inheritance by offspring of characteristics from parents. Darwin had invoked inheritance as the means of propagation of traits favoured by natural selection. The first steps towards understanding the mechanism that made this possible were those of an Austrian monk, Gregor Mendel, in the 1850s and 1860s. From a meticulous series of breeding experiments on pea plants, Mendel concluded that there existed hereditary units controlling the expression of traits passed from parents to offspring. In 1909 a Dane gave them the name ‘genes’.

Gradually the chemistry of cells became better understood and the physical reality of genes was accepted. By 1873 the presence in the cell nucleus of a substance that might embody the most fundamental determinant of all living matter had already been established. Experiments then revealed a visible location for genes in chromosomes, and in the 1940s it was shown that genes controlled the chemical structure of protein, the most important constituent of cells. In 1944 the first step was taken towards identifying the specific effective agent in bringing about changes in certain bacteria, and therefore in controlling protein structure. In the 1950s it was at last identified as ‘DNA’, the physical structure of which (the double helix) was established in 1953. The crucial importance of this substance (its full name is deoxyribonucleic acid) is that it is the carrier of the genetic information that determines the synthesis of protein molecules at the basis of life. The chemical mechanisms underlying the diversity of biological phenomena were at last accessible. Physiologically, and perhaps psychologically, this implied a transformation of man’s view of himself unprecedented since the diffusion of Darwinian ideas in the previous century.

The identification and analysis of the structure of DNA was the most conspicuous single step towards a new manipulation of nature, the shaping of life forms. Already in 1947, the word ‘biotechnology’ had been coined. Once again, not only more scientific knowledge but also new definitions of fields of study and new applications followed. ‘Molecular biology’ and ‘genetic engineering’, like ‘biotechnology’, quickly became familiar terms. The genes of some organisms could, it was soon shown, be altered so as to give those organisms new and desirable characteristics; by manipulating their growth processes, yeast and other micro-organisms could be made to produce novel substances, too – enzymes, hormones or other chemicals. This was one of the first applications of the new science; the technology and data accumulated empirically and informally for thousands of years in making bread, beer, wine and cheese was at last to be overtaken. Genetically modified bacteria could now be made to produce new compounds. By the end of the twentieth century, three-quarters of the soya beans grown in the United States were the product of genetically modified seed, while agricultural producers like Canada, Argentina and Brazil were also raising huge amounts of genetically modified crops.

More dramatically, by the end of the 1980s there was underway a worldwide collaborative investigation, the Human Genome Project. Its almost unimaginably ambitious aim was the mapping of the human genetic apparatus. The position, structure and function of every human gene – of which there were said to be from 30,000 to 50,000 in every cell, each gene having up to 30,000 pairs of the four basic chemical units that form the genetic code – were to be identified. As the century closed, it was announced that the project had been completed. (Shortly afterwards, the sobering discovery was made that human beings possessed only about twice as many genes as the fruit fly – substantially fewer than had been expected.) The door had been opened to a great future for manipulation of nature at a new level – and what that might mean was already visible in a Scottish laboratory in the form of the first successfully ‘cloned’ sheep. Already, too, screening for the presence of defective genes is a reality and the replacement of some of them is possible. The social and medical implications are tremendous, as are the implications for history. Some of what has been discussed in the early chapters of this book could not have been known without DNA evidence.

By the dawn of the new century it was becoming clear that genetic engineering would shape a substantial part of our future, in spite of the controversy created by many research programmes in this field. The ‘new’ micro-organisms created by geneticists are now patentable and therefore commercially available in many parts of the world. Likewise, genetically modified crops are used to increase yields through the creation of more resistant and more productive strains, thereby giving some regions their first ever opportunity to become self-sufficient in staple foods. But while providing obvious benefits, biotechnology has also come under scrutiny for delivering food products that may not be safe and for the increasing dominance of large multi-national corporations in both research and production worldwide. Such concerns have, for obvious reasons, become particularly strong when genetic research on human material has been involved, such as in work on stem cells from embryos. Many scientists fail to realize how the matters they are dealing with raise immense concerns among the public, mostly because of warnings from the history of the twentieth century.

Progress in these matters has owed much of its startling rapidity to the availability of new computer power, another instance of the acceleration of scientific advance so as both to provide faster applications of new knowledge and to challenge more quickly the world of settled assumptions and landmarks with new ideas that must be taken into account by laymen. Yet it remains as hard as ever to see what such challenges imply or may mean. For all the huge recent advances in the life sciences, it is doubtful that even their approximate importance is sensed by more than tiny minorities, especially when they deal with the ultimate human questions that have been with us since the beginning of history: the creation of life and the avoidance of death.

For a brief period in the middle of the twentieth century the focus on the power of science shifted from the earth to the heavens. The exploration of space may well turn out one day to dwarf in significance other historical processes (discussed at greater length in this book) but as yet shows no sign of doing so. Yet it suggests that the capacity of human culture to meet unprecedented challenges is as great as ever and it has provided what is so far the most spectacular example of human domination of nature. For most people, the space age began in October 1957 when an unmanned Soviet satellite called Sputnik I was launched by rocket and could soon be discerned in orbit around the earth, emitting radio signals. Its political impact was vast: it shattered the belief that Soviet technology lagged significantly behind American. The full importance of the event, though, was still obscured because superpower rivalries swamped other considerations for most observers. In fact, it ended the era when the possibility of human travel in space could still be doubted. Thus, almost incidentally, it marked a break in historical continuity as important as the European discovery of the Americas, or the Industrial Revolution.

Visions of space exploration could be found in the last years of the nineteenth century and the early years of the twentieth, when they were brought to the notice of the western public in fiction, notably in the stories of Jules Verne and H. G. Wells. Its technology went back almost as far. A Russian scientist, K. E. Tsiolkovsky, had designed multi-stage rockets and devised many of the basic principles of space travel (and he, too, had written fiction to popularize his obsession) well before 1914. The first Soviet liquid-fuelled rocket went up (3 miles) in 1933, and a two-stage rocket six years later. The Second World War prompted a major German rocket programme, which the United States drew on to begin its own programme in 1955.

The American programme started with more modest hardware than the Soviets’ (who already had a commanding lead), and the first American satellite weighed only 3 pounds (Sputnik I weighed 184 pounds). A much-publicized launch attempt was made in early December 1957, but the rocket caught fire instead of taking off. The Americans would soon do much better than this, but within a month of Sputnik I the Soviets had already put up Sputnik II, an astonishingly successful machine, weighing half a ton and carrying the first passenger in space, a black-and-white mongrel called Laika. For nearly six months Sputnik II orbited the earth, visible to the whole inhabited world and enraging thousands of dog-lovers, for Laika was not to return.

The Soviet and American space programmes had by then somewhat diverged. The Soviets, building on their pre-war experience, had put much emphasis on the power and size of their rockets, which could lift big loads, and here their strength continued to lie. The military implications were more obvious than those (equally profound but less spectacular) which flowed from American concentration on data-gathering and on instrumentation. A competition for prestige was soon underway, but although people spoke of a ‘space race’ the contestants were running towards somewhat different goals. With one great exception (the wish to be first to put a man in space) their technical decisions were probably not much influenced by one another’s performance. The contrast was clear enough when Vanguard, the American satellite that failed in December 1957, was successfully launched the following March. Tiny though it was, it went much deeper into space than any predecessor and provided more valuable scientific information in proportion to its size than any other satellite. It is likely to be in orbit for another couple of centuries or so.

New achievements then quickly followed. At the end of 1958 the first satellite for communications purposes was successfully launched (it was American). In 1960 the Americans scored another ‘first’ – the recovery of a capsule after re-entry. The Soviets followed this by orbiting and retrieving Sputnik V, a four-and-a-half-ton satellite carrying two dogs, which became the first living creatures to orbit the earth and return to it safely. In the spring of the following year, on 12 April, a Soviet rocket took off carrying a man, Yuri Gagarin. He landed 108 minutes later, after one orbit around the earth. Humanity’s life in space had begun, four years after Sputnik I.

Possibly spurred by a wish to offset a recent publicity disaster in American relations with Cuba, President Kennedy proposed in May 1961 that the United States should try to land a man on the moon (the first man-made object had already crash-landed there in 1959) and return him safely to earth before the end of the decade. His publicly stated reasons for recommending this compare interestingly with those that led the rulers of fifteenth- and sixteenth-century Portugal and Spain to back their da Gamas and Magellans. One was that such a project provided a good national goal; the second that it would be prestigious (‘impressive to mankind’ were the president’s words); the third that it was of great importance for the exploration of space; and the fourth (somewhat oddly) that it was of unparalleled difficulty and expense. Kennedy said nothing of the advancement of science, of commercial or military advantage – or, indeed, of what seems to have been his real motivation: to do it before the Soviets did. Surprisingly, the project met virtually no opposition and the first money was soon allocated.

During the early 1960s the Soviets continued to make spectacular progress. The world was perhaps most excited when they sent a woman into space in 1963, but their technical competence continued to be best shown by the size of their vehicles – a three-man machine was launched in 1964 – and in the achievement the following year of the first ‘space walk’, when one of the crew emerged from his vehicle and moved about outside while in orbit (though reassuringly attached to it by a lifeline). The Soviets were to go on to further important advances in achieving rendezvous for vehicles in space and in engineering their docking, but after 1967 (the year of the first death through space travel, when a Soviet cosmonaut was killed during re-entry) the glamour transferred to the Americans. In 1968, they achieved a sensational success by sending a three-man vehicle into orbit around the moon and transmitting television pictures of its surface. It was by now clear that ‘Apollo’, the moon-landing project, was going to succeed.

In May 1969 a vehicle put into orbit by the tenth rocket of the project approached to within six miles of the moon to assess the techniques of the final stage of landing. A few weeks later, on 16 July, a three-man crew was launched. Their lunar module landed on the moon’s surface four days later. On the following morning, 21 July, Neil Armstrong, the commander of the mission, became the first human being to set foot on the moon. President Kennedy’s goal had been achieved with time in hand. Other landings were to follow. In a decade that had opened politically with humiliation for the United States in the Caribbean and was ending in the morass of an unsuccessful war in Asia, it was a triumphant reassertion of what America (and, by implication, capitalism) could do. It was also the outstanding signal of the latest and greatest extension by Homo sapiens of his environment, the beginning of a new phase of his history, one to be enacted on other celestial bodies.

Even at the time this wonderful achievement was decried, and now it is difficult to shake off a sense of anti-climax. Its critics felt that the mobilization of resources the programme needed was unjustified, because it was irrelevant to the real problems of the earth. To some, the technology of space travel has seemed to be our civilization’s version of the Pyramids, a huge investment in the wrong things in a world crying out for money for education, nutrition, medical research – to name but a few pressing needs. But the scientific and technological gains made through the programmes cannot be denied, and neither can their mythical importance. However regrettable it may be, modern societies have shown few signs of being able to generate much interest and enthusiasm among their members for collective purposes, except for brief periods (or in war, whose ‘moral equivalent’ – as one American philosopher put it well before 1914 – has still to be found). The imagination of large numbers of people was not really fired by the prospect of adding marginally to the GDP or introducing one more refinement to a system of social services, however intrinsically desirable these things might have been. Kennedy’s identification of a national goal was shrewd; in the troubled 1960s Americans had much to agitate and divide them, but they did not turn out to obstruct the launching of the space missions.

Space exploration also became more international as it went on. Before the 1970s there was little co-operation between the two greatest nations concerned, the United States and the Soviet Union, and much duplication of effort and inefficiency. Ten years before the Americans planted the American flag on it, a Soviet mission had dropped a Lenin pennant on the moon. This seemed ominous; there was a basic national rivalry in the technological race itself and nationalism might provoke a ‘scramble for space’. But the dangers of competition were avoided, at least in some fields; it was soon agreed that celestial objects were not subject to appropriation by any one state. In July 1975, some 150 miles above the earth, co-operation became a startling reality in the era of détente, when Soviet and American machines connected themselves so that their crews could move from one to the other. In spite of doubts, exploration continued in a relatively benign international setting. The visual exploration of deeper space was carried beyond Jupiter by unmanned probes, and 1976 brought the first landing of an unmanned exploration vehicle on the surface of the planet Mars. In 1981 the American space shuttle, the first reusable space vehicle, made its maiden voyage, in a programme that was to last until 2011.

These achievements were tremendous, yet now there is great uncertainty about the way forward in man’s encounter with space. The end of the space shuttle programme raises questions about whether manned exploration has a future in space research. Yet to have landed safely on the moon and returned had been a dazzling affirmation of the belief that we live in a universe we can manage. The instruments for doing so were once magic and prayer; they are now science and technology. But continuity lies in the growing human confidence throughout history that the natural world could be manipulated. Landing on the moon was a landmark in that continuity, an event perhaps of the same order as the mastery of fire, the invention of agriculture or the discovery of nuclear power. It will be followed up, as the 2012 landing of an unmanned American science laboratory on Mars shows. (Perhaps symbolically, the Soviets had crash-landed a probe on Mars forty years earlier.)

Exploration of the skies can be compared also to the great age of terrestrial discovery, even though space travel is a good deal safer and more predictable than fifteenth-century seafaring. Both, however, build on a slow accumulation of knowledge. Cumulatively, the base of exploration widened as data was added, piece by piece, to what was known. Da Gama had to pick up an Arab navigator once around the Cape of Good Hope. Unknown seas lay ahead. Five hundred years later, Apollo was launched from a far broader but still cumulative base – nothing less than the whole scientific knowledge of mankind. In 1969, the distance to the moon was already known, as were the conditions that would greet men arriving there, most of the hazards they might encounter, the quantities of power and supplies and the nature of the other support systems they would need to return, and the stresses their bodies would undergo. Though things might have gone wrong, there was a widespread feeling that they would not. In its predictable, as in its cumulative quality, space exploration epitomizes our science-based civilization. Perhaps this is why space does not seem to have changed minds and imaginations as did former great discoveries.

Behind the increasing mastery of nature achieved in over 10,000 years lay the hundreds of millennia during which prehistoric technology had inched forwards from the discovery that a cutting edge could be put on a stone chopper and that fire could be mastered, while the weight of genetic programming and environmental pressure still loomed much larger then than did conscious control. The dawning of consciousness that more than this was possible was the major step in man’s evolution after his physical structure had settled into more or less what it is today. With it, the control and use of experience had become possible.

Already in the 1980s, nevertheless, space exploration was overshadowed in many minds by a new uneasiness about man’s interference with nature. Within only a few years of Sputnik I, doubts were being voiced about the ideological roots of so masterful a view of our relationship to the natural world. This uneasiness, too, could now be expressed with a precision based on observed facts not hitherto available or not considered in that light; it was science itself which provided the instrumentation and data that led to dismay about what was going on. A recognition was dawning of the damage that interference with the environment might do in the future.

It was, of course, the recognition that was new, not the phenomena which provoked it. Homo sapiens (and perhaps his predecessors) had always scratched away at the natural world in which he lived, modifying it in many particulars, destroying other species. Millennia later, migration southward and the adoption of dryland crops from the Americas had devastated the great forests of south-west China, bringing soil erosion and the consequential silting of the Yangzi drainage system in its train, and so culminating in repeated flooding over wide areas. In the early Middle Ages, Islamic conquest had brought goat-herding and tree-felling to the North African littoral on a scale that destroyed a fertility once able to fill the granaries of Rome. But such sweeping changes, though hardly unnoticed, were not understood. The unprecedented rapidity of ecological interference initiated from the seventeenth century onwards by Europeans, however, was to bring things to a head. The unconsidered power of technology forced the dangers on the attention of mankind in the second half of the twentieth century. People began to reckon up damage as well as achievement, and by the middle of the 1970s it seemed to some of them that even if the story of growing human mastery of the environment was an epic, that epic might well turn out to be a tragic one.

Suspicion of science had never wholly disappeared in western societies, though it tended to be confined to a few surviving primitive or reactionary enclaves as the majesty and implications of the scientific revolution of the seventeenth century gradually unrolled. History can provide much evidence of uneasiness about interference with nature and attempts to control it, but until recently such uneasiness seemed to rest on non-rational grounds, such as the fear of provoking divine anger or nemesis. As time passed, it was steadily eroded by the palpable advantages and improvements that successful interference with nature brought about, most obviously through the creation of new wealth expressed in all sorts of goods, from better medicine to better clothing and food.

In the 1970s, however, it became clear that a new scepticism about science itself was abroad, even though only among a minority and only in rich countries. There, a cynic might have said, the dividends on science had already been drawn. None the less, scepticism manifested itself there first in the 1970s and 1980s as ‘green’ political parties sought to promote policies protective of the environment. Although their direct political impact was limited, they did proliferate; the established political parties and perceptive politicians therefore began toying with ‘green’ themes, too.

Environmentalists, as the concerned came to be called, benefited from the new advances in communications, which rapidly broadcast disturbing news even from previously uncommunicative sources. In 1986, an accident occurred at the Chernobyl nuclear power station in Ukraine. Suddenly and horribly, human interdependence was made visible. Grass eaten by lambs in Wales, milk drunk by Poles and Yugoslavs and air breathed by Swedes were all contaminated. An incalculable number of Soviet citizens, it appeared, were going to die over the years from the slow effects of radiation. The alarming event was brought home to millions by television not long after other millions had watched on their screens an American rocket blow up with the loss of all on board. Chernobyl and Challenger showed to huge numbers of people for the first time both the limitations and the possible dangers of an advanced technological civilization.

Such accidents reinforced and diffused the new concern with the environment. It soon became tangled with much else. Some of the doubts that have recently arisen accept that our civilization has been good at creating material wealth, but note that by itself that does not necessarily make men happy. This is hardly a new idea, but its application to society as a whole instead of to individuals is a new emphasis. It led to a wider recognition that improvement of social conditions may not remove all human dissatisfactions and may actually aggravate some of them. Pollution, the oppressive anonymity of crowded cities and the nervous stress and strain of modern work conditions easily erode the satisfactions provided by material gain, and they are not new problems: 4,000 people died of atmospheric pollution in a week in London in 1952, but the word ‘smog’ had been invented nearly half a century before that. Scale, too, has now become a problem in its own right. Some modern cities may even have grown to the point at which they present problems that are, for the moment, insoluble.

Some fear that resources are now so wastefully employed that we confront a new version of the Malthusian peril. Energy has never been used so lavishly as it is today; one calculation suggests that more has been used by humanity during the last century than during the whole of previous history – say, in the last 10,000 years. Eighty-seven per cent of this energy comes from fossil fuels, created from the fossilized remains of plants and other organisms accumulated in the earth’s crust over millions of years. Reserves are running out just as thousands of millions of people hope to raise their consumption to the levels now current in the West. This is clearly an unsustainable situation. Many governments and companies are now investing heavily in developing ‘sustainable’ forms of energy, such as geothermal, solar, tidal, wind and waste-derived power. But in reality little progress has been made over recent decades, especially in turning these resources into widely applied technologies. With nuclear energy still encountering much resistance, humankind faces a bleak future in energy terms.

We may, too, already have passed the point at which energy consumption puts unmanageable strains on the environment (for instance, through pollution or damage to the ozone layer), strains it would be intolerable to increase further. The social and political consequences that might follow from environmental changes that have already occurred have not yet begun to be grasped, and we have nothing like the knowledge, techniques or consensus over goals such as were available for landing men on the moon.

This became much clearer as a new spectre came to haunt the last decades of the twentieth century – the possibility of man-made, irreversible climatic change. The year 1990 had hardly ended before it was being pointed out that it had been the hottest year since climatic records began to be kept. Was this, some asked, a sign of ‘global warming’, of the ‘greenhouse effect’ produced by the release into the atmosphere of the immense quantities of carbon dioxide produced by a huge population burning fossil fuels as never before? One estimate is that there is now some 25 per cent more carbon dioxide in the atmosphere than in pre-industrial times. It may be so (and as the world’s output of the stuff is now said to be 30,000 million tons a year, it is not for laymen to dispute the magnitudes). Not that this was the only contributor to the phenomenon of accumulating gases in the atmosphere whose presence prevents the planet from dissipating heat; methane, nitrous oxide and chlorofluorocarbons (CFCs) all add to the problem.

And as if global warming were not enough to worry about, acid rain, ozone depletion leading to ‘holes’ in the ozone layer, and deforestation at unprecedented rates all provide major grounds for new environmental concern. The consequences, if no effective counter-measures are forthcoming, could be enormous, expressing themselves in climatic change (average surface temperature on the earth might rise by between one and four degrees Celsius over the next century), agricultural transformation, rising sea-levels (2½ inches a year has been suggested as possible and plausible) and major migrations.

The Kyoto Protocol to the UN Framework Convention on Climate Change, which came into force in 2005, is an attempt to deal with these problems by limiting the amount of greenhouse gases released into the atmosphere. Thirty-eight industrial nations have pledged to reduce their emissions to below 1990 levels by 2012. But the world’s largest polluter, China, is exempt from most of the regulations because of its status as a developing country, while the world’s second largest polluter, the United States, has refused to ratify the protocol. Even if the signatories live up to their commitments (and there are no signs at present that they are doing so fully), most experts believe that much more is needed to avoid the long-term effects of global warming. By the turn of the twenty-first century it was abundantly clear that, should the major states eventually come to co-operate rather than compete, there would be plenty of common concerns for mankind to address – if only they could agree on what had to be done.

Historians should not pontificate about what goes on in the minds of the majority, for they know no more than anyone else; it is the untypical, who have left disproportionately prominent evidence, whom they know most about. They should be careful, too, about speculating on the effect of what they think are widely held ideas. Obviously, as recent political responses to environmental concerns show, changes in ideas can soon affect our collective life. But this is true even when only a minority know what the ozone layer is. Ideas held more widely, and of a vaguer, less-defined sort, also have historical impact; a Victorian Englishman invented the expression ‘cake of custom’ to speak of the attitudes, formed by deep-seated and usually unquestioned assumptions, which exercise decisive conservative weight in most societies. To be dogmatic about how such ideas operate is even more hazardous than to say how ideas tie up with specific matters (such as environmental change), yet the effort has to be made.

We can now see, for example, that more than any other single influence a growing abundance of commodities has recently shattered what was for millions – still not long ago – a world of stable expectations. This is still happening, most strikingly in some of the poorest countries. Cheap consumer goods and the images of them increasingly available in advertisements, especially on television, bring major social changes in their train. Such goods confer status; they generate envy and ambition, provide incentives to work for wages with which to buy them, and often encourage movement towards towns and centres where those wages are to be had. This severs ties with former ways and with the disciplines of ordered, stable life, and forms one of many currents feeding the hastening onrush of what is new.

Part of the complicated background to and the process of such changes is an obvious paradox: the last century was one of unprecedentedly dreadful tragedy and disaster on any measurable scale, and yet it appeared to end with more people than ever believing that human life and the condition of the world could be improved, perhaps indefinitely, and therefore that they should be. The origins of such optimistic attitudes lie centuries back in Europe; until recently, they were confined to cultures rooted in that continent. Elsewhere they have still to make much progress. Few could formulate such an idea clearly or consciously, even when asked; yet it is one shared more widely than ever before and one that is changing behaviour everywhere.

Almost certainly such a change owes less to exhortatory preaching (though there has been plenty of that) than to the material changes whose psychological impact has everywhere helped to break up the cake of custom. In many places they were the first comprehensible sign that change was in fact possible, that things need not always be as they have been. Once, most societies consisted mainly of peasants living in similar bondage to routine, custom, the seasons, poverty. Now, cultural gulfs within mankind – say, those between the European factory-worker and his equivalent in India or China – are often vast. That between the factory-worker and peasant is wider still. Yet even the peasant begins to sense the possibility of change. To have spread the idea that change is not only possible but also desirable is the most important and troublesome of all the results of European cultural influence.

Technical progress has often promoted such change by undermining inherited ways over very broad areas of behaviour. As already mentioned, an outstanding example has been the appearance over the last two centuries of better forms of contraception, whose apogee was reached in the 1960s with the rapid and wide diffusion of what became (in many languages) known simply as ‘the Pill’. Though women in western societies had long had access to effective techniques and knowledge in these matters, the Pill – essentially a chemical means of suppressing ovulation – implied a greater transference of power to women in sexual behaviour and fertility than any earlier device. Although still not taken up by women in the non-western world so widely as by their western sisters, and although not legally available on the same basis in all developed countries, it has, through the mere spread of awareness of its existence, marked an epoch in relations between the sexes.

But many other instances of the transforming power of science and technology on society could be cited. It is difficult not to feel, for example, that two centuries’ changes in communication, and particularly those of the last six or seven decades, imply even more for the history of culture than, say, did the coming of print. Technical progress also operates in a general way through the testimony it provides of the seemingly magical power of science, since there is greater awareness of its importance than ever before. There are more scientists about; more attention is given to science in education; scientific information is more widely diffused through the media and more readily comprehensible.

Yet success, as in space, has paradoxically brought diminishing returns in awe. When more and more things prove possible, there is less that is very surprising about the latest marvel. There is even (unjustifiable) disappointment and irritation when some problems prove recalcitrant. Yet the grip of the master idea of our age, the notion that purposive change can be imposed upon nature if sufficient resources are made available, has grown stronger in spite of its critics. It is a European idea, and the science now carried on around the globe (all based on the European experimental tradition) continues to throw up ideas and implications disruptive of traditional, theocentric views of life. This has accompanied the high phase of a long process of dethroning the idea of the supernatural, even in the form of the great religions.

Science and technology have thus both tended to undermine traditional authority, customary ways and accepted ideology. While they appear to offer material and technical support to the established order, their resources also become available to its critics. Improving communication has pushed new ideas more quickly into mass culture than ever before, though the impact of scientific ideas on élites is easier to trace. In the eighteenth century, Newtonian cosmology had been able to settle down into co-existence with Christian religion and other theocentric modes of thought without much troubling the wide range of social and moral beliefs tied to them. As time passed, however, science has seemed harder and harder to reconcile with any fixed belief at all. It has appeared at times to stress relativism and the pressure of circumstance to the exclusion of any unchallengeable assumption or viewpoint.

A very obvious instance can be seen in one new branch of science – psychology – which evolved in the nineteenth century. After 1900 more began to be heard of it by the lay public, and especially of two of its expressions. One, which eventually took the name ‘psychoanalysis’, can be considered, as an influence on society at large, to begin with the work of Sigmund Freud, which had begun in the clinical observation of mental disorder, a well-established method. His own development of this became, with comparative rapidity, notorious because of its wide influence outside medicine. As well as stimulating a mass of clinical work that claimed to be scientific (though its status was and is contested by many scientists), it undermined many accepted assumptions, above all attitudes to sexuality, education, responsibility and punishment.

Meanwhile, another psychological approach was that pursued by practitioners of ‘behaviourism’ (like ‘Freudian’ and ‘psychoanalytical’, a word often used somewhat loosely). Its roots went back to eighteenth-century ideas, and it appeared to generate a body of experimental data certainly as impressive as (if not more impressive than) the clinical successes claimed by psychoanalysis. The pioneer name associated with behaviourism is still that of the Russian I. P. Pavlov, the discoverer of the ‘conditioned reflex’. This rested on the manipulation of one of a pair of variables in an experiment, in order to produce a predictable result in behaviour through a ‘conditioned stimulus’ (the classical experiment provided for a bell to be sounded before food was given to a dog; after a time, sounding the bell caused the dog to salivate without the actual appearance of food). Refinements and developments of such procedures followed which provided much information and, it was believed, insight into the sources of human behaviour.

Whatever the benefits these psychological studies may have brought with them, what is striking to the historian is the contribution that Freud and Pavlov made to a larger and not easily definable cultural change. The doctrines of both were bound – like more empirical approaches to the medical treatment of mental disorder by chemical, electrical and other physical interference – to suggest flaws in the traditional respect for moral autonomy and personal responsibility that lay at the heart of European-inspired moral culture. In a sharper focus, too, their weight was now added to that of the geologists, biologists and anthropologists in the nineteenth century who contributed to the undermining of religious belief.

At any rate, in western societies the power of the old idea that things mysterious and inexplicable were best managed by magical or religious means now seems to have vanished, except, perhaps, among south-east European peasants and some American evangelical Christians. It may be conceded that where this has happened it has gone along with a new acceptance, even if halting and elementary, that science was now the way to manage most of life. But to speak of such things demands very careful qualification. When people talk about the waning power of religion, they often mean only the formal authority and influence of the Christian Churches; behaviour and belief are quite different matters. No English monarch since Elizabeth I, four and a half centuries ago, has consulted an astrologer about an auspicious day for a coronation. Yet in the 1980s the world was amused (and perhaps a little alarmed) to hear that the wife of the president of the United States liked to seek astrological advice.

It seems more revealing, perhaps, that in 1947 the timing of the ceremony marking the establishment of Indian independence was only settled after appropriate consultation with the astrologers, even though India has a constitution that is non-confessional and, theoretically, secular. Around the world, too, confessional states or established religions are now unusual outside Muslim countries (though England and some of the Nordic countries still have state churches). This reduction need not mean, however, that the real power of religious belief or of religions over their adherents has declined everywhere. The founders of Pakistan were secular-minded, westernized men, but in a struggle with the conservative ulema after independence, they often lost. Some of the same could be said for Israel, another state created by a secular élite but on a religious basis.

It may well be true that today more people give serious attention to what is said by religious authorities than have ever done so before: there are more people alive, after all, even if the adherence to any formal religion has declined in parts of the West. Many people in Britain were startled in the 1980s when Iranian clergymen denounced a fashionable author as a traitor to Islam and pronounced a sentence of death upon him; it was a surprise to bien pensant and progressive circles to discover that, as it were, the Middle Ages were still in full swing in some parts of the world, without their having noticed it. They were even more startled when numbers of their Muslim fellow citizens appeared to agree with the fatwa.

‘Fundamentalism’, though, is a word borrowed from American religious sociology. Within Christian churches, too, it expresses a protest against modernization by those who feel threatened and dispossessed by it. Nevertheless, some believe that here as elsewhere, western society has indicated a path that other societies will follow, and that conventional western liberalism will prevail. It may be so. Equally, it may not be. The interplay of religion and society is very complex and it is best to be cautious. That the numbers of pilgrims travelling to Mecca have risen dramatically may register a new fervour or merely better air travel facilities.

Alarm has been felt recently over the vociferous reassertion of their faith by many Muslims. Yet Islam does not seem able to avoid cultural corruption by the technology and materialism of the European tradition, though successfully resisting that tradition’s ideological expression in atheistic Communism. Radicals in Islamic societies are frequently in conflict with westernized and laxly observant Islamic élites. Islam is, of course, still an expanding and missionary faith and the notion of Islamic unity is far from dead in Muslim lands. It can still nerve men to action, too, as it did in India in 1947 or in Iran in 1978. In Ulster and Eire, sectarian Irishmen long mouthed their hatreds and bitterly disputed the future of their country in the vocabulary of Europe’s seventeenth-century religious wars, though a truce has now been made there. Although the hierarchies and leaders of different religions find it appropriate to exchange public courtesies, it cannot be said that religion has ceased to be a divisive force. Doctrine may have become more amorphous, but whether the supernatural content of religion is losing its hold in all parts of the world, and is important today merely as a badge of group membership, is contestable.

What is less doubtful is that within the world whose origins are Christian, which did so much to shape today’s world, the decline of sectarian strife has gone along with the general decline of Christian belief and, often, with a loss of confidence. Ecumenism, the movement within Christianity whose most conspicuous expression was the setting up of a World Council of Churches (which Rome did not join) in 1948, owes much to Christians’ growing sense in developed countries that they live in hostile environments. It also owes something to widespread ignorance and uncertainty about what Christianity is, and what it ought to claim. The only unequivocally hopeful sign of vigour in Christianity has been the growth (largely by natural increase) in numbers of Roman Catholics. Most of them are now non-Europeans, a change dramatized in the 1960s by the first papal visits to South America and Asia and the presence at the Vatican Council of 1962 of seventy-two archbishops and bishops of African descent. By 2010 only a quarter of the world’s Catholics lived in Europe, and the faith was growing faster in Africa than anywhere else.

As for the papacy’s historic position within the Roman Church, that seemed to be weakening in the 1960s, some symptoms being provided by the Second Vatican Council itself. Among other things registering its work of aggiornamento or updating, for which Pope John XXIII had asked, it went so far as to speak respectfully of the ‘truths’ handed down in the teachings of Islam. But 1978 (a year of three popes) brought to the throne of St Peter John Paul II, the first non-Italian pope in four and a half centuries, the first Polish pope, and the first whose coronation was attended by an Anglican archbishop of Canterbury. His pontificate soon showed his personal determination to exercise the historic authority and possibilities of his office in a conservative sense; yet he was also the first pope personally to travel to Greece in search of reconciliation with the Orthodox Churches of eastern Europe.

The changes in eastern Europe in 1989 – and especially those in his native Poland – owed a great deal to the activism and moral authority of John Paul II. When he died in 2005, after a pontificate that was the third longest in history, he left a mixed legacy: a staunch conservative on matters of doctrine, the Polish pope had grown increasingly concerned with the materialism that he saw as pervading the contemporary world, not least in the countries he had helped to break away from their Communist past. It would be hazardous to project further trends in the history of an institution whose fortunes have fluctuated so much across the centuries as those of the papacy (up with Hildebrandine reform; down with Schism and conciliarism; up with Trent; down with Enlightenment; up with the First Vatican Council). It is safest simply to recognize that one issue at least, posed by twentieth-century advances in the knowledge, acceptability and techniques of contraception, may for the first time be inflicting mortal wounds on the authority of Rome in the eyes of millions of Roman Catholics.

Some of the most influential changes of recent times have still to reveal their full weight and implications; after all, the issue of contraception affects, potentially, the whole human race, although we usually think about it as part of the history of women. But the relations of men and women should be considered as a whole, even if it is traditional and convenient to approach the subject from one side only. Much that settles the fate of many women can none the less be roughly measured and measurement, even at its crudest, quickly makes it clear that great as the level of change has been, it still has a long way to go. Radical change has only taken place in a few places, and is measurable (if at all) only in the last couple of centuries even there. Our recognition of the changes has to be very carefully qualified; most western women now live lives dramatically unlike those of their great-grandmothers while the lives of women in some parts of the world have been little changed for millennia.

Advances in women’s political and legal equality with men are one of the greatest revolutions of our times, and one that has set free enormous intellectual and productive power. Still, there is much that needs to be done, even if a large majority of members of the United Nations now accepts female suffrage and if, in most countries, formal and legal inequalities between the sexes have now been under attack for more than a generation. The range of legislation attempting to assure equity in the treatment of women has been steadily extended (for instance, into the recognition of disadvantages in employment which had long been ignored). Examples thus set have been noted and influential in non-western countries, even in the teeth of conservative oppositions. This has been a new operative force in changing perceptions and, of course, it has been all the more influential in a world where women’s labour has confronted growing opportunities thanks to technological and economic change.

Such matters continued to unroll in the interconnected, interlocking ways that had existed since industrialization began. Even the home was transformed as a place of work – piped water and gas were soon followed by electricity and the possibility of easier management of domestic processes, by detergents, synthetic fibres and prepared foods, while information became available to women as never before through radio, cinema, television and cheap print. It is tempting to speculate, though, that no such changes had anything like the fundamental impact of the appearance in the 1960s of the Pill. Thanks to its convenience and the way in which it was used, it did more than any earlier advance in contraceptive knowledge or technique to transfer power over their own lives in these matters to women. It opened a new era in the history of sexual culture, even if that was obvious only in a few societies three or four decades later.

Another aspect of women’s struggle for equality was a new feminism that broke away from the liberal tradition in which its predecessors had been rooted. Arguments for traditional feminism had always had a liberal flavour: freeing women from laws and customs imposed on them but not on men was merely a logical extension of the truth that freedom and equality were good things unless specific cause to the contrary could be shown. The new feminism took a new tack. It embraced a wider spectrum of causes specific to women – the protection of lesbians, for example – laid particular stress on women’s sexual liberation and, above all, strove to identify and uncover unrecognized instances of psychological, implicit and institutionalized forms of masculine oppression. Its impact has been substantial, even if its radical elements are unlikely to be accepted by most women, not to mention by men.

In some societies any feminist advance at all has been fiercely contested. Parts of the Islamic world maintain restrictions and practices that protect an ultimate male dominance, and adherents of other great religions have attempted to hold back female liberation, too. Yet only some Muslim societies impose specific forms of dress on women and, in some cases, headscarf-wearing or even chador-wearing women are fierce defenders of female rights. Whether such facts turn out to establish sensible compromise or uneasy equilibrium will differ from one society to another. It should not be forgotten that violent contrasts in what is thought appropriate behaviour for women have until recently existed in European societies, too. It is not easy to relate such paradoxes as they have sometimes presented to what are supposed to be uniformities of faith.

Whether organized religion and the notion of fixed, unchanging moral law have or have not lost some of their power as social regulators, the state, the third great historic agent of social order, at first sight seems to have kept its end up much better. In spite of challenges from its opponents, it has never been so widely taken for granted. There are more states – recognized, geographically defined political units claiming legislative sovereignty and a monopoly of the use of force within their own borders – than ever before; between 1945 and 2010 the number increased from less than 50 to almost 200. More people than ever before look to government as their best chance of securing well-being rather than as their inevitable enemy. Politics as a contest to capture state power has at times apparently replaced religion (sometimes even appearing to eclipse market economics) as the focus of faith that can move mountains.

One of the most visible institutional marks left by Europe on world history has been the reorganization of international life as basically a matter of sovereign (and now, in name at least, often republican and usually national) states. A process that began in the seventeenth century, it was already in the nineteenth beginning to look like a possible global outcome, and it was virtually completed in the twentieth. With it went the diffusion of similar forms of state machinery, sometimes through adoption, sometimes through imposition first by imperial rulers. This was assumed to be a concomitant of modernization. The sovereign state is now taken for granted, as in many places it still was not even a century ago. This has been largely a mechanical consequence of a slow demolition of empires. That new states should come into being to replace them was scarcely questioned at any stage. With the collapse of the USSR almost half a century after the dissolution of other empires, the global generalization of the constitutional language of the sovereignty of the people, representative institutions and the separation of powers reached its greatest extent.

The aggrandizement of the state – if we may so put it – thus long met with little effective resistance. Even in countries where governments have traditionally been distrusted or where institutions exist to check them, people tend now to feel that they are much less resistible than even a few years ago. The strongest checks on the abuse of power remain those of habit and assumption; so long as electorates in liberal states can assume that governments will not quickly fall back on the use of force, they do not feel very alarmed. But although there are more democracies around the world now than ever before, there is now a rich undercurrent of opinion in the developing world claiming that authoritarian regimes are best for the initial growth phase of a country’s economy. They often point to post-Mao China as an example. But most dictatorships are not economically successful, and almost all developed countries are democracies.

Still, in the nineteenth and twentieth centuries, for some countries there is no doubt that the modernization process was furthered by authoritarian rule, even if these regimes did not always manage to create lasting growth. The role played by the urge to modernize in strengthening the state – something prefigured long ago outside Europe in a Muhammad Ali or an Atatürk – was an indication of new sources from which the state increasingly drew its moral authority. Instead of relying on personal loyalty to a dynasty or a supernatural sanction, it has come to rely increasingly on the democratic and utilitarian argument that it is able to satisfy collective desires. Usually these were for material improvement, but sometimes not; now, individual freedom or greater equality may be among them.

If one value more than any other legitimizes state authority today it is in fact nationalism, still the motive and fragmenting force of much of world politics and paradoxically often the enemy of many particular states in the past. Nationalism has been successful in mobilizing allegiance as no other force has been able to do; the forces working the other way, to integrate the world as one political system, have been circumstantial and material, rather than comparably powerful moral ideas or mythologies. Nationalism was also the greatest single force in the politics of history’s most revolutionary century, engaging for most of it with multi-national empires as its main opponents. Now, though, it is more often engaged with rival nationalisms and with them continues to express itself in violent and destructive struggles.

When in conflict with nationalism, admittedly, the state often came off badly even when, to all appearances, enormous power had been concentrated in its apparatus. Buttressed by the traditions of Communist centralization though they were, both the USSR and Yugoslavia have now disintegrated into national units. Quebecois still talk of separating from Canada, and Tibetans from China. There are many other instances of disturbingly violent potential. Yet nationalism has also greatly reinforced the power of government and extended its real scope, and politicians in many countries are hard at work fostering new nationalisms where they do not exist in order to bolster shaky structures that have emerged from decolonization.

Nationalism, too, has gone on underwriting the moral authority of states, by claiming to deliver collective good, if only in the minimal form of order. Even when there is disagreement or debate about exactly what benefits the state should provide in specific instances, modern justifications of government rest at least implicitly on its claim to be able to provide them, and so to protect national interests. Whether states actually did deliver any such good at all, has, of course, often been disputed. Marxist orthodoxy used to argue, and in a few places still does, that the state was a machine for ensuring the domination of a class and, as such, would disappear when overtaken by the march of History. Even Marxist regimes, though, have generally not behaved as if that were true.

As for the idea that a state might be a private possession of a dynasty or an individual, serving private interests, this is now everywhere formally disavowed, whatever the reality in many places. Most states now participate to a degree far surpassing any of their predecessors in elaborate systems, connections and organizations for purposes going well beyond those of simple alliance and requiring concessions of sovereignty. Some are groupings to undertake specific activities in common, some give new opportunities to those who belong to them, while others consciously restrict state power. They differ greatly in their structures and their impact on international behaviour. The United Nations is made up of sovereign states, but it has organized or authorized collective action against an individual member as the League of Nations or earlier associations never did.

On a smaller, but important scale, regional groupings have emerged, requiring the observance of common disciplines. Some, like those of eastern Europe, have proved evanescent, but the European Union, even if many of the visions that attended its birth remain unrealized, inches forward. On 1 January 2002 a new common currency was introduced among twelve of its member states and 300 million people. Nor are formal organizations the whole story. There are some unorganized or only vestigially organized supranational realities that from time to time appear to eclipse the freedom of individual states. Islam has at times been feared or welcomed as such a force, and perhaps the racial consciousness of pan-Africanism, or of what is called négritude, inhibits some nations’ actions. The spread of this luxuriant undergrowth to international affairs must make obsolete the old notion that the world consists of independent and autonomous players operating without restraint except that of individual interest. Paradoxically, the first substantial inter-state structures emerged from a century in which more blood was shed by states in quarrels with one another than ever before.

International law, too, now aspires to greater practical control of states’ behaviour than previously despite all the notorious examples that remain of failure to comply with it. In part this is a matter of slow and still sporadic change in the climate of opinion. Uncivilized and barbarous regimes go on behaving in uncivilized and barbarous ways, but decency has won its victories, too. The shock of uncovering in 1945 the realities of the Nazi regime in wartime Europe meant that great evils cannot now be launched and carried through without concealment, denial or attempts at plausible justification. In 1998, representatives of 120 nations – although those of the United States were not among them – agreed to set up a permanent international court to try war crimes and crimes against humanity. In the following year, the highest of the British courts of justice ruled, unprecedentedly, that a former head of state was liable to extradition to another country to answer there charges of crimes alleged against him. In 2001, the former president of Serbia was surrendered by his countrymen to an international court and appeared there in the dock.

It is important not to exaggerate. Hundreds, if not thousands, of wicked men continue to practise around the world brutalities and cruelties for which there is little practical hope at present of holding them to account. International criminality is a concept that infringes state sovereignty and the United States is not likely under any conceivable presidency to admit the jurisdiction of an international court over its own citizens. But the United States itself also explicitly adopted revolutionary foreign policy goals for quasi-moral ends in the 1990s in seeking to overthrow the governments of Saddam Hussein and Slobodan Milošević and it is now concerned with the organization of efforts against terrorism which must imply some further interference with others’ sovereignty.

Nevertheless, at home, governments have for 200 or 300 years enjoyed more and more power to do what was asked of them. In the twentieth century, economic distress in the 1930s and two great wars required a huge mobilization of resources and new extensions of governmental power. To such forces have also been added demands that governments indirectly promote the welfare of their subjects and undertake the provision of services either unknown hitherto or left in the past to individuals or such ‘natural’ units as families and villages. The welfare state was a reality in Germany and Great Britain before 1914. In the last fifty years, the share of GDP taken by the state has shot up almost everywhere. There has also been the urge to modernize. Few countries outside Europe achieved this without direction from above, and even in Europe some countries have owed most of their modernization to government. The twentieth century’s outstanding examples were Russia and China, two great agrarian societies that sought and achieved modernization through state power. Finally, technology, through better communications, more powerful weapons and more comprehensive information systems, has advantaged those who could spend most on it, namely governments.

Once, and not long ago, even the greatest of European monarchies could not carry out a census or create a unified internal market. Now, the state has a virtual monopoly of the main instruments of physical control. Even a hundred years ago, the police and armed forces of a government unshaken by war or uncorrupted by sedition gave it a large measure of security; technology has since made that security all but certain. New repressive techniques and weapons, though, are now only a small part of the story. State intervention in the economy through its power as consumer, investor or planner, and the improvement of mass communications in a form that leaves access to them highly centralized, all matter immensely. Hitler and Roosevelt made great use of radio (though for very different ends); and attempts to regulate economic life are as old as government itself.

None the less, governments in most countries have had to grapple more obviously in recent times with a new integration of the world economy and, consequently, less freedom in running their own economic affairs. This goes beyond the operation of supranational institutions like the World Bank or International Monetary Fund; it is a function of a long-visible tendency, often now called ‘globalization’, in its latest manifestations. Sometimes institutionalized by international agreement or by the simple economic growth of large companies, but driven by rising expectations everywhere, it is a phenomenon that often dashes the hopes of politicians seeking to direct the societies over which they are expected to preside. Economic and political independence can be hugely infringed by unregulated global financial flows, and even by the operations of great companies, some of which can call on resources far larger than those of many small states. Paradoxically, complaints about the curbing of state independence to which globalization can give rise are sometimes voiced most loudly by those who would urge even more vigorous interference with sovereignty in cases of, for example, the abuse of human rights.

The play of such forces is discernible in the pages that follow. Perhaps they are bringing about some reduction in state power while leaving forms largely intact as power accumulates elsewhere. This is at least more probable than that radical forces will succeed in destroying the state. Such forces exist, and at times they draw strength from and appear to prosper in new causes – ecology, feminism and a generalized anti-nuclear and ‘peace’ movement have all patronized them. But in fifty years of activity they have only been successful when they have been able to influence and shape state policy, bringing changes in the law and the setting up of new institutions. The idea that major amelioration can be achieved by altogether bypassing so dominant an institution still seems as unrealistic as it was in the days of the anarchistic and utopian movements of the nineteenth century.
