HANGING OVER EVERY political and military decision throughout the Cold War was the threat of atomic and, later, of thermonuclear attack. This dreadful prospect faced politicians, the military and civil populations alike, and as much in neutral countries as in those involved in any possible conflict. As with many such issues, however, the majority were aware only of an appalling, but vague, Doomsday threat; indeed, as will be shown, even the so-called ‘experts’ could make only imprecise forecasts of what it would involve. It is therefore necessary to identify the major characteristics of nuclear weapons and to highlight some of their possible effects, in order to place the subsequent chapters in perspective.[1]
A nuclear explosion releases energy on a scale vastly greater than that of conventional high explosives, its yield being expressed in terms of its equivalence to the detonation of TNT;[2] thus a 1 kiloton (1 kT) nuclear weapon is equivalent to 1,000 tons of TNT, a 1 megaton (1 MT) weapon is equivalent to 1 million tons of TNT, and so on. Some comparisons will place the figures in perspective:
• During its strategic bombing campaign in Europe between mid-1942 and May 1945, the United States’ Eighth Air Force dropped approximately 700 kT of bombs. In the 1970s a single USAF FB-111 fighter-bomber could carry six B61 bombs with a total yield of 3 MT.
• The largest known single detonation of high explosives occurred on 27 November 1944 at a British underground ammunition store at Hanbury, Staffordshire, when approximately 4 kT of aircraft bombs of various sizes exploded at a depth of some 27 m. The resulting crater was 274 m long, 244 m wide and approximately 24 m deep, and, although in a sparsely populated farming area, the explosion killed seventy people and wounded another twenty. In the 1960s a single W33 203 mm howitzer nuclear shell had a yield of 10 kT.
• The atomic bomb dropped on Hiroshima on 6 August 1945 had a yield of about 15 kT, while that dropped on Nagasaki three days later had a yield of 22 kT.
• The largest known nuclear explosion was a Soviet 58 MT weapon, exploded in a 3,700 m airburst at the Novaya Zemlya test site on 30 October 1961.
• The most powerful US nuclear test took place on 28 February 1954. A surface test on Bikini Atoll, it was expected to produce a yield of 6 MT, but actually produced 15 MT, gouging a crater 1,830 m in diameter and 73 m deep.{1}
Nuclear explosions release energy in five forms which affect humans: flash (light), blast (shock and sound), thermal radiation (heat), initial nuclear radiation and residual nuclear radiation (fallout). The proportions vary according to the height of the burst, but, in a typical airburst, blast and thermal radiation account for some 85 per cent of the energy output, initial radiation approximately 5 per cent and residual radiation some 10 per cent. Nuclear weapons also release two forms of energy which affect electronic equipments only: electromagnetic pulse (EMP) and transient radiation effects on electronics (TREE).
The effects of a nuclear explosion depend to a large degree on the height of the burst.
An ‘airburst’ takes place where the fireball just fails to touch the surface of the earth.[3] In a 1 MT weapon, for example, the fireball is 1,700 m in diameter, meaning that an airburst for such a weapon would have to be at an altitude greater than 870 m. In an airburst nearly all the shock energy leaves the fireball as blast, while the thermal radiation travels long distances, but there is no ground crater. Initial radiation also travels long distances, although it decreases more rapidly with the distance from the explosion, but there is no residual radiation. Technically, there are two types of airburst: endo-atmospheric (i.e. within the atmosphere), which takes place at a height of less than 30 km, and exo-atmospheric (i.e. outside the atmosphere), which takes place at a height greater than 30 km. In practice, an exo-atmospheric burst has only one effect of any military significance – EMP – and ‘airburst’ is normally taken to mean an endo-atmospheric burst.
One aspect of airbursts is that, if a single large warhead is replaced by a number of smaller warheads with the same overall yield (e.g. a single 3 MT warhead by six warheads of 500 kT each, detonated so that their blast patterns do not overlap), then the total damage inflicted increases greatly. In general terms, therefore, airbursts would have been used where maximum blast effect and minimum fallout were required (e.g. to destroy cities, airfields or oil refineries), with the height of burst optimized to ensure that the desired blast effect covered the target.
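The arithmetic behind this claim rests on the standard rule of thumb that blast-damage radius scales as the cube root of yield, so the area damaged scales as yield to the two-thirds power. The 2/3 exponent is the conventional scaling law, not a figure given in the text; a minimal sketch under that assumption:

```python
# "Equivalent megatonnage" sketch: blast-damage radius is conventionally
# taken to scale as yield**(1/3), so damaged AREA scales as yield**(2/3).
# (Assumed scaling law -- the text states only the qualitative result.)

def blast_area_relative(yield_mt: float) -> float:
    """Relative blast-damage area for a single warhead of the given yield (MT)."""
    return yield_mt ** (2.0 / 3.0)

single = blast_area_relative(3.0)        # one 3 MT warhead
mirved = 6 * blast_area_relative(0.5)    # six 500 kT warheads, non-overlapping

print(f"one 3 MT warhead      : {single:.2f}")
print(f"six 500 kT warheads   : {mirved:.2f}")
print(f"ratio                 : {mirved / single:.2f}")
```

On this simple model the six smaller warheads cover roughly 80 per cent more area than the single large one, which is why fractionating a payload into multiple re-entry vehicles was so attractive.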
Nuclear bursts which take place either on the surface or sufficiently low above it for the fireball to touch it are known as ‘groundbursts’ or ‘surface bursts’. Much of the energy appears as air blast and ground shock, but part is expended in creating a surface crater. Fallout from such a burst is a far greater hazard than the initial radiation. Thus a groundburst would have been used either to optimize blast against a pinpoint target such as a missile silo or a hardened building, or to generate fallout to attack rural populations.
A ‘subsurface burst’ is one in which the explosion occurs at some depth underground or underwater. Here most of the energy is dissipated in shock, although some may also be released as air blast. Due to the contamination of the surrounding earth or water with radioactive products, the residual radiation will be significant. ‘Subsurface’ bursts would be used for anti-submarine warfare at sea or to demolish buried headquarters on land.
The first evidence of a nuclear explosion is a very intense flash of light, which covers a large geographical area. It is of major significance to people in the open, and particularly to those who happen to be facing the explosion, in whom it will cause temporary flash blindness and eye damage, including retina burns. Its effect is enhanced at night, when those facing the explosion could be dazzled for up to ten minutes. The effects of flash are, however, reduced by cloudy weather and rain.
Most of the material damage caused by a nuclear explosion is due – directly or indirectly – to the pressure wave, which has two components: blast wave through the air and shock wave through the ground. The blast wave travels outwards from the centre of the explosion at a speed of some 305 m/s with both speed and intensity decreasing rapidly with distance. Blast is defined in terms of ‘overpressure’ – i.e. the pressure in excess of the ambient pressure.[4]
An overpressure of 0.2 kilograms-force per square centimetre (0.2 kgf/cm2) (equivalent to a wind of 161 km/h) would collapse wooden houses, but brick-built houses would probably survive, although windows, doors, floors and ceilings would be seriously damaged; the remains of such houses might be used for survival, but not for ‘living’ as currently understood. Industrial premises would be damaged, but the stronger the structure, the less the damage. Within the 0.2 kgf/cm2 area about 10 per cent of the population would die.
An overpressure of 0.4 kgf/cm2 (equivalent to a wind speed of 322 km/h) would cause both wooden-framed and conventional two-storey, brick-built houses to collapse, and would render most industrial premises unusable, destroy oil storage tanks, collapse steel-truss bridges, and uproot some 90 per cent of trees. Within this 0.4 kgf/cm2 overpressure area, approximately 80 per cent of the population would die – some from direct exposure to the blast, but most from injuries resulting from collapsing buildings and flying debris. Fire would also be a major hazard, but would probably not be of great significance compared to the devastation and deaths already caused.
One strong possibility is the creation of a firestorm. In this, once the blast had spread outward, there would be a negative pressure at the centre, resulting in winds blowing inward towards ground zero,[5] fanning the fires and in turn increasing the wind, as happened in the Second World War in the conventional bombing raids on Hamburg, Dresden and Tokyo. This has a curious and contradictory effect, in that the wind towards the centre tends to limit the spread of the fire outward, but ensures that the fire destroys virtually everything at the centre.
A nuclear explosion generates heat as intense as that at the centre of the sun. This heat travels outward at a speed of some 300 million metres per second, and in a groundburst it will vaporize most substances within the fireball and for distances up to 5 km from ground zero, while many substances will spontaneously ignite at greater distances. Fifty per cent of people caught in the open will suffer flash burns, the severity depending upon the distance from ground zero; a 1 MT airburst, for example, would cause third-degree burns at 11,000 m and second-degree burns at 13,000 m.
Anything which throws a shadow will provide protection, and a British study in the 1960s showed that in the UK in peacetime in daylight some 10 per cent of the population (approximately 5 million people) was in the open at any one time, but that 75 per cent (3.75 million) of these would always be offered at least some protection by buildings. If adequate warning of an impending nuclear strike had been given, however, it would have been reasonable to expect that the numbers in the open would be substantially reduced.
A very powerful pulse of initial nuclear radiation (INR) is released within the first minute of an explosion. INR expands in a circular pattern and is relatively short-ranged: the lethal range for a 1 MT weapon, for example, is 2,600 m. INR consists, in the main, of neutrons and gamma rays which penetrate the body and react with bone marrow, but these are substantially attenuated by dense materials such as concrete, steel or earth, so that people inside a building, in a steel vehicle (such as a battle tank or an armoured personnel carrier) or in an underground bunker receive varying degrees of protection.
People in the open are very vulnerable to INR, and the majority of radiation victims at Hiroshima and Nagasaki suffered from this initial radiation rather than from fallout. With high-yield nuclear weapons, however, the blast effect has a greater lethal range than INR, so that above a yield of about 100 kT INR ceases to be significant.
The ‘enhanced radiation warhead’ (popularly known as the ‘neutron bomb’) was designed to optimize the effects of INR, by using low-yield weapons in low airbursts over a target such as a company of tanks. The INR would have penetrated the armour and inflicted high radiation doses, while the low blast effect would have caused little serious damage to vehicles or buildings.
Residual nuclear radiation is caused by materials which are vaporized in the initial heat and then sucked up as dust into the fireball, where they are irradiated and then fall back to earth as radioactive fallout. Larger particles return to earth within a few hours, but the remaining, increasingly small, particles may take weeks, or even months, to return to earth. The area covered lies downwind of ground zero and is generally elliptical in shape, giving rise to its colloquial name of the ‘fallout plume’.
Radiation is measured in rads, and accumulated doses have the following effects:[6]
• 5,000 rads and above: death in up to two days;
• 1,000 to 5,000 rads: death within fourteen days, although the lower the dose the more protracted the period;
• 600 to 1,000 rads: 90–100 per cent deaths over a period of up to six weeks;
• 200 to 600 rads: 0 to 90 per cent deaths over a period of 2 to 12 weeks;
• below 200 rads: no long-term effects, although there will be a period of several weeks’ convalescence from effects of radiation such as skin burns etc.
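The dose bands above can be encoded directly as a simple lookup, useful for rough tabletop estimates. The band boundaries follow the text; how the exact boundary values are assigned (e.g. whether precisely 1,000 rads falls in the upper or lower band) is an assumption, since the text leaves them ambiguous.

```python
# Direct encoding of the accumulated-dose bands listed in the text.
# Boundary handling (>= at each threshold) is an assumption.

def dose_effect(rads: float) -> str:
    """Return the text's predicted effect for an accumulated dose in rads."""
    if rads >= 5000:
        return "death in up to two days"
    if rads >= 1000:
        return "death within fourteen days"
    if rads >= 600:
        return "90-100 per cent deaths over up to six weeks"
    if rads >= 200:
        return "0-90 per cent deaths over 2 to 12 weeks"
    return "no long-term effects; several weeks' convalescence"

print(dose_effect(450))   # -> 0-90 per cent deaths over 2 to 12 weeks
```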
Nuclear explosions cause ionization of the atmosphere, which affects radio and radar systems whose waves pass through the disturbed areas. The period of disruption may be brief (a few seconds) or lengthy (several hours), and the severity will depend upon the yield of the nuclear explosion and its height, as well as upon the characteristics of the equipment itself. Systems which depend upon reflected waves, such as radars, tropospheric scatter systems and high-frequency radios, would be particularly affected.[7]
EMP is an extremely powerful short-duration burst of broad-band radio energy generated by a nuclear explosion. This could affect electronic equipment, such as telephone systems, radio and television equipment, radars, computers and power supplies. As far as is known, it is harmless to man and animals.
EMP travels with the speed of light and radiates over 360 degrees and out to the line of sight from the source; thus, the higher the altitude of the burst the wider the area covered, until the point is reached where an exo-atmospheric burst would be intended primarily as an anti-electronic-systems weapon. An explosion at an altitude of 80 km would cover a circular area of 966 km radius, while an explosion at a height of 320 km would cover the whole of the contiguous United States and most of Canada.[8] In a similar manner to lightning, EMP tends to home in on and then travel along conductors such as overhead or buried communications-cable runs, power cables, railway tracks and aircraft fuselages, and is particularly effective against transistorized equipment.
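The relationship between burst altitude and area covered follows from simple spherical-Earth geometry: the ground range to the horizon from height h is R·arccos(R/(R+h)). This idealized formula (mean Earth radius, no refraction or tangent-ray corrections — both simplifying assumptions) gives about 1,000 km for an 80 km burst, the same order as the text's 966 km figure; the difference presumably reflects more detailed modelling in the original source.

```python
import math

# Ground radius of line-of-sight coverage from a burst at altitude h,
# assuming a spherical Earth of mean radius 6,371 km and no refraction.

EARTH_RADIUS_KM = 6371.0  # assumed mean Earth radius

def los_ground_radius_km(burst_altitude_km: float) -> float:
    """Ground range (km) to the horizon seen from the given burst altitude."""
    r = EARTH_RADIUS_KM
    return r * math.acos(r / (r + burst_altitude_km))

print(f"80 km burst : {los_ground_radius_km(80):.0f} km radius")
print(f"320 km burst: {los_ground_radius_km(320):.0f} km radius")
```

The 320 km case yields a radius of roughly 2,000 km, consistent with the text's claim that such a burst would blanket the contiguous United States and most of Canada from a suitably chosen point.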
In aircraft, for example, EMP can cause computer malfunctions, inject energy into the aircraft wiring looms (resulting in unwanted signals to the equipment), and cause power surges which can result in system or component burn-out. This problem can be alleviated by shielding and filtering.
On the ground, protection against EMP is provided by careful planning of systems and good detailed design of equipments, including the use of efficient grounding (earth) and appropriate components. The EMP threat was taken very seriously in the West, particularly in the latter half of the Cold War, and vast sums of money were spent in developing and installing ‘nuclear hardening’ and in testing the results. Protection was also necessary against the EMP effects of weapons released by one’s own side; this might have included switching equipments off before an explosion.
Although TREE occurs at the same time as EMP and has a similar source, it is a different phenomenon, caused by the initial nuclear radiation acting on electronic components. With high-yield nuclear weapons the range of TREE is probably less than that of damage caused by heat or blast, but it is of considerably greater significance in low-yield weapons, particularly those with enhanced-radiation warheads. Although the actual phenomenon is of very brief duration (typically a fraction of a second) the effect on electronic equipment may be long-lasting, if components are destroyed. Again, protection is achieved by good design and the use of filters.
A detailed assessment of the effects of nuclear weapons in a particular situation needs to take account of a wide variety of variable factors. In considering urban areas, for example, these include the location, density and distribution of the population in peacetime, as well as ambient conditions such as wind (which dictates the direction of the fallout plume), rain and temperature, all of which will affect the velocity and deceleration of the blast wave. The time of day is also relevant, not only because it will affect the ambient conditions, but also because the population distribution may be different between daytime and night-time, while the blinding effect of the light flash will be much more serious in the hours of darkness. Terrain also has an effect: for example, the blast wave will behave differently in hilly country compared to a plain.
Further differences arise according to whether the population has been warned of an impending attack, and, if so, whether it has been told to stay put (as in the UK), to disperse to the countryside (as in the USA) or to go to shelters (as in Sweden and Switzerland). The outcome will be further affected according to whether, having received the relevant instructions, the population has actually obeyed them. The availability of protective clothing – especially respirators – will also affect the outcome, as will the post-strike availability of the essentials of life such as food, water and fuel. House construction methods also have to be taken into account, since these vary not only between countries, but also between regions within a country and between areas in a city (for example, between poor and wealthy districts).
In interpreting nuclear-casualty tables, it is important to note that casualties and damage are not necessarily cumulative from the different causes. Thus, for example, people within the danger zone for radiation may well have already been killed by blast or fire, and they can only die once! Similarly, a communication link in the area susceptible to TREE or EMP might well not be functioning because its antenna mast has already been blown down by blast.
One of the concerns of both sides in the Cold War was of being subjected to a pre-emptive attack against their military forces. Most troop concentrations (such as barracks) either are in urban areas or form large population centres equivalent to urban areas. The effect of a nuclear strike on such troops would depend on whether they had received adequate warning of an attack, enabling them to disperse to rural areas and, once there, to take adequate protective measures.[9] It would also depend on the distribution of the troops: in the former USSR, for example, troops deploying from an urban area likely to be a nuclear target would have been well advised to take up a position generally west of the city, since that would have placed them upwind of the fallout plume (the prevailing wind is westerly), although this might have placed them downwind of a strike on another city.
There were, however, yet more considerations for the military. It was possible that troops might have survived the attack only to discover that the city where their families had remained had been devastated. Depending upon the prevailing state of discipline, such troops might then have given priority to aiding the civil population rather than to taking part in any continuing military operations.
Airfields were somewhat different, since they covered large areas while their personnel were concentrated in only a small part of them. Most front-line airfields were essentially unprotected until the mid-1970s, when many of the facilities were given ‘nuclear hardening’ and fitted with filters.[10] Such airfields would have been high-priority targets for the opposing side, although in some cases aircraft and their support facilities could be deployed away from the large static airfields; for example, the Swedish and German air forces used highways as runways, while the British deployed their V/STOL (vertical/short take-off and landing) Harriers to greenfield sites.
Damage to ships from either an airburst or a groundburst would be primarily caused by the shock wave. A powerful weapon at close range could have caused the hull to rupture, or even made the ship roll over, while a more distant weapon might have damaged the superstructure and deck equipment without actually sinking the ship. From the 1970s onwards most new warships in the major navies were fitted with ‘NBC-proof citadels’ – proof against nuclear, biological and chemical weapons – giving the crew protection against immediate radiation and fallout, while wash-down systems provided effective decontamination.
As far as is known, no tests were conducted against submerged submarines. The usual method of destroying a submarine is by a depth charge, which is used to generate a large pressure pulse close to the submarine with the aim of puncturing the pressure hull. A nuclear weapon would have served a similar, but very much more powerful, function, although its effect would have diminished with distance.
Nuclear explosions give rise to consequential effects which are not a direct result of the explosion itself. For example, human casualties would have been far more likely to be caused by fires started by thermal radiation, or by flying debris (e.g. from falling buildings) and explosions (e.g. from ruptured gas mains), rather than by the blast effect itself. Similarly, in the post-strike period many deaths would have occurred from starvation and thirst (due to contamination or destruction of food stocks and water supplies), from exposure to cold climates (due to the destruction of buildings and clothing stocks), and from general debility and despair. There would also undoubtedly have been massive epidemics of diseases such as typhoid and cholera.
Consideration of the effects of nuclear weapons usually concentrated on urban areas, where the human casualties would have been highest and the devastation most obvious. There would, however, have been many effects in the countryside. Even under non-nuclear conditions, grasslands, heathlands, forests and some crops are extremely vulnerable to fire, especially during the dry season, and the fires started by nuclear explosions would have been far worse. Indeed, since it was unlikely that human agencies would have been available to extinguish them, such fires could have raged over wide areas and for long periods. In addition, fallout would have affected both humans and animals in the rural areas, and, as in the urban areas, diseases would have spread rapidly.
Urban-population dispersal to the countryside, whether as government policy or as a panic measure, would inevitably have affected the rural areas and population. The sudden and unplanned arrival of large numbers of city-dwellers, ill-prepared both mentally and physically for rural life in a nuclear environment, would quickly have caused problems over accommodation, but in the mid-term the problems would have centred on food, water and disease.
A major unknown factor in assessing the long-term effects of nuclear war was that, apart from the very limited examples of Hiroshima and Nagasaki, there was no precedent for what would happen. There was a degree of agreement over the types of consequences of a nuclear strike, such as cancers and genetic abnormalities (certainly among women pregnant at the time of the nuclear war, and possibly also hereditary), but there was no agreement on the scale. The US Office of Technology Assessment, for example, estimated in 1980 that following a general nuclear war, long-term radiation might possibly affect between 3.5 and 25 million people in the USA, 16 to 44 million in the USSR, and 11 to 37 million in the rest of the world. The very wide ranges resulted from the extreme sensitivity of the estimates to the assumptions made – at one extreme, that all factors were favourable to the defence; at the other extreme, that all factors were unfavourable.
The five established nuclear powers (the USA, the USSR, the UK, France and China) carried out long-running programmes of tests, while India conducted a single test. The majority of these tests were conducted in order to check that the devices would function properly, but many – particularly those conducted before the signing of the Partial Test Ban Treaty in 1963 – were also used to try to establish the effects of nuclear explosions on buildings, aircraft, ships and so on.
One particular shortcoming of the testing programme was that, because of the fallout problem, only a very small number of the atmospheric tests carried out (see Table 7.2) were properly monitored groundbursts, the remainder being airbursts and high-altitude bursts or underground tests. This meant that assessments of the effects of groundbursts had to be based upon mathematical models, which might or might not have been accurate.
Another significant unknown was the effect of multiple explosions. At Hiroshima and Nagasaki and in all known subsequent tests, only one nuclear weapon was ever detonated at a time. Thus the possible effect of tens or even hundreds of more or less simultaneous nuclear explosions over a relatively small geographical area such as Germany or the western USSR was simply not known, and there could have been cumulative effects which were unforeseeable.
Another unknown was the behaviour pattern of people. In general terms, during the Second World War the mass of people contradicted what was thought to be the ‘lesson of Guernica’[11] and stayed put in their cities. Indeed, instead of rioting and bringing massive pressure to bear on governments to surrender, as had also been predicted, not only did they remain passive, but in many instances the attacks actually increased their determination to resist. Attacks by V-1 and A-4 (V-2) missiles on cities such as London and Antwerp gave rise to slightly greater degrees of panic than did manned bombers, but not on the scale the Germans had expected. The threat from nuclear weapons was, however, different by many orders of magnitude, and included not only immediate damage on an almost unimaginable scale but also the certainty of long-term suffering for those who survived. How people might have responded to that threat was simply impossible to predict.
It must therefore be borne in mind throughout this book that the forecasts of the effects of individual nuclear weapons, especially groundbursts, and the predictions of the outcomes of nuclear wars were essentially ‘best guesses’. They were also very sensitive to the assumptions on which they were based, and it was by no means unknown for officials and academics (in both East and West) to ‘fine-tune’ their assumptions in order to produce outcomes favourable to the case they were trying to make.
AMONG THE MANY military legacies of the Second World War, two of the most significant were land- and sea-based ballistic missiles,[1] which quickly enabled the two superpowers to threaten each other directly. The German A-4 (V-2)[2] rocket entered service in 1944 and carried out attacks on the UK, Belgium and the Netherlands in a programme unique in the annals of warfare, the concept, delivery system, propulsion, guidance and method of deployment all being totally new.
Because it was developed by the army as a form of very-long-range artillery, the A-4 was highly mobile, using a simple transporter–erector, while its size was the maximum that could be transported through a standard European railway tunnel. The missile had a range of approximately 320 km, and the warhead contained 910 kg of high explosive (Amatol). Some 4,320 A-4s were launched in anger, the principal targets being London and, later, Antwerp. The A-4 caused the Allies severe problems in the last months of the war, because there was no known defence against it, other than overrunning the launching sites on the ground.
The German missile designers’ sights were aimed at even more distant targets, and presaged the intercontinental-missile era with two plans for ballistic-missile attacks on the continental USA.[3] The first of these envisaged mounting an A-4 missile in a submerged container/launcher which would be towed across the Atlantic behind a submarine – the embryo of the concept of submarine-launched ballistic missiles (SLBMs), which were to appear in the late 1950s. The second was for a two-stage missile with a 5,000 km range; this would have been launched against New York from sites in western France – the precursor of the intercontinental ballistic missile (ICBM). Both projects were technically feasible, but, fortunately for the Allies, the Germans ran out of time before either could be implemented. As a result of the success of the A-4, however, the Americans, Russians and British captured as many A-4s as they could, taking sample missiles, designs and designers back home in order to develop new versions of these ‘terror weapons’.
Such missiles, coupled with the most significant weapon of all, the atomic bomb, also brought into the realms of possibility the destruction of the civilized world in what US president Jimmy Carter once described as ‘one long, cold, final afternoon’. The atomic bomb gave military planners new destructive power, far in excess of anything that had gone before, with one bomber or missile able to carry a warhead more powerful than thousands of its predecessors. Not surprisingly, these new weapons and new delivery means required novel strategic concepts for their use, one of the most important – and contradictory – elements of which was that they would truly fulfil their function only if they never had to be used.
Nuclear strategy and nuclear-weapon targeting in the Cold War were very complicated businesses, not least because all those involved were venturing into the unknown. The USA declared itself wedded to the concept of deterrence, whose fundamental proposition was that a rational opponent would not attack if the risks of retaliation outweighed the predicted gains of the attack. Caspar Weinberger, secretary of defense under President Reagan, stated that, to be effective, deterrence had to meet four tests:
• Survivability: our [i.e. US] forces must be able to survive a pre-emptive attack with sufficient strength to threaten losses that outweigh gains;
• Credibility: our threatened response to an attack must be credible; that is, of a form that the potential aggressor believes we can and would carry out;
• Clarity: the action to be deterred must be sufficiently clear to our adversaries that the potential aggressor knows what is prohibited; and
• Safety: the risk of failure through accident, unauthorized use, or miscalculation must be minimized.{1}
In other words, an aggressor who was considering a first strike would be deterred from carrying out an attack on the enemy’s population centres if he considered that the enemy would retain both the capability and the will to attack the aggressor’s population in turn.
At least in public, the Soviet Union was very dismissive of the doctrine of deterrence, but such a concept seems to have been at the heart of Marshal V. D. Sokolovskiy’s statement in 1975 (in what may be assumed to be a close reflection of the Kremlin’s views) that:
Nuclear rocket attacks by strategic weapons will have decisive primary significance on the outcome of a modern war. Mass nuclear attacks on the strategic nuclear weapons of the enemy, on his economy and government control system, with simultaneous defeat of the armed forces in theatres of military operations, will make it possible to attain the political aims of a war in a considerably shorter period of time than in past wars.{2}
Sokolovskiy then went on to say that:
The basic aim of this type of military operation is to undermine the military power of the enemy by eliminating the nuclear means of fighting and formations of armed forces, and eliminating the military–economic potential by destroying the economic foundations for war, and by disrupting governmental and military control. The basic means for attaining these ends are the Strategic Rocket troops equipped with ICBMs and IRBMs with powerful thermonuclear and atomic warheads, and also long-range aviation and rocket-carrying submarines armed with rockets with nuclear warheads, hydrogen and atomic bombs. These ends can be achieved by attacks on selected objectives by nuclear rocket and nuclear aviation strikes. The most powerful attack may be the first massed nuclear rocket strike with which our Armed Forces will retaliate against the actions of the imperialist aggressors who unleash a nuclear war [my italics]. In making nuclear rocket and nuclear aviation strikes, military bases (air, missile and naval), industrial objects, primarily atomic, aircraft, missile, power and machine-construction plants, communications centres, ports, control points, etc. can be destroyed.{3}
In other words, the Soviet Union would have responded to a Western first strike with a massive counter-attack, directed against both military and military–industrial targets.
There were three important elements in the strategies of both sides. The first was that each side needed an accurate knowledge and understanding of its opponent’s value system, especially when judging what would be considered ‘unacceptable’ and ‘credible’. The second was that peacetime discussions and war gaming were inevitably conducted in ‘ivory-tower’ conditions. The third was whether or not it was feasible to use what were termed ‘tactical nuclear weapons’ on the battlefield or at sea without escalating immediately to strategic nuclear warfare.
One of the fundamental requirements of deterrence, at least as discussed within the United States, was that commanders and planners needed to be certain about what the rational planner on the other side would find to be totally unacceptable. The problem was, of course, that perceptions of unacceptability can differ widely. The Russian people have been notable during many centuries for their stoic resistance to suffering; during the Second World War, for example, the western part of the USSR suffered dreadfully, with at least 20 million deaths. Nevertheless, the Soviet Union recovered remarkably quickly after the war. Further, in a state such as the USSR, where one group dominated a number of disparate groups, it seems possible that a nuclear strike in the Ukraine or Kazakhstan might not have been considered ‘unacceptable’ to an ethnic Russian in a command bunker in Moscow, while a nuclear attack on Moscow might have had little relevance in Siberia.
The countries of western Europe also had experienced suffering. Germany had incurred tremendous losses among its young male population, and the state had been almost totally destroyed twice in the space of thirty years, but the recoveries had been both rapid and complete. France had been occupied and had its territory fought over twice, while the British had been bombed but not occupied. British, French and German post-war planners might therefore have had some, albeit differing, perceptions of the Russian wartime suffering and what Soviet leaders might have deemed to be ‘unacceptable damage’. A US planner, brought up in a country which had never suffered a direct major attack, would have had a different perception still. On the other hand, despite the openness of Western society, the Soviets may not have had sufficient knowledge and understanding of Western countries to be able to judge correctly what the United States or western Europeans would consider unacceptable losses.
An additional hazard was the distinct danger that, if a real war had started, time would have been so short that one side or the other might well have escalated rapidly to the highest level in order not to be caught out by its opponent. This was in many ways a modern equivalent of the mobilization timetables which so influenced general staffs in 1914 and were so inflexible that the generals brought enormous pressure on their governments to call up the reserves and start the railways moving in order to complete national deployment before the prospective enemy could do so.
In addition to these weighty factors, there were many other issues which were new to war planning. For example, if side A’s aim was to force side B into negotiating, then it made little sense to destroy B’s political and military leadership, which was precisely the group required to conduct such negotiations. Also, while A might gain some short-term military advantages by destroying B’s communications systems, such destruction would only prevent B’s leadership from communicating with the attackers to negotiate a cessation of hostilities. A further factor was that the loss of communications would prevent B’s leadership from exercising control over its subordinates, giving rise to the possibility that the junior echelons might then act in a totally unpredictable and irrational manner. This could possibly escalate the war well beyond what A’s leadership had planned and force B into escalatory retaliation, to which A then felt it necessary to respond, and so on. Thus, while in virtually all previous wars, up to and including the Second World War, it had been a traditional aim to destroy the enemy’s capital city and communications systems, in a Third World War it might have proved more effective and symbolic to leave them alone, while destroying other cities.[4]
Another novel factor, of growing importance during the Cold War, was that in earlier eras it had been assumed that war and military matters in general were best left to the military, and that academics, if they insisted on meddling, should confine their attentions to military history. To be sure, an occasional civilian commentator might use either newspapers or books to publish his views, but defence matters were considered the province of admirals and generals, who in turn consigned the detail to the commanders and colonels. One of the significant innovations during the Cold War, however, was the increasing interest taken by sections of the academic community, particularly in matters of nuclear strategy, and an enormous volume of articles, theses and books appeared. Such academics, particularly in the United States, sometimes also achieved positions of great influence in the government.
One contradiction these academics could not escape was that if they had ‘inside’ official information it was invariably so highly classified that they could not use it, while if they did not have such information their arguments were unavoidably based on information available in the public sector, which was frequently out of date, incomplete or, in some cases, just wrong. To take just one example, in the mid-1950s various books were published based on the hypothesis that H-bombs could never be reduced in size sufficiently to fit on the front end of an ICBM, the authors being quite unaware that, even as they wrote, such ‘miniaturized’ warheads were actually under test.
A further problem facing the population at large was that public pronouncements by senior service officers (especially when appearing before congressional or parliamentary committees) were almost always gloomy. Their equipment, they claimed, was at best obsolescent, and the shining new equipment which had been introduced into service the previous year with such ceremony was now completely outclassed by something just introduced by the enemy; and, in any case, the manpower was insufficient. But they would always claim that, given more money, new equipment, greater resources and more men – and provided these were given to the admiral’s or general’s own service or branch of service – all would be solved.[5]
Indeed, there were frequently two quite different agendas, especially in the United States. On the one side there was the public rhetoric, which was frequently designed to meet political or even bureaucratic aims – e.g. congressional appropriations or the aims of national (sometimes even local) politics and even inter-service rivalries. On the other side were the real policy and the actual governmental plans for the employment of the nuclear arsenal in a real-world conflict.
A particular case was the concept of the triad, a term which originated in the US Department of Defense and which was used to describe – and justify – a threefold order of strategic forces, consisting of sea-based missiles, land-based missiles and bombers. In reality it was not so much a philosophical concept – although it certainly had a fine Hegelian ring to it[6] – as an attempt to rationalize a situation which already existed and to continue to procure new systems to equip all three legs, such as bombers, which might not otherwise have been sustainable.
Various types of attack were envisaged. First strike (also known as a ‘pre-emptive strike’) was the most feared, in which one superpower would launch an attack on its opponent’s strategic weapons with no preliminary warning and no sign of a build-up. In such a case the target superpower might have received about thirty minutes’ warning of weapons being launched from the opponent’s home territory, although it was always possible that some weapons might be launched from closer in (e.g. by Soviet Yankee-class nuclear-powered ballistic-missile submarines (SSBNs) patrolling off the US coast, or US Pershing missiles located in West Germany), when the warning would have been of the order of four minutes.
One of the possible responses to an incoming first strike was launch-on-warning, in which the victim launched its ICBMs in the minutes available between detecting the strike and its actual arrival. The main problem with this was that the aggressor would almost certainly have retained a reserve of ICBMs and it seemed unlikely that, in the time available, the victim would have been able to establish which silos the incoming missiles had been launched from and thus which silos remained occupied and so were worth attacking. It appeared logical, therefore, that a launch-on-warning aimed at enemy ICBM silos and thus hitting many empty silos would have been largely wasted and that such a strike would more profitably have been aimed at ‘other military targets’ (see below).
US strategists discussed a possible Soviet strategy known as ‘pin down’, which postulated that Soviet SLBMs might be launched from SSBNs close to the US coast to prevent a launch-on-warning by US ICBMs. It was at least theoretically possible that such SLBMs could have been aimed and timed in such a way that they exploded among the US ICBMs as they lifted off from their silos, thus either destroying the US missiles or seriously affecting their accuracy.
Another form of Soviet strike which caused concern to US planners was the development of the Fractional Orbital Bombardment System (FOBS), which involved launching a missile into a low (approximately 160 km) orbit and then, after less than one complete orbit, firing retrorockets to make the warhead descend rapidly and steeply on to the target. This would have greatly reduced the warning time and, because the missiles would have approached the continental USA from a hitherto unexpected direction (south-east), it forced the USA to build a new radar station at Eglin Air Force Base in Florida. The Soviets tested both SS-9 and SS-X-10 missiles in this role, but neither was ever deployed operationally.
At the lowest end of the strategic nuclear ladder was the use of a very small number of nuclear weapons in an exemplary (or demonstrative) attack, intended to show determination either to use nuclear weapons on a large scale, if pushed any further, or to attack certain types of target. Such an attack would have had to be either preceded or accompanied by a specific warning to make it clear to the other side what was intended, but there was an argument that such an attack would invite retaliation in kind, in which the other side would launch a similar number of warheads against similar types of target. Indeed, the other side might well have felt compelled to make a response-in-kind even if it intended to take matters no further.[7] Thus the originator of an exemplary attack needed to expect similar losses and casualties to its opponent’s, which (at least in theory) would not have been escalatory. Contingency plans for such ‘demonstrations’ were included in NATO nuclear planning, and were certainly part of the Berlin plans.[8]
One step higher on the ladder of nuclear escalation was a limited-objective attack. Attacks on numerous types of target fell into this category, such as Soviet strikes against one or more US carrier groups, US strikes against Soviet naval surface-action groups, and strikes against bases such as Pearl Harbor or Plesetsk. The Soviet Union was considered more likely than the USA to undertake such strikes, although the USA needed to have the capability in case it was required for a response-in-kind. The conduct of such attacks would have required elaborate command-and-control facilities, and it could well have been very difficult for the victim to distinguish between tactical and strategic weapons (if, indeed, such a distinction retained any relevance once war had started).
Massive counter-military attacks also covered a wide spectrum of possibilities, ranging from a strike at the opponent’s strategic offensive forces to an all-out attack on all military forces, logistics installations, military-oriented research establishments and military–industrial facilities. Targets for such an attack would have been selected from a list of some 10,000 such targets in the USSR and some 5,000–7,000 in the USA. Such an attack could not avoid collateral damage to urban centres and would, at least in theory, have been deterred by a city-attack capability.
Throughout the Cold War both sides intended to maintain a reserve of weapons as an urban–industrial reserve, which was to be used as a last resort, when all else had failed, against the enemy’s vitals. Thus US plans for a response-in-kind following a Soviet first strike against ICBM sites could only have been carried out using weapons which were not part of the urban–industrial reserve. There was a point, certainly in US planning in the 1970s, where targeting enemy cities simply in order to kill people changed to a more precise form of targeting in which specific military, economic and political targets were selected with the intention of inhibiting the enemy’s post-war recovery.
All US plans, and presumably the Soviets’ too, enabled the plan to be implemented with the exclusion of either a specific target or targets, or of a category of target. In US strategic jargon, these were known as ‘withholds’. Thus communications systems were withholds, in some US plans, while Moscow was a withhold in a US attack on major cities.[9] The Reagan plans, however, specifically instructed that weapons were to be retained to attack such ‘withholds’, presumably on the semantic ground that if a target was omitted from a plan but no weapons were left to attack it, it was an ‘exclusion’ rather than a withhold.
Response-in-kind was designed to attack the same character of target and to inflict casualties of the same order of magnitude as the attack being replied to. To be credible it had to maintain the same character of engagement, although it was foreseen, at least in the USA, that it could be escalatory if force asymmetries existed such that the opponent could not counter-escalate.{5}
Nth-country reserve was a concept in which a superpower retained a reserve of strategic weapons to deal with another country, apart from the opposing superpower. At the height of the Cold War this might have been China in the case of the USA, and France and the UK in the case of the USSR.
The USA divided targets into three major categories: counter-force, counter-value, and other military targets.
Counter-force targets were the enemy’s strategic nuclear forces, which comprised ICBM silos, bomber bases and SSBNs in harbour. The category also included political and military nuclear command-and-control centres, and their relevant communications systems. Such targets were given progressively greater protective ‘hardening’ as the Cold War progressed, and their destruction depended increasingly upon the power and accuracy of the warheads.
Counter-value targets were cities and industrial complexes. In the late 1980s the USA had 162 cities with populations greater than 100,000, of which thirty-five exceeded 1 million inhabitants. In contrast, the USSR had 254 cities of over 100,000, of which only thirteen exceeded 1 million inhabitants. Western Europe had some exceptional concentrations, including eight areas with over 2.5 million inhabitants. For the USA and the USSR, cities were the targets for their SLBMs and the less accurate ICBMs.
Other military targets covered a collection of low-collateral-damage, high-military-value targets, including barracks, nuclear storage sites, nuclear production facilities, and headquarters. According to US sources there were about 2,000–3,000 such targets in the USSR and approximately 1,000 in the USA, although Soviet target analysts could well have included more in the USA.
In the original ballistic missiles, such as the German A-4 and its immediate derivatives, the warhead was an integral part of the missile. From the mid-1950s onwards, however, the warhead separated from the missile in space and descended on an independent, unpowered trajectory to the target; the missile itself thus became simply a means of transportation. Initially, such re-entry vehicles (RVs) contained one warhead each, but by the early 1960s US and Soviet strategic planners found themselves faced by many more targets than they had missiles. One possible solution was to build vast numbers of missiles, but this would have required an equal number of silos, plus the associated command-and-control facilities, and would have been extremely expensive.
The first practicable solution was the use of multiple re-entry vehicles (MRVs), in which several RVs were placed on one missile and, like the shot from a hunting gun, were all aimed at the same target, thus increasing the chance of a kill. A variation on this theme was to aim the MRVs at several targets within the same small area, and the three MRVs on the Soviet SS-9 Mod 4, for example, were aimed to impact with the same spatial dispersion as the three silos in a US Minuteman complex.
Technology moved on quickly, and it then became possible to target each warhead independently on to separate targets. This was achieved by mounting them on a post-boost vehicle (PBV, also known as a ‘bus’), which, under computer control, dispatched its RVs one at a time according to the targeting programme. Such warheads were known as ‘multiple independently targeted re-entry vehicles’ (MIRVs), and eventually missiles were carrying as many as fourteen. Unfortunately, radar observation was able to determine how many such devices were being launched by a particular missile bus by counting the number of course alterations (known as ‘dips’), so the RVs were equipped with decoys and ‘penetration aids’ which matched the real RVs’ radar and thermal signature, to confuse the defences.
Single warheads, MRVs and MIRVs all followed ballistic trajectories, which could be rapidly and accurately predicted by the defence, but as the Cold War came to an end a new type of warhead, the Manoeuvrable Re-entry Vehicle (MaRV), was under development, although it did not attain operational status.
The maximum missile payload was termed the ‘throw weight’, and during the SALT II negotiations this was defined as the sum of the weights of the RVs, the post-boost vehicle and any anti-ballistic-missile penetration aids, including the devices to release the RVs. Throw weight was thus a function of the power of the missile’s propulsion system, and increased steadily over the years. In any one missile the amount of fuel was fixed, so the only way to alter the range was by varying the payload – i.e. by reducing the number of RVs, ‘penaids’ or decoys.
One of the significant elements of throw weight was that it showed the potential for future improvements, since existing throw weight could be fractionated to provide a greater number of smaller warheads, thus increasing the war-fighting capability. As designers became more able to reduce the size of warheads, however, throw weight became less important, and it was in any case never an indication of a system’s ability to destroy targets at the far end of the flight.
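The arithmetic behind fractionation can be sketched with the ‘equivalent megatonnage’ measure used in the open literature, which reflects the fact that blast-damage area grows only with the two-thirds power of yield. The figures below are illustrative assumptions, not sourced characteristics of any particular missile.

```python
# Hedged illustration: why splitting a fixed throw weight into several
# smaller warheads increases war-fighting capability. Assumes, crudely,
# that total yield scales linearly with throw weight, and uses the
# open-literature measure EMT = N * y**(2/3), where y is the individual
# warhead yield in megatons.

def equivalent_megatonnage(n_warheads: int, yield_each_mt: float) -> float:
    """EMT for n warheads of the given individual yield (MT)."""
    return n_warheads * yield_each_mt ** (2.0 / 3.0)

# A notional missile whose throw weight can deliver 9 MT of total yield:
single = equivalent_megatonnage(1, 9.0)   # one 9 MT warhead
mirved = equivalent_megatonnage(9, 1.0)   # nine 1 MT warheads

print(f"1 x 9 MT -> {single:.2f} EMT")    # ~4.33 EMT
print(f"9 x 1 MT -> {mirved:.2f} EMT")    # 9.00 EMT
```

On this measure the same throw weight, fractionated into nine warheads, roughly doubles the area-destruction potential, which is why fractionation was seen as the key growth path.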
The accuracy of missile RVs is expressed as the circular error probable (CEP), which is defined as the radius of a circle, centred upon the mean point of impact, within which 50 per cent of the warheads aimed at the target will fall. The size of the CEP is determined by a combination of computer calculations and empirical data obtained from the testing programme, and is normally understood to apply to the missile’s maximum range. When fired to less than that range, the CEP reduces in proportion.[10]
Of greater importance is the distance between the mean point of impact and the target itself, which is termed bias. This is similar to the deflection of a rifle bullet by wind, and in the case of a missile is a result of the cumulative effect on the trajectory of the missile and the RV of system errors such as uneven erosion of the ablative shield[11] during re-entry and errors in components such as the on-board accelerometer, as well as unforeseeable events such as the weather over the target.
Timing was, for both sides, a critical consideration. In launching a first strike, for example, there was a host of weapons to be co-ordinated, including:
• home-based ICBMs;
• SSBNs – some close to enemy shores, some in transit and some at their bases;
• medium-range ballistic missiles (MRBMs) with shorter flight times (e.g. US Pershing MRBMs based in West Germany);
• bombers.
In addition, the USA had to consider:
• US navy carrier groups at sea;
• British and French nuclear forces;
• European-based NATO aircraft;
• airbases around the world with aircraft with roles in the US missile attack plans.
There were also many technical restrictions. It was discovered in the 1970s, for example, that an attack by several warheads on a single target – or a simultaneous attack on an entire missile field – would inevitably lead to ‘fratricide’, in which the explosion of the first warheads to arrive would either destroy the subsequent warheads or knock them off course. This effectively reduced the number of warheads that could attack any one target to two within a few seconds of each other, followed by a gap of some ten to twenty minutes before a further attack could be undertaken.
IN THE IMMEDIATE post-war years the feeling in the United States was that ballistic missiles offered the best long-term solution for strategic warfare, but that the technology of the time did not appear to make it possible to build a missile with the necessary range (9,300 km) and capable of carrying a nuclear payload, which at that time was large and heavy, weighing some 3 tonnes. The Convair company flight-tested the intercontinental-range MX-774 missile in 1948, but the newly independent US air force decided to follow the path pioneered by the German V-1 ‘flying bomb’ and to develop cruise missiles[1] instead.
The first of these was the SM-62 Snark pilotless bomber, which was much larger than the V-1 and had a range of 10,200 km, cruising at a height of some 12,000 m and using a star tracker to update its inertial navigation system. Its speed of 990 km/h meant, however, that, at its extreme range, it took some eleven hours to reach the target. The nose-cone carried a 5 MT (later 20 MT) nuclear warhead, and the missile could approach the target from any direction and at any height, while its very small radar cross-section made it difficult to detect. The Snark entered service in 1957 but was retired in 1961, when the Atlas ballistic missile became operational; its main significance was that it was the first operational missile to bring one superpower within attacking range of the other.
Snark was due to be succeeded by the SM-64A Navaho, a vertically launched, winged cruise missile, which travelled at Mach 3.25 (3,500 km/h) at a height of 18,300 m. Navaho would almost certainly have proved a highly effective strategic weapon, but it never reached production, as the USAF had already transferred its attention to ICBMs.[2]
Development of long-range ballistic missiles in the United States in the immediate post-war years was erratic, to say the least. The US army had obtained the plans for the A-4 (V-2) and assembled a number of former German scientists, including Wernher von Braun, at the Redstone Arsenal. Their first product was the Redstone short-range (400 km), land-mobile, liquid-fuelled, nuclear-armed missile, which was in service from 1958 to 1963. Next the army started to develop the Jupiter, which was again a land-mobile missile system, but this time with a range of 2,400 km. This was midway through development when, in late 1956, the Secretary of Defense ordered that the US air force was to assume responsibility for all missiles with a range greater than 200 nautical miles (370 km). Development was completed by the USAF, and Jupiter subsequently saw limited service with the air force.
Having been concentrating on long-range cruise missiles, the USAF now had to make up for a lot of lost ground. Despite having been handed the perfectly acceptable Jupiter by the army, it initiated a very expensive crash programme for its own IRBM, leading to the Thor. This did nothing that Jupiter could not already do, but operated from a fixed base, rather than from a mobile platform. Thor’s 2,700 km range, however, was insufficient for the missile to be launched against the USSR from the continental USA, so it was handed over to the Royal Air Force, which deployed sixty missiles between 1959 and 1964.
The entire Thor storage-and-launch complex was above ground in unprotected shelters, and the missile had to be towed out to the launch pad, raised to the vertical, fuelled, prepared, and then launched, the whole process taking fifteen minutes. This was all done in the open, on concrete hard-standing, at well-documented sites, and was very vulnerable. No cost-effective measure to reduce the reaction time could be found, so the missile was phased out after only five years of service.
Meanwhile, the USAF’s major development effort had turned to the Atlas missile, which was much larger and was a true ICBM, with a range of 14,000 km. Atlas benefited from much of the technology which had been developed for the Navaho cruise missile, and entered service in 1960.
The first USAF squadron equipped with the Atlas missile used a siting system almost identical to Thor’s, with six above-ground shelters and each missile having a thirty-minute launch countdown, but the next squadron’s nine missiles were in three separated groups of three, with individual shelters having a split roof, enabling the missiles to be raised to the vertical in situ, thus saving several minutes of launch time. The next three squadrons had similarly dispersed sites, but this time the missiles were housed in semi-hardened bunkers, recessed into the ground and with even greater separation. The final units were housed in hardened underground silos.
Titan I, which had a range of 10,000 km, was, like the final Atlas, located in silos and raised to the surface for launch; however, it had a new and much faster fuelling system, enabling it to be launched some twenty minutes after the countdown started. There were five Titan I sites, one with eighteen missiles and four with nine each, but the system had only a brief period of service, becoming operational in 1961 and being replaced by Titan II from 1963 onwards, the process being completed in 1966.
Despite its name, Titan II was almost totally different from Titan I, not least because of a 50 per cent increase in range, to 15,000 km. Again, the missiles were sited in squadrons consisting of three widely separated groups of three, with two squadrons at each of three bases, but the new system introduced a completely novel launch system, with the missile being launched from inside the silo. Two other advances in this missile were the use of an inertial guidance system and the use of storable liquid fuel – i.e. the fuel was already loaded in the missile, thus cutting out the time needed to fuel the earlier missiles. In combination these developments resulted in a launch time of just sixty seconds. Fifty-four missiles were deployed, being operational from 1963 to 1987.
By now, the future obviously lay with solid-fuelled missiles, which were safer and more reliable, and in simpler, cheaper and more survivable siting and launch systems. A rail-mobile system was considered for Minuteman I, but the silo option won.
The two-stage Minuteman I was deployed from 1962 onwards in individual unmanned silos, which were scattered over large areas. Ten silos were grouped into a ‘flight’, five flights in a ‘squadron’, and squadrons into ‘wings’; there were four squadrons in each of four wings, while the fifth wing had five squadrons. The overall total was 800 missiles.
Minuteman II was longer and heavier than Minuteman I, with extended range (12,500 km compared to 10,000 km) and a more accurate warhead. It entered service in 1966, and by 1969 it had replaced all Minuteman Is. Of the 450 deployed, ten were subsequently reconfigured to carry the Emergency Rocket Communications System (ERCS) and thus no longer carried nuclear warheads.[3]
Minuteman III introduced a third stage and was also the first US ICBM to carry MIRVs, but its basing and launch systems were the same as those of Minuteman II.
The Missile, Experimental (MX) programme was one of the longest and most controversial in the Cold War, with much of the argument centring on the question of basing. Indeed, MX consumed money at a prodigious rate and gave rise to an industry of its own for many years before it began to make any contribution to Western deterrence. The programme started in the early 1970s, and eventually resulted in the fielding of just fifty Peacekeeper missiles in 1986. After all the argument on different basing systems, these were placed in Minuteman III silos. Peacekeeper had a range of 9,600 km and carried ten W-87 warheads, each with a yield of 300 kT and an accuracy (CEP) of 100 m, giving them an extremely high lethality. During the Cold War these would almost inevitably have been targeted on both Soviet leadership bunkers and ‘superhardened’ ICBM silos.
The first official rocket-propulsion laboratory in the Soviet Union was opened in 1921, but attention was concentrated on short-range artillery missiles until after the Second World War, when the USSR produced a copy of the German A-4, known under the NATO system as the SS-1, ‘Scunner’.[4] The SS-2, ‘Sibling’, was similar, but with Soviet advances to increase range and reliability, while the SS-3, ‘Shyster’, was the first to carry an atomic warhead.
In the 1950s the USSR found itself without a strategic bomber force to counter the B-36s, B-47s and B-52s of the USAF, and the quickest way to produce an answer was an ICBM. The technology of the time was, however, comparatively crude: warheads were heavy, and the sum total of the components, the payload and the fuel needed for intercontinental range came to well over 200 tonnes. Nevertheless, the USSR, which was never deterred by the size of a project, pressed ahead to produce the huge SS-6, ‘Sapwood’, which first flew on 3 August 1957. The necessary thrust was obtained by using a basic missile surrounded by four large strap-on boosters, the main missile and each booster having a 102,000 kgf thrust rocket motor. Thus the device had a launch weight of no less than 300 tonnes, but was powered by motors with a total thrust of 510,000 kgf.
As a strategic weapon the SS-6 was less than successful: it had a poor reaction time, due to the need to load huge quantities of cryogenic fuel,[5] it was far too big to be put in a silo, its electronics were crude and unreliable, and it was very inaccurate, with a CEP of some 8 km. The knowledge that the USSR had such a powerful launch vehicle had a major psychological impact on the USA, but no more than four SS-6s were ever deployed operationally as ICBMs. The SS-6 was, however, used for space launches for many years, since it could lift the heavy weights needed for programmes such as Sputnik, Luna, Vostok, Voskhod, Mars and Venera.
The first really successful Soviet ICBM was the SS-7, ‘Saddler’, of which 186 were deployed from 1961 until it was withdrawn in 1979 under the terms of SALT I. The SS-7 was the first Soviet missile to enter service using storable liquid fuel. It had two stages giving it a range of some 11,500 km, and was therefore the first Soviet ICBM to pose a realistic threat to the continental USA, although its relative inaccuracy (it had a CEP of 2.8 km) restricted it to counter-value targets.
It was long a feature of Soviet military philosophy that an ambitious programme was backed up by a much less demanding and technically safer system, which in this case was the SS-8, ‘Sasin’. Only twenty-three SS-8s were ever deployed, and they had a limited life from 1965 to 1977.
The SS-9, ‘Scarp’, was the first of the second generation of Soviet ICBMs: a heavy, silo-based missile which became operational in 1966. Numbers peaked at 313 in 1970, remaining at this level until 1975, when retirements began, the last of the type being withdrawn in 1979. Four versions were known: the first to enter service was Mod 1, which had a 20 MT warhead, while Mod 2, the principal production version, had a 25 MT warhead – by far the most powerful warhead ever to achieve operational status in any country. The Mod 3 was a special version which was used to test the Fractional Orbital Bombardment System (FOBS), which was designed to attack the USA from the south-east; it caused considerable concern in the Pentagon. Mod 4 carried three MRVs, which impacted with the same spread as a typical USAF Minuteman missile complex, although it never actually entered service, the mission being allocated to the SS-11 Mod 3 instead.
The SS-10, ‘Scrag’, was the insurance against the failure of the SS-9. This huge missile, which used cryogenic fuels, was shown at the 1968 Red Square parade but never entered service.
The two-stage SS-11, ‘Sego’, used storable liquid propellant and entered service in 1966, eventually serving in three principal variants. Mod 1 had a single 950 kT warhead, Mod 2 had increased range and throw weight, as well as penetration aids and a more accurate warhead, while Mod 3 carried three 200 kT MRVs, the first such system to be fielded by the USSR, with a footprint virtually identical to that of Minuteman silos. The SS-11 had a long life, with just over half being replaced by the SS-17 and SS-19 in the late 1970s, while the balance of 420 remained until 1987, when they were replaced progressively by the road-mobile SS-25.
Developed concurrently with the SS-11, the SS-13, ‘Savage’, was the first solid-fuel Soviet ICBM, and had an unusual construction with three stages linked by open Warren-girder trusses – a configuration matched only by the earlier SS-10. There were claims in the early 1970s that the SS-13 was being used in a mobile role, but these were never substantiated. The USSR claimed that the SS-25 was a modified version of the SS-13 (which was permitted under SALT II), and flew two missiles in 1986 to demonstrate that this was the case to the USA. Only sixty SS-13s entered service, and the production and maintenance of such a small number must have been very expensive. However, it must be assumed that it played a useful role in the Soviet nuclear force, as the SS-13 remained in service from 1972 until past the end of the Cold War.
The SS-17, ‘Spanker’, which used storable liquid propellant, was developed in parallel with the SS-19 as a replacement for the SS-11 and was in service from 1975 to 1990. It was the first Soviet ICBM to be launched by using a gas generator to blow the missile out of the silo, with ignition taking place only when the missile was well clear. Known as the ‘cold-launch technique’, this method minimized damage to the silo and enabled it to be reused. This caused considerable alarm in the United States, as it was seen to indicate a plan for a nuclear war lasting several days, if not weeks. The second innovation was that several versions carried MIRVs, the first operational Soviet ICBMs to do so: Mods 1 and 3 carried four 200 kT MIRVs, but the Soviets, as always, hedged their bets, and the SS-17 Mod 2 carried a single 3.6 MT warhead.
The SS-18, ‘Satan’, the successor to the SS-9, was by far the largest ICBM to be fielded by either of the two superpowers, and its throw weight of 8,800 kg was the greatest of any Cold War missile. Starting in 1975, it was deployed in former SS-9 silos, which were modified and upgraded to take the new missile. Mods 1 and 3 both had a single large 20 MT warhead, while Mods 2 and 4 each had ten 500 kT MIRVs. The SS-18 was described by the USA as ‘extremely accurate’ and ‘designed to attack hard targets, such as US ICBM silos’. Also, according to US sources, the SS-18 force was capable of destroying ‘65–80% of the US ICBM force, using two warheads against each. Even after such an attack, there would still be over 1,000 SS-18 warheads available for further strikes against US targets.’{1}
The SS-19, ‘Stiletto’, was developed in parallel with the SS-17 and entered service in 1971, with a peak deployment of 360; it was the most widely used Soviet ICBM of its generation. It was a hot-launch missile, although it was housed in a canister which reduced silo damage. Various versions of the missile were developed, but the service version was the Mod 3, with six 550 kT MIRVs, each with a CEP of 400 m, which, again according to US sources, meant that ‘while less accurate than the SS-18, [it had] significant capability against all but hardened silos. It could also be used against targets in Eurasia.’{2} It would therefore appear safe to assume that the SS-19 was targeted against counter-force targets, such as reasonably hardened military targets, but not against ICBM silos, which were the task of the SS-18.
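Figures such as CEP translate directly into a probability of destroying a point target. A standard open-literature approximation (not from this text) models the impact points as a circular-normal distribution, giving a single-shot kill probability of SSKP = 1 - 0.5^((LR/CEP)^2), where LR is the lethal radius within which the warhead's blast overpressure exceeds the target's hardness. A minimal sketch, using an assumed lethal radius of 500 m (an illustrative figure, not one from the source):

```python
def sskp(lethal_radius_m: float, cep_m: float) -> float:
    """Single-shot kill probability against a point target, assuming a
    circular-normal impact distribution (standard open-literature formula)."""
    return 1.0 - 0.5 ** ((lethal_radius_m / cep_m) ** 2)

# Illustrative lethal radius of 500 m (an assumption, not a source figure):
print(f"CEP 400 m (SS-19 Mod 3): SSKP = {sskp(500, 400):.2f}")  # ~0.66
print(f"CEP 200 m:               SSKP = {sskp(500, 200):.2f}")  # ~0.99
```

The quadratic dependence on LR/CEP is why halving the CEP matters far more than doubling the yield: at a CEP of 400 m a single warhead of this notional lethality has roughly a two-in-three chance of killing its silo, while at 200 m the kill is close to certain.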
The SS-24, ‘Scalpel’, was fielded in two launch modes, the Mod 1 being rail-mobile, while Mod 2 was silo-based. The missiles in the two variants were virtually identical, each carrying ten 500 kT MIRVs, with a range of 10,000 km and a CEP of 200 m. Mod 1 was deployed in trains with three launchers each, in three rail garrisons, all in Russia; there were four trains each at Kostroma and Krasnoyarsk and three trains at Bershet. Fifty-six of the silo-launched version (Mod 2) were deployed, split between one site in Russia (ten silos) and one site in the Ukraine (forty-six silos).
The SS-25, ‘Sickle’, was the last Soviet ICBM to be fielded during the Cold War. It was a single-warhead missile, carrying one highly accurate 550 kT warhead, and entered service in 1985. At the end of the Cold War 288 missiles were split between nine sites, with further missiles being deployed up to 1994. The missile was road-mobile, but was normally housed in a garage with a sliding roof which could be opened for an emergency launch. Given the necessary warning, however, the fourteen-wheel TELs were deployed to pre-surveyed sites in forests, where they were raised on jacks for stability during launch.
The SS-25 missile was contained in a large cylindrical canister, and the system was reloadable, highly survivable and capable of rapid retargeting. This led US sources to speculate that it was designed for use in a protracted nuclear war as a reserve weapon, when it would ride out the first wave of US attacks on the Soviet nuclear arsenal and then retaliate against surviving targets, which could be selected and set into the warhead at the time. It was during the flight testing of the SS-25 that the Soviets first used encryption on their telemetry down-links, which caused the US to claim that they were acting in contravention of the SALT II agreement.
The original German A-4 missile employed a brilliantly simple road-mobile system, in which the missile was carried on a four-wheeled trailer known as a Meillerwagen. When the missile was to be launched, the Meillerwagen raised it to the vertical and then lowered it on to a small launch platform. Each site had a crew of 136 men, with many more men and vehicles in the logistics chain.
The Germans also gave active consideration to launching the A-4 missile from a train. According to a 1944 plan, each train would carry six ready-to-use missiles, and include an erector–launcher car, seven fuel-tanker cars, a generator car, a workshop, a spares car and several cars for the crew. On top of this, however, the train would also carry all the vehicles normally associated with a missile battery, in order that the unit could dismount from the train and operate independently of it, which brought the whole battery up to the unwieldy total of seventy to eighty freight cars, probably requiring at least two separate trains. Separate logistic trains were planned to bring further supplies of fuel and missiles. Prototype trains were running before the end of the war, but the system was not a practicable proposition in view of the air supremacy of the Allies, for whom all trains were a high-priority target.{3}
ICBM forces were originally built to threaten the opponent’s civil population, which in itself was not a difficult task: the warheads were relatively inaccurate, but the cities were large and the warheads powerful. It was obviously highly desirable, from both political and military viewpoints, to defend the population from this threat, in the same way that bombers had been opposed by a mixture of fighters and anti-aircraft guns during the recent war. It was not feasible at the time to intercept incoming ICBMs, so the only defence was to attack the ICBMs at their source, which could be done only by conducting a pre-emptive strike with other ICBMs. Thus the position was rapidly reached where the ICBMs’ principal target was the other side’s ICBMs, moving on to other missions only when that first battle had been decided. It was therefore necessary to optimize the attacking potential of one’s own missiles while ensuring their survivability in the face of an opponent’s first strike. There were four possibilities:
• superhardened silos, which would withstand even the most powerful incoming warhead;
• using a greater number of silos than missiles, so that the opponent would waste warheads on empty silos;
• making the missiles mobile, as the Germans did, so that the enemy could not locate them;
• using anti-ballistic-missile (ABM) defences.
The essence of the problem can be illustrated by a simplified example in which the aggressor (A) has 100 ICBMs, each with ten warheads, while the other side (B) has 500 ICBMs, each with three warheads. (For the purpose of this example, all missiles and warheads are perfectly available and reliable, and each warhead will kill one silo.) Thus A is capable of destroying 1,000 silos, and if he carries out a pre-emptive strike he need use only fifty missiles (500 warheads) to destroy all 500 of B’s silos, leaving B with no missiles. A still has fifty missiles and is clearly the winner. If, however, B builds another 500 silos, but no more missiles, and spreads his 500 ICBMs randomly among the 1,000 silos, A, not knowing which silos are occupied, must attack all 1,000. Both sides then end up with zero ICBMs, which is a better outcome for B than the first, but is unsatisfactory from a military point of view. But if B now builds a total of 2,000 silos, A’s 1,000 warheads can cover only half of them, and on average half of B’s missiles (i.e. 250) will survive the attack.
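The arithmetic of this shell game can be checked with a short simulation (a sketch only: the missile, warhead and silo numbers are those of the example above, and the blind random targeting is the stated assumption that A cannot tell full silos from empty ones):

```python
import random

def surviving_missiles(n_missiles, n_silos, n_warheads, trials=10_000):
    """Average number of B's missiles surviving a strike in which A expends
    one warhead per silo against silos chosen blind (A cannot tell which
    of B's silos are occupied)."""
    total = 0
    for _ in range(trials):
        occupied = set(random.sample(range(n_silos), n_missiles))
        attacked = set(random.sample(range(n_silos), min(n_warheads, n_silos)))
        total += len(occupied - attacked)
    return total / trials

# A has 100 ICBMs x 10 warheads = 1,000 warheads against B's 500 missiles.
print(surviving_missiles(500, 1000, 1000))  # 1,000 silos: none survive
print(surviving_missiles(500, 2000, 1000))  # 2,000 silos: ~250 survive on average
```

With 1,000 silos A can attack every one, so B's entire force is lost; with 2,000 silos each of B's missiles sits in a silo that has only an even chance of being attacked, hence the expected 250 survivors.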
The first missiles, such as the early Atlas and Thor, were located in a shed, primarily for protection from the weather, and were taken out to enable them to be raised to the vertical for fuelling and launch. The missiles were also located close to each other. Both factors together made the missiles extremely vulnerable to incoming missiles, which did not need to be too accurate to achieve a kill.[6]
The next step was to place the missiles in semi-hardened shelters and to separate these shelters so that one incoming warhead could not destroy more than one missile. In addition, the shelters had split roofs, so that the missile could be raised, fuelled and launched without wasting time moving it out on to a launch pad. As the perception of the threat increased, the spacing between individual missiles increased yet further and the shelters became bunkers, recessed into the ground.
The next step was to mount the missile vertically rather than horizontally, and to put it in a hole in the ground. The USAF, however, adopted a ‘halfway’ system with the Atlas and Titan I missiles, in which the missile stood upright in a silo which, in the case of Atlas, was some 53 m deep and 16 m in diameter, resting on the launch platform, which was counterbalanced by a 150 tonne weight. The launch procedure involved fuelling the missile in the silo and then using hydraulic rams to raise the entire launch platform and missile to the surface, where the missile was then launched. Titan I had a super-fast fuelling system and a high-speed elevator which reduced reaction time to approximately twenty minutes, while the silo and all associated facilities were hardened to withstand an overpressure of 20 kgf/cm2.
A completely new launch system was introduced with Titan II, in which the missile was launched direct from the silo. There was, however, considerable concern about the effects of the rocket efflux on the missile during the few seconds that the missile was still inside the silo, so the missile rested on a large flame deflector, which directed the efflux into two large ducts exhausting to the atmosphere a short distance from the silo. Each missile complex was 45 m deep and 17 m wide and occupied nine levels, which housed electrical power, air conditioning, ventilation, and environmental protection, as well as hazard sensors and the associated corrective devices. At the centre was the launch duct, in which the missile was suspended in an environmentally controlled atmosphere. A walkway extended from the missile silo to a blast lock which provided controlled access between the silo and the tunnels leading upward to the above-ground access and laterally to the launch-control centre (LCC). The LCC was a three-level, shock-isolated cage suspended from a reinforced-concrete dome and housed two officers and two enlisted men. As with the Titan I silo, the Titan II silo was hardened to 20 kgf/cm2.
When it learned that the Soviets were launching direct from the silo, the USAF followed suit and the Minuteman I missile became the first US missile to use the ‘hot launch’, in which the missile rose from the silo surrounded by the flames and smoke from the rocket motor. The next Soviet innovation was the ‘cold launch’, in which a gas generator within the silo produced a pressure sufficient to propel the missile some 20–30 m clear of the silo before its first-stage motor fired. This protected the silo from damage, enabling it to be reused within a fairly short space of time. It was used by the Soviets from the SS-17 onwards, and by the USAF in Peacekeeper (MX).
Following their introduction in the mid-1960s, underground silos became increasingly complicated and expensive structures. Ideally they were located at a relatively high altitude, to improve the missiles’ range, and in springy ground, to absorb as much as possible of the shock waves from incoming warheads. The silo was a vertical, steel/reinforced-concrete tube, housing an elaborate suspension and shock-isolation system which supported the missile as well as providing further insulation to minimize the transfer of shock motion from the walls and floor of the silo to the missile. The top third of the silo housed maintenance and launch facilities, which were known as the ‘head works’ in USAF parlance. Finally, the missile tube was capped by a massive sliding door, which provided protection against overpressure by transmitting the shock caused by the explosion of an incoming warhead to the cover supports rather than to the vertical tube containing the missile; it also provided protection against radiation and EMP effects. The door was designed to sweep the area as it opened, to prevent debris falling into the silo tube and possibly interfering with the launch process.
Individual silos were grouped together for control purposes, but were sited sufficiently far apart to ensure that one incoming warhead could not destroy more than one missile. Control was exercised by an underground command centre, manned by a small crew of watchkeepers, whose functions included operating the dual-key safety system in which launch could be authorized only by two officers acting independently. This command centre was linked to its superior headquarters and to the individual silos under its control by telecommunications and by systems-monitoring links. This introduced a further problem: the vulnerability of these links to blast and, in particular, to electromagnetic pulses (EMP). Making these links survivable against the perceived threats (known as ‘nuclear hardening’) became an increasingly complex and expensive undertaking as the Cold War progressed.
The protection factor (‘hardness’) of a silo was measured by its ability to withstand the overpressure resulting from the blast effects of a nuclear explosion, and was expressed in kilograms-force per square centimetre (kgf/cm2) or pounds per square inch (psi) (1 kgf/cm2 ≈ 14.2 psi). In the USA, the Atlas, Titan I and Titan II silos were constructed with a hardness of 20 kgf/cm2 (300 psi), while the Minuteman I silos (mid-1960s) were built with a hardness of some 85 kgf/cm2 (1,200 psi). Finally, in the 1970s, Minuteman III/Peacekeeper silos were built with a hardness of 140 kgf/cm2 (2,000 psi). By this time, however, the silos were so expensive that, despite reports that the Soviets were ‘superhardening’ their silos to resist overpressures of 425 kgf/cm2 (6,000 psi), Congress repeatedly refused to authorize any further hardening of US silos.
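The conversions quoted above can be checked directly from the unit definitions (1 kgf/cm2 = 98,066.5 Pa and 1 psi = 6,894.76 Pa, so 1 kgf/cm2 ≈ 14.22 psi); the psi figures in the text are round numbers:

```python
# Exact unit definitions: 1 kgf/cm2 = 98,066.5 Pa, 1 psi = 6,894.76 Pa.
PSI_PER_KGF_CM2 = 98_066.5 / 6_894.76  # ~14.22

# Silo hardness figures quoted in the text, in kgf/cm2:
for hardness in (7, 20, 85, 140, 425):
    print(f"{hardness:>4} kgf/cm2 = {hardness * PSI_PER_KGF_CM2:,.0f} psi")
```

This gives 100, 284, 1,209, 1,991 and 6,045 psi respectively, which the text rounds to 100, 300, 1,200, 2,000 and 6,000 psi.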
The Soviet programme of silo building, refurbishment and hardening was more successful. The earliest silos, built before 1969, were hardened to withstand an overpressure of some 7 kgf/cm2 (100 psi), with the next generation built to 20 kgf/cm2 (300 psi). Those built in the early 1970s for the SS-18 could withstand 425 kgf/cm2 (6,000 psi), which was achieved using concrete reinforced by concentric steel rings.
Although most of their ICBMs were always sited in silos, both the USA and the USSR repeatedly examined alternatives, both to increase survivability and, perhaps of greater importance in the USA than in the USSR, to reduce costs. In the USA, environmental factors also became an increasingly important consideration.
One of the US schemes was called Multiple Protective Structures (MPS) and consisted of a number of ‘racetracks’, each about 45 km in circumference and equipped with twenty-three hardened shelters. One mobile ICBM, mounted on a large wheeled TEL, would have moved around each racetrack at night in a random fashion, with decoy TELs and missiles adding to the adversary’s uncertainties. Basic MPS involved 200 missiles moving between 4,600 shelters covering an area of some 12,800 km2, but a more grandiose version envisaged 300 missiles moving around 8,500 shelters.[7]
An enhanced version of MPS was proposed in the early 1980s, in which a new Small ICBM (SICBM) would have been deployed in fixed, hardened silos distributed randomly among the 200 racetracks of the MPS system, thus adding to the aiming points for the Soviet ICBM force. It was intended that the SICBM would be 11.6 m long and weigh 9,980 kg, have a range of 12,000 km, and carry a single 500 kT warhead; it would have been launched by an airborne launch-control centre. SICBM would have been housed in a tight-fitting container placed in a vertical silo hardened to approximately 530 kgf/cm2, and it would have required an exceptionally accurate incoming warhead to destroy such a target. Various other launch methods were also considered for SICBM, including a road vehicle, normal silos, airborne launch from a transport aircraft, and (possibly the only time this was ever considered for an ICBM) from a helicopter.
Another scheme was based on the racetrack principle of MPS, but this time with the TELs running inside shallow tunnels, 4 m in diameter. The TELs would simply have kept moving, thus avoiding the need for shelters, and would have had large plugs fore and aft to protect against nuclear blast within the tunnel. If required to launch, the TEL would have halted and used hydraulic jacks to drive the armoured roof upwards, breaking through the surface until the missile was raised to the vertical.
Deep Basing (DB) involved placing the ICBMs either singly or in groups deep underground, where they would ride out an attack and then emerge to carry out a retaliatory strike. One of the major DB schemes was the ‘mesa concept’, in which the missiles, crews and equipment were to be placed in interconnecting tunnels some 760–915 m deep under a mesa or similar geological formation.[8] Following an enemy nuclear strike, the crews would have used special machines to dig a tunnel to the surface and then brought the launcher to the open to initiate a retaliatory strike. This scheme’s disadvantage lay in its poor reaction time and the difficulty it posed for arms-control verification. From the practical point of view it would have been necessary to find rock which was both fault-free and sufficiently strong to resist a Soviet nuclear attack, but which could nevertheless be drilled through in an acceptable time and without the machinery becoming jammed by debris. On top of all that, a second incoming nuclear strike when the drilling machine was near to the surface would have caused irreparable damage. A related project (Project Brimstone) examined existing deep mines, but also proved unworkable.
A totally different approach, known as Closely Based Spacing or ‘Dense Pack’, was also considered. This suggested that, instead of spacing missile silos sufficiently far apart to ensure that not more than one could be destroyed by one incoming warhead, 100 MX missiles should be sited in superhardened silos placed deliberately close together. The idea was that this would take advantage of the ‘fratricide’ effect in which incoming warheads would be deflected or destroyed by the nuclear explosions of the previous warheads. A spacing of the order of 550 m was suggested, and it was claimed that in such a scheme between 50 and 70 per cent of the ICBMs would have survived.
All the basing methods discussed above were either static or involved limited movement in a closed circuit, but the question of mobile basing was often considered as well. As described earlier, the German A-4 was designed as a road-mobile system, but an alternative rail-based option was also considered, and a similar scheme was designed and tested during the development phase of the Minuteman I. The plan was to have fifty trains, each of some fourteen vehicles, which would have included up to five TEL cars, each carrying a single missile, together with command-and-control, living-accommodation, and power facilities. The scheme was examined in great detail, and a prototype ‘Mobile Minuteman’ train was tested on the public railway. Although the scheme proved feasible, it was dropped in favour of silo deployment.
A similar proposal was considered during the long development of the Peacekeeper (MX) system, and very nearly became operational. This version would have consisted of twenty-five missile trains, each carrying two missiles. Each train would have consisted of the locomotive and six cars: two missile launch cars, a launch-control car, a maintenance car, and two security cars. In peacetime the trains would have been located in a ‘rail garrison’ sited on an existing Strategic Air Command base, which would have contained four or five shelters (known as ‘igloos’), each housing one train. These garrisons would each have covered an area of some 18–20 hectares, with tracks leading to the USA’s 240,000 km national rail network. On receipt of strategic warning the trains would have deployed on to this national network, where they would have rapidly attained a high degree of survivability. This scheme was under active development from 1989 until its cancellation in 1991.
As we have seen, the Soviet SS-24 Mod 1 was actually fielded in the rail-mobile mode. There were three rail garrisons, all in Russia, with four trains at two sites and three trains at the third. Each train carried three launchers, with further cars for launch control, maintenance, and power supply.
The Soviets also fielded a road-mobile ICBM, the SS-25, which was also the last Soviet ICBM to enter service during the Cold War. This single-warhead missile was carried on a fourteen-wheeled TEL, which was raised on jacks for stability during the launch. The TEL and its missile were normally housed in a garage with a sliding roof which would be opened for an emergency launch. Given the necessary warning, however, the TELs deployed to pre-surveyed sites in forests.
One US proposal was the ‘continuous patrol aircraft’, in which a packaged missile was carried inside a large, fuel-efficient aircraft. On receipt of verified launch instructions, the missile would have been extracted by a drogue parachute, and once it was descending vertically its engine would have fired automatically, enabling the missile to climb away on a normal trajectory. Tests were carried out using a Minuteman I missile transported by a C-5 Galaxy and were completely successful. Large numbers of aircraft would have been needed to maintain the number required on simultaneous patrol. It would have been very difficult for a potential enemy to track them and even more difficult to guarantee the destruction of every airborne aircraft in a pre-emptive strike, but the main weaknesses of the scheme were the vulnerability of the airfields, the enormous operating costs, and, to a lesser degree, the decreased accuracy of the missile.
DURING THE SECOND World War German submarines in the Atlantic brought the United Kingdom very close to collapse before they were ultimately defeated, while US submarines in the Pacific achieved a mastery which played a significant part in Japan’s defeat. However, only a tiny handful of people foresaw a potential marriage between submarines and the newly developed missiles, and once again this occurred in Germany. The original suggestion came from a visitor to the German rocket-development site at Peenemünde, who proposed that the A-4 (V-2) missile, in addition to being launched from land, might also be launched from a submersible barge towed by a submarine. With such a device, he suggested, the Germans would be able to bombard New York. The suggestion was seized upon by the staff at Peenemünde, but the land-based missile was given higher priority and only one barge was completed before the surrender in May 1945. A separate proposal to mount V-1 cruise missiles in submarines for use against New York was considered in 1943, but was rejected due to a lack of suitable submarines.
The nuclear-powered missile submarine (submarine, ballistic, nuclear – SSBN) and its weapon, the submarine-launched ballistic missile (SLBM), formed a truly innovative weapon system. It was in essence a missile base, but with the immense advantage over land-based ICBMs that not only was it mobile, but it could use that mobility to hide in the vastness of the oceans.[1]
Some of the V-1 and A-4 missiles obtained by the US forces in 1945 were allocated to the US navy, together with a number of the German scientists who had been involved in their development. These missiles were immediately seen as having a seaborne role against land targets, and, of the two, the V-1 cruise missile seemed to offer the greater promise in the short term. As a result, two fleet submarines were converted by installing a watertight hangar abaft the sail with a stern-facing take-off ramp – an installation similar to that used by the Japanese navy for its aircraft-carrying submarines, of which the US navy captured a number in 1945. The submarines had to surface to launch the missiles, and the first of many test flights took place in February 1947. The navy also conducted trials with the A-4, including the first launch of a ballistic missile at sea, from the flight deck of the aircraft carrier USS Midway on 6 September 1947. Numerous tests were conducted with both types of missile until the programme ended in 1950, but it was a start.
Meanwhile, two exceptionally far-sighted submarine-launched cruise-missile programmes were initiated, one for Rigel in 1947 and the second for Triton in 1952, although both were eventually cancelled. A less ambitious cruise-missile programme named Regulus did, however, reach service. Powered by a turbojet, the subsonic Regulus I had swept wings, and served operationally aboard submarines from 1954 to 1964. It was armed with a nuclear warhead, but was relatively inaccurate and was targeted against large cities within 650 km of its submarine launch position, such as Beijing. A second cruise missile, Regulus II, was greatly superior to Regulus I and carried a nuclear warhead at speeds in excess of Mach 2 to ranges of 1,610 km. Although it was proving very successful, the programme was cancelled in 1959, as the concurrent Polaris programme held out greater promise.
The United States’ first SLBM and SSBN programme – known collectively by the missile’s name, Polaris – was one of the most successful defence projects ever undertaken. It was a huge undertaking, which incorporated an astonishing range of innovations in two parallel but interlocking programmes. On the missile side, these included solid-fuel propulsion, cold-gas launch from a submerged submarine, lightweight ablative re-entry vehicles, and small nuclear warheads. Alongside this was the submarine programme, which involved cutting a nuclear-propelled attack submarine under construction in two and inserting a 39.6 m ‘plug’ containing sixteen vertical missile tubes. The submarine system also involved new launch-control and communications systems, as well as novel systems for submarine navigation. This very ambitious programme was steered to completion by Rear-Admiral William Raborn of the US navy.
When the first Polaris submarine entered service, in 1960, it revolutionized strategic warfare. The Polaris A-1 missile carried a single 500 kT warhead over a range of 2,600 km and, using inertial guidance, achieved a CEP of some 1,830 m. Polaris A-2 also had a single warhead, but this was both more powerful (800 kT) and more accurate (CEP = 1.2 km), while Polaris A-3 carried three RVs, each with a 200 kT yield and a CEP of 850 m. The Polaris A-3 also became the first (and so far the only) SLBM to be supplied to a foreign nation, when it was sold to the United Kingdom to arm that country’s Resolution-class SSBNs.
The Poseidon C-3 two-stage missile started life as an evolutionary development of the earlier missile (its initial designation was Polaris B-3) and, although having a greater diameter, it was able to use the same launch tubes by eliminating the guide-rings used on Polaris. The first Poseidon was launched in August 1968, and the system entered service in 1971. The most important innovation was that it was armed with MIRV warheads, of which a maximum of fourteen could be carried, though this was limited to ten 100 kT warheads under the SALT I agreement with the USSR. The potential accuracy of the MIRVs could have given them a counter-force (hard-target) capability, but, since this ran counter to contemporary US strategists’ view of SLBMs as a survivable, second-strike, counter-value (i.e. anti-city) system, the proposed high-precision stellar-inertial navigation system was not authorized by the Department of Defense.
At its peak Poseidon armed thirty-one SSBNs. Conversion of twelve of these boats to carry the Trident missile started in 1984, however, and by 1990 only ten Poseidon boats remained in service.
Development of Trident I began in 1972, the missile being essentially a Poseidon C-3 with a third-stage motor added to give a greatly increased range of 7,400 km – that range enabling the SSBNs to obtain more sea room. Trident was a much more efficient design than earlier SLBMs, maximising the use of the available volume and burning all of its fuel. The designers were also able to include the stellar navigation package which had been denied to Poseidon, thus enabling the warhead to be extremely accurate, with a CEP of 463 m. Trident I (C-4) was put into production even though it was known that Trident II (D-5) would become the definitive system, and it armed twelve SSBNs which had originally carried Poseidon, as well as the first eight Ohio-class SSBNs.
Next came Trident II (D-5), which was the same diameter as Trident I but 3.6 m longer, giving it a range of 12,000 km and nearly double the throw weight of the earlier missile. As the Cold War ended, Trident II was coming into service aboard the twenty-four-missile Ohio-class SSBNs. Trident II was fitted with NAVSTAR satellite receivers, giving mid-course navigational updates to the inertial system, resulting in a CEP of 90 m, making this a genuine hard-target attack system, with a range enabling it to hit any target in the world from anywhere in any ocean.
In the late 1980s the US navy introduced the Tomahawk cruise missile into service, thus turning the wheel full circle, since the navy had started its Cold War development with a cruise missile – the Regulus – some forty years earlier. Tomahawk was, however, much superior in performance, range and accuracy, delivering a 200 kT warhead to a maximum range of 2,500 km with an accuracy of 280 m. It was also smaller and lighter, and could be launched from a standard 533 mm diameter torpedo tube.
One of the keys to the success of the US SLBMs was a gas-operated system which blew the missile out of the launch tube towards the surface, thus avoiding rocket-motor ignition inside the tube, with its attendant dangers to the submarine. In some missiles the first-stage motor which drove the missile up into the atmosphere fired below the surface, while in others (e.g. Trident) it fired only when the missile was clear of the surface. The missiles were launched in sequence, Poseidon missiles being launched at a rate of one every fifty seconds.
The original submarines used by the US navy in the 1946–7 V-1 programme were standard Second World War diesel-electric fleet submarines with a large cylindrical hangar abaft the sail and a short, sloping launching rail. The next step was the Regulus I and II programmes, which involved five submarines. The first two of these were converted fleet submarines with cylindrical aft-facing hangars, but the other three were purpose-built, with the missiles stored in a large hangar in the bows; two of them were diesel-electric-powered and the third, Halibut, nuclear-powered. All ceased to operate Regulus when the system was discontinued in 1964 and were then employed on different missions.
Led by Rear-Admiral Raborn, the Fleet Ballistic Missile System (FBMS) programme started in the mid-1950s, and the first submarine, George Washington, complete with sixteen operational Polaris A-1 missiles, entered service on 15 November 1960 – an astonishing technical, manufacturing and managerial achievement.
To save time, the George Washington class was created by taking five Skipjack-class attack-submarine hulls currently under construction, cutting them in two, and adding a missile section containing sixteen vertical tubes abaft the sail. There were, of course, many minor changes, including the addition of missile control and launch systems, special navigation systems, and new communications. The system introduced many new concepts which subsequently became standard practice, including the sixty-day operational cycle, using two crews, designated Blue and Gold, one of which was at sea, the other ashore on rest, leave, training and, finally, preparing to take over for the next operational cruise.
The George Washington class was quickly followed by five Ethan Allen-class boats, completed between 1961 and 1963; these were very similar, but had the advantage of being designed as SSBNs from the start.
The range of Polaris (A-1 – 2,600 km; A-2 – 2,800 km; A-3 – 4,630 km) meant that all these SSBNs had to operate relatively close to Russian shores to meet the requirement to hit Moscow. So, in order to reduce transit times, the boats were forward based at Holy Loch (Scotland), Rota (Spain) and Apra Harbor (Guam). None of these ten SSBNs could be converted to take the Poseidon missile, and in 1980–81 all were either converted to nuclear-powered attack submarines (SSNs) by deactivating the missile tubes or were decommissioned.
The first of the Ethan Allen class had not even been completed before the next class was being laid down, and thirty-one Lafayette-class SSBNs joined the fleet between 1963 and 1967.[2] All thirty-one entered service with Polaris missiles (the first eight with Polaris A-2, the remainder with A-3), and a further four were planned to bring the grand total of Polaris-armed boats to forty-five. These last boats were never built, and the thirty-one Lafayette-class were converted in 1970–78 to take the Poseidon missile. Twelve were later converted yet again to take Trident C-4 (1978–83), with the first of these, Francis Scott Key, sailing on its first patrol on 20 October 1979.
Finally came the Ohio class, the largest US submarine and the most powerful single weapons platform ever built – 171 m long, displacing 16,964 tonnes and carrying twenty-four missiles. Like most other strategic programmes, the Ohio-class programme was surrounded by doubts, and in particular by concern over its costs, but eventually the first submarine sailed on its initial patrol on 11 November 1981. The first eight, which entered service between 1981 and 1986, were armed with Trident I (C-4) missiles, and the remaining ten (completed in 1988–97) with Trident II (D-5).
When the first SSBN was being designed there was a major investigation into the optimum number of missiles. The minimum cost-effective number was twelve, but the maximum depended on the money available. Sixteen was simply the number that fitted into the largest submarine the US navy felt that it could persuade the Pentagon and Congress to pay for, and the majority of SSBNs subsequently built for both the US and foreign navies have been equipped with this number of tubes. There is, however, nothing magic about the figure of sixteen, and SSBNs have been built with twelve tubes (Soviet Delta I class), twenty tubes (Soviet Typhoon class) and twenty-four tubes (US Ohio class).
Availability of the later missiles aboard SSBNs remains classified, but in a US navy Polaris submarine fourteen missiles were available for 100 per cent of the time, while all sixteen were available for 95 per cent of the time.
Typical of its generation, the US navy’s Lafayette class usually spent sixty-eight days on patrol with the Blue crew, followed by a thirty-two-day refit before starting the next patrol with the Gold crew. There was also a sixteen-month yard overhaul every six years, giving an overall availability for each hull of 55 per cent. The Ohio class, however, offered a considerable increase in availability, with seventy-day patrols, followed by twenty-five-day refits, and with a twelve-month yard refit every nine years, increasing overall availability to 66 per cent.
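The availability figures quoted above can be checked with a little arithmetic. A simple model (a sketch, not an official navy calculation) multiplies the fraction of the patrol cycle spent at sea by the fraction of the overhaul cycle spent out of the yard, using only the cycle lengths given in the text:

```python
# Rough SSBN availability model: at-sea fraction of the patrol cycle,
# discounted by the fraction of the overhaul cycle spent out of the yard.
def availability(patrol_days, refit_days, yard_months, yard_cycle_years):
    patrol_fraction = patrol_days / (patrol_days + refit_days)
    cycle_months = yard_cycle_years * 12
    yard_fraction = (cycle_months - yard_months) / cycle_months
    return patrol_fraction * yard_fraction

# Lafayette class: 68-day patrols, 32-day refits, 16-month overhaul every 6 years
lafayette = availability(68, 32, 16, 6)   # ~0.53, close to the quoted 55 per cent

# Ohio class: 70-day patrols, 25-day refits, 12-month overhaul every 9 years
ohio = availability(70, 25, 12, 9)        # ~0.65, close to the quoted 66 per cent

print(f"Lafayette: {lafayette:.0%}, Ohio: {ohio:.0%}")
```

The model reproduces both quoted figures to within a couple of percentage points, the small residual presumably reflecting rounding in the published cycle lengths.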
As with land-based missiles, there were repeated attempts in the USA to discover a form of sea-borne basing that was either less expensive or more survivable – or, preferably, both. Designs took a variety of forms.
In the immediate post-war period the USA examined the German plan to launch A-4s from submersible barges, and carried out some tests using ex-German A-4s and US-built barges. In every test the rocket efflux destroyed the barge, resulting in a somewhat erratic launch. Nevertheless, the idea was re-examined in 1961–5 as a possible alternative to Polaris, under the code-name Project Hydra, and was looked at yet again in the early 1980s as an alternative to both Trident and the Peacekeeper (MX) ICBM. Project Hydra showed that the technique was perfectly feasible, although it found that the most effective way of launching was simply to put the missile in the water without any form of protective container. The missiles needed to be waterproofed, and those with a specific gravity greater than 1.0 needed a flotation collar to keep them afloat, the collar being shed on launch. The plan was for such missiles to be taken to sea aboard a converted merchant ship and lowered into the water, where they would be left until they were activated and the launch command was signalled from a headquarters ashore.
The 1970s plan was for thirty fast merchant ships, each capable of rapid changes in appearance, to operate out of two bases, one on the Atlantic and one on the Pacific. Each ship would have carried ten missiles, and two plans were considered: one to offload the missiles into the sea in peacetime, the other to offload them only in a crisis. In fact the project foundered on the deployment issue, as the system was judged to be far too vulnerable and susceptible to accidents, but there was never any doubt as to its technical feasibility.[3]
There were a number of proposals in the late 1970s to use small diesel-electric submarines, operating on or near the continental shelf. One proposal involved a design displacing some 450 tonnes, based on the West German-designed Type 209; another was for a larger boat displacing between 500 and 1,000 tonnes. Such submarines would have carried two (or, in some proposals, three) Minuteman III missiles in external, horizontally mounted containers, from which the missile would have been floated out, brought upright by its ballasted rear end, and then ‘wet launched’ as with Project Hydra. Force levels varied between 100 and 138, with manning figures ranging between five and fifteen men per submarine. The most serious drawbacks were that, being diesel-electric powered, slow and with relatively short range, the submarines would have needed protection by a strong ASW force, while if they operated within the limits of the continental shelf they were vulnerable to attack by a relatively small number of Soviet missiles.
The Hydra plan was for surface ships to place missiles in the sea for a water launch, but there were other plans to use the surface ships themselves as launch platforms. The most serious of these was the ‘Multi-Lateral Force’ (MLF) proposed by President John F. Kennedy in 1961. This proposal was for a fleet of twenty-five surface ships to be built in west-European yards, each armed with eight Polaris A-3 missiles, supplied by the United States. Both ships and missiles would have been jointly owned by the nations concerned and jointly manned (as, for example, happened later for the E-3 Airborne Warning and Control System (AWACS) force).
One curious event, possibly linked to the MLF proposal, was associated with the Italian cruiser Giuseppe Garibaldi. This ship underwent a major refit in the early 1960s and emerged in 1962 as a guided-missile cruiser, its principal weapons being US-supplied Terrier anti-aircraft missiles. It was, however, also equipped with four vertical launch tubes for Polaris A-3 missiles. Dummies were successfully tested, but real missiles were never embarked, nor were live Polaris missiles ever made available to the Italian navy.{1}
The most significant feature of the MLF proposal was that the warheads would have been under NATO control, with release authorized by a NATO body to be set up for that purpose, and signalled over a NATO-owned ‘permissive link’ to the ships. The MLF never came about, but the question of NATO control over nuclear weapons led to the setting up of the Nuclear Planning Group.
There was also a proposal for a NATO-operated ballistic-missile submarine force. This was, however, quickly scotched, since the US would not reveal its nuclear-propulsion secrets and a diesel-electric submarine would have lacked the essential stealth.
On capturing German material in 1945, Soviet leaders were quick to see the potential importance of sea-borne long-range missiles, and their first attempt was to develop a towed-container system.{2} Several hundred were built in the late 1940s, but the system does not appear to have become operational and attention soon switched to launching missiles from the submarine itself. Soviet army SS-1 (NATO = ‘Scud’) missiles were converted for naval use, and a Zulu-class diesel-electric submarine was adapted to house a single missile in a tube which stretched from the keel to the top of the sail. The first successful launch took place on 16 September 1955, and this system, designated SS-N-1 by NATO, entered service in 1959; its range was a meagre 150 km. Two missiles were carried in each of five converted Zulu-class submarines (Zulu V), and may also have been carried for a short time by the newly built Golf-class submarines.
With a range of 150 km and an anti-ship role, SS-N-1 was not, however, a strategic missile; its significance here is as a ‘proof-of-concept’ system leading to strategic missiles.
Having proved the concept, the Soviet navy was quick to follow up with the more advanced SS-N-4 missile, which first went to sea in 1961. The system replaced the SS-N-1 aboard the Zulu V, but its principal platforms were the Golf-class diesel-electric and the Hotel-class nuclear (SSBN) submarines, which carried three missiles each. The SS-N-4 was a large missile for its time, with a launch weight of 13,750 kg, and carried a single 1 MT warhead, although contemporary reports credited it with a 5 MT warhead. Its range was 650 km. This was a surface-launched missile, and the submarine could travel at up to 15 knots and in conditions up to Sea State 5, although the submarine had to be on an even keel at the moment of launch.
The SS-N-5, ‘Sark’, which was deployed aboard later Golf- and Hotel-class submarines, was the first Soviet SLBM which could be launched while the submarine was submerged, the limits being a maximum depth of 60 m and surface conditions not exceeding Sea State 5. Of even greater significance was the SS-N-6, ‘Serb’, which enabled Soviet designers to switch from a few sail-mounted missiles to the same sixteen-tube, internally mounted layout as in Western SSBNs. It entered service in 1967 embarked aboard Yankee I-class SSBNs. The SS-N-6 had a relatively short range (2,400 km for Mod 1 and 3,000 km for Mods 2 and 3), which meant that the submarines had to deploy close to the Atlantic and Pacific coastlines of the USA. This made them vulnerable to US home-based anti-submarine measures, but, on the other hand, they threatened very rapid attacks on targets such as US ICBM fields – a threat which caused serious concern to US strategic planners.
The pace of Soviet naval missile development was maintained by the SS-N-8, ‘Sawfly’, which started test flights in 1971, demonstrating a range of 7,800 km. This caused considerable alarm in the West, as it exceeded, by a wide margin, the range of any other US or Soviet SLBM, and the alarm only increased when the Mod 2 version went on to demonstrate a range of 9,100 km. The long range was necessary because the SS-N-8 was designed for deployment aboard the new Delta-class submarines, which would operate from ‘SSBN bastions’ in Soviet-dominated waters. Accuracy was improved by using a stellar-inertial navigation system, although later reports suggested that this was frequently much less accurate than was believed in the West at the time.
The SS-N-17, ‘Snipe’, was embarked in one submarine only (the sole Yankee II), which was in service from 1977. It was the first Soviet navy SLBM to be powered by solid fuel, and also the first to carry a post-boost vehicle – in this case used for only a single re-entry vehicle. This system demonstrated a Soviet practice which tended to confuse Western observers, where a ‘one-off’ system was put into extended operational service – something which almost never happened in the West, as such a practice was very expensive in terms of procurement, training and logistic support. Even if, as was suggested at the time, the SS-N-17 might serve some special strategic purpose, there were inevitably protracted periods when the submarine was in refit, when the entire system was unavailable.
The SS-N-18, ‘Stingray’, which entered service in 1977, was a direct development of the SS-N-8 and was the first Soviet SLBM to carry MIRVs. It was installed in the Delta III-class SSBNs, which, owing to the missile’s greater length, had an even higher ‘hump’ abaft the sail than in the Delta I and II. The SS-N-18 continued the Soviet preference for storable-liquid propulsion.
The SS-N-20, ‘Sturgeon’, was specifically developed for use aboard the Typhoon-class SSBN and carried up to ten 100 kT MIRVs with a CEP of 500 m. This gave them a relatively low lethality (by nuclear standards), but was sufficient for the Typhoons’ wartime second-strike role (see below). Although it was the second Soviet SLBM to use solid fuel, it was the first such to be produced in quantity. The SS-N-20 entered service with the Typhoon in 1982, and was deployed only in that class of SSBN.
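The claim of ‘relatively low lethality’ can be illustrated with the standard open-literature yardstick for hard-target capability, counter-military potential K = Y^(2/3)/CEP², with yield Y in megatons and CEP in nautical miles. The SS-N-20 figures below are from the text; the 475 kT Trident II yield used for comparison is an open-literature figure, not one given here, and is assumed purely for illustration:

```python
# Counter-military potential ("lethality"): K = yield^(2/3) / CEP^2,
# yield in megatons, CEP in nautical miles -- an open-literature yardstick.
NM = 1852.0  # metres per nautical mile

def lethality(yield_mt, cep_m):
    return yield_mt ** (2 / 3) / (cep_m / NM) ** 2

# SS-N-20: 100 kT MIRVs with a CEP of 500 m (figures from the text)
ss_n_20 = lethality(0.1, 500)     # ~3: low by hard-target standards

# Trident II: CEP of 90 m (from the text); 475 kT yield is an assumed
# open-literature figure, used here only for comparison
trident_2 = lethality(0.475, 90)  # ~258: a genuine hard-target weapon

print(f"SS-N-20 K = {ss_n_20:.1f}, Trident II K = {trident_2:.0f}")
```

The two-orders-of-magnitude gap shows why a CEP of 500 m was adequate for the Typhoons’ counter-value, second-strike role but not for attacking hardened silos.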
The SS-N-23, ‘Skiff’, was the successor to the SS-N-18 and became operational with the Delta IV class in 1985. Unlike the solid-fuelled SS-N-20, it used storable-liquid propulsion, possibly because the Soviet navy had found such a system preferable to solid fuel over many years of service. The SS-N-23 was originally thought to carry ten MIRV warheads, but was later found to have only four. The US also expected that it would be retrofitted into Delta IIIs, but this did not happen.
Zulu-class diesel-electric submarines were built in the early 1950s and, after one had been used to launch an SS-N-1 missile, five were converted and were then known to NATO as Zulu V, fitted first with two SS-N-1s and later with two SS-N-4s. The launching procedure was complicated, to say the least. Each missile was fuelled and prepared while the submarine was submerged; when all was ready, the submarine surfaced and the two missiles were raised by lifts until they were clear of the sail, where they were held in position by four brackets. The missiles were then aligned with the target, the motors were started, and (presumably using nice judgement) the missiles were launched when the submarine was upright.
The Zulu class was followed by two classes of purpose-built missile submarines, but, with typical Soviet caution, one class was diesel-electric-powered, while the other had nuclear propulsion. Fifteen of the diesel-electric boats – designated Golf class by NATO – entered service between 1959 and 1962 fitted with three sail-mounted SS-N-4s, using the same surface-launch techniques as the Zulu V. Thirteen of these were later converted to take the SS-N-5, which was launched submerged. The Hotel-class nuclear-powered submarines were developed concurrently with the Golf class and had very similar missile arrangements, with three SS-N-4s mounted vertically in the sail.
An important development came in 1967, when the Yankee I-class SSBNs entered service. These were the first Soviet SSBNs with sixteen missile tubes and the first to house the tubes in the pressure hull, as with the US Polaris submarines. Thirty-four were built between 1967 and 1972. Like the earlier classes, these boats patrolled off the US coast, but the greater range of the SS-N-6 missile enabled them to threaten targets much deeper inland. One boat, the sole Yankee II, was built to test the SS-N-17 missile, and a number of Yankee Is were converted as cruise-missile carriers.
The Delta class proved to be a very successful project for the Soviet navy, and the design remained in production from the late 1960s in four major versions: Delta I (eighteen built), Delta II (four built), Delta III (fourteen built) and Delta IV (seven built). The Delta I was built around the SS-N-8 missile and made maximum use of the well-proven Yankee design, enabling the Soviet navy to get it into service quickly, although, since the SS-N-8 was considerably larger than the SS-N-6, the ‘hump’ was higher and only twelve missiles could be accommodated. The Delta II, however, was longer, to enable the number of missiles to be increased to sixteen to match Western SSBNs. The fourteen Delta IIIs were the only Soviet SSBNs to carry the SS-N-18 missile, which was even longer than SS-N-8, thus requiring an even higher ‘hump’. Last of the class were the Delta IVs, commissioned between 1985 and 1992, which carried sixteen SS-N-23 SLBMs. All four Delta classes were designed to operate in the two Soviet ‘SSBN bastions’, their probable role being to deliver the first wave in a second strike.
The first Typhoon hull was laid down in 1977, and when it was first revealed in the West in the early 1980s it caused a greater stir than almost any other weapon system in the Cold War. Western intelligence had become aware of something unusual three years previously, when First Secretary Leonid Brezhnev told President Gerald Ford that he would go ahead with Project Typhoon if the US would not agree to drop the Trident programme. Later, US reconnaissance satellites took pictures of components being assembled at Severodvinsk which were so large that it was assumed that they were for another long-awaited project, an aircraft carrier. What eventually appeared, however, was the largest submarine the world has ever seen: its submerged displacement of 25,000 tonnes far exceeds that of the US navy’s Ohio-class SSBN (16,964 tonnes), while its length of 171 m is a little greater than that of a US navy Ticonderoga-class cruiser.
The Typhoon was innovative in many ways apart from its sheer size. The outer casing conceals no fewer than five interconnected pressure hulls, and the twenty SS-N-20 missiles are mounted forward of the sail – a feature unique among SSBNs.
The Typhoon was designed to provide a platform which would spend most of its very long patrols lying on the seabed beneath the Arctic ice cap. It would sit out a nuclear exchange and surface through the ice to launch its missiles only when the adversary was taking the first steps towards post-nuclear recovery. In the original concept it was planned that each Typhoon would spend as much as a year on patrol, and one of the reasons for its huge size was the need to provide good habitability and adequate recreation possibilities for the crew. Internally, the Typhoon is exceptionally spacious, with extensive facilities including saunas and a swimming pool, all designed to ease the burden of protracted periods at sea. Six of these unique submarines were built between 1977 and 1989.
As far as is known, the sole Soviet alternative to SLBMs was a 1.5 m diameter torpedo developed in the late 1940s, which would have been launched from a single bow tube at a range of some 30–40 km from the target, usually a port. The torpedo travelled at approximately 55 km/h and, with a payload of some 3.6 tonnes, would have delivered a nuclear warhead with a yield of approximately 1 MT.[4]
In the early years of the Cold War the Soviet Union found itself in a position where US missile and airbases, some operated by the USA and others by NATO allies, directly threatened the Soviet land mass. On the other hand, the Soviet Union did not have a long-range air capability equivalent to the USAF’s Strategic Air Command with which to pose a corresponding threat to the USA, and it thus turned to missile-armed submarines as the quickest way of obtaining such a capability. The early missiles had a short range (650 km for the SS-N-4, for example), and the submarines would have been vulnerable to intensive US ASW operations. In particular, submarines armed with the surface-launched missiles (SS-N-1 and SS-N-4) would have been extremely vulnerable during their lengthy launch preparations.
At that time the primary purpose of the nuclear force was to pose an anti-city threat, and there were large numbers of important urban concentrations down the east and west coasts of the USA within the range of those missiles. When the Yankee SSBNs first started to patrol off the US Pacific and Atlantic coasts in the late 1960s, armed with their counter-value SS-N-6s, they too were aimed at large area targets, such as cities, government facilities, military bases and airfields. All these early SSBNs – including the Yankees – also added another factor to the threat to the USA, since their missiles would have had a very short time of flight (possibly between four and five minutes), compared to the thirty minutes’ warning the USA expected to receive of a trans-polar missile attack. For the Soviet navy these new types of submarine and missile had the further advantage that, apart from adding to the navy’s capability, they broadened the experience of its officers and ratings.
The Delta-I/SS-N-8 combination, however, represented a complete change in strategy, since the long range of the missiles enabled the submarines to operate in what came to be known as the ‘SSBN bastions’. There were two of these – the Barents Sea in the west and the Sea of Okhotsk in the east – where the SSBNs had plenty of room for submerged patrols, while the sea around them and the airspace above them were patrolled and defended by Soviet naval and air forces. In particular, the Soviet SSBNs were defended against attacks by US and British SSNs, one of whose primary roles was to try to destroy Soviet SSBNs before they could launch their missiles. One consequence of this strategy was that Soviet war plans allocated increasing surface and air forces to the defence of the bastions, which reduced the assets they could assign to attacking NATO naval forces elsewhere.
Delta-II/SS-N-8 and Delta-III/SS-N-18 continued this pattern, but the Delta-IV/SS-N-23 and Typhoon/SS-N-20 combinations, which were produced simultaneously in the 1980s, introduced a new dimension. They were intended for different missions, the Delta IV being intended for use early in a nuclear campaign, possibly even in the first strike, but from the Arctic region, rising through relatively thin ice to fire its missiles from the surface. Typhoon, on the other hand, was intended to submerge under the deep ice cap for a protracted period, possibly as long as a year, and then break through thicker ice in order to carry out a final strike on the USA as it attempted to recover from the effects of a nuclear war.
Both sides considered it necessary to be aware of the movements of the other side’s SSBNs, first to establish routine patterns and then to detect any variations from the routine – such as, for example, an increase in the number of SSBNs at sea, which might indicate possible preparation for war. The start points for all SSBN missions – their bases – were well known to both sides, and the most vulnerable part of an SSBN’s voyage was its departure from its base.
The bases were closely monitored by satellite and, at least in the case of the Western bases, visually as well, but there were also more covert means of surveillance. Knowledge of the submarines’ operational cycles enabled the sailing and return dates of SSBNs to be predicted with a fair degree of accuracy, and in the early days the other side’s SSNs would wait outside bases to monitor SSBN movements using their on-board sensors. This was countered by giving departing SSBNs an ASW escort of aircraft, surface ships and SSNs, which in its turn was countered by using attack submarines to place sensors on the seabed. The British, for example, built a specialized and very complex ship, Challenger, at very considerable expense, specifically to locate and remove such devices from the approaches to the nuclear-submarine base in the Clyde.[5]
Once at sea, the SSBNs would make fairly rapid, but careful, transits to their operational area, where they would then cruise at about 3 knots, varying their depth to take maximum advantage of oceanic conditions, to make detection as difficult as possible.
STRATEGIC BOMBERS EXERCISED a major influence over the first half of the Cold War, principally because in the 1940s and 1950s they were the only practicable means of delivering the very heavy atomic and hydrogen weapons over intercontinental ranges.[1] Allied to this, bombers had played a major role in the recently concluded Second World War, with the Allied bombing campaigns against Germany and Japan giving the appearance of a war-winning strategy. Indeed, the war had been brought to a close by the two USAAF (United States Army Air Force) B-29 bombers which dropped atomic bombs on Hiroshima and Nagasaki.
There were also bureaucratic reasons for the fierce advocacy of the bomber, however. The US air force finally became independent of the US army in 1947 and was extremely keen to prove itself to be the war-winning arm in the Cold War. In the UK, which found itself facing the reality that it was now only the second most powerful nation in the West, membership of the exclusive ‘nuclear club’ appeared to be the only way to retain superpower status, and, in the short term, bombers were the only feasible way of achieving that. On the Soviet side, the air force realized that it had never produced a bomber force to match those of the USA and UK, and was desperate to rectify this. Thus, from 1945 into the mid-1960s, the strategic bomber armed with nuclear weapons was the symbol of global power.[2]
The original atomic bomber – and, after fifty years, still the only aircraft to have dropped atomic bombs operationally – was the piston-engined Boeing B-29 Superfortress, which entered service in 1943 and was the USA’s frontline bomber in the last year of the war against Japan and, with Strategic Air Command (SAC), in the early years of the Cold War. The early atomic bombs were large and very heavy, and the B-29 carried two, but with a range of 5,250 km it could not reach all parts of the USSR from bases in the United States. Thus, in the early Cold War period it was regularly deployed overseas, particularly in the UK, Okinawa and Guam. The B-29 was also provided to the UK air force from 1950 to 1958 (as the Washington B.1), albeit only as a conventional bomber. The B-29 was replaced in US (but not in British) service by an upgraded and more capable version, the B-50.
The Convair B-36 was the largest bomber ever to enter service. Its design had started in 1939–40, when it appeared possible that the UK would be overrun by the Germans and there was a perceived requirement to bomb targets in western Europe from bases in North America. Once it became clear that the UK would survive, however, the B-36 was given a lower priority, and it did not enter service until 1948. It was powered by six piston and four turbojet engines, which gave it the unprecedented unrefuelled range of 13,000 km.
The first major all-jet bomber was the Boeing B-47, which entered service in 1950; by the end of the decade, 1,260 B-47s were in front-line service with twenty-eight SAC bombing wings. At that time the traditional bomber was large, slow, powered by four piston engines, manned by a crew of ten to twelve men, and defended by numerous gun turrets, but the B-47 completely changed all that. It had swept wings and tail, was as fast as contemporary fighters, was powered by six jet engines in neat pods under the wings, carried a crew of three, operated 3,000 m higher than previous types, and was defended only by a single, remotely controlled turret in the tail. The problem was its relatively short range of 5,800 km, compensated for partly by forward deployment (e.g. to the UK) and partly by the large-scale introduction of air-to-air refuelling.
The mainstay of SAC’s bomber force for most of the Cold War was the Boeing B-52, which was designed in the late 1940s, entered service in 1955, and was still in front-line service at the end of the Cold War. When it entered service the B-52 set new standards for strategic bombers in almost every respect, including the carriage of eight nuclear bombs or up to 40,000 kg of conventional bombs over ranges of up to 12,900 km. In all, 744 were built, many of which were rebuilt several times to keep the force up to date. Although the B-52 started its career as a nuclear bomber, it changed from a high-level to a low-level role, while from the mid-1980s onwards it became a missile launch platform – a less demanding role and more suited to the venerable age of the airframes.
The most dramatic bomber to serve with SAC was the tailless, delta-winged Convair B-58, with a Mach 2 speed and 8,250 km range. Air-to-air refuelling enabled the B-58 to undertake long flights (e.g. from Tokyo to London), loudly advertising its wartime capabilities. The aircraft used a unique system in which a large pod under the fuselage housed both the nuclear weapon and the fuel for the outward flight; it was dropped complete, enabling the aircraft to make a very rapid getaway before returning to base on its internal fuel supply. Although generally successful, the B-58 was very expensive to operate, even by US standards, and was retired after just ten years’ service, without replacement.
Every development after the B-52 proved to be controversial, and the FB-111 was no exception. The original concept, known as the Tactical Fighter Experimental (TFX), was for one basic design which would meet the needs of the US air force, navy and Marines, as well as selling widely to US allies. In the end only the US and Australian air forces bought it, although the UK nearly did so, after the cancellation of its own strike bomber, the TSR-2. Almost inevitably, the widely disparate requirements could never be satisfied, although a very effective low-level strategic bomber was eventually produced, with 437 of various marks entering service. FB-111s could carry a maximum of six Short-Range Attack Missiles, each with a 200 kT nuclear warhead, or six gravity nuclear bombs. The greatest significance of the FB-111 was its ability to operate at very low levels at high speeds, and aircraft based in the UK were targeted on heavily defended, large area targets in the western USSR.
One of several abortive attempts in the 1960s to produce a new strategic bomber was the XB-70 Valkyrie, a six-engined behemoth and the largest bomber ever built. The B-70 was intended to fly for long periods at Mach 3 at high altitude, but its extraordinary performance was paralleled by its enormous costs, and after a spectacular crash in which one of the two prototypes was destroyed the whole project was cancelled.
The US air force’s final Cold War bomber was the B-1, which had a very protracted gestation, its official designation of AMSA (Advanced Manned Strategic Aircraft) being misinterpreted by cynics as ‘America’s Most Studied Aircraft’. One particularly strong argument in the early 1960s against the project was simply to question the need for a new manned bomber at all, since vast sums were already being spent on ICBMs, on upgrading B-52s for the air force, and on building new SSBNs and SLBMs for the navy. Even those who supported the need for a new bomber could not agree on what sort of aircraft was needed, but in 1971 the air force placed an order for an initial quantity of four B-1As, a four-engined, swing-wing aircraft, capable of Mach 2 at high altitudes. The first prototype flew in 1974, but when the Carter administration assumed power in January 1977 it gave high priority to an antagonistic examination of the project, which led to its cancellation, virtually in its entirety, that June. When the Reagan administration took over in 1981, however, the air force proposed a new version of the aircraft, optimized for low-level, stealthy penetration, which emerged as the B-1B. An order for 100 was placed, and they entered service from 1985 onwards.
In the low-level penetration role the B-1B flew at Mach 0.85 at a height of about 60 m. It defended itself partly with very sophisticated electronic-warfare equipment and partly through ‘stealth’ design: the B-1A was claimed to have a radar cross-section (RCS) one-tenth that of the B-52, and the B-1B an RCS one-tenth that of the B-1A. Payload comprised various combinations of Air-Launched Cruise Missiles, Short-Range Attack Missiles and nuclear gravity bombs.
At the start of the Cold War the Soviet Union saw itself as threatened by the long-range bombers of the USA but without any effective means of retaliation. For some years the Soviet air force depended upon a copy of the B-29, which had been reverse-engineered (i.e. copied) from three USAAF aircraft which had landed and been interned at Soviet airbases during the Second World War. Designated the Tupolev Tu-4 (NATO = ‘Bull’), large numbers served with the Soviet air force and thirteen were passed to the Chinese air force, which also used them for a time as nuclear bombers. The Tupolev bureau designed improved and larger versions of the Tu-4, but the Soviet leadership decided not to develop piston-engined bombers any further and to concentrate on the development of turboprop and turbojet designs.
The first Soviet design to enter service, in 1954, was the Tupolev Tu-16 (NATO = ‘Badger’), which was similar in capability to the US B-47 and the British Valiant (see Chapter 12), but with only two engines. Unlike those American and British designs, however, the Tu-16 remained in service for many years, with over 2,000 being built, of which the majority were still in service at the end of the Cold War. There were numerous versions; the strategic-bomber version carried two nuclear bombs in an internal bomb bay, and 287 of these remained in service as late as 1987. The Tu-16 was capable of carrying its maximum load of two nuclear weapons over a range of some 4,800 km at a speed of 780 km/h, which enabled it to threaten targets in Europe, Alaska and Japan, but not in the continental USA.
As the US progressed to the Convair B-58, so too did the Soviet air force develop supersonic bombers: the Myasishchev M-50 (NATO = ‘Bounder’), which progressed no further than the prototype stage, and the Tupolev Tu-22 (NATO = ‘Blinder’). The Tu-22 was a large and sophisticated aircraft, with highly swept wings and two massive turbojets in the tail, giving it a dash speed of Mach 1.4 at 12,000 m. Payload was either two nuclear gravity bombs carried internally or a ‘Kitchen’ cruise missile. Combat radius at high altitude was 2,250 km, with a 400 km supersonic dash over the target (or less at low level).
The Tupolev design bureau also produced the Tu-20 (NATO = ‘Bear’) – a remarkable design, which first flew in 1955 and entered service in 1956. To the astonishment of Western observers, this aircraft combined swept wings with turboprop engines, and, despite its undoubted success, it remains the only aircraft to combine these two features. The Tu-20 had the immense range, without air-to-air refuelling, of 14,800 km with a payload of at least four nuclear bombs. It was consistently underrated by Western observers, especially in the Pentagon, despite regular non-stop flights by both military and civil versions from the USSR to Cuba. A variety of versions were still in wide-scale service at the end of the Cold War.
Strategic versions were the Bear-A bomber, carrying two nuclear gravity bombs; the missile-carrying Tu-95 Bear-B, carrying a huge AS-3 (NATO = ‘Kangaroo’) cruise missile with an 800 kT nuclear warhead and a range of some 680 km; and the Bear-H attack version, which carried four AS-15 ‘Kent’ cruise missiles, with a range of 3,000 km. All types of Bear regularly carried out training missions against NATO countries, approaching to within some 80 km of the US and British coasts. On an operational sortie against the USA, however, a Bear would have had to fly at medium and high altitudes to obtain maximum range, which would have made it vulnerable to US and allied fighters.
Contemporary with the US B-52 was the Myasishchev M-4 (NATO = ‘Bison’), a large swept-wing strategic bomber, powered by four turbojets rather than the Tu-20’s turboprops. The large numbers of M-4s feared by the West never materialized, as the aircraft’s performance – particularly its range and the size of its bomb bay – never quite met the operational requirement, and the M-4 was instead used for reconnaissance and electronic-intelligence tasks; however, it was symptomatic of the atmosphere of the time that its appearance in 1955 caused much excitement in the United States and led to a great increase in the production rate of the B-52. Like the Bear, the M-4 would have had to approach the USA at medium to high altitudes.
In 1969 US satellites began to return photographs of a new Soviet bomber on the apron at the new aircraft factory at Kazan. This turned out to be a swing-wing version of the Tupolev Tu-22, designated Tu-22M (NATO = ‘Backfire’). Subsequently, a virtually new aircraft with some external similarities to the Tu-22M appeared and was put into production as the Tu-26 (NATO = ‘Backfire-B’). (The relationship between the Tu-22M and the Tu-26 was probably similar to that between the American B-1A and B-1B.)
Three versions of the Tu-26 entered service, one of which carried nuclear weapons for use in the land-attack role. There were, however, repeated arguments between the United States and the Soviet Union over the role of this bomber, with the former stating and the latter denying that it was a strategic bomber. This became a major issue in the SALT II negotiations, and President Brezhnev eventually ordered that the aircraft’s flight-refuelling probes be removed to prove that it did not have the ability to reach the USA, although since these could have been replaced in less than thirty minutes this was only a token gesture. The Tu-26 entered service in the mid-1970s and was produced at the rate agreed under SALT II – thirty per year – with service numbers peaking at about 220.
Finally came the Tupolev Tu-160 (NATO = ‘Blackjack’), which flew for the first time in 1981; just eighteen entered service, from 1987 onwards. With a maximum take-off weight of 275,000 kg this was the heaviest combat aircraft ever built, and it carried a payload of 16,330 kg. The Tu-160 was fitted with swing wings and powered by four very powerful turbofans, giving it a range of 14,000 km at a height of 18,300 m, with a cruising speed of 850 km/h and a dash speed of Mach 1.9. It was also capable of low-level attack. Two large bomb bays could house nuclear gravity bombs, short-range missiles or air-launched cruise missiles. Ironically, this remarkable aircraft – one of the finest bombers ever built, which at long last gave the Soviet Union the strategic bombing capability it had always sought – appeared just as the Cold War came to an end.
Bomber designers and tacticians fought an unending battle against the potential defenders in an effort to ensure that the bomber would get through to its targets. In the late 1940s the major threat came from radar-directed anti-aircraft guns, which had reached a considerable degree of sophistication, and the bombers’ first response was simply to fly higher than the effective ceiling of the guns. The next threat was air-defence fighters, and here again the bombers responded by flying higher and faster – there were numerous reports of British and US reconnaissance flights over the USSR in the early 1950s in which Soviet fighters simply could not reach the altitude of the intruder.
Second World War bombers were fitted with machine-guns in a variety of positions – including the nose, the waist, above and below the fuselage, and the tail – but these were rapidly reduced to just the tail, the elimination of the others saving considerable weight and enabling the aircraft to fly higher and faster. Also in the Second World War, bombers had been escorted by fighters, particularly on the USAAF’s daylight raids; but the strategic ranges now being flown were far in excess of anything a fighter could undertake. So in the 1950s the US air force trialled the idea of the B-36 bomber taking a fighter with it, with the latter being carried on a retractable cradle from which it could be launched in mid-air to deal with enemy fighters, then being recovered for the return to base. A special miniature fighter, the McDonnell XF-85 Goblin, was tested, as was the RF-84K, a modified version of the full-size F-84 Thunderjet fighter, but, although launching proved feasible, recovery did not, and the idea was not pursued.
Electronic countermeasures (ECM) were always used, becoming increasingly sophisticated as time passed. Thus electronic jamming was used to confuse enemy radars, as was ‘chaff’ (strips of metal foil cut to the wavelength of the radar), which was dropped in large quantities, either by the bomber or by specialized escorting aircraft.
One of the earliest devices to help the bomber get through was the US air force’s ADM-20 Quail, which resembled a miniature unmanned aircraft and was dropped over enemy territory, where it flew for some 400 km, using its on-board ECM devices to confuse the enemy as to the strength, direction and probable targets of the incoming bomber force. A maximum of three Quails could be carried by a B-52, and the device was in service from 1962 to 1979.
The main emphasis then turned to stand-off missiles – a concept which, like so many others, had its genesis in Germany, where V-1 missiles had been launched from Heinkel He-111 bombers in 1944–5. The Cold War missiles carried a nuclear warhead and were designed to be launched from the bomber while still outside the range of the enemy air defences. One of the first was the US Hound Dog – a slim missile with small delta wings, and powered by a turbojet – which entered service in 1961. Two Hound Dogs, each with a 1 MT nuclear warhead, were carried beneath the wings of a B-52. The missile could be set to fly at any height between about 50 m and 16,000 m, and had a range at high level of 1,140 km, less at low level. The guidance system was capable of high- or low-level approach, with dog-legs and jinks to confuse the defence.
Next came the unhappy saga of Skybolt – an attempt to launch a ballistic missile from a bomber, which would have given longer range and, of greater importance, a much shorter flight time. The UK air force joined the project, but the Kennedy administration unilaterally cancelled it in December 1962 – greatly to the indignation of the British, who used the issue as a lever to obtain Polaris missiles and SSBN technology to replace their V-force bombers (see Chapter 12).
The Short-Range Attack Missile (SRAM), which entered service in 1972, was a rocket-propelled missile with a 170 kT nuclear warhead and a speed of Mach 3. SRAMs could fly either a semi-ballistic, a terrain-following or an ‘under-the-radar’ flight profile, the latter terminating in a pull-up and high-angle dive on to the target. The range depended on the height, and was from 56 km at low level to 170 km at high level. B-52s normally carried twenty SRAMs, while the FB-111A carried six and the B-1B twenty-four.
The Air-Launched Cruise Missile (ALCM) entered service with the US air force in 1982. This weapon had folding wings which extended when it was dropped from the carrier aircraft, and was powered by a small turbojet engine. Designed exclusively for low-level flight, the ALCM used a radar altimeter to maintain height and a map-matching process known as terrain comparison (TerCom) to give very precise navigation. The nuclear-armed version (AGM-86B) had a 200 kT warhead, a CEP of 30 m and a range of some 2,500 km. The AGM-86C was conventionally armed, with a high-explosive warhead, and this version demonstrated its effectiveness and accuracy when thirty-five were launched by B-52s during the Gulf War. B-52s could carry up to twelve and B-1Bs twenty-four.
Soviet stand-off missile development followed a similar pattern and time-scale, although in the early stages of the Cold War the missiles tended to be much larger and less effective than their US counterparts. Indeed, the first missile designed for use by strategic bombers, the AS-3 (NATO = ‘Kangaroo’), remains the largest air-launched missile to go into service, with a length of some 15 m, a wingspan of 9 m and a weight of 11,000 kg; only one could be carried by a Tu-95 (Bear-B). It did, however, have a useful range (650 km) and a high speed (Mach 2), and with an 800 kT warhead it was targeted against large area targets such as cities and ports.
The AS-15 (NATO = ‘Kent’) was much smaller and generally similar in size, performance and role to the US Tomahawk; sixteen could be carried by the Tu-95 Bear-H and twelve by the Tu-160 Blackjack. It carried a 200 kT nuclear warhead and flew at high subsonic speeds over a range of some 3,000 km at a height of 200 m, with an accuracy (CEP) of 150 m.
Manned aircraft offered certain unique advantages. First, they possessed inherent flexibility, in that they could be launched on receipt of strategic warning and then be held in the air, diverted to airfields outside the threatened area, or recalled to base. The fact that men were aboard and in control meant that targets could be changed during flight, that moving targets or even targets of opportunity could be engaged, and that orders could be altered or countermanded. Also, unlike with SSBNs, there were excellent communications between the command centres and the airfields, and between the ground and the aircraft. Finally, the bomber-delivered gravity bomb was the most accurate of any nuclear delivery system.
Among their disadvantages, however, were the bombers’ vulnerability to air defences and their absolute dependence on airfields with large runways and extensive maintenance facilities. Every airfield capable of taking strategic aircraft was known to both sides throughout the Cold War, and there can be no doubt that they were primary targets for both conventional and nuclear strikes.
At the start of the Cold War all that the strategic bomber had to do was to fly high and reasonably fast to reach its target, and even if it was picked up by enemy radar there was little that the enemy could do about it. Thus, throughout the late 1940s and most of the 1950s, bombers of Strategic Air Command could quite safely overfly almost anywhere on earth, since anti-aircraft guns could not fire high enough and contemporary fighters’ ceilings were too low to threaten them. That changed, however, in the mid/late 1950s as the performance of Soviet fighters improved, and in particular when they were fitted with airborne radar, enabling them to find and track targets in the dark and in bad weather. At first, bombers sought to counter this by flying even higher and faster, but then yet better fighters and in particular the fielding of air-defence missile systems caused different solutions to be sought.
The advantages offered by bombers over missiles depended upon the aircraft getting airborne in the first place, and in the worst-case situation of an ‘out-of-the-blue’ missile attack the bombers might only receive some seventeen minutes’ warning in the USA (less if the missiles were launched from Yankee-class SSBNs off the US coast) and four minutes in western Europe. Western bombers were therefore placed on a high-readiness status, known as Quick Reaction Alert (QRA). In the UK’s V-force, for example, this was introduced in early 1962 and involved one aircraft in each squadron being at fifteen minutes’ notice twenty-four hours per day, 365 days per year. Bomber Command stipulated that, apart from the aircraft on QRA, 30 per cent of the available aircraft (i.e. those not on major servicing or overseas) should be ready to deploy after four hours, rising to 100 per cent after twenty hours.{1}
Bomber fleets are almost always listed by total numbers, but this is misleading and nothing like that number would have reached the target in an unexpected crisis. A proportion would always have been in deep maintenance or rebuild, while others would have been simply unserviceable at the time they were required. In addition, it was not unknown for major problems to be discovered en route to the holding position which would prevent the aircraft proceeding to its target. Finally, at least some would have been either shot down or damaged by air-defence missiles, fighters and, on low-level missions, anti-aircraft artillery.
SAC’s bomber force was for a long period the most powerful single strategic military force in the world, with vast numbers of the most modern bombers deployed at bases across the continental United States. The first protracted overseas deployment was to the UK in July 1948, in response to the Berlin crisis (see Chapter 32), when three British airfields were made available to six squadrons of SAC’s B-29s, although these were not, as was reported at the time, atomic-bomb carriers (the modified aircraft known as ‘Silverplate’). What was originally described as a temporary deployment rapidly became permanent, and, when the NATO Treaty was signed, the number of SAC bases in the UK increased from three to seven, then to eight, and Silverplate B-29s arrived for the first time. Their targets at that time were in the southern USSR, their routing being over France, then along the northern Mediterranean and across the Black Sea into the Ukraine and southern Russia. Other SAC bases were in Alaska, the Azores, Guam, Libya, Morocco, Okinawa and the Philippines, although SAC aircraft also made temporary deployments to many other friendly countries.
For all their advantages, strategic bombers inevitably took many hours to reach their targets. This was not a serious drawback when they were the only means of attacking the enemy, but when ICBMs and SLBMs entered the nuclear plan, with their flight times of approximately thirty minutes, bombers were perforce relegated to the second wave. Their missions could include non-time-urgent targets or simply ‘filling in the gaps’ which malfunctioning missiles or warheads left in the missile targeting plan.
A map of planned US strategic attacks on the Soviet Union which was prepared in the early 1950s as part of Operation Dropshot shows SAC bombers attacking from bases in the continental USA, Alaska, Okinawa, Guam, Egypt, Aden and the UK. The mission was to:
initiate, as soon as possible after D-day, strategic air attacks with atomic and conventional bombs: against Soviet facilities for the assembly and delivery of weapons of mass destruction; against LOCs [lines of communication], supply bases and troop concentrations in the USSR, in satellite countries and in overrun areas, which would blunt Soviet offensives; and against petroleum, electric power and steel target systems in the USSR.{2}
One of the most significant developments was the introduction of air-to-air refuelling, which extended the bombers’ range very considerably. The Boeing KC-97E tanker entered service in the early 1950s, and the Soviets, British and French all subsequently introduced similar systems.[3] The US and French air forces used a ‘flying-boom’ system, in which an operator in the tanker steered a boom into a receptacle on the upper surface of the receiving aircraft. The British, however, used a ‘probe-and-drogue’ system, in which the tanker streamed a rubber hose from a drum and the pilot of the receiving aircraft manoeuvred until the probe on his aircraft engaged in the drogue at the end of the hose. The Soviets initially used a third method on their Tu-16 Badgers, which involved connecting a hose between the wing-tips of the two aircraft, but this was later replaced by the ‘probe-and-drogue’ method.
Various aircraft were pressed into use as tankers. US air force policy was to procure tanker versions of civil airliners: the KC-97 Stratofreighter was based on the Boeing Stratocruiser, the KC-135 Stratotanker on the Boeing 707, and the KC-10 Extender on the McDonnell Douglas DC-10. The British, who came to the tanker scene a little later than the Americans, tended to convert service or civil aircraft which had been made redundant from their existing tasks: the first two types (Valiant and Victor) were converted from bombers, the other two (VC-10 and TriStar) from airliners.
Strategic bombers did not necessarily have to return to the bases from which they had been launched, and, in order to obtain the maximum range, many nuclear missions were planned in which the aircraft would have recovered to a distant base. Thus, for example, a bomber which took off from the continental United States might have flown over the Arctic, launched its missiles or dropped its bombs on targets in the USSR, and then carried on to land in Turkey or Pakistan.
There were, however, frequent (but never confirmed) reports that at least some missions were planned as ‘one-way’, with the best that the crew could hope for being a parachute drop into enemy territory. The respected aviation author Bill Gunston, writing about the French Mirage IV bomber, states that: ‘Even with tanker support, many missions have been planned on a no-return basis…’{3} There were similar reports about RAF Canberra bombers based in Germany.
Although perhaps not typical of a nuclear attack, the bomber raids carried out by US forces on North Vietnam during Operation Linebacker II give an illustration of the ‘state of the art’ in the early 1970s. The USA made great use of air power throughout the Vietnam War, and particularly of its large force of B-52s, which were in the inventory for nuclear operations, but also had a very effective conventional capability. The North Vietnamese developed a very sophisticated air-defence system, using mostly Soviet radars, guns, missiles and aircraft, but with some Chinese equipment as well.
Operation Linebacker II took place when President Richard Nixon decided to use air power as a reprisal when the North Vietnamese abandoned the Paris peace talks on 13 December 1972. In the first raid, on 18 December, 121 B-52s attacked targets in and around Hanoi, supported by ECM aircraft, F-111s attacking North Vietnamese fighter bases, and F-4 Phantoms sowing chaff corridors. The North Vietnamese launched over 200 surface-to-air missiles (SAMs), fired much anti-aircraft ammunition, and flew fighter sorties, bringing down three B-52s and damaging two others. The following night no US aircraft were lost, but on 20 December six B-52s were downed. US tactics were then amended, reducing losses on the next four days, and there was then a thirty-six-hour ‘Christmas truce’ before 113 B-52s in seven waves struck targets in and around Hanoi, Haiphong and Thai Nguyen during a fifteen-minute period. The defences were overwhelmed, and only two B-52s were shot down. The operation continued for another three days, and then the North Vietnamese signified their willingness to return to the negotiating table.
During the eleven days of Linebacker II 729 B-52 missions were flown and 49,000 bombs (13,605 tonnes) were dropped on thirty-four discrete targets. Fifteen B-52s were lost and nine damaged, all to SAMs.
IN THE IMMEDIATE post-war years, the evidence of Hiroshima and Nagasaki was clear for all to see: the most powerful weapons in the world were the new atomic bombs, and only those who possessed them would be in the ‘top league’ of strategic powers. The corollary was that a non-nuclear power would be helpless if threatened by a nuclear power. Faced with this inescapable logic, the efforts to restrict the spread of nuclear weapons have never proved successful.
The history of the British V-bombers is worth studying in some detail, since it shows the complex issues faced by a smaller power in obtaining a viable nuclear force, and the never-ending effort and expense in keeping it operationally viable.[1]
For over a century the British had been the most powerful nation in the world, but at the end of the Second World War they found themselves in a very weak position. The UK was virtually bankrupt, owed vast sums to the United States, and faced a major problem in rebuilding both industry and society at home. To complicate matters, it still had major overseas commitments in continental Europe, as well as responsibilities around the world with its colonial territories. On top of all this was the looming Soviet threat and a continuing desire to remain in the ‘top league’.
It thus became inevitable that the British would develop their own atomic bomb, although their programme was seriously hindered for a while by the refusal of the United States to make atomic information available to the United Kingdom, under the terms of the McMahon Act. This was something which the British found especially galling as they had assisted very substantially in the US Manhattan Project. Nevertheless, after much high-level consideration, the British programme was eventually given Cabinet approval on 8 January 1947,{1} and, after brief consideration of ballistic and cruise missiles, it was concluded that the programme must be based upon delivery by long-range manned bombers.
The UK was thus faced with setting up a very large programme. First was the work on the bomb itself, which included the full range of development activity and the construction of a wide range of facilities, including testing establishments, factories to produce the weapons, and storage sites once they had been completed. Second was the delivery system, which had been established as a manned bomber, powered by the then new turbojet engines. Third came the organization in both the government and the UK air force to operate, store, maintain and, ultimately, to use the weapons, which required new headquarters, procedures and communications systems.
Despite the complexity and expense, this was all achieved: the first British atomic device was exploded on Trimouille Island, in the Monte Bello Islands off the north-west coast of Australia, on 3 October 1952, and the first atomic bombs were delivered to the air force in November 1953. Meanwhile, technology had progressed from the atomic (A-bomb) to the thermonuclear (H-bomb) weapon, and the British development programme continued, resulting in the first British thermonuclear explosion, a bomb which was dropped from a Valiant bomber over Malden Island in the Pacific on 28 April 1958.
The British programme proceeded through a series of exotically named weapons, starting with Blue Danube, the original British A-bomb, with a 20 kT yield. This was followed by Violet Club, just five of which were produced and which served very briefly in order to give the air force a ‘megaton’ capability at the earliest opportunity. Violet Club was, however, described as a ‘rather delicate’ weapon; it had to be assembled on the bomber base itself by staff from the Atomic Weapons Research Establishment, and once assembled it could be transported only between the assembly point, the storage building and the aircraft. Doubtless all concerned were very relieved when the definitive weapon, Yellow Sun Mk 1, entered service in 1960.
Britain developed its own nuclear weapons to overcome the ban on information from the USA, and it was therefore somewhat contradictory that one of the consequences of that development was that the USA then felt able to release both information and weapons to the UK. Thus, in a programme known as ‘Project E’, the USA supplied a number of nuclear weapons to meet the air force’s requirements until such time as sufficient British ‘megaton weapons’ were available; these US weapons reached the UK air force in October 1958 and remained operational until 1962. The weapons were stored on British air bases, but, by US law, had to be protected and maintained by US air-force personnel, and could be transferred to British custody only on direct orders from the US president. The British found that the US custodial arrangements created many complications, especially as the survivability of the V-bombers required them to be deployed rapidly to dispersal airfields in the face of an imminent threat – a factor which the inflexible US custodial and release procedures were not designed to cope with. There was therefore considerable relief when the British-made weapons became operational, enabling the remaining ‘Project E’ weapons for the V-force to be returned to the USA. (US weapons for British aircraft assigned to SACEUR remained until 1968, however.)
The British aimed to field a force of 144 V-bombers in the ‘Medium Bomber Force’, and, in a move which even today causes surprise, they developed four radically different designs, of which three actually entered service. During the early years the mainstay of this force was the Vickers Valiant, of which nine squadrons were formed between 1955 and 1957. The Valiant was superseded in Bomber Command by Avro Vulcans and Handley-Page Victors, although the Valiant continued in service as a bomber assigned to SACEUR, and as a strategic reconnaissance and tanker aircraft.
Having worked hard to get the V-force into service, the British then had to work as hard to keep it up to date. The aircraft were designed to meet a requirement for dropping gravity bombs from a high level, out of range of a defender’s anti-aircraft artillery; they were thus optimized for cruising and bombing at 12,000 m. The rapid development of Soviet missile defences, however, made it clear that such high-flying aircraft were extremely vulnerable, and the V-bombers had to be re-roled to a low-level approach, which, because of the resulting increased fuel consumption, had the immediate effect of restricting their radius of action, in turn reducing the number of potential targets. It also increased the loads on the airframes, as was discovered when Valiants were found to be suffering from metal fatigue, which led to the abrupt grounding of the entire fleet in December 1964 and its early retirement a month later.
Meanwhile the front line was maintained by the Vulcans and Victors. The delta-winged Vulcan became operational in March 1957, armed with Blue Danube, with twelve aircraft converted for a short time (1958–9) to carry the ‘interim megaton weapon’ (Violet Club). All Vulcans then carried Yellow Sun or Red Beard nuclear gravity bombs, until, finally, thirty-three were converted to take the Blue Steel stand-off weapon. The Victor, which featured a ‘crescent’ wing, entered service in 1958, and, like the Vulcan, carried first Blue Danube and later Yellow Sun or Red Beard (but not Violet Club); twenty-three Victors were subsequently also converted to take Blue Steel.
Blue Steel, which entered service in 1962, represented a different way to solve the problem of countering the enemy air defences. Carrying a 1 MT warhead and flying at Mach 2, it was originally designed for high-level delivery, at which it had a range of 280 km, but when converted to the low-level role this was reduced to 35–42 km.
Several attempts were made to extend the effectiveness of the V-force, the main one being purchase of the US air force’s proposed Skybolt air-launched ballistic missile. This was intended for launch from Vulcans, and would have had a range of 1,760 km if launched from 12,000 m and of 460 km if launched from 300 m. The missiles would have been fitted with British nuclear warheads, but, to the intense embarrassment of the British, the project was abruptly terminated by the USA in December 1962. In the end, the vaunted V-force was replaced by the British navy’s Polaris submarines on 30 June 1969.
When the British air-force nuclear deterrent became operational there was an obvious need for co-ordination with the Americans, so the British held discussions with the US Strategic Air Command (not, significantly, with the Joint Service Targeting Staff). At the initial meetings in 1957 it was discovered that every British target was also covered by the SAC’s list, and, in addition, that both air forces had ‘doubled up’ their intended strikes, to ensure success.{2} This was resolved by a combined plan in which the British were allocated 106 targets, including sixty-nine cities of governmental or military significance, seventeen Soviet air-force airfields with nuclear roles, and twenty elements of the Soviet air-defence system. Full tactical co-ordination was achieved by joint planning of routes, timing and ECM tactics.
For the British, however, there was a separate consideration, in that the V-force was an ‘independent deterrent’: its purpose was to be used not only in allied operations with US and NATO forces, but also, as a last resort, in national plans. As a result, once the co-ordinated plan with SAC had been devised, a second national targeting plan was prepared which listed ‘131 Soviet cities whose population exceeded 100,000; from these 131 cities, ninety-eight were chosen which lay within about 3,000 km of the UK and they were graded in order of priority according to population, administrative importance, economic importance and transportation’.{3} This British national list became operational in November 1957, and was updated in June 1958.
The British also started to develop an IRBM. Designated Blue Streak, this was a liquid-fuelled ballistic missile, with a range of 2,800 km – the same as that required of the V-bombers – and a 3 MT warhead. Blue Streak was designed to be emplaced in an underground silo, but raised to the surface for fuelling (which took twenty minutes) and launch. The project was started in 1955 but was abruptly cancelled in 1960, just before the (successful) first flight.
Sixty US-owned Thor IRBMs were deployed by the British air force between 1958 and 1963, each armed with a 1 MT warhead. These missiles were treated as part of the V-force, and their targeting was controlled by the Bomber Command Operations Centre, although since the warheads were supplied and controlled by the USA it is to be presumed that their targeting was fully integrated with US plans. Further, like the ‘Project E’ weapons supplied for use by the V-bombers, they were not available for UK national strike plans.
When President John F. Kennedy and Prime Minister Harold Macmillan met at Nassau, in the Bahamas, in December 1962, one of the subjects discussed was the replacement of the US Skybolt missile, which had just been cancelled by the USA. Prime Minister Macmillan managed to persuade the president to allow the British to participate in the Polaris programme. Since the British navy had traditionally worked very closely with that of the USA, the programme went remarkably smoothly, being completed on schedule, with HMS Resolution, the first British SSBN, commencing its first patrol in June 1968. The submarine was created by inserting a sixteen-missile plug into a Valiant-class attack-submarine design, while the Polaris A-3 missiles were designed and built in the USA but had British warheads and re-entry vehicles. The number of missiles was set at sixteen simply in order to ensure maximum commonality with the US Lafayette design.
The British originally planned to build five Resolution-class SSBNs, but, although the Labour government which took power after the 1964 general election decided to continue the programme, it reduced the overall numbers to four boats. With one boat always in refit, one working-up and one in port, the British could only guarantee to have one submarine at sea at a time, with two for some of the time; the average was 1.44.
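The fleet arithmetic above can be sketched as a quick back-of-envelope check (purely illustrative; the patrol fraction is an assumption chosen to reproduce the source’s average of 1.44, not a figure from the text):

```python
# Back-of-envelope check of Resolution-class availability (illustrative).
# With four boats, the average number at sea equals the number of boats
# multiplied by the fraction of each boat's cycle spent on patrol.

BOATS = 4
PATROL_FRACTION = 0.36  # assumed share of each boat's time on patrol

average_at_sea = BOATS * PATROL_FRACTION
print(f"Average submarines at sea: {average_at_sea:.2f}")  # 1.44
```

With one boat in refit, one working-up and one in port, guaranteeing a single boat at sea while averaging somewhat more is exactly what such a fraction implies.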
The general British philosophy of counter-value strikes was carried over from the bomber era to the submarines. The general principles were spelled out by the British admiral Sir Ian Easton in discussing the British purchase of the Trident SLBM system:
The nuclear destruction of a number – say, some dozen – of Soviet cities with a population of over 100,000 would be a traumatic blow to the Soviet Union. Among these cities might be Moscow, Leningrad, Kiev, Kharkov, Gorky and Stalingrad. The enormous loss of population and industry, the disruption of services critical to the life of the country, and the likely destruction of a proportion of the central bureaucracy of a centrally-organized state, could be expected to markedly weaken the vitality of the nation and the will of its people, and, perhaps, of its armies.{4}
The original British SLBM was the Polaris A-3, whose British ‘front end’ carried three 200 kT MRVs. These were all aimed around the same target, with a spread between impact points of some 16 km. When the advent of Soviet ABM defences around Moscow using the Galosh missile called the effectiveness of the MRVs into question, the USA offered to supply Poseidon, whose MIRVs were designed to outwit such defences. The British, however, opted for a programme of their own, Project Chevaline, which was based in outline on a US programme called Antelope. In Chevaline, two warheads and a large number of penetration aids were mounted on a manoeuvrable penetration-aid carrier, which deployed the elements of its payload on separate trajectories, all aimed at the same target; having dispensed its payload, the carrier was designed to appear and act like a warhead itself. The two warheads and three dummy warheads were all enclosed in metallic balloons, with, to confuse the defences even further, a number of empty balloons as well. As the balloons entered the atmosphere they burned away, and the six objects then began a series of planned manoeuvres designed to mislead enemy ABM defences, before all impacting in the same general area. Thus, in effect, Chevaline depended upon disguising the warheads as dummies during the space phase, and disguising dummies as warheads during re-entry. Submarines began patrols with Chevaline in 1982.
In addition to the front end, the main Polaris missile was the subject of several refurbishment programmes. Most noteworthy was the replacement of the engines, which was carried out by the manufacturer in the United States, although the technology was by then so dated that the company had to re-employ retired workers, since the skills required were no longer available.
The Resolution-class submarines were designed to last for twenty years (i.e. 1968–89), but this was subsequently extended to twenty-five years and later to thirty years. In the event this was not achieved, and towards the end of their lives they were showing distinct signs of age, with reports of cracking in the coolant circuits, while Resolution’s final refit lasted five years – two years longer than had been taken to build it in the first place. Fortunately for the UK, these problems occurred at the end of the Cold War. The Polaris force served until past the end of the Cold War, being replaced by a force of four new submarines armed with Trident II (D5) missiles in the 1990s.
Like those of other countries, the first French atomic weapons were carried by a bomber, in this case the Mirage IVA. This was created by scaling up the very successful Mirage III fighter, adding an extra seat for a navigator/systems officer, and replacing the single engine by two more powerful ones. The first prototype flew in June 1959 and the complete system became operational in 1964, the twenty-four-hour nuclear alert actually starting on 1 October 1964. In its original form, Mirage IVA was a supersonic, high-level bomber carrying the AN 11 gravity bomb, but from 1967 onwards it was converted to the low-level role, using an AN 22 retarded bomb.[4]
The original deployment consisted of thirty-six front-line aircraft, together with an integral force of Boeing KC-135F tankers which were located at nine widely separated bases, but in 1976 this was changed to thirty-two aircraft at six bases, with the KC-135Fs concentrated rather than dispersed. The number of Mirage IVAs gradually reduced, until the last squadron was disbanded in 1988.
Meanwhile, the Mirage IVP (P = Pénétration) entered service in 1986, at the same time as updated tankers (now designated KC-135FR) were being received. Eighteen Mirage IVAs were reworked to Mirage IVP standard, with improved navigation and electronic equipment to enable them to operate the ASMP (Air–Sol Moyenne Portée), a Mach 2.5 missile with a range of 300 km and a single 300 kT thermonuclear warhead, which was intended for stand-off attacks against heavily defended targets such as airfields and command-and-control centres. The Mirage IVP served through the end of the Cold War, until 1997.
The unrefuelled range of the Mirage IVA/P was insufficient for it to attack targets in the Soviet Union and return to airbases in France, and so the plan was for it to be refuelled over the Baltic or the North Sea, increasing the range from 2,500 km to some 3,800 km. This had two consequences. First, while the ability of the Mirage IVA/P to scramble was excellent, the critical factor was actually how long it took the heavily laden and much slower KC-135F/FR tankers to get to the first refuelling point. Second, the two aircraft were acutely vulnerable while they were refuelling, which limited how closely they could approach Warsaw Pact-dominated airspace. Nevertheless, at least some of the force should have got through to attack targets as far east as Moscow, although how many might have returned was open to question.
In establishing its strategic forces, France determined that they should parallel, in concept if not in size, those of the USA and the Soviet Union by consisting of a triad of land-, sea- and air-based systems. Thus, work began in the 1960s on a Sol–Sol Balistique Stratégique (surface-to-surface ballistic strategic missile – SSBS) system. Originally it was intended to deploy fifty-four missiles, but this was reduced first to twenty-seven and then to the eighteen which were actually deployed. Each missile was located in a hardened silo, with at least 3 km between silos, on the Plateau d’Albion in Haute-Provence in south-east France, which was selected for the nature of its soil, its sparse population and its height (some 1,000 m), which enhanced the missiles’ range. Each nine-missile site had its own command post (each of which could also launch the missiles at the other site). In Condition Blue all missiles could be launched within five minutes of the order being issued, while in Condition Red this was reduced to one minute.
The original missile was the SSBS S2, a two-stage missile with a range of about 3,300 km and carrying a single 120 kT warhead. This was in service from August 1971, but in 1980 it began to be replaced by the SSBS S3D (D = durci: hardened), with a range of 3,500 km and a 1 MT warhead hardened against the effects of EMP. One group of nine S2 missiles was replaced in June 1980, the second in January 1983.
It was very easy for any potential enemy to locate each of France’s eighteen SSBS silos and thus to target them precisely, which made them very vulnerable to a first strike. Indeed, some pragmatic French politicians made a virtue of necessity, postulating that an enemy would be compelled to make its intentions obvious by attacking the SSBS sites, thus giving France justification to launch its other strategic weapons.
The third leg of the French strategic triad was the ballistic-missile submarine, designated Sous-marin Nucléaire Lanceur d’Engins (SNLE) in French service, the first of which became operational in 1972. Unlike the first SSBN designs in the USA and the UK, the French SNLEs were designed as such from the start and were not created by cutting an SSN in two and inserting a missile section. The British, faced with similar problems to the French, built a force of four SSBNs of which one was guaranteed to be on patrol at all times, whereas the French built five boats, of which two were guaranteed to be at sea, and then in 1983 they increased the at-sea figure to three. Availability increased yet further when the sixth SNLE, of an improved design, joined the fleet in 1985.
The first French SLBMs, the two-stage, solid fuel MSBS M1 and M2,[5] had ranges of 2,500 km and 3,000 km respectively, and carried a single 500 kT warhead with a CEP of approximately 1,000 m. The M1 entered service in 1971 and was in service until 1974, when the M2 took its place. In 1977 the M2 was itself replaced by the M20, which carried a single 1 MT warhead, together with penetration aids and decoys, to a range of 3,000 km. The final Cold War missile was the three-stage M4, which entered service in two variants: M4A, with a range of 4,000 km, and M4B, with a range of 5,000 km. Both carried six MIRVs (six TN 70s on the M4A and six TN 71s on the M4B), and one set of sixteen M4As and three sets of sixteen M4Bs were rotated between five SSBNs.
In March 1989 (i.e. close to the end of the Cold War) the French navy completed its two-hundredth deterrent patrol. Each of these had lasted seventy days, with a twenty-one-day break between patrols for cleaning, minor servicing and crew changeover at the SNLE base on the Île Longue, off the port of Brest.
The original SNLEs were restricted by the range of the M1 and M2, and thus probably carried out their deterrent patrols in the Norwegian Sea. The increased range of the M20 enabled them to operate from the east Mediterranean and north Atlantic, while the M4 enabled them to operate from most parts of the north Atlantic, including close to the French coast, where shore-based ASW aircraft could protect them, particularly against Soviet ASW forces.
The general French position towards deterrence was given in 1964 by President de Gaulle, who stated:
But, once reaching a certain nuclear capability and as far as one’s own direct defence is concerned, the proportion of respective means has no absolute value. In fact, since a man and a country can die but once, deterrence exists as soon as one can mortally wound the potential aggressor and is fully resolved to do so, and he is well convinced of it.{5}
The declared French policy throughout the Cold War was to target Soviet cities, even when the increasing accuracy of French warheads seemed to make a counter-force strike a possibility. Thus the possibilities with the MSBS M4 were: to concentrate all six MIRVs on one target, to use three each against two targets, to use two each against three targets, or to use one on each of six targets. French officials even argued against increasing the SNLE force to fifteen, as proposed by the Gaullists, because that would have created spare capacity in the anti-city targeting, thus enabling military targets to be engaged and, in effect, ‘diluting’ the French deterrent.
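The equal-allocation options listed above are simply the divisor pairs of six, which can be enumerated in a few lines (an illustrative sketch, not anything from the original French planning documents):

```python
# Enumerate the ways six MIRVs can be divided equally among targets:
# k warheads on each of 6/k targets, for every divisor k of six.

WARHEADS = 6

options = [(WARHEADS // targets, targets)
           for targets in range(1, WARHEADS + 1)
           if WARHEADS % targets == 0]

for per_target, targets in options:
    print(f"{per_target} warhead(s) on each of {targets} target(s)")
# Yields the four options in the text: 6x1, 3x2, 2x3 and 1x6.
```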
The People’s Republic of China (PRC) was not a direct participant in the Cold War, but its nuclear forces became an increasingly important factor in both Soviet and US calculations of the strategic balance. As in France and the UK, the government of the PRC which took power in 1949 quickly decided that a nuclear armoury would be essential if the country was to achieve the world status it deserved. The correctness of this decision was supported in Chinese eyes by the various crises involving the PRC and the USA in which the latter, either implicitly or, in some cases, explicitly, threatened the use of nuclear weapons against the PRC.
The Chinese programme appears to have started in 1955, and during the following five years Soviet scientists and military officers played a major role in helping the PRC to establish a nuclear research, development and production infrastructure. This massive help – unprecedented between a nuclear and a non-nuclear power – ceased abruptly with the political rift between the two countries in 1960, which set the Chinese programme back several years. Even so, the first atomic-bomb test took place on 16 October 1964, when a 22 kT device was exploded, followed by a second in May 1965 and a third in May 1966, while in October 1966 a missile was launched carrying a nuclear warhead which successfully detonated on arrival at the Lop Nor test site. The first H-bomb was successfully tested in June 1967, less than three years after the first atomic-bomb test – a considerably shorter gap than has been achieved by any other country.
Despite the Soviet help in the 1950s, the PRC’s rapid ascent to the status of a nuclear power was a truly remarkable achievement. It must be remembered that at the time of the Communist takeover in 1949 China was, in industrial terms, a very backward country, with very little modern infrastructure, and most of the little that did exist had been damaged in either the Second World War or the Civil War. On top of that, there were no established aircraft-construction or shipbuilding industries, no electronics industry, and only a limited weapons industry. Almost everything, therefore, had to be created from nothing.[6]
The only aircraft with a strategic capability to enter service with the Chinese air force was the Hong 6, a licence-produced version of the Soviet Tupolev Tu-16 Badger. An elderly twin-jet design, it could carry a single nuclear weapon to a range of some 3,000 km. Some 120 were produced, and as of the end of the Cold War there were no known plans to produce a successor.
The Soviet Union supplied the PRC with two SS-1 missiles in 1956. These were direct copies of the German A-4, and were followed by fourteen of the improved SS-2 missile between 1957 and 1960. The latter was placed in production as the Dong Feng 1 (DF-1; ‘Dong Feng’ means ‘East Wind’) and carried a high-explosive warhead; it was primitive, but it gave the People’s Liberation Army experience of working with missiles. Meanwhile, a serious domestic research-and-development programme had been set in train, with the intention of producing a family of land-based missiles for use against US targets. The DF-2 was the first and was based on the Soviet SS-3, ‘Shyster’. The missile was road-mobile and was launched from an erector–launcher, although its liquid fuel required a long and hazardous preparation time.
Next came the DF-3, which was much larger, but still road-mobile, although the use of storable liquid propellant resulted in a much reduced preparation time. Deployment peaked at some 120 in the early 1980s but reduced to approximately seventy by the late 1980s. The DF-3 carried a 3 MT thermonuclear warhead and had a range of 2,650 km, enabling it to threaten the US bases then located in the Philippines. A number of DF-3s, reported to be thirty-six, were exported to Saudi Arabia, although it is claimed that these were armed with high-explosive and not nuclear warheads.
The series continued with DF-4, in which a DF-3 first stage was mated to a new second stage; fuel was again storable liquid. With a range of 4,500 km, the DF-4 could attack the US facilities on Guam with a 3 MT warhead, and some fifteen to twenty were deployed. The final missile in this series was the DF-5, which in its DF-5A version delivered a 5 MT warhead over a 13,000 km range.
The PRC has used a wide variety of basing methods for its ICBMs. Both the DF-2 and the DF-3 were road-mobile, but their successor, the DF-4, was originally planned to be silo-based, although once the vulnerability of such a scheme had been appreciated alternative basing methods were sought. A rail-mobile scheme was considered and tested in 1975, but it was finally decided to install part of the DF-4 force in silos and part in caves. The silos are similar to those used by the first-generation US ICBMs, with the missiles sitting in the silos atop large elevators which raise them to the surface for fuelling, final preparation and launch (as with the US Atlas missiles in the 1960s). The remaining missiles are mounted on mobile erectors located inside modified caves with blast-proof doors; the missiles would be brought out and erected before launching.
A range of further possibilities was considered for the DF-5s, including rail-mobility and imaginative schemes such as false bridge towers, narrow gorges, mock civilian houses and even barges on the Yangtze river.{6} In the end it was decided to base them in hardened underground silos among a large number of dummy silos.
The effectiveness of the Chinese basing policy was endorsed by the US Joint Chiefs of Staff, who stated that:
China views its strategic missile force as an effective nuclear deterrent because its deployment strategy of mobility, hardening, and concealment poses targeting problems for any potential aggressor. This strategy enhances the survivability of some portion of the missile force for a significant retaliatory strike.{7}
Indeed, even after the end of the Cold War it was generally admitted that not even US or Soviet satellites had been able to identify anything approaching all the Chinese missile sites; thus China had achieved what neither the USA nor the USSR had ever been able to do.
Just before their split, the Soviet Union supplied the PRC with the plans and components for a Golf-class, diesel-electric-powered, ballistic-missile submarine, which was completed at Lüda in 1964. This was originally fitted with three vertical launch tubes in the sail, as in the Soviet original, but in 1974 it was modified by removing all three launch tubes and replacing them with two of greater diameter to enable it to test Chinese SLBMs.
The submarine element of the force was the Daqingyu-class SSBN, one of which was launched in 1981 and completed in 1987. This was powered by a single pressurized-water nuclear reactor and was armed with twelve Ju Lang 1 SLBMs.
The Ju Lang 1 (‘Ju Lang’ means ‘Great Wave’), like the US navy’s Polaris, used solid fuel, rather than the liquid fuel of the land-based ICBMs. The first launch was from a submerged barge in April 1982, followed by a launch from the Golf-class trials submarine on 12 October 1982 and from a Daqingyu-class SSBN in 1988. The missile carried a single 250 kT warhead to a range of 1,700 km, and by the end of the Cold War it served in one twelve-missile SSBN.
The PRC’s initial intention was to target US military facilities in the Far East and, eventually, the USA itself. Thus the DF-2 was intended for US facilities in Japan, the DF-3 for US bases at Subic Bay and Clark Field in the Philippines, the DF-4 for the airbase on Guam island, and the DF-5 for Hawaii and the west coast of the continental USA. With the deterioration of the relationship with the USSR and, in particular, the border clashes in 1969, the PRC completely reoriented its strategic force to target the Soviet Union – the only example of such a move during the entire Cold War. Thus the DF-2 and the DF-3 were retargeted against Soviet cities in the Far East and Central Asia, while the DF-4 brought Moscow and the large cities and military–industrial facilities in the Urals and Siberia within range. The DF-5, however, could reach any target in the Soviet Union and western Europe.
One of the unusual aspects of the Chinese nuclear forces is that they have been fielded in remarkably small numbers: the maximum numbers of land-based missiles to be deployed, for example, were 120 DF-3s, twenty DF-4s and four DF-5s. The capacity undoubtedly existed to produce and deploy many more, but the Chinese leadership appears to have taken the view that its strategic needs would be adequately met by possessing a nuclear force capable of delivering an effective retaliatory strike if attacked by nuclear weapons – i.e. an assured and effective second-strike capability against population and military–industrial centres.
THE ULTIMATE THREAT that each side in the Cold War posed to the other was to the civil population, but, despite this, governments’ attitudes to protecting their own populations were rather ambivalent. In most countries, policies seemed to follow a seven- to ten-year cycle, varying from, at worst, an almost total lack of interest to, at best, a grudging and lukewarm enthusiasm. The figures speak for themselves: as a proportion of the defence budget, the USSR spent just under 1 per cent on civil defence, while the USA spent approximately 0.1 per cent, and the figure in most other countries was even less.
The difficulty was that, if civil defence was to be taken seriously, the scale of the problem was huge and the costs were enormous. Further, the measures could, of necessity, only be passive: protective shelters for the population to take refuge in, respirators and protective suits to resist biological and chemical attack, fire engines to extinguish fires, and a proper organization to make it all work. Very few countries proved willing to undertake such measures on the necessary scale, particularly if they were achievable only at the expense of cuts in the more active part of the national defence budget.
It was generally accepted that even a counter-force strike (i.e. against military targets such as ICBM silos, airfields, naval ports and nuclear command-and-control centres) would result in massive civilian casualties – the so-called ‘collateral damage’. One major study suggested that both the USA and the USSR would suffer casualties of the order of 12–27 million deaths from a counter-force strike, while the estimated deaths from a counter-value strike (i.e. against cities and industrial complexes) would be 25–66 million in the USA and 45–77 million in the USSR.[1] In both cases (i.e. counter-force and counter-value), further large numbers would have suffered longer-term radiation-caused cancers. The study report also stated that, in addition, there would have been many further deaths and injuries from indirect consequences of the nuclear attacks, such as riots, sickness, disease and starvation, whose numbers were impossible to calculate.{1}
Civil-defence measures potentially consisted of four elements: a system to detect an incoming attack and warn the civil population; a policy for the orderly evacuation of urban areas; the construction of shelters; and plans to achieve national survival after an attack. Different countries gave differing emphases to these, although towards the end of the Cold War there appeared to be a growing consensus that even the most all-embracing and expensive civil-defence policies would be of little use in the face of a heavy, all-out, counter-city attack. After all, ran one argument, what value would there be in surviving in a shelter only to emerge to a world that had been totally destroyed?
Warning systems were designed to enable the general population to seek protection against heat flash, blast and, to a certain extent, fallout. The USA and the USSR would normally have received between seventeen and thirty minutes’ warning of approaching missiles, but, like countries in western Europe, could have received as little as four minutes, which would have given very little time for the public to be alerted.
To be effective, an evacuation plan would have had to be implemented well in advance of an attack, but evacuation was a course fraught with difficulty. A general evacuation of the big cities would bring national life and much of industry to a standstill, and could not be sustained for a long period. Evacuation of a large city would be a lengthy, complicated and difficult operation, and if the missiles arrived while vast convoys of trains, buses and cars were stuck on the railroads and highways the nation concerned would actually suffer the worst of both worlds. Also, it would be difficult to predict in advance which refuge areas would be safe, and the arrival of large groups of townspeople in rural areas would cause enormous feeding, accommodation, medical, health, sanitation and morale problems. Finally, the opposing side might interpret the evacuation as a sign that the country concerned was conducting it as a prelude to launching its own strike, and use this as a pretext to strike first.
The USSR treated civil defence more seriously than most other countries, with the central headquarters being an integral part of the Ministry of Defence. One of the deputy ministers of defence was specifically responsible for civil defence, and there was a chain of command running through the council of ministers in each of the fifteen republics of the USSR down to full-time officials at town and large-factory level. The Ministry of Defence also controlled a nationwide network of civil-defence schools, where training courses were run for both military and civilian personnel.
The civilian organizations were backed up by a military Civil Defence Corps, some 50,000 strong, which was trained in basic military skills as well as civil-defence skills such as operating engineering equipment, traffic direction, and first aid. Other Ministry of Defence bodies, such as the Construction Troops, the Railway and Road Construction Troops and the Transport Organization Service, were also called upon to perform civil-defence tasks, including building shelters.
In and near the major cities, the USSR constructed hardened command posts which were designed to accommodate approximately 100,000 people in what was termed the ‘leadership category’ – which, by definition, meant Communist Party officials and military officers. There was also a shelter programme for the people, and by 1981 there were some 20,000 shelters, capable of accommodating approximately 13 million people, which amounted to approximately 10 per cent of the population of cities with over 20,000 inhabitants. The rate of building continued for several years after that, but it failed even to keep pace with the increase in population numbers and by the late 1980s the programme was moribund.
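The shelter figures quoted above can be cross-checked with simple arithmetic (an illustrative calculation using only the numbers given in the text):

```python
# Cross-check of the Soviet shelter figures given in the text.
SHELTERS = 20_000
SHELTERED_PEOPLE = 13_000_000
COVERAGE = 0.10  # ~10 per cent of the urban population concerned

people_per_shelter = SHELTERED_PEOPLE / SHELTERS        # average capacity
implied_urban_population = SHELTERED_PEOPLE / COVERAGE  # total population base

print(f"Average capacity per shelter: {people_per_shelter:.0f}")
print(f"Implied urban population covered: {implied_urban_population / 1e6:.0f} million")
```

The figures imply an average shelter holding around 650 people, against an urban population base (cities of over 20,000 inhabitants) of roughly 130 million.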
The remainder of the urban population would have had to rely on evacuation, and the occasional small-scale exercise was conducted. Whether the system would have coped with transporting, housing and feeding millions of city-dwellers eager to reach the countryside, particularly in the depths of a Russian winter, can only be a matter for conjecture.
The other Warsaw Pact countries had generally similar organizations, with a department in the national ministry of defence, usually headed by a lieutenant-general, responsible for civil defence.
In the USA the responsibility for civil defence originally lay with the Department of Defense, but it was passed to the newly established Federal Emergency Management Agency (FEMA) in 1979. FEMA plans assumed that the primary Soviet strategic mission in a first strike would be against counter-force targets, and its crisis relocation plans were based on the high degree of mobility inherent in the United States, with an extensive highway system and widespread automobile ownership. Whether the gasoline would have been available for such a mass movement, whether the huge numbers of travellers would have been amenable to control, and whether the rural areas could have accepted and sustained the numbers involved were never put to the test.
The British system made the civil authorities[2] responsible for civil defence, with the military in support. A large Civil Defence Corps was established in the early 1950s, consisting mainly of volunteers, backed up by a small cadre of full-time staff. This corps was trained and equipped for both heavy and light rescue, and operated in conjunction with the police, the fire services and the military, but it was disbanded in the early 1960s.
On several occasions during the Cold War the British government considered the idea of a large-scale shelter programme for the general population, but the idea was always rejected on the grounds of the enormous cost, a 1980s assessment putting the price at some £1,300 per head. As a result, the actual plan – known as the ‘Stay-Put Policy’ – depended upon providing a warning system and the use of TV, radio, newspapers and mailshots to tell the population to remain where they were in the event of war. The education of the population in protective measures would have been implemented only when war appeared inevitable.
Actual warning of a nuclear attack and reporting post-strike developments was the responsibility of the UK Warning and Monitoring Organization (UKWMO), which consisted of a very small number of full-time officials and some 10,000 men and women volunteers of the Royal Observer Corps.[3] The national nuclear-attack warning was disseminated using a cascade system, which originated with the detection of an incoming strike at the Ballistic Missile Early Warning System (BMEWS) station at Fylingdales, Yorkshire. BMEWS passed the warning to the UK Regional Air Operations Centre, where a UKWMO cell activated some 250 carrier control points (CCPs) located throughout the UK in major police stations. On receipt of the signal, these CCPs would, in their turn, pass the warning to some 11,000 lower-level warning points (selected industrial premises, smaller police stations, fire stations and UKWMO monitoring posts) as well as activating some 7,000 powered sirens to alert the general public. In the post-strike period, UKWMO was responsible for plotting the national fallout patterns, using input from its network of some 870 three-person monitoring posts spread across the whole of the UK.
The UK governmental organization for the aftermath of nuclear war involved setting up a network of ‘regions’, each divided into a number of sub-regions, which were themselves divided into a number of counties. There was a headquarters at each level, consisting of elected representatives, civil servants, and officials of the military, police and fire services, together with support staff. Each command level, down to and including counties, had a purpose-built, heavily protected bunker; these bunkers, together with a small number of central-government and military bunkers, comprised the total national stock of nuclear-proof accommodation. There were also extensive preparations for a post-strike, military-run, country-wide communications system, which would have provided government communications until the civil system had been restored.
Finally, there was a Home Defence College, run by the Home Office, whose task was to provide training in civil-defence duties for officials and elected members. The government also maintained stockpiles of strategic commodities such as fuel, sugar, salt and flour.
Other NATO countries’ policies were generally similar. The NATO policy was that ‘The deterrent posture of the strategic concept of flexible response can only be fully realized if military preparedness is complemented by credible civil preparedness.’{2} Civil emergency planning was essentially a national function, but NATO policy was co-ordinated by the Senior Civil Emergency Planning Committee, which met in Brussels twice a year in peacetime but would have gone into permanent session in war.
Shelter policies were debated in most countries throughout the Cold War, as it was clear that shelters would provide protection from most of the effects of a nuclear war. In West Germany, legislation ensured that all new housing included a cellar built to government specifications. The Swedish system potentially housed some 70 per cent of the population, the Swiss some 90 per cent.
Norway was one of the NATO countries to take civil defence very seriously, with a civil-defence organization run by the Ministry of Justice. The civil-defence force had a permanent staff of 500 and a mobilized strength of 70,000, with some 33,000 more in an industrial-defence organization. In 1990, with a population of approximately 4.2 million, the country had sufficient shelters to accommodate 2.6 million people (62 per cent of the population), of whom about 2.3 million would have been in private shelters built to government standards and about 276,000 in public shelters. The government also had plans to evacuate some 500,000 people from cities, towns and areas close to military installations.{3}
There was no doubt that the two measures which might have been effective were shelters and evacuation. The former would, however, have been enormously costly, while the latter would have involved major problems of control and reception arrangements. There was also the major question of whether the general population, faced by the prospect of imminent nuclear attack, would actually have been amenable either to reason or even to a degree of coercion. It is certainly arguable whether the inhabitants of major cities such as London, New York, Washington DC, Paris, Cologne, Moscow or Leningrad, knowing that their cities must be on the enemy’s target list, would have remained in their homes, and there must have been at least a possibility that a fairly large number would have fled, probably with increasing degrees of panic, to the countryside.
The general picture, however, was one of governments doing the bare minimum for the civil population and begrudging any expenditure on preparations for civil defence. Curiously, many countries did this against a background of a network of government bunkers which would have ensured that those making the ‘no evacuation/no shelter’ policies would themselves have dispersed and survived.
FOR CIVILIANS, THE media and academics, the most obvious way of assessing the nuclear balance was by simple numerical comparison of missiles, warheads, bombers, bombs, submarines and so on. Known as ‘static’ measures, these could be very misleading, but they were (and still are) all that was possible without access to the full range of facts and to computers with the processing power necessary to run the comparisons.
The raw yield of a nuclear weapon is expressed in terms of its equivalence to the energy released by high explosive (TNT). Raw yield is, however, not an accurate expression of the weapon’s effect, and to compare the total raw yields of weapons held by different nations is virtually meaningless. The first refined expression of war-fighting performance is therefore equivalent megatonnage (EMT), which reflects a weapon’s potential to damage ‘soft’ or area targets.
For yields of 200 kT and above: EMT = yield^(2/3)
For yields of less than 200 kT: EMT = yield^(1/2)
(with yield measured in megatons)
Table 14.1 shows a ‘law of diminishing returns’ operating, where, for example, a tenfold increase in raw yield from 1 MT to 10 MT results in less than a fivefold increase in EMT.
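The two-regime EMT formula can be applied directly. This short sketch (yields in megatons, so the 200 kT threshold becomes 0.2 MT) reproduces the diminishing-returns effect described above:

```python
def emt(yield_mt: float) -> float:
    """Equivalent megatonnage: yield^(2/3) at or above 200 kT, yield^(1/2) below."""
    return yield_mt ** (2 / 3) if yield_mt >= 0.2 else yield_mt ** 0.5

# A tenfold increase in raw yield, from 1 MT to 10 MT...
print(emt(1.0))   # 1.0
print(emt(10.0))  # ≈ 4.64 — less than a fivefold increase in EMT
```
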
EMT does not, however, make any allowance for accuracy (circular error probable – CEP), which is an important consideration when attacking pinpoint targets such as missile silos. This requires a more sophisticated measure to assess weapon lethality or counter-military potential (CMP):

CMP = yield^n/CEP^2

where n = 2/3 for yields of 200 kT and above
n = 4/5 for yields of less than 200 kT
and CEP is measured in nautical miles.
Thus the greater the accuracy (i.e. the smaller the CEP), the greater will be the CMP; in fact the lethality increases much more rapidly with accuracy than it does with yield.
It follows from this that the ability of a country to destroy an opponent’s missiles in their silos is the product of the CMP and the total number of warheads:
i.e. total CMP = CMP × number of warheads
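Under the definitions above (CMP = yield^n/CEP², with CEP in nautical miles), the calculation can be sketched as follows. The missile parameters are illustrative, not those of any actual system; the example also bears out the point that lethality grows much faster with accuracy than with yield:

```python
def cmp_per_warhead(yield_mt: float, cep_nm: float) -> float:
    """Counter-military potential: yield^n / CEP^2, n depending on the yield regime."""
    n = 2 / 3 if yield_mt >= 0.2 else 4 / 5
    return yield_mt ** n / cep_nm ** 2

def total_cmp(yield_mt: float, cep_nm: float, warheads: int) -> float:
    """Force-level CMP: per-warhead CMP multiplied by the number of warheads."""
    return cmp_per_warhead(yield_mt, cep_nm) * warheads

# Halving the CEP quadruples CMP; doubling the yield raises it by only ~59%.
print(cmp_per_warhead(1.0, 0.25))    # 16.0
print(cmp_per_warhead(1.0, 0.125))   # 64.0
print(cmp_per_warhead(2.0, 0.25))    # ≈ 25.4
print(total_cmp(1.0, 0.25, 10))      # 160.0
```
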
The simple fact that a missile existed was not, however, the whole story, and two further factors came into play in assessing whether or not a missile was likely to achieve its purpose: availability and reliability.
Availability was an assessment of whether or not a weapons system would be ‘ready to go’ at the moment it was required, and was a function of factors such as a missile’s having been taken ‘off-line’ for maintenance, or removed altogether either to be modernized or for the silo to be rebuilt to meet greater hardness criteria, and so on. If a missile was unavailable, then so too were its warheads, making a difference of one potential target in the case of a single-warhead missile, but of up to ten or even fourteen where the missile was fitted with multiple warheads (MRVs or MIRVs).
One factor which could have increased the number of missiles available was the use of at least some of the apparently non-operational stocks. Thus an SSBN in port undergoing a short refit between patrols might have been brought up to operational status within forty-eight hours and could either have put to sea rapidly or, in the worst case, have fired its missiles while still lying alongside. Some navies also operated trials submarines (e.g. the French Gymnote and the single Chinese Golf-class) which could have launched missiles in a wartime emergency.
Land-based test centres existed to test prototypes and, at least in the US case, were also used for routine testing of operational missiles. They thus obviously had full-scale launch facilities which could be used to generate additional missile launches in war. The US Vandenberg Air Force Base in California, for example, was capable of launching up to sixteen missiles,{1} while there were launch facilities for Chinese DF-4s at various test centres in China, for Russian missiles at similar sites in the USSR, and for four French SSBS S3D missiles at the Centre d’Essais des Landes (CEL) test facility for land-based missiles, in south-west France.
Reliability, on the other hand, was an assessment of the probability that available systems would function correctly from the moment of issuing the launch instruction to the arrival of a warhead at the target. The general approach to determining the probability of success for a complex operation was to break it down into a sequence of discrete events and to determine the probability of the successful outcome of each one, normally expressed as a percentage. All these probabilities were then multiplied together to give the overall probability of success – i.e. the probability that the missile would accomplish its mission. Thus, for example, a missile with ten discrete functions (first-stage motor fires, missile leaves silo, second-stage motor fires, second stage separates from first stage, and so on), each with a 98 per cent probability of success, has an overall reliability factor of 0.98^10 ≈ 82 per cent.
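The serial-probability calculation in the example above can be sketched as:

```python
from math import prod

def overall_reliability(stage_probabilities):
    """Multiply the success probability of each discrete event in the launch sequence."""
    return prod(stage_probabilities)

# Ten discrete functions, each with a 98 per cent chance of success:
r = overall_reliability([0.98] * 10)
print(f"{r:.0%}")  # 82%
```
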
One problem with the reliability equation was that it was impracticable to test the missiles on their operational flight paths. US test flights, for example, were in either south-easterly or south-westerly directions or due south, whereas the operational flights would have been to the north, north-west or north-east. Similarly, Soviet test flights were not in the direction required for operational flights, although the huge land mass of the USSR enabled Soviet strategic rocket forces to carry out regular missile testing using live missiles fired from their operational silos, whereas US ICBMs had to be taken to Vandenberg Air Force Base. Thus it was possible that, had they ever been launched in anger, missile guidance systems might have been influenced by some unexpected factor, such as a minor variation in the earth’s magnetic field, for which no allowance had been made. This might well not have had a significant effect on a counter-value mission, but could have caused just sufficient variation in a counter-force mission to make the difference between success and failure.
Although SSBNs frequently launched SLBMs with inert heads, there was only one known example of a fully operational SSBN/SLBM launch with a nuclear warhead. Designated ‘Frigate Bird’, this took place in the Pacific Ocean on 6 May 1962, when USS Ethan Allen (SSBN-608) launched a Polaris A-2 missile with a W47 warhead. The test, which was successful, involved a flight of 1,890 km from the submerged launch, culminating in an airburst over Christmas Island.
Numerous operational examples occurred to show that missiles were neither as available nor as reliable as may have been thought. The US navy’s Poseidon C-3 had severe reliability problems in the early 1970s, and there were several reports that missiles had failed to fire during routine tests. A significant number of Poseidons’ W68 warheads were also reported to have been defective, due to degradation of the high-explosive element, one effect of which could have been the failure of the detonator.{2} Although unconfirmed at the time, the US navy subsequently tacitly endorsed these reports by stating that the Trident I C-4 had a ‘much better reliability record’ than the Poseidon C-3.
In an incident in 1986, an unarmed Soviet navy SS-N-8 was test-fired by a Delta II SSBN in the Barents Sea and aimed at the missile test range on the Kamchatka Peninsula, but landed near the Amur river on the Sino-Soviet border, some 2,400 km from its target. Since missiles are always carefully checked and prepared for test flights, such a major deviation from the intended flight path caused considerable concern at the time.{3} Another incident, much publicized at the time, occurred in October 1986, when a Soviet Yankee-class SSBN suffered serious structural damage as a result of an explosion in one of the missile tubes, presumably involving the highly volatile liquid fuels. In another incident a Soviet Delta IV-class SSBN attempted to launch sixteen SS-N-23 missiles one after another in the White Sea on 7 December 1989. The third missile failed very soon after launch and fell back on to the submarine (which presumably was on the surface), and thirteen men were injured.
According to Russian sources, the SS-N-4 SLBM was in service between 1961 and 1973, and during that time 311 test launches were made. Of those launches, there were 38 missile failures, 38 failures due to other known causes, and 10 due to unknown causes. In other words, only 72.3 per cent (225) of the missiles were successful, and that was without a live warhead, which would have introduced yet another element of uncertainty.{4}
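The SS-N-4 figures can be checked directly: the observed success rate is simply the number of successful launches over the total:

```python
launches = 311
failures = 38 + 38 + 10   # missile failures, other known causes, unknown causes
successes = launches - failures
print(successes, f"{successes / launches:.1%}")  # 225 72.3%
```
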
Single-shot kill probability (SSKP) is an expression of the probability that one warhead of specified reliability will destroy a hardened target. Thus it can be calculated that a single warhead of 0.5 MT yield, a CEP of 260 m and a reliability of 85 per cent, attacking a target capable of withstanding an overpressure of 146 kgf/cm², would have an SSKP of 54 per cent. In other words, the warhead has a marginally better than ‘evens’ chance of success.
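The quoted SSKP figure can be reproduced with the standard circular-normal damage approximation, SSKP = R × (1 − 0.5^((LR/CEP)²)), where LR is the warhead’s lethal radius against a target of the given hardness. The lethal radius used here (about 313 m, back-calculated from the quoted figures) is illustrative, since the text does not state it:

```python
def sskp(lethal_radius_m: float, cep_m: float, reliability: float) -> float:
    """Single-shot kill probability under the circular-normal damage approximation."""
    return reliability * (1 - 0.5 ** ((lethal_radius_m / cep_m) ** 2))

# 0.5 MT warhead, CEP 260 m, 85 per cent reliability, assumed lethal radius 313 m:
print(f"{sskp(313, 260, 0.85):.0%}")  # 54%
```
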
As explained earlier, a true assessment of the nuclear balance would require a detailed analysis of a vast array of variable factors, and would need to include allowances for factors such as availability, reliability, differing practices in SSBN sea-time, the weather at both launch sites and targets, and so on. It would also need to take account of each side’s targeting plans, including how many warheads might be allocated to the first and second strikes, how many might be classified as ‘withholds’, how many might be retained as strategic and ‘nth-country reserves’, and so on. Determining such a balance would also require individual missile systems to be split between counter-value and counter-force targets.
With so many factors to be taken into account, a powerful computer would be needed to calculate the final result, which would need to be accompanied by long and detailed explanations. However, Tables 14.2 to 14.4 give a general picture by taking three ‘snapshots’: in 1970, when the missile race was really under way; in 1990, at the end of the Cold War; and in 1980, halfway between the two.*
Table 14.2 shows the balance in numbers of missiles. The number of ICBMs shows the total number of land-based missiles, which (ignoring dummy silos) was also the majority of targets the enemy would have needed to destroy in a pre-emptive strike (the others being a relatively small number of command-and-control sites). In ICBMs, both sides showed a fairly steady figure, the 1990 reduction in US ICBMs being due to the retirement without replacement of the Titan II. In SLBM numbers, the USA had already peaked in numbers by 1970 and retained a reasonably steady state thereafter, while the USSR grew rapidly from 35 per cent of the US figure in 1970 to 163 per cent a decade later, as the many Delta-class SSBNs entered service.
Table 14.3 shows the number of warheads on the missiles, and is an indication (ignoring dummy silos) of how many targets could have been attacked, bearing in mind that pinpoint targets would probably have been targeted by two warheads. When compared with Table 14.2 it shows that, whereas there was little prospect of a successful pre-emptive strike in 1970, both sides had obtained such a capability by 1980. The table also shows that the increase in numbers of warheads on the US side was greatest on SLBMs, while in the USSR the growth was greater on ICBMs, and that by 1990 the Soviet Union possessed a marked advantage in overall warhead numbers.
Table 14.4 takes the balance a stage further and compares the CMP of the two forces. From this it is clear that the power of the missile forces of both sides increased greatly during the period: by a factor of twenty for the USA and of forty-two for the USSR. The table also shows that the Soviet counter-military potential was concentrated in its ICBM force, leaving the counter-value role to the submarine-based missiles. It is also clear, however, that neither side had an advantage over the other.