

IMPERIALISMS, OLD AND NEW

What was the name of that river which Julius Caesar crossed? Was it not called the Rubicon? Yesterday, Mr. Bush may have crossed the very same river.

ROBERT FISK, Middle East correspondent for the London Independent, reporting from the United Nations, September 13, 2002


American leaders now like to compare themselves to imperial Romans, even though they do not know much Roman history. The main lesson for the United States ought to be how the Roman Republic evolved into an empire, in the process destroying its system of elections for its two consuls (its chief executives), rendering the Roman senate impotent, ending forever the occasional popular assemblies and legislative comitia that were at the heart of republican life, and ushering in permanent military dictatorship.

Much like the United States today, the Roman Republic had slowly acquired an empire through military conquest. By the first century BC, it dominated all of Gaul, most of Iberia, the coast of North Africa, Greece, the Balkans, and parts of Asia Minor. As the Canadian essayist Manuel Miles observes, “There is no historical law prohibiting a republic from possessing an empire. There is a trend toward autocratic takeovers of imperial republics, however, especially after they reach a certain stage of growth. Even now this process is underway in the USA—the President, like the first Roman emperors, decides when and where to wage war, and his Senate rubber stamps and extorts the funding for his imperial adventures, just as the original came to do in the time of Caesar and Octavian.”1

The Roman senate, much like Congress, worked well enough for two centuries. But by the first century BC, the size of the empire and the armies its maintenance required overwhelmed the capacities of the senate and the consuls. In 49 BC, Julius Caesar violated Roman law by bringing his army across the small stream called the Rubicon in northern Italy and plunged the country into civil war among the imperators, the generals of Rome’s large armies. After the Battle of Actium in 31 BC, Octavian emerged as the most powerful of the generals and assumed dictatorial powers in order to end the military civil wars. In 27 BC, the senate passed most of its power on to him, giving him the name of Augustus. As the first emperor, he reigned from 27 BC to AD 14. Within a few decades, the Roman senate had grown to over a thousand members, while being reduced to little more than a club of the old aristocratic and military families. Rome ruled all of the known world except for China, but in the process Roman democracy was supplanted by dictatorship, and eventually the Romans were overwhelmed by the world of enemies they had created. To the very end Roman armies pretended to speak for “the senate and the Roman people” and paraded under banners emblazoned with the Latin initials SPQR (Senatus Populusque Romanus). But the days when the senate mattered were long past; empire had become an end in itself.

As the Roman republic was disintegrating, not all of its citizens quietly acquiesced in the loss of their democratic rights. In Shakespeare’s famous version of the politics of those days, one citizen, Cassius, asks Brutus, “Upon what meat doth this our Caesar feed that he is grown so great?” In a sense this book is an attempt to answer that question in the context of an American imperium. To start, consider just one proposition on which today’s imperialists—poisoned by false pride and self-glorifying assumptions—have fattened. I am referring to the dangerously misleading conclusion that the United States caused the Soviet Union’s collapse and therefore “won” the Cold War. The mind-set that produced this conclusion offers clues to how the United States, like ancient Rome, embarked on the path toward militarism and empire.

Among American triumphalists, devoted fans of Ronald Reagan, and old star-wars enthusiasts, there is a myth that President Reagan’s sponsorship of what he called the strategic defense initiative (SDI)—a never-completed, never-deployable, largely space-based defense against intercontinental ballistic missiles—set off a competition with the USSR over defense spending that ultimately caused the latter’s downfall. The triumphalists allege that even though Reagan’s star-wars proposals never came within light-years of working, they forced the USSR into an arms race that broke its back economically. Reagan’s “evil empire” speech, his support of anti-Soviet guerrillas in Afghanistan, and his illegal support of “counterrevolutionaries” (contras) against the elected government of Nicaragua—so this argument goes—created a climate in which SDI was decisive. Thus, despite an almost unbroken record of mistaken assessments and misplaced advice about the strength and problems of the USSR during its final decade, Robert Gates, George H. W. Bush’s CIA director, still concludes in his memoir, “In my view, it was the broad resurgence of the West—symbolized by SDI—that convinced even some of the conservative members of the Soviet leadership that major internal changes were needed in the USSR. That decision, once made, set the stage for the dramatic events inside the Soviet Union of the next several years.”2

Yet according to Anatoly Dobrynin, the long-serving Soviet ambassador to Washington, as early as February 1986 Soviet leader Mikhail Gorbachev had concluded that “the United States is counting on our readiness to build the same kind of costly system [as SDI], hoping meanwhile that they will win this race using their technological superiority. But our scientists tell me that if we want to destroy or neutralize the American SDI system, we only would have to spend 10 percent of what the Americans plan to spend.”3 Among Gorbachev’s scientific advisers, none was more important than Andrei Sakharov, who participated in the creation of the Soviet Union’s hydrogen bomb and later became a brave critic of his country’s human rights record and the winner of the 1975 Nobel Peace Prize.

On December 23, 1986, Gorbachev ordered Sakharov and his wife, Yelena Bonner, released from internal exile in the city of Gorky, where they had been sent by the Politburo for criticizing the Soviet invasion of Afghanistan. The freeing of Sakharov was one of Gorbachev’s earliest and most important acts of glasnost, or “openness,” which ultimately led to the unraveling of the Soviet system, but he also wanted Sakharov’s advice on SDI. Given in secret meetings in Moscow in February 1987, Sakharov’s analysis was unequivocal: “An SDI system would never be militarily effective against a well-armed opponent; rather, it would be a kind of ‘Maginot line in space’—expensive and vulnerable to counter-measures. It would not serve as a population defense, or as a shield behind which a first strike could be launched, because it could be easily defeated. Possibly SDI proponents in the United States were counting on an accelerated arms race to ruin the Soviet economy, but if so they were mistaken, for the development of counter-measures would not be expensive.”4

Rather than hiking investments in new weaponry, the Soviets actually were in the process of cutting back. In the mid-1980s, revised CIA estimates of Soviet spending on weapons procurement indicated that the actual rate of increase had been a measly 1.3 percent a year, not the 4 to 5 percent the CIA had previously reported to the president, and that Soviet appropriations for offensive strategic weapons had actually declined by 40 percent. Such estimates were ideologically unacceptable to Secretary of Defense Caspar Weinberger, who sent them back to the CIA. There Director Gates “ordered SOVA [the CIA’s Office of Soviet Analysis] to send Weinberger a memo focusing on Soviet economic strengths.”5

In fact, U.S. intelligence agencies did not see the crisis of the Soviet Union coming and never gave our political leaders an accurate assessment of the initiatives undertaken by Mikhail Gorbachev. On August 19, 1991, the USSR finally succumbed to a domestic coup d’état thanks to an internal process of delegitimization that Gorbachev himself had initiated. The United States had little or nothing to do with it.

While Gorbachev was attempting internal perestroika (economic restructuring) and glasnost (the end of secrecy and the release of political prisoners), the defining event that made clear to the world how far the process of reform had gone occurred on the night of November 9, 1989. The Berlin Wall fell. Here again, the crucial acts were not American but West German. In his scholarly dissection (commissioned by the German Bundestag) of what he calls “one of the biggest paternity disputes ever,” Hans-Hermann Hertle explains: “Following a secret agreement with Bonn, they [the Hungarians] opened the border to Austria for GDR [German Democratic Republic, i.e., East German] citizens on 10 September [1989]. In return, the Federal Republic gave Hungary credit in the amount of DM 500 million and promised to make up losses that Hungary might suffer from retaliatory measures by the GDR. Tens of thousands of East Germans traveled to the Federal Republic via Austria in the days and weeks that followed. The GDR experienced its largest wave of departures since the construction of the Berlin Wall in 1961. This mass exodus demonstrated the weakness of the SED [Socialist Unity Party of Germany, i.e., the Communist Party] leadership on this issue and undermined the regime’s authority in an unprecedented manner.”6

It is a commonplace in the teaching of international relations that empires do not give up their dominions voluntarily. The USSR was a rare exception to this generalization. Inspired by Gorbachev’s idealism and a desire to become members of the “common European house” and to gain international recognition as a “normal” state, some reformers in the Soviet elite believed that rapprochement with Western European countries could help Russia resume its stalled process of modernization. As the Russian historian Vladislav Zubok has observed, “At certain points, ... Soviet political ties to France and West Germany became more important and perhaps warmer on a personal level than relations with some members of the Warsaw Pact.”7 Much like the Hungarian prime minister Imre Nagy during the 1956 anti-Soviet uprising in Budapest and Czechoslovak Communist Party first secretary Alexander Dubcek during the 1968 Prague revolt, Gorbachev had turned against the imperial-revolutionary conception of the Soviet Union inherited from Stalin. He willingly gave up the Soviet empire in Eastern Europe as the price for reinvigorating the Soviet Union’s economic system.

The American leadership did not have either the information or the imagination to grasp what was happening. Totally mesmerized by academic “realist” thought, it missed one of the grandest developments of modern history and drew almost totally wrong conclusions from it. At one point after the Berlin Wall had come down, the U.S. ambassador to the Soviet Union actually suggested that the Soviets might have to intervene militarily in Eastern Europe to preserve the region’s “stability.”8

After some hesitation the American government and military decided that, although the Cold War in Europe had indeed ended, they would not allow the equally virulent cold wars in East Asia and Latin America to come to an end.9 Instead of the Soviet Union, the “menace” of China, Fidel Castro, drug lords, “instability,” and more recently, terrorism, weapons of mass destruction, and the “axis of evil”—Iran, Iraq, and North Korea—would have to do as new enemies. In the meantime, the United States did its best to shore up old Cold War structures and alliances, even without the Soviet threat, expanding the NATO alliance into Eastern Europe and using it to attack Serbia, a former Communist country. The Pentagon, in turn, demanded that military spending be maintained at essentially Cold War levels and sought a new, longer-term rationale for its global activities.

Slow as Washington was to catch on to what was happening in the Soviet Union—as late as March 1989 senior figures on the National Security Council were warning against “overestimating Soviet weakness” and the dangers of “Gorbymania”—the leadership moved with remarkable speed to ensure that the collapse would not affect the Pentagon’s budget or our “strategic position” on the globe we had garrisoned in the name of anti-Communism. Bare moments after the Berlin Wall went down and even as the Soviet Union was unraveling, Pentagon chief Dick Cheney urged increased military spending. Describing the new defense budget in January 1990, Michael R. Gordon, military correspondent of the New York Times, reported that “in Cheney’s view, which is shared by President [George H. W.] Bush, the United States will continue to need a large Navy [and interventionist forces generally] to deal with brushfire conflicts and threats to American interests in places like Latin America and Asia.” Two months later, when the White House unveiled a new National Security Strategy before Congress, it described the Third World as a likely focus of conflict: “In a new era, we foresee that our military power will remain an essential underpinning of the global balance, but less prominently and in different ways. We see that the more likely demands for the use of our military forces may not involve the Soviet Union and may be in the Third World, where new capabilities and approaches may be required.”10 It should be noted that the Pentagon and the White House presented these military plans well before Iraq’s incursion into Kuwait and the ensuing crisis that resulted in the Persian Gulf War of 1991.

The National Security Strategy of 1990 also foresaw the country’s needing “to reinforce our units forward deployed or to project power into areas where we have no permanent presence,” particularly in the Middle East, because of “the free world’s reliance on energy supplies from this pivotal region.” The United States would also need to be prepared for “low-intensity conflict” involving “lower-order threats like terrorism, subversion, insurgency, and drug trafficking [that] are menacing the United States, its citizenry, and its interests in new ways.... Low-intensity conflict involves the struggle of competing principles and ideologies below the level of conventional war.” Our military forces, it continued, “must be capable of dealing effectively with the full range of threats, including insurgency and terrorism.” Through such self-fulfilling prophecies, the military establishment sought to confront the end of the Cold War by embarking on a grandiose new project to police the world.

At the same time, American ideologists managed to convince the public that the demise of the Soviet Union was evidence of a great American victory. This triumphalism, in turn, generated a subtle shift in the stance the United States had maintained throughout the Cold War. The United States no longer portrayed itself as a defensive power, seeking only to ensure its security and that of allied nations in the face of potential Soviet or Communist aggression. Without a superpower enemy, the first hints of the openly—proudly—imperial role it would take on in the new century emerged, as the Pentagon, rather than declaring victory and demobilizing, began to test the waters in a variety of new capacities, some of which would be expanded and some discarded in the years to come.

The United States now assumed, slowly and by degrees, responsibilities for humanitarian intervention, the spread of American-style “market democracy” via globalization, open warfare against Latin American drug cartels and indigenous political reform movements, the quarantining of “rogue states,” leadership of an endless “war on terrorism,” and finally “preventive” intervention against any potentially unfriendly power anywhere that threatened to possess the kinds of weapons of mass destruction the United States first developed and still wished to monopolize. Within a decade of the end of the Cold War in Europe, the United States’s position in the world underwent a fundamental transformation. In the view of William A. Galston, deputy assistant to President Bill Clinton for domestic policy from 1993 to 1995, “Rather than continuing to serve as first among equals in the postwar international system, the United States would act as a law unto itself, creating new rules of international engagement without agreement by other nations.”11 The United States no longer seemed to care how many enemies it made.

The period between the fall of the Berlin Wall and the first anniversary of the 9/11 attacks in the United States encompasses thirteen years and three presidents. From 1989 to 2002, there was a revolution in America’s relations with the rest of the world. At the beginning of that period, the conduct of foreign policy was still largely a civilian operation, carried out by men and women steeped in diplomacy, accustomed to defending American actions in terms of international law, and based on longstanding alliances with other democratic nations. There had always been a military component to the traditional conduct of foreign policy, and men from a military background often played prominent roles as civilian statesmen. From time to time militarists went well beyond what the public expected of them—as in the secret support for and illegal financing of right-wing armies in Central America during the Reagan administration. But, in general, a balance was maintained in favor of constitutional restraints on the armed forces and their use. By 2002, all this had changed. The United States no longer had a “foreign policy.” Instead it had a military empire.

With the end of the Cold War the huge Eurasian territory between the Balkans and Pakistan, formerly off-limits as the sphere of influence of the Soviet Union, opened up for imperial expansion. America quickly deployed military forces into this critical region and prepared to fight wars with regimes that stood in the way. During this period of little more than a decade, a vast complex of interests, commitments, and projects was woven together until a new political culture paralleling civil society came into existence. This complex, which I am calling an empire, has a definite—even defining—physical geography, much of it acquired during World War II and the Cold War but not recognized for what it was because the rationale of containing the Soviet Union disguised it. It consists of permanent naval bases, military airfields, army garrisons, espionage listening posts, and strategic enclaves on every continent of the globe.

Of course, military bases or colonies have been common features of imperial regimes since ancient times, but in the past they were always there to secure or defend conquered territories and to exploit them economically. The United States began like a traditional empire. We occupied and colonized the North American continent and established military outposts, called forts—Fort Apache, Fort Leavenworth, Sutter’s Fort, Fort Sam Houston, Fort Laramie, Fort Osage—from coast to coast. But in more modern times, unlike many other empires, we did not annex territories at all. Instead we took (or sometimes merely leased) exclusive military zones within territories, creating not an empire of colonies but an empire of bases. These bases, linked through a chain of command and supervised by the Pentagon without any significant civilian oversight, were tied into our developing military-industrial complex and deeply affected the surrounding indigenous cultures, almost invariably for the worse. They have helped turn us into a new kind of military empire—a consumerist Sparta, a warrior culture that flaunts the air-conditioned housing, movie theaters, supermarkets, golf courses, and swimming pools of its legionnaires. Another crucial characteristic that distinguishes the American empire from empires of the past is that the bases are not needed to fight wars but are instead pure manifestations of militarism and imperialism.

The distinction between the military and militarism is crucial. By military I mean all the activities, qualities, and institutions required by a nation to fight a war in its defense. A military should be concerned with ensuring national independence, a sine qua non for the maintenance of personal freedom. But having a military by no means has to lead to militarism, the phenomenon by which a nation’s armed services come to put their institutional preservation ahead of achieving national security or even a commitment to the integrity of the governmental structure of which they are a part. As the great historian of militarism Alfred Vagts comments, “The standing army in peacetime is the greatest of all militaristic institutions.”12 Moreover, when a military is transformed into an institution of militarism, it naturally begins to displace all other institutions within a government devoted to conducting relations with other nations. One sign of the advent of militarism is the assumption by a nation’s armed forces of numerous tasks that should be reserved for civilians.

Overseas bases, of which the Defense Department acknowledges some 725, come within the scope of the peacetime standing army and constitute a permanent claim on the nation’s resources while being almost invariably inadequate for actually fighting a war. The great enclaves of bases, such as those in Okinawa or Germany, have not been involved in combat since World War II and are not really intended to contribute to war-fighting capabilities. They are the headquarters for our proconsuls, visible manifestations of our imperial reach. During the second Iraq war, for example, the United States did not use its Persian Gulf and Central Asian bases except to launch bombers against Iraqi cities—an activity more akin to a training exercise, given American air superiority, than to anything that might be called combat. Virtually all of the actual fighting forces came from the “homeland”—the Third Infantry Division from Fort Stewart, Georgia; the Fourth Infantry Division from Fort Hood, Texas; the First Marine Division from Camp Pendleton, California; and the 101st Airborne Division from Fort Campbell, Kentucky. The bases in Qatar, Saudi Arabia, Bahrain, the United Arab Emirates, Oman, and elsewhere served primarily as high-ranking officers’ watering spots and comfortable sites for their remote-control command posts. The American network of bases is a sign not of military preparedness but of militarism, the inescapable companion of imperialism.

A major problem for that network is financing. Most empires of the past paid for themselves or at least attempted to do so. The Spanish, Dutch, and British Empires all enriched their homelands through colonial exploitation. Not so the empire of bases. Militarized and unilateral, it tends to subvert commerce and globalization because it weakens international law and the norms of reciprocity on which trade depends. It thereby adds enormously to the indirect economic burdens of our imperium, a subject to which I shall return later in this book. Occasionally, our empire of bases makes money because, like the gangsters of the 1930s who forced the people and businesses under their sway to pay protection money, the United States pressures foreign governments to pay for its imperial projects. During the first Iraq war, the United States extracted $13 billion from the Japanese and later boasted that it had even made a small net profit from the conflict. But the more open and assertive we become in our claims to dominate the world, the less appealing the old “mutual security” schemes become for other rich but militarily impotent countries. A contraction of trade, capital transfers, and direct subsidies will undermine the U.S. empire of bases much faster than was the case for the older, self-financing empires.

Life in our empire is in certain ways reminiscent of the British Raj, with its military rituals, racism, rivalries, snobbery, and class structure. Once on their bases, America’s modern proconsuls and their sous-warriors never have to mix with either “natives” or American civilians. Just as they did for young nineteenth-century Englishmen and Frenchmen, these military city-states teach American youths arrogance and racism, instilling in them the basic ingredients of racial superiority. The base amenities include ever-expanding military equivalents of Disneyland and Club Med reserved for the exclusive use of active-duty men and women, together with housing, athletic facilities, churches, and schools provided at no cost or at low fixed prices. These installations form a more or less secret global network many parts of which once may have had temporary strategic uses but have long since evolved into permanent outposts. All of this has come about informally and, at least as far as the broad public is concerned, unintentionally. If empire is mentioned at all, it is in terms of American soldiers liberating Afghan women from Islamic fundamentalists, or helping victims of a natural disaster in the Philippines, or protecting Bosnians, Kosovars, or Iraqi Kurds (but not Rwandans, Turkish Kurds, or Palestinians) from campaigns of “ethnic cleansing.”

Whatever the original reason the United States entered a country and set up a base, it remains there for imperial reasons—regional and global hegemony, denial of the territory to rivals, providing access for American companies, maintenance of “stability” or “credibility” as a military force, and simple inertia. For some people our bases validate the American way of life and our “victory” in the Cold War. Whether the United States can afford to be everywhere forever is not considered an appropriate subject for national discussion; nor is it, in the propagandistic atmosphere that has enveloped the country in the new millennium, appropriate to dwell on what empires cost or how they end.

The new empire is not just a physical entity. It is also a cherished object of analysis and adulation by a new army of self-designated “strategic thinkers” working in modern patriotic monasteries called think tanks. It is the focus of interest groups both old and new—such as those concerned with the supply and price of oil and those who profit from constructing and maintaining military garrisons in unlikely places. There are so many interests other than those of the military officials who live off the empire that its existence is distinctly overdetermined—so much so that it is hard to imagine the United States ever voluntarily getting out of the empire business. In addition to its military and their families, the empire supports the military-industrial complex, university research and development centers, petroleum refiners and distributors, innumerable foreign officer corps whom it has trained, manufacturers of sport utility vehicles and small-arms ammunition, multinational corporations and the cheap labor they use to make their products, investment banks, hedge funds and speculators of all varieties, and advocates of “globalization,” meaning theorists who want to force all nations to open themselves up to American exploitation and American-style capitalism. The empire’s values and institutions include military machismo, sexual orthodoxy, socialized medicine for the chosen few, cradle-to-grave security, low pay, stressful family relationships (including the murder of spouses), political conservatism, and an endless harping on behaving like a warrior even though many of the wars fought in the last decade or more have borne less resemblance to traditional physical combat than to arcade computer games.

Among the thousands of pages of propaganda distributed by the Pentagon to celebrate its victory over the Taliban in Afghanistan was a story about a female air force captain sitting at a command post in Pakistan monitoring an unmanned Predator drone over Afghanistan. Suddenly, she spotted a group of Afghan men milling around a Toyota SUV and concluded they were “terrorists.” She ordered in a navy plane armed with a conventional bomb to which a device had been attached that, via a satellite-based global positioning system and inertial guidance, was programmed to hit within thirty to forty-five feet of its target. As the navy pilot dropped his bomb, she could not help crying out to the unsuspecting figures on her computer screen, “Run. Get out of the way! You are going to be killed!” A few seconds later they were indeed dead. Perhaps this story was distributed to demonstrate the innate humanity of our new breed of warriors even though they may fight from hundreds of miles away or from 35,000 feet in stealth bombers. But M. Franklin Rose, a specialist on robotics working for the army, does not think such twinges of empathy will last very long: “So many of these young soldiers grew up on video games and computers, they grew up trusting machines.”13

Death as antiseptic as in any video game is now de rigueur in the operations of our high-tech armed forces—and is commonly unrestrained by international or domestic law of any kind. For example, on November 4, 2002, the government acknowledged that it had initiated a strike in Yemen similar to the one described above in Afghanistan. A Predator unmanned surveillance aircraft, in this case monitored by CIA operatives based at a French military facility in Djibouti and at CIA headquarters in Virginia, fired a missile that destroyed an SUV said to contain a senior al-Qaeda terrorist.14 Not only was the vehicle so completely vaporized that this claim cannot be verified but the nature of the strike itself—coming after the Yemeni government reportedly refused to act on information passed to it by the CIA—must give pause to other governments. Why could a Hellfire missile released from a remote-controlled drone not destroy reputed terrorists in the Philippines, in Singapore, or in Germany, whatever a local government might think or wish?

During the post-Cold War period, a new set of managers took the helm of the military establishment. They were more interested than their predecessors in warfare employing weapons launched from great heights, or from over the horizon, or from outer space. They were determined to avoid casualties among their own ranks, both to make service in the volunteer armed forces more attractive and not to alarm the citizens who supply the manpower and pay for the military’s activities and lifestyle. This mode of warfare continues the World War II practice of bombing residential areas and cannot avoid, despite the touting of “precision” weaponry, the indiscriminate killing of nonbelligerents and innocent bystanders. There is nothing new about this. The Romans killed or enslaved their captives, plundered and destroyed their enemies’ cities, and slaughtered entire populations without distinguishing between combatants and noncombatants. Twentieth-century “total war,” associated above all with air power, was known in medieval times as “Roman war.” In general, writes Sven Lindqvist in his history of bombing, “the laws of war protect enemies of the same race, class, and culture. The laws of war leave the foreign and the alien without protection.”15 Hiroshima and Nagasaki exemplify the latter. The novel aspect today is our hypocrisy about our “precision-guided” munitions. American propaganda resolutely ignores the carnage our high-tech military imposes on civilian populations, declaring that our intentions are by definition good and that such killings and maimings are merely “collateral damage.” Such obfuscation is intrinsic to the world of imperialism and its handmaiden, militarism.

Imperialism is hard to define but easily recognized. In the words of the early-twentieth-century English political economist John Hobson, imperialists are “parasites upon patriotism.”16 They are the people who anticipate “profitable businesses and lucrative employment” in the course of creating and exploiting an empire. They hold military and civilian posts in the imperial power, trade with the dominated peoples on structurally favorable terms, manufacture weapons and munitions for wars and police actions, and provide and manage capital for investment in the colonies, semicolonies, and satellites that imperialism creates.

The simplest definition of imperialism is the domination and exploitation of weaker states by stronger ones. Numerous sorrows follow from this ancient and easily observable phenomenon. Imperialism is, for example, the root cause of one of the worst maladies inflicted by Western civilization on the rest of the world—namely, racism. As David Abernethy, an authority on European imperialism, observes, “It was but a short mental leap for people superior in power to infer that they were superior in intellect, morality, and civilization as well. The superiority complex served as a rationalization for colonial rule and, by reducing qualms over the rightness of dominating other people, was empowering in its own right.”17

According to a long tradition of writing about imperialism, if dominion by a stronger state does not include the weaker state’s “colonization,” then it is not imperialism. Some writers have employed the term hegemony as a substitute for imperialism without colonies, and in the post-World War II era of superpowers, hegemonism became coterminous with the idea of Eastern and Western “camps.” Always complicating matters has been a long-standing American urge to find euphemisms for imperialism that soften and disguise the U.S. version of it, at least from other Americans. Theodore Roosevelt, for example, professed to be not an imperialist but an “expansionist.” Arguing for the annexation of the Philippines, he said, “There is not an imperialist in the country.... Expansion? Yes .... Expansion has been the law of our national growth.”18

Abernethy is typical in insisting that in a real empire a stronger state must advance a formal claim over a weaker one. “Colonialism,” Abernethy writes, “is the set of formal policies, informal practices, and ideologies employed by a metropole to retain control of a colony and to benefit from control. Colonialism is the consolidation of empire, the effort to extend and deepen governance claims made in an earlier period of empire building.”19

Of course, European imperialism was indeed intimately linked to colonies and committed to fostering emigration to its possessions on a truly stupendous scale. Millions of Europeans migrated to the communities created by imperialism in North and South America, Australia, New Zealand, and South Africa. In turn, millions of Africans were transported as slaves to American and Caribbean colonies. As the Europeans expanded globally, their political leaders and colonial administrators paid millions of Chinese and Indians to emigrate or tricked them into emigrating—sometimes as indentured servants—to European and American colonies and territories in Southeast Asia, the Indian Ocean, the Caribbean, Africa, and the United States.

European nations also systematically used their colonies as dumping grounds for their criminals and political dissidents in conscious attempts to forestall domestic revolution. Governments imposed sentences of “transportation” in order to get rid of those they thought might become radicals or revolutionaries. After the 1848 workers’ uprising in Paris, the French government paid more than fifteen thousand Parisians to move to colonial Algeria. The British commonly transported Irish and other radicals to prison colonies in North America and, after the American Revolution, Australia. Against this background, Abernethy naturally argues that the very concept of imperialism makes no sense once colonialism and colonialists are removed from the picture.

But this is a historically circumscribed view. As time passed, emigration and colonialism became less frequent accompaniments of imperialism. Today imperialism manifests itself in several different and evolving forms and no particular institution—except for militarism—defines the larger phenomenon. Imperialism and militarism are inseparable—both aim at extending domination; “where the one,” in Vagts’s terms, “looks primarily for more territory, the other covets more men and more money.”20 Certainly, there are several kinds of imperialism that do not involve the attempt to create colonies. The characteristic institution of so-called neocolonialism is the multinational corporation covertly supported by an imperialist power. This form of imperialism reduces the political costs and liabilities of colonialism by maintaining a facade of nominal political independence in the exploited country. As the Cuban revolutionary Che Guevara observed, neocolonialism “is the most redoubtable form of imperialism—most redoubtable because of the disguises and deceits that it involves, and the long experience that the imperialist powers have in this type of confrontation.”21

The multinational corporation partly replicates one of the earliest institutions of imperialism, the chartered company. In such classically mercantilist organizations, the imperialist country authorized a private company to exploit and sometimes govern a foreign territory on a monopoly basis and then split the profits between government officials and private investors. The best known of these were the English East India Company, formed in 1600; the Dutch East India Company, created in 1602; the French East India Company in 1664; and the Hudson’s Bay Company in 1670. The chartered company and the modern multinational corporation differ primarily in that the former never pretended to believe in free trade whereas multinational corporations use “free trade” as their mantra.

Neither formal colonialism nor the neocolonialism of the chartered company or multinational corporation exhausts the institutional possibilities of imperialism. For example, neocolonial domination need not be economic. It can be based on a kind of international protection racket—mutual defense treaties, military advisory groups, and military forces stationed in foreign countries to “defend” against often poorly defined, overblown, or nonexistent threats. This arrangement produces “satellites”—ostensibly independent nations whose foreign relations and military preparedness revolve around an imperialist power. Such was the case during the Cold War with the East European satellites of the former Soviet Union and the East Asian satellites of the United States, which at one time included Taiwan, the Philippines, South Vietnam, and Thailand but now are more or less reduced to Japan and South Korea.

The self-governing dominion of the British Empire has been a variant of the satellite. Canada, Australia, and New Zealand have been distinguished from other British crown colonies entirely along racial lines: unlike those not given dominion status, they are populated primarily by white European emigrants. Still another variant is the client state, a dependency of an imperialist power whose resources, strategic location, or influence may sometimes offer it the leeway to dictate policy to the dominant power while still relying on it for extensive support. Examples would include Israel vis-à-vis the United States, China and Vietnam vis-à-vis the USSR before the Sino-Soviet split, and North Korea between 1960 and 1990, when it could play China and the Soviet Union against each other.

During the Cold War, the United States and the Soviet Union each claimed to be opposed to old-style European imperialism and thus not to be imperialist powers. Long before World War II, however, both countries had built empires—the United States in Latin America and the Pacific, the Russians in the Caucasus and Central Asia—and both acquired new territories in the course of fighting that war. Each, however, had to disguise its long-standing imperialist practices as something far more benign, and each, in the Cold War years, developed a set of elaborate myths about the threat of the other side and the need to maintain “forward deployed” military forces constantly ready to repel a “first strike.” The world’s two most powerful nations agreed on at least one thing—that their military presences were required on all the continents of the world in order to forestall a superpower war.

The foreign military bases of both superpowers became the characteristic institutions of a new form of imperialism. Both countries enthusiastically adopted the idea that they were in mortal danger from each other, even though they had been allies during World War II. The Cold War, and particularly the standoff in Central Europe, had conveniently defined the purpose of the approximately 1,700 U.S. military installations in about one hundred countries that existed during that period.22 The forces on these bases were all engaged in a grand project to “contain Soviet expansionism,” just as the Soviet Union’s forces were said to be thwarting “American aggression.”23 In 1989, while the Soviet Union started giving its satellites their freedom and then fell apart in the course of glasnost—of trying to explain how it had acquired them in the first place—the United States was still engaged in the brutal repression of rebellions or rebellious regimes in the small countries of Central America in the name of preventing a Soviet takeover in the New World.

The military paranoia of the Cold War promoted massive military-industrial complexes in both the United States and the USSR and helped maintain high levels of employment through “military Keynesianism”—that is, substantial governmental expenditures on munitions and war preparedness. The Cold War also promoted employment in the armed forces themselves, in huge espionage and clandestine service apparatuses, and in scientific and strategic research institutes in universities that came to serve the war machine. Both countries wasted resources at home, undercut democracy whenever it was inconvenient abroad, promoted bloody coups and interventions against anyone who resisted their plans, and savaged the environment with poorly monitored nuclear weapons production plants. Official propagandists justified the crimes and repressions of each empire by arguing that at least a cataclysmic nuclear war had been avoided and the evil intentions of the other empire thwarted or contained.

But was there ever a real threat? In 1945, at their famed meeting in Yalta in anticipation of Germany’s surrender a few months later, Roosevelt and Stalin divided Europe into “Western” and Russian spheres of influence at the Elbe River and agreed on how to apportion the spoils in East Asia after the defeat of Japan. Over the succeeding forty-five years, neither side ever showed any serious inclination to overstep the Yalta boundaries. Despite military probings in Berlin and Korea, the American decision to build a separate state in its half of occupied Germany, intense rivalry between the intelligence services of the two superpowers, bitter proxy wars in Vietnam and Afghanistan, and a single moment in 1962 when a nuclear conflagration seemed imminent, the Cold War became as much as anything a mutually acceptable explanation for why the world remained split largely where the victorious armies of World War II had stopped.

As the journalists Diana Johnstone and Ben Cramer put the matter: “If the danger [of a Soviet-American war in Europe] never really existed, then it can be argued that a primary mission of U.S. forces in Europe in reality has been to maintain the Soviet threat. So long as vast U.S. forces were arrayed in Western Europe in a position to attack (or counterattack) the Soviet Union, the Soviet Union would itself remain in a position to attack (or counterattack) the U.S. forces in Europe. The Soviet and U.S. ‘threats’ maintained each other, and thus their double military hegemony over the European continent.”24 These ideas have received a surprising post-Cold War endorsement from an unusual source—President George W. Bush and his national security adviser, Condoleezza Rice. In Bush’s National Security Statement released on September 17, 2002, he and Rice observed, “In the Cold War, especially following the Cuban missile crisis [of 1962], we faced a generally status quo, risk-averse adversary.” These are words that could not have been uttered in the White House prior to the fall of the Berlin Wall in 1989.

Both sides used the alleged menace of the other—in the case of the United States in East Asia, the “threat” of Communist China—to justify their occupation and exploitation of foreign territories. The United States applied the same kind of reasoning in Latin America, defining the democratically elected government in Guatemala in 1954, the revolutionary government of Cuba in 1959, and the Sandinista government in Nicaragua in 1979 as Communist threats. This excuse served as a cover for an ever-lengthening series of American interventions and coups against Latin American governments deemed unfriendly to American interests. From the CIA’s overthrow of the Jacobo Arbenz government in Guatemala and its catastrophic Bay of Pigs invasion of Cuba, it was only a short step to the “falling dominoes” of Southeast Asia and the ruinous intervention in Vietnam.

The initial effect of the Cold War was to justify the grip of both superpowers on numerous territories each had defended or liberated during World War II—the Soviets primarily in Central Europe, the Americans in England, the North Atlantic, Western Germany, Italy, Japan, and South Korea. In 1953, for example, the U.S. government secretly forced part of the indigenous population of Greenland, an island about three times the size of Texas and a Danish colony since 1721, to move—it gave them four days’ notice and threatened to bulldoze their houses—to make way for a vast expansion of Thule Air Force Base, a strategic expanse of some 234,022 acres disguised since World War II as a “weather station.” In fact, throughout the Cold War, the Greenland base was a refueling spot for bombers scheduled to fly routes into the Soviet Union in the event World War III broke out. (Today, it is considered a critical location for the Bush administration’s ballistic missile defense scheme.25) After more than fifty years, the air force shows no signs of leaving despite continuous protests by the Inuit of Greenland and numerous lawsuits filed in the Danish Supreme Court.

Once the military has acquired a base, it is extremely reluctant to give it up. Instead, new uses are found for it. The American presence on Okinawa, for example, was first justified by the need to mount an invasion of the main Japanese islands (made unnecessary by the atomic bombs and Japan’s surrender), then as a secure enclave for fighting the war in Korea, next as a forward base for deploying force against China, then as a B-52 bomber base and staging area for the Vietnam War, a training area for jungle warfare, and most recently a home base for troops and aircraft that might be used elsewhere in Asia or the Middle East. As Patrick Lloyd Hatcher, a historian and retired U.S. Army colonel, writes, “Foreign real estate has the same attraction for American defense planners that Nimitz-class aircraft carriers do for admirals and B-2 stealth bombers and heavy Abrams tanks do for generals.... They can never have enough.”26 In short, the imperialism of the superpowers during the Cold War centered on the deployment of military forces in other people’s countries. It took the specific form of the establishment of foreign military bases and the fostering of docile satellites in each superpower’s sphere of influence.

America’s foreign military enclaves, though structurally, legally, and conceptually different from colonies, are themselves something like micro-colonies in that they are completely beyond the jurisdiction of the occupied nation. The United States virtually always negotiates a “status of forces agreement” (SOFA) with the ostensibly independent “host” nation, a modern legacy of the nineteenth-century imperialist practice in China of “extraterritoriality”—the “right” of a foreigner charged with a crime to be turned over for trial to his own diplomatic representatives in accordance with his national law, not to a Chinese court in accordance with Chinese law. Extracted from the Chinese at gunpoint, the practice arose because foreigners claimed that Chinese law was barbaric and “white men” should not be forced to submit to it. Chinese law was indeed concerned more with the social consequences of crime than with establishing the individual guilt or innocence of criminals, particularly those who were uninvited guests in China. Following the Anglo-Chinese Opium War of 1839-42, the United States was the first nation to demand “extrality” for its citizens. All the other European nations then demanded the same rights as the Americans. Except for the Germans, who lost their Chinese colonies in World War I, Americans and Europeans lived an “extraterritorial” life until the Japanese ended it in 1941 and Chiang Kai-shek’s Kuomintang stopped it in 1943 in “free China.”

Rachel Cornwell and Andrew Wells, two authorities on status of forces agreements, conclude, “Most SOFAs are written so that national courts cannot exercise legal jurisdiction over U.S. military personnel who commit crimes against local people, except in special cases where the U.S. military authorities agree to transfer jurisdiction.”27 Since service members are also exempt from normal passport and immigration controls, the military often has the option of simply flying an accused rapist or murderer out of the country before local authorities can bring him to trial, a contrivance to which commanding officers of Pacific bases have often resorted. At the time of the terrorist attacks on New York and Washington in September 2001, the United States had publicly acknowledged SOFAs with ninety-three countries, though some SOFAs are so embarrassing to the host nation that they are kept secret, particularly in the Islamic world.28 Thus their true number is not publicly known.

U.S. overseas military bases are under the control not of some colonial office or ministry of foreign affairs but of the Department of Defense, the Central Intelligence Agency, the National Security Agency, the Defense Intelligence Agency, and a plethora of other official, if sometimes secret, organs of state. These agencies build, staff, and supervise the bases—fenced and defended sites on foreign soil, often constructed to mimic life at home. Since not all overseas members of the military have families or want their families to accompany them, except in Muslim countries these bases normally attract impressive arrays of bars and brothels, and the criminal elements that operate them, near their main gates. The presence of these bases unavoidably usurps, distorts, or subverts whatever institutions of democratic government may exist within the host society.

Stationing several thousand eighteen-to-twenty-four-year-old American youths in cultures that are foreign to them and about which they are utterly ignorant is a recipe for the endless series of “incidents” that plague nations that have accepted bases. American ambassadors quickly learn the protocol for visiting the host foreign office to apologize for the behavior of our troops. Even in closely allied countries where English is spoken, local residents get very tired of sexual assaults and drunken driving by foreigners. During World War II, the British satirized our troops as “overpaid, over-sexed, and over here.” Nothing has changed.

Before setting out on a tour of these bases and a look at how they grew and spread, we need briefly to consider contemporary militarist thought in the United States and its origins. The bases support the military and are its sphere of influence, but it is the military itself and its growth during and following the Cold War that have caused the definitive transformation of these bases from staging areas for various armed conflicts into permanent garrisons for policing an empire.

At the time that Caesar was camped in Ravenna and thinking of advancing south across the Rubicon in direct violation of the Roman senate’s orders, something occurred that seemed to force his hand. According to the historian and biographer Suetonius, shepherds and soldiers were lured to the riverbank by the sound of pipers. Among them were some trumpeters. One of them, for reasons that are obscure, sounded the advance. The troops took this as their cue to move aggressively to the other side of the river. Caesar is said to have remarked, “Let us go where the omens of the gods and the crimes of our enemies summon us. The die is now cast.” Similarly, it would seem, post-Cold War American militarists have cast the die and the American people have blindly marched across their own Rubicon to become an empire with global pretensions.29
