Almost every American of a certain age knows the photo: A grinning Harry Truman holds up an early edition of the Chicago Daily Tribune, bearing the banner headline DEWEY DEFEATS TRUMAN. No, he didn’t. Truman’s 1948 election victory, in the face of polls that seemed to guarantee a landslide for Dewey, was the greatest political upset in U.S. history.
Truman’s come-from-behind victory has become an iconic moment in American political history, along with stories of how Truman’s supporters on the campaign trail yelled, “Give ’em hell, Harry!” But I’m sure that very few Americans could tell you to whom Harry was being urged to give hell, or what the fight was about. To the extent that Truman is remembered today, it’s mostly as a foreign policy leader: the man who oversaw the creation of the Marshall Plan and the strategy of containment, the man who stood up to Stalin in Berlin and in Korea, and set America on the path to eventual victory in the Cold War.
In 1948, however, foreign policy wasn’t a key campaign issue, partly because the Cold War hadn’t started in earnest, partly because Republicans—torn between fervent anticommunism and their traditional isolationism—hadn’t settled on a foreign policy position. The issue that preoccupied the electorate in 1948 was the fear that Republicans would try to undo FDR’s domestic achievements. Thomas Dewey tried to soothe the electorate by campaigning on Yogi Berra–like platitudes, including the declaration that “Your future still lies ahead of you.” But Truman turned the election into a referendum on the New Deal by focusing his attacks on the Republican-controlled Congress.
In 1948 that Congress was engaged in an attempt to roll back FDR’s New Deal. The de facto leader of the Republicans in Congress was Sen. Robert Taft, and Taft, sometimes referred to as “Mr. Republican,” was deeply opposed to the New Deal, which he regarded as “socialistic.” This was more than ideological posturing: After Republicans gained control of Congress in 1946, Taft pushed through the Taft-Hartley Act, significantly rolling back the National Labor Relations Act of 1935, which was a key ingredient in the surge in union membership and power under the New Deal. Thus in 1948 voters had good reason to believe that a Republican victory, which would give them control of both Congress and the White House, would lead to a significant U-turn in the policies that produced the Great Compression.
By 1952, when the Republicans finally did regain the White House, much less was at stake. By that time Republican leaders had, as a matter of political necessity, accepted the institutions created by the New Deal as permanent features of the American scene. “Should any political party attempt to abolish social security, unemployment insurance, and eliminate labor laws and farm programs,” wrote Dwight Eisenhower in a 1954 letter to his brother Edgar, “you would not hear of that party again in our political history. There is a tiny splinter group, of course, that believes you can do these things. Among them are H. L. Hunt (you possibly know his background), a few other Texas oil millionaires, and an occasional politician or business man from other areas. Their number is negligible and they are stupid.”[1]
How did ideas and programs that were considered dangerously radical in the 1930s become the essence of respectability in the 1950s, with only a “tiny splinter group” calling for their repeal? To answer that question we need to look both at how changes in American society altered the political environment and at how the political parties responded to the new environment.
In the 1930s the New Deal was considered very radical indeed—and the New Dealers themselves were willing to use the language of class warfare. To read, or, better yet, listen to Franklin Delano Roosevelt’s Madison Square Garden speech (the recording is available on the Web), delivered on the eve of the 1936 election, is to be reminded how cautious, how timid and well-mannered latter-day liberalism has become. Today those who want to increase the minimum wage or raise taxes on the rich take pains to reassure the public that they have nothing against wealth, that they’re not proposing class warfare. But FDR let the malefactors of great wealth have it with both barrels:
We had to struggle with the old enemies of peace—business and financial monopoly, speculation, reckless banking, class antagonism, sectionalism, war profiteering.
They had begun to consider the Government of the United States as a mere appendage to their own affairs. We know now that Government by organized money is just as dangerous as Government by organized mob.
Never before in all our history have these forces been so united against one candidate as they stand today. They are unanimous in their hate for me—and I welcome their hatred.
FDR wasn’t exaggerating when he said that the plutocrats hated him—and they had very good reasons for their hatred. As I documented in chapter 3, the New Deal imposed a heavy tax burden on corporations and the wealthy, fostered the growth of unions, and oversaw a narrowing in income inequality that included a substantial fall in after-tax incomes at the top.
But a funny thing happened over the twenty years that followed the Madison Square Garden speech. Thanks in large part to Truman’s 1948 victory, New Deal policies remained in place: unions remained powerful for several more decades, and taxes on corporations and the rich were even higher during the Eisenhower years than they had been under FDR. Yet by the mid-fifties support for the continuing existence of the policies that inspired such hatred from “organized money”—in the Madison Square Garden speech FDR singled out Social Security and unemployment insurance in particular as programs smeared by the plutocrats—had become the very definition of political moderation.
This transformation partly reflected shifts in demography and other factors that favored the continuation of the welfare state. I’ll get to those shifts in a moment. But first let me talk briefly about an enduring aspect of political economy that made the New Deal extremely hard to achieve but relatively easy to defend: the innate and generally rational conservatism of voters—not conservatism in the sense of right-wing views, but in the sense of reluctance to support big changes in government policies unless the existing policies are obviously failing. In modern times we’ve seen that type of status-quo conservatism bring projects of both Democrats and Republicans to grief: Clinton’s attempt to reform health care and Bush’s attempt to privatize Social Security both failed in large part because voters feared the unknown.
In the 1920s status-quo conservatism helped block liberal reforms. Any proposal for higher taxes on the rich and increased benefits for workers and the poor, any suggestion of changing labor law in a way that would make unionization easier, was attacked on the grounds that the would-be reformers were irresponsible people who just didn’t understand how the world worked—that their proposals, if adopted, would destroy the economy. Even FDR was to some extent a prisoner of the conventional wisdom, writing, “Too good to be true—you can’t get something for nothing” in the margin of a book that, anticipating Keynes, called for deficit spending to support the economy during recessions.[2]
Once in power—and less inclined to dismiss radical ideas—FDR was faced with the task of persuading the public to reject conventional wisdom and accept radically new policies. He was able to overcome voters’ natural conservatism thanks largely to accidents of history. First, the economic catastrophe of 1929–33 shattered the credibility of the old elite and its ideology, and the recovery that began in 1933, incomplete though it was, lent credibility to New Deal reforms. “We have always known that heedless self-interest was bad morals; now we know that it is bad economics,” declared FDR in his second inaugural address. Second, World War II created conditions under which large-scale government intervention in the economy was clearly necessary, sweeping aside skepticism about radical measures. So by the time Eisenhower wrote that letter to his brother, the New Deal institutions were no longer considered radical innovations; they were part of the normal fabric of American life.
Of course it wouldn’t have played out that way if the pre–New Deal conventional wisdom had been right—if taxing the rich, providing Social Security and unemployment benefits, and enhancing worker bargaining power had been disastrous for the economy. But the Great Compression was, in fact, followed by the greatest sustained economic boom in U.S. history. Moreover the Roosevelt administration demonstrated that one of the standard arguments against large-scale intervention in the economy—that it would inevitably lead to equally large-scale corruption—wasn’t true. In retrospect it’s startling just how clean the New Deal’s record was. FDR presided over a huge expansion of federal spending, including highly discretionary spending by the Works Progress Administration. Yet the popular image of public relief, widely regarded as corrupt before the New Deal, actually improved markedly.
The New Deal’s probity wasn’t an accident. New Deal officials made almost a fetish out of policing their programs against potential corruption. In particular FDR created a powerful “division of progress investigation” to investigate complaints of malfeasance in the WPA. This division proved so effective that a later congressional investigation couldn’t find a single serious irregularity it had overlooked.[3]
This dedication to honest government wasn’t a sign of Roosevelt’s personal virtue; rather it reflected a political imperative. FDR’s mission in office was to show that government activism works. To maintain that mission’s credibility he needed to keep his administration’s record clean. And he did.
One more thing: although the U.S. entry into World War II wasn’t planned as a gigantic demonstration of government effectiveness, it nonetheless had that effect. It became very difficult for conservatives to claim that government can’t do anything well after the U.S. government demonstrated its ability not just to fight a global war but also to oversee a vast mobilization of national resources.
By 1948, then, the idea of an active government role in the economy—a role that, in practice, had the effect of greatly reducing inequality—had become respectable. Meanwhile the old view that the government should keep its hands off, which FDR ridiculed in his 1936 Madison Square Garden speech as “the doctrine that that Government is best which is most indifferent,” had been relegated to crank status.
Winning the battle of ideas isn’t enough, however, if that victory isn’t supported by an effective political coalition. As it happened, though, the political landscape had changed in a way that shifted the center of political gravity downward, empowering those who gained from the Great Compression and had a stake in maintaining a relatively equal distribution of income.
During the Long Gilded Age one major barrier to an effective political movement on behalf of working Americans was the simple fact that many workers, especially low-wage workers, were denied the vote, either by law or in practice.
The biggest group of disenfranchised workers was the African American population of the South—a group that continued to be denied the vote for a generation after the Great Compression, and is still partly disenfranchised today. For reasons we’ll get to shortly, however, the South was a partner, albeit a troublesome one, in the coalition that supported economic equality until the 1970s.
But there was another disenfranchised population during the Long Gilded Age that had effectively disappeared by the 1950s—nonnaturalized immigrants. In 1920, 20 percent of American adults were foreign born, and half of them weren’t citizens. So only about 90 percent of adult residents of the United States were citizens, with the legal right to vote. Once the disenfranchised African Americans of the South are taken into account, in 1920 only about 80 percent of adults residing in the United States had the de facto right to vote. This disenfranchisement wasn’t politically neutral: those who lacked the right to vote were generally poor compared with the average. As we’ll see shortly relatively poor voters today tend to support Democrats in general and a strong welfare state in particular. The same would presumably have been true in the 1920s. So disenfranchisement removed part of the left side of the political spectrum, pushing American politics to the right of where they would have been if all adult residents had been able to vote.
After severe immigration restrictions were imposed in 1924, however, the fraction of adults without the right to vote steadily dropped. By 1940 immigrants were only 13 percent of the adult population, and more than 60 percent of those immigrants had been naturalized, so by 1940 some 95 percent of adult residents of the United States were citizens. By 1950 the immigrant share was down to 10 percent, three-quarters of whom had been naturalized; noncitizen adult residents of the country were down to a trivial 3 percent of the adult population.
Between 1924 and the 1950s, then, immigrants without citizenship basically disappeared from the American scene. The result was a country in which the vast majority of white blue-collar workers were enfranchised. Moreover by the fifties relatively poor whites were much more likely to actually avail themselves of their right to vote than they had been in the twenties, because they were union members or had friends or family members in unions, which raised their political awareness and motivation. The result was an electorate considerably more disposed to support the welfare state, broadly defined, than the electorate of 1920—or the electorate today.
The South is still different in many ways from the rest of the United States. But in the 1950s it was truly another country—a place of overt segregation and discrimination, with the inferior status of blacks enshrined in law and public policy and enforced with violence. Brown v. Board of Education, the Supreme Court decision that required an end to segregated school systems, didn’t come until 1954. Rosa Parks refused to move to the back of a Montgomery bus in 1955, and the Supreme Court decision ending segregation on public transportation wasn’t handed down until late 1956. Voting rights for blacks were an even longer time coming: The Voting Rights Act wasn’t enacted until 1965, the year after three civil rights workers were murdered in Philadelphia, Mississippi, the place where Ronald Reagan would later choose to start his 1980 presidential campaign—with a speech on states’ rights.
The brutal racial politics of the South, together with its general backwardness, made it in many ways a deeply conservative region—even more so than it is today. Yet the South was also, for a long time, a key part of the New Deal coalition.
Electoral maps tell the story. On today’s maps the South is solid red. Aside from Maryland and Delaware, John Kerry carried not a single state south of the Mason-Dixon line. But in 1948 not a single Southern state went for Dewey, although several did back the segregationist candidacy of Strom Thurmond.
Why did the South support the Democrats? There’s an obvious, ugly reason why Southern whites could support Democrats in the 1950s: Although the Democratic Party had become the party of economic equality, it tacitly accepted Jim Crow. It was only when Democrats became the party of racial equality as well that the Republicans, who began as opponents of slavery but became the defenders of wealth, moved into the gap. I’ll have more to say about that exchange of places later in the book, especially when I look at how Ronald Reagan triumphed in 1980.
But why was the South Democratic in the first place? The enduring bitterness left by the Civil War was part of the story; you could say that for generations Southern Democrats won by running against Abraham Lincoln.
But the fact that the South was much poorer than the rest of the country meant that it also received a disproportionate share of benefits generated by the New Deal. Southern states are still somewhat poorer than the national average, but in the fifties the South was desperately poor. As late as 1959 per capita income in Mississippi was less than one thousand dollars a year (about five thousand dollars in today’s prices), giving it an average standard of living barely 40 percent as high as that of wealthy states like Connecticut, New York, and New Jersey. The South was also still a rural, farming region, long after the rest of America had become an urban nation. By 1950 the United States outside the South had three urban residents for every rural inhabitant—but the South was still more rural than urban.
As a result the New Deal was almost pure gain for the South. On one side, the high taxes FDR levied on the wealthy and on corporations placed little burden on the South, where there were few rich people and the corporations were mainly owned by Northerners. On the other side New Deal programs, from Social Security to unemployment insurance to rural power, were especially important for the low-wage workers who made up most of the South’s population. Even now, the fact that the South depends a lot on the welfare state makes an occasional impact on our politics: When George W. Bush tried to privatize Social Security in 2005, his handlers discovered that opposition was, if anything, more intense in the “red states” that supported him in 2004, especially in the South, than in the rest of the country.
Here’s one way to put it: Although the racial divide in the South went along with reactionary local politics, the region had so much to gain from the welfare state thanks to its poverty that at the national level it was willing to support Northern liberals—up to a point. There were, however, sharp limits to the kinds of policies the Southern whites would support. This became all too clear when Harry Truman tried to complete the New Deal, adding the element that would have created a full-fledged welfare state comparable to that of Canada or Western European nations: national health insurance.
In 1946 Truman proposed a system of national health insurance that would have created a single-payer system comparable to the Canadian system today. His chances of pushing the plan through initially looked good. Indeed, it would have been much easier to establish national health insurance in the 1940s than it would be today. Total spending on health care in 1946 was only 4.1 percent of GDP, compared with more than 16 percent of GDP now. Also, since private health insurance was still a relatively undeveloped industry in the forties, insurance companies weren’t the powerful interest group they are now. The pharmaceutical lobby wouldn’t become a major force until the 1980s. Meanwhile public opinion in 1946 was strongly in favor of guaranteed health insurance.
But Truman’s effort failed. Much of the responsibility for that failure lies with the American Medical Association, which spent $5 million opposing Truman’s plan; adjusting for the size of the economy, that’s equivalent to $200 million today. In a blatant abuse of the doctor-patient relationship, the AMA enlisted family doctors to speak to their patients in its effort to block national insurance. It ostracized doctors who supported Truman’s plan, even to the extent of urging that they be denied hospital privileges. It’s shocking even now to read how doctors were told to lecture their patients on the evils of “socialized medicine.”
But the AMA didn’t defeat Truman’s plan alone. There was also crucial opposition to national health insurance from Southern Democrats, despite the fact that the impoverished South, where many people couldn’t afford adequate medical care, would have gained a financial windfall. But Southern politicians believed that a national health insurance system would force the region to racially integrate its hospitals. (They were probably right. Medicare, a program for seniors equivalent in many ways to the system Truman wanted for everyone, was introduced in 1966—and one result was the desegregation of hospitals across the United States.) Keeping black people out of white hospitals was more important to Southern politicians than providing poor whites with the means to get medical treatment.
Truman’s failure on health care presaged the eventual collapse of the New Deal coalition. The support of Southern whites for economic equality was always ambivalent, and became more so over time. The familiar story says that the South bolted the coalition when the Democratic Party got serious about civil rights—and that’s certainly a large part of what happened. It’s also true, however, that as the South as a whole grew richer, the region had less to gain from redistributionist policies, and was set free to indulge the reactionary instincts that came from the disenfranchisement of blacks. But in the 1950s all this was far in the future.
Between 1935 and 1945 the percentage of American workers in unions rose from 12 to 35 percent; as late as 1970, 27 percent of workers were union members. And unions generally, though not always, supported Democrats. In the 1948 election, roughly three-quarters of the members of the two big union organizations, the American Federation of Labor and the Congress of Industrial Organizations, voted for Truman.
The role of unions in making the Democrats the nation’s dominant party went well beyond the tendency of union members to vote for Democratic candidates. Consider Will Rogers’s famous quip, “I am not a member of any organized political party. I’m a Democrat.” This was a fair characterization of the Democratic Party before the New Deal, as it is today. But it was much less true when organized labor was a powerful force: Unions provided the party with a ready-made organizational structure. Not only did unions provide a reliable source of campaign finance; even more important in an age before campaigns were largely conducted on TV, they provided Democrats with a standing army of campaign workers who distributed lawn signs, bumper stickers, and campaign literature, engaged in door-to-door canvassing, and mobilized for get-out-the-vote efforts on election day.
A more subtle but probably equally crucial consequence of a powerful union movement was its effect on the political awareness and voter participation rates of lower- and middle-income Americans. Those of us who follow politics closely often find it difficult to appreciate how little attention most Americans pay to the whole thing. But this apathy is understandable: Although the outcomes of elections can have large impacts on people’s lives, it’s very unlikely that an individual voter’s decision will affect those outcomes. Therefore people with jobs to do and children to raise have little incentive to pay close attention to political horseraces. In practice this rational lack of interest imparts an upward class bias to the political process: higher-income people are more likely to pay attention to politics, and more likely to vote, than are lower- and middle-class Americans. As a result, the typical voter has a substantially higher income than the typical person, which is one reason politicians tend to design their policies with the relatively affluent in mind.
But unions have the effect of reducing this class bias. Unions explicitly urge their members to vote; maybe more important, the discussion of politics that takes place at union meetings, the political messages in mailings to union members, and so on, tend to raise political awareness not just among union workers but among those they talk to, including spouses, friends, and family. Since people tend to associate with others of similar income, this means more political participation among lower-income Americans. One recent statistical analysis[4] estimated that if the share of unionized workers in the labor force had been as high in 2000 as it was in 1964, an additional 10 percent of adults in the lower two-thirds of the income distribution would have voted, compared with only an additional 3 percent of the top third. So the strength of the union movement lowered the economic center of gravity of U.S. politics, which greatly benefited the Democrats.
In sum, then, the political economy of the United States in the 1950s and into the 1960s was far more favorable to income-equalizing economic policies than it had been during the Long Gilded Age. The welfare state was no longer considered radical; instead, those who wanted to dismantle it were regarded as cranks. There was no longer a large class of disenfranchised immigrant workers. The South was, conditionally and temporarily, on the side of economic equality, as long as that didn’t translate into racial equality. And a powerful union movement had the effect of mobilizing lower-income voters.
Ellis G. Arnall, the former governor of Georgia, wrote a contrarian but, it turned out, highly accurate article in the October 1948 issue of the Atlantic Monthly called “The Democrats Can Win.” In it he emphasized the underlying strength of a Democratic coalition that “is described by its critics as a combination of the South, the labor unions, the city machines, and the intellectual Left. Not a wholly accurate description, it will serve.” I’ve already talked about the South and the unions. Let’s briefly consider his other two elements.
Urban political machines, based largely on the support of immigrants, predated the Roosevelt years. In fact, they had been a major source of Democratic support since the nineteenth century. And the New Deal’s policies had the effect, if anything, of undermining their power. The key to the machines’ appeal to urban voters was their ability to provide aid to families in trouble and patronage jobs; the New Deal’s expansion of the government social safety net and the rise in wages as a result of the Great Compression made these services less crucial. Nonetheless these urban machines were still powerful well into the 1960s, and their persistence helped Democrats win elections.
What about the “intellectual Left”? Obviously there have never been enough intellectuals to make them an important voting bloc for either party. But to focus on the mechanical side of things gives too little credit to the importance of message and ideas. In the 1930s the left had ideas about what to do; the right didn’t, except to preach that the economy would eventually heal itself. FDR’s success gave liberal intellectuals credibility and prestige that persisted long after the momentum of the New Deal had been largely exhausted—just as, in our own day, it remained common to assert that all the new ideas were on the right long after any real sense of innovation on the right was gone. In 1958 John Kenneth Galbraith wryly remarked that among liberals, “To proclaim the need for new ideas has served, in some measure, as a substitute for them.” But the sense that new ideas came from the left remained an advantage of the Democrats.
Meanwhile, by the 1950s the Republican Party was in many ways a shadow of its former self. Before the Great Depression and the Great Compression, Republicans had two great political advantages: money, and the perception of competence. Contributions from a wealthy elite normally gave the Republicans a large financial advantage; and people tended to assume that the GOP, the party of business, the party of take-charge men like Herbert Hoover, knew how to run the country.
But the Great Compression greatly reduced the resources of the elite, while the Great Depression shattered the nation’s belief that business knows best. Herbert Hoover became the very symbol of incompetence. And after the triumph in World War II and the great postwar boom, who could credibly claim that Democrats didn’t know how to run things?
Still, the Republican Party survived—but it did so by moving toward the new political center. Eisenhower won the White House partly because of his reputation from World War II, partly because the public was fed up with the Korean War. But he was also acceptable because he preached “moderation,” and considered those who wanted to roll back the New Deal “stupid.” The Republican Party became, for several decades, a true big tent, with room both for some unrepentant small-government conservatives and for big-spending, big-government types like Nelson Rockefeller of New York. To get a sense of just how un-ideological the Republicans became, it’s helpful to turn to quantitative studies of voting behavior in Congress.
The seminal work here, already mentioned in chapter 1, is that of Keith Poole of the University of California, San Diego, and Howard Rosenthal of the Russell Sage Foundation, who have developed a systematic way of locating members of Congress along a left-right spectrum. (They also identify a second dimension of politics—race—which has been crucial in the rise of movement conservatism. But let’s leave that aside for now.) The method, roughly speaking, works like this: Start with roll-call votes on a number of bills that bear on economic issues. First, make a preliminary ranking of these bills on a left-to-right political spectrum. Second, rank members of Congress from left to right based on how they voted on these bills. Third, use the ranking of legislators to refine the left-right ranking of the legislation, and repeat the process all over again. After a few rounds you’ve arrived at a consistent ranking of both bills and politicians along the left-right spectrum.[5] Poole, Rosenthal, and Nolan McCarty of Princeton University have applied this method to each Congress since the nineteenth century. What stands out from their results is just how modest the differences between Republicans and Democrats were in the fifties and sixties, compared with a huge gulf before the New Deal, and an even larger gap today.
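For readers who like to see the mechanics, the alternating procedure can be illustrated with a toy sketch. This is not Poole and Rosenthal’s actual algorithm (their NOMINATE method fits a statistical spatial model); it simply alternates the ranking steps described above, and it assumes every bill has been pre-coded so that a yea vote is the rightward position:

```python
def rank_scale(scores):
    """Convert raw scores into ranks rescaled onto [0, 1]."""
    order = sorted(range(len(scores)), key=scores.__getitem__)
    ranks = [0.0] * len(scores)
    for r, i in enumerate(order):
        ranks[i] = r / (len(scores) - 1)
    return ranks

def scale(votes, bill_guess, n_iter=5):
    """Toy left-right scaling by alternating rankings.

    votes[i][j] is True if legislator i voted yea on bill j.
    bill_guess is the preliminary left-to-right score for each
    bill (step one of the procedure described in the text).
    """
    n_leg, n_bill = len(votes), len(votes[0])
    bills = list(bill_guess)
    legs = [0.0] * n_leg
    for _ in range(n_iter):
        # Step two: rank legislators by the bills they supported --
        # yea votes on right-coded bills pull a member rightward.
        legs = rank_scale([
            sum(bills[j] for j in range(n_bill) if votes[i][j])
            for i in range(n_leg)
        ])
        # Step three: refine each bill's score from the average
        # position of its yea coalition, then repeat.
        bills = rank_scale([
            sum(legs[i] for i in range(n_leg) if votes[i][j])
            / max(1, sum(votes[i][j] for i in range(n_leg)))
            for j in range(n_bill)
        ])
    return legs, bills
```

On synthetic roll calls generated from known left-right positions (each bill a cutoff, with everyone to its right voting yea), a few rounds of this alternation recover the legislators’ ordering, which is all the consistency the procedure promises.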
Poole and Rosenthal measure the gap between the parties with an index of political polarization that, while highly informative, is difficult to summarize in an intuitive way. For my purposes it’s sufficient to look at two descriptive measures that behave very similarly to their index over time. One measure is what I’ll call “minority-party overlap”: the number of Democrats to the right of the leftmost Republican, when Republicans controlled Congress, or the number of Republicans to the left of the rightmost Democrat, when Democrats controlled Congress. The other measure is what I’ll call “minority-party crossover”: the number of members of the minority party who are actually on the other side of the political divide from their party—Democrats who are to the right of the median member of Congress, or Republicans to the left. In each measure more overlap indicates a less polarized political system, while the absence of overlap suggests that there isn’t a strong political center.
Table 2 shows these numbers for three Congresses: the 70th Congress, which sat in 1927–28 and 1928–29; the 85th Congress, which sat in 1957 and 1958; and the 108th Congress, which sat in 2003 and 2004. The table shows that congressional partisanship was much less intense in the 1950s than it had been before the New Deal—or than it is today. In the 70th Congress, in which Republicans controlled the House of Representatives, there was hardly any minority-party overlap: only two Democrats were to the right of the leftmost Republican. And there was no minority-party crossover: all Democrats were left of center. The situation was even more extreme in the 108th Congress, which was also controlled by Republicans: Every Democrat was to the left of the leftmost Republican, and needless to say there was no crossover. In the 85th Congress, however, which was controlled by Democrats, there were many Republicans to the left of the rightmost Democrat (largely because there were a number of quite conservative Southern Democrats). More amazingly, nine Republican members of the House were literally left of center—that is, voted to the left of the median Congressman. That’s a situation that would be inconceivable today. For one thing, a twenty-first-century Republican who took a genuinely left-of-center position would never get through the primary process, because movement conservatives would make sure that he faced a lavishly funded challenger, and because Republican primary voters, skewed sharply to the right, would surely support that challenger. In the fifties, however, Republicans couldn’t afford to enforce ideological purity if they wanted to win elections. As a result, actual liberals like Nelson Rockefeller and Jacob Javits, who would have been summarily excommunicated today, remained party members in good standing.
Table 2. Measures of Similarity Between the Parties

| | Minority-Party Overlap | Minority-Party Crossover |
|---|---|---|
| 70th Congress, 1927–29 | 2 | 0 |
| 85th Congress, 1957–58 | 112 | 9 |
| 108th Congress, 2003–4 | 0 | 0 |
Source: www.library.unt.edu/govinfo/usfed/years.html.
The relative absence of difference between the parties’ positions on economic policy meant that voting behavior on the part of the public was very different from what it is today. In recent elections partisan voting has been very strongly correlated with income: The higher a voter’s income, the more likely he or she is to vote Republican. This presumably reflects voters’ understanding that a vote for a Republican is a vote for policies that favor the affluent as opposed to the poor and working class. But the relatively nonideological nature of the Republican Party in the fifties, reflected in the way its members voted in Congress, was also reflected in public perceptions. During the postwar boom, voters evidently saw very little difference between the parties on economic policy, at least when voting in presidential elections. Table 3 compares the average voting patterns of white voters, grouped by income level, in presidential elections from 1952 to 1972 with those from 1976 to 2004. In the more recent period there was a strong relationship between higher income levels and voting Republican. During the period from 1952 to 1972, the era of bipartisan acceptance of the welfare state, however, there was hardly any relationship between income level and voting preference. The one presidential election in which there was a large voting difference by income level was 1964, the year in which Barry Goldwater—a true movement conservative, and the harbinger of things to come—seized the Republican nomination. Other surveys show that in the fifties and sixties there was remarkably little relationship between a voter’s income and his or her party registration: The upper third of the income distribution was only slightly more Republican than the middle or lower thirds.
Table 3. Percentage of Whites Voting Democratic in Presidential Elections, by Income Group

| | Percentage Voting Democratic, 1952–1972 | Percentage Voting Democratic, 1976–2004 |
|---|---|---|
| Poorest third | 46 | 51 |
| Middle third | 47 | 44 |
| Richest third | 42 | 37 |
Source: Larry Bartels, “What’s the Matter with What’s the Matter with Kansas?” p. 13 (photocopy, Princeton University, 2005).
If the Republican Party of the fifties and sixties didn’t stand for economic conservatism, what did it stand for? Or maybe the question is better phrased as follows: What did voters who voted Republican think they were voting for?
To some extent they were voting for the traditional ethnic order. The Republican Party of the 1950s was, above all, the WASP party—the party of non-Southern white Anglo-Saxon Protestants, with the Anglo-Saxon bit somewhat optional. (Eisenhower came from German stock, but that didn’t matter.) During the 1950s, 51 percent of those who considered themselves Republicans were WASPs, even though the group made up only 30 percent of the electorate.[6] White Protestants had been the dominant ethnic group in the United States for most of its history, but the rise of the New Deal, with many Catholic union members in its base and with a large role for Jewish intellectuals, undermined that dominance. And much of the rest of the country was suspicious of the change. It’s hard now to recapture that state of mind, but as late as the 1960 election a significant number of Americans voted against Kennedy simply because he was Catholic.
More creditably, many Americans voted Republican as a check on the power of the dominant Democratic coalition. From the thirties through the seventies, Democrats commanded a much larger share of registered voters than the Republicans. Although this didn’t translate into a Democratic advantage in capturing the White House—between the 1948 election and the election of Ronald Reagan the Republicans held the presidency for four terms, the Democrats for three—it did translate into consistent Democratic control of Congress from the 1954 election on. This consistent control led to abuses—not gross corruption, for the most part, but petty corruption and, perhaps more important, complacency and lack of attention to popular concerns. Republicans became the alternative for those who valued some accountability. In particular, Republicans in the Northeast often presented themselves as reformers who would clean up the system rather than change it in any fundamental way.
In sum, between 1948 and sometime in the 1970s both parties accepted the changes that had taken place during the Great Compression. To a large extent the New Deal had created the political conditions that sustained this consensus. A highly progressive tax system limited wealth at the top, and the rich were too weak politically to protest. Social Security and unemployment insurance were untouchable programs, and Medicare eventually achieved the same status. Strong unions were an accepted part of the national scene.
This equilibrium would collapse in the 1970s. But the forces that would destroy the politics of equality began building in the 1960s, a decade in which everything went right for the economy, but everything seemed to go wrong for American democracy.