7 THE GREAT DIVERGENCE

Medieval theologians debated how many angels could fit on the head of a pin. Modern economists debate whether American median income has risen or fallen since the early 1970s. What’s really telling is the fact that we’re even having this debate. America is a far more productive and hence far richer country than it was a generation ago. The value of the output an average worker produces in an hour, even after you adjust for inflation, has risen almost 50 percent since 1973. Yet the growing concentration of income in the hands of a small minority has proceeded so rapidly that we’re not sure whether the typical American has gained anything from rising productivity.

The great postwar boom, a boom whose benefits were shared by almost everyone in America, came to an end with the economic crisis of the 1970s—a crisis brought on by rising oil prices, out-of-control inflation, and sagging productivity. The crisis abated in the 1980s, but the sense of broadly shared economic gains never returned. It’s true that there have been periods of optimism—Reagan’s “Morning in America,” as the economy recovered from the severe slump of the early eighties, and the feverish get-rich-quick era of the late nineties. Since the end of the postwar boom, however, economic progress has always felt tentative and temporary.

Yet average income—the total income of the nation, divided by the number of people—has gone up substantially since 1973, the last year of the great boom. We are, after all, a much more productive nation than we were when the boom ended, and hence a richer nation as well. Think of all the technological advances in our lives since 1973: personal computers and fax machines, cell phones and bar-code scanners. Other major productivity-enhancing technologies, like freight containers that can be lifted directly from ship decks onto trucks and trains, existed in 1973 but weren’t yet in widespread use. All these changes have greatly increased the amount the average worker produces in a normal working day, and correspondingly raised U.S. average income substantially.

Average income, however, doesn’t necessarily tell you how most people are doing. If Bill Gates walks into a bar, the average wealth of the bar’s clientele soars, but the men already there when he walked in are no wealthier than before. That’s why economists trying to describe the fortunes of the typical member of a group, not the few people doing extremely well or extremely badly, usually talk not about average income but about median income—the income of a person richer than half the population but poorer than the other half. The median income in the bar, unlike the average income, doesn’t soar when Bill Gates walks in.
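The distinction is easy to check with a toy calculation. Below is a minimal sketch in Python; the patrons’ incomes and the figure for Gates are made-up numbers chosen purely for illustration.

```python
import statistics

# Hypothetical annual incomes of ten patrons already in the bar.
patrons = [30_000, 35_000, 40_000, 45_000, 50_000,
           55_000, 60_000, 65_000, 70_000, 75_000]

print(statistics.mean(patrons))    # 52500.0  (average before)
print(statistics.median(patrons))  # 52500.0  (median before)

# Bill Gates walks in; the income figure is purely illustrative.
patrons.append(1_000_000_000)

print(statistics.mean(patrons))    # about 91 million: the average soars
print(statistics.median(patrons))  # 55000: the median barely moves
```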

As it turns out, Bill Gates walking into a bar is a pretty good metaphor for what has actually happened in the United States over the past generation: Average income has risen substantially, but that’s mainly because a few people have gotten much, much richer. Median income, depending on which definition you use, has either risen modestly or actually declined.

You might think that median income would be a straightforward thing to calculate: find the American richer than half the population but poorer than the other half, and measure his or her income. In fact, however, there are two areas of dispute, not easily resolved: how to define the relevant population, and how to measure changes in the cost of living. Before we get to those complications, however, let me repeat the punch line: The fact that we’re even arguing about whether the typical American has gotten ahead tells you most of what you need to know. In 1973 there wasn’t a debate about whether typical Americans were better or worse off than they had been in the 1940s. Every measure showed that living standards had more or less doubled since the end of World War II. Nobody was nostalgic for the jobs and wages of a generation earlier. Today the American economy as a whole is clearly much richer than it was in 1973, the year generally taken to mark the end of the postwar boom, but economists are arguing about whether typical Americans have benefited at all from the gains of the nation as a whole.

Now for the complications: It turns out that we can’t just line up all 300 million people in America in order of income and calculate the income of American number 150,000,000. After all, children don’t belong in the lineup, because they only have income to the extent that the households they live in do. So perhaps we should be looking at households rather than individuals. If we do that we find that median household income, adjusted for inflation, grew modestly from 1973 to 2005, the most recent year for which we have data: The total gain was about 16 percent.

Even this modest gain may, however, overstate how well American families were doing, because it was achieved in part through longer working hours. In 1973 many wives still didn’t work outside the home, and many who did worked only part-time. I don’t mean to imply that there’s something wrong with more women working, but a gain in family income that occurs because a spouse goes to work isn’t the same thing as a wage increase. In particular it may carry hidden costs that offset some of the gains in money income, such as reduced time to spend on housework, greater dependence on prepared food, day-care expenses, and so on.

We get a considerably more pessimistic take on the data if we ask how easy it is for American families today to live the way many of them did a generation ago, with a single male breadwinner. According to the available data, it has gotten harder: The median inflation-adjusted earnings of men working full-time in 2005 were slightly lower than they had been in 1973. And even that statistic is deceptively favorable. Thanks to the maturing of the baby boomers, today’s work force is older and more experienced than the work force of 1973—and more experienced workers should, other things being equal, command higher wages. If we look at the earnings of men aged thirty-five to forty-four—men who would, a generation ago, often have been supporting stay-at-home wives—we find that inflation-adjusted wages were 12 percent higher in 1973 than they are now.

Controversies over defining the relevant population are only part of the reason economists manage to argue about whether typical Americans have gotten ahead since 1973. There is also a different set of questions, involving the measurement of prices. I keep referring to “inflation-adjusted” income—which means that income a generation ago is converted into today’s dollars by adjusting for changes in the consumer price index. Now some economists argue that the CPI overstates true inflation, because it doesn’t fully take account of new products and services that have improved our lives. As a result, they say, the standard of living has risen more than the official numbers suggest. Call it the “but they didn’t have Netflix” argument. Seriously, there are many goods and services available today that either hadn’t been invented or weren’t on the market in 1973, from cell phones to the Internet. Most important, surely, are drugs and medical techniques that not only save lives but improve the quality of life for tens of millions of people. On the other hand, in some ways life has gotten harder for working families in ways the official numbers don’t capture: there’s more intense competition to live in a good school district, traffic congestion is worse, and so on.

Maybe the last word should be given to the public. According to a 2006 survey taken by the Pew Research Center, most working Americans believe that the average worker “has to work harder to earn a decent living” today than he did twenty or thirty years earlier.[1] Is this just nostalgia for a remembered golden age? Maybe, but there was no such nostalgia a generation ago about the way America was a generation before that. The point is that the typical American family hasn’t made clear progress in the last thirtysomething years. And that’s not normal.

Winners and Losers

As I’ve suggested with my Bill-Gates-in-a-bar analogy, ordinary American workers have failed to reap the gains from rising productivity because of rising inequality. But who were the winners and losers from this upward redistribution of income? It wasn’t just Bill Gates—but it was a surprisingly narrow group.

If gains in productivity had been evenly shared across the workforce, the typical worker’s income would be about 35 percent higher now than it was in the early seventies.[2] But the upward redistribution of income meant that the typical worker saw a far smaller gain. Indeed, everyone below roughly the 90th percentile of the wage distribution—the bottom of the top 10 percent—saw his or her income grow more slowly than average, while only those above the 90th percentile saw above-average gains. So the limited gains of the typical American worker were the flip side of above-average gains for the top 10 percent.

And the really big gains went to the really, really rich. In Oliver Stone’s 1987 movie Wall Street, Gordon Gekko—the corporate raider played by Michael Douglas and modeled in part on Ivan Boesky—mocks the limited ambition of his protégé, played by Charlie Sheen. “Do you want to be just another $400,000 a year working Wall Street stiff, flying first class and being comfortable?”

At the time an income of $400,000 a year would have put someone at about the 99.9th percentile of the wage distribution—pretty good, you might think. But as Stone realized, by the late 1980s something astonishing was happening in the upper reaches of the income distribution: The rich were pulling away from the merely affluent, and the super-rich were pulling away from the merely rich. People in the bottom half of the top 10 percent, corresponding roughly to incomes in the $100,000 to $150,000 range, though they did better than Americans further down the scale, didn’t do all that well—in fact, in the period after 1973 they didn’t gain nearly as much, in percentage terms, as they did during the postwar boom. Only the top 1 percent has done better since the 1970s than it did in the generation after World War II. Once you get way up the scale, however, the gains have been spectacular—the top tenth of a percent saw its income rise fivefold, and the top 0.01 percent of Americans are seven times richer than they were in 1973.

Who are these people, and why are they doing so much better than everyone else? In the original Gilded Age, people with very high incomes generally received those incomes due to the assets they owned: The economic elite owned valuable land and mineral resources or highly profitable companies. Even now capital income—income from assets such as stocks, bonds, and property—is much more concentrated in the hands of a few than earned income. So is “entrepreneurial income”—income from ownership of companies. But ownership is no longer the main source of elite status. These days even multimillionaires get most of their income in the form of paid compensation for their labor.

Needless to say we’re not talking about wage slaves toiling for an hourly rate. If the quintessential high-income American circa 1905 was an industrial baron who owned factories, his counterpart a hundred years later is a top executive, lavishly rewarded for his labors with bonuses and stock options. Even at the very top—among the highest-income 0.01 percent of the population, the richest one in ten thousand—almost half of income comes in the form of compensation. A rough estimate is that about half of the wage income of this superelite comes from the earnings of top executives—not just CEOs but those a few ranks below—at major companies. Much of the rest of the wage income of the top 0.01 percent appears to represent the incomes of sports and entertainment celebrities.

So a large part of the overall increase in inequality is, in a direct sense, the result of a change in the way society pays its allegedly best and brightest. They were always paid well, but now they’re paid incredibly well.

The question, of course, is what caused that to happen. Broadly speaking there are two competing explanations for the great divergence in incomes that has taken place since the 1970s. The first explanation, favored by people who want to sound reasonable and judicious, is that a rising demand for skill, due mainly to technological change with an assist from globalization, is responsible. The alternative explanation stresses changes in institutions, norms, and political power.

The Demand for Skill

The standard explanation of rising inequality—I’m tempted to call it the safe explanation, since it’s favored by people who don’t want to make waves—says that rising inequality is mainly caused by a rising demand for skilled labor, which in turn is driven largely by technological change. For example, Edward Lazear, chairman of the Council of Economic Advisers in 2006, had this to say:

Most of the inequality reflects an increase in returns to “investing in skills”—workers completing more school, getting more training, and acquiring new capabilities…. What accounts for this divergence of earnings for the skilled and earnings for the unskilled? Most economists believe that fundamentally this is traceable to technological change that has occurred over the past two or three decades. In our technologically-advanced society, skill has higher value than it does in a less technologically-advanced society…with the growing importance of computers, the types of skills that are required in school and through investment in learning on the job become almost essential in making a worker productive. The typical job that individuals do today requires a much higher level of technical skills than the kinds of jobs that workers did in 1900 or in 1970.[3]

To enlarge on Lazear’s remarks: Information technology, in the form of personal computers, cell phones, local area networks, the Internet, and so on, increases the demand for people with enough formal training to build, program, operate, and repair the new gadgets. At the same time it reduces the need for workers who do routine tasks. For example, there are far fewer secretaries in modern offices than there were in 1970, because word processing has largely eliminated the need for typists, and networks have greatly reduced the need for physical filing and retrieval; but there are as many managers as ever. Bar-code scanners tied to local networks have reduced the number of people needed to man cash registers and the number required to keep track of inventory, but there are more marketing consultants than ever. And so on throughout the economy.

The hypothesis that technological change, by raising the demand for skill, has led to growing inequality is so widespread that at conferences economists often use the abbreviation SBTC—skill-biased technical change—without explanation, assuming that their listeners know what they’re talking about. It’s an appealing hypothesis for three main reasons. First, the timing works: The upward trend in inequality began about the same time that computing power and its applications began their great explosion. True, mainframe computers—large machines that sat in a room by themselves, crunching payrolls and other business data—were in widespread use in the sixties. But they had little impact on how most workers did their jobs. Modern information technology didn’t come into its own until Intel introduced the first microprocessor—a complete computer processor on a single chip—in 1971. Only then could the technology become pervasive. Second, SBTC is the kind of hypothesis economists feel comfortable with: it’s just supply and demand, with no need to bring in the kinds of things sociologists talk about but economists find hard to incorporate in their models, things like institutions, norms, and political power. Finally, SBTC says that the rise in inequality isn’t anybody’s fault: It’s just technology, working through the invisible hand.

That said, there’s remarkably little direct evidence for the proposition that technological change has caused rising inequality. The truth is that there’s no easy way to measure the effect of technology on markets; on this issue and others, economists mainly invoke technology to explain things they can’t explain by other measurable forces. The procedure goes something like this: First, assume that rising inequality is caused by technology, growing international trade, and immigration. Then, estimate the effects of trade and immigration—a tendentious procedure in itself, but we do at least have data on the volume of imports and the number of immigrants. Finally, attribute whatever isn’t explained by these measurable factors to technology. That is, economists who assert that technological change is the main cause of rising inequality arrive at that conclusion by a process of exclusion: They’ve concluded that trade and immigration aren’t big enough to explain what has happened, so technology must be the culprit.
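To see how that accounting works, here is a toy sketch of the exclusion procedure in Python. Every number in it is hypothetical, chosen only to show the logic, not taken from any actual study.

```python
# Toy illustration of the "attribution by exclusion" procedure.
# All numbers are hypothetical percentage-point changes in the skill
# premium; they are not estimates from any actual study.

observed_rise_in_skill_premium = 20.0

# Effects that can (contentiously) be estimated from measurable data:
estimated_effect_of_trade = 3.0        # from import volumes
estimated_effect_of_immigration = 2.0  # from immigrant inflows

# Whatever is left unexplained gets labeled "skill-biased technical change".
residual_attributed_to_technology = (
    observed_rise_in_skill_premium
    - estimated_effect_of_trade
    - estimated_effect_of_immigration
)

print(residual_attributed_to_technology)  # 15.0 -> credited to technology
```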

As I’ve just suggested, the main factors economists have considered as alternative explanations for rising inequality are immigration and international trade, both of which should, in principle, also have acted to raise the wages of the skilled while reducing those of less-skilled Americans.

Immigration is, of course, a very hot political issue in its own right. In 1970, almost half a century after the Immigration Act of 1924 closed the door on mass immigration from low-wage countries, less than 5 percent of U.S. adults were foreign born. But for reasons that remain somewhat unclear,* immigration began to pick up in the late 1960s, and soared after 1980. Today, immigrants make up about 15 percent of the workforce. In itself this should have exerted some depressing effect on overall wages: there are considerably more workers competing for U.S. jobs than there would have been without immigration.

Furthermore, a majority of immigrants over the past generation have come from Latin America, and many of the rest from other Third World countries; this means that immigrants, both legal and illegal, are on average considerably less educated than are native-born workers. A third of immigrants have the equivalent of less than a high-school diploma. As a result the arrival of large numbers of immigrants has made less-educated labor more abundant in the United States, while making highly educated workers relatively scarcer. Supply and demand then predicts that immigration should have depressed the wages of less-skilled workers, while raising those of highly skilled workers.

The effects, however, are at most medium-size. Even the most pessimistic mainstream estimates, by George Borjas and Larry Katz of Harvard, suggest that immigration has reduced the wages of high-school dropouts by about 5 percent, with a much smaller effect on workers with a high school degree, and a small positive effect on highly educated workers. Moreover, other economists think the Borjas-Katz numbers are too high.

In chapter 8 I’ll argue that immigration may have promoted inequality in a more indirect way, by shifting the balance of political power up the economic scale. But the direct economic effect has been modest.

What about international trade? Much international trade probably has little or no effect on the distribution of income. For example, trade in automobiles and parts between the United States and Canada—two high-wage countries occupying different niches of the same industry, shipping each other goods produced with roughly the same mix of skilled and unskilled labor—isn’t likely to have much effect on wage inequality in either country. But U.S. trade with, say, Bangladesh is a different story. Bangladesh mainly exports clothing—the classic labor-intensive good, produced by workers who need little formal education and no more capital equipment than a sewing machine. In return Bangladesh buys sophisticated goods—airplanes, chemicals, computers.

There’s no question that U.S. trade with Bangladesh and other Third World countries, including China, widens inequality. Suppose that you buy a pair of pants made in Bangladesh that could have been made domestically. By buying the foreign pants you are in effect forcing the workers who would have been employed producing a made-in-America pair of pants to look for other jobs. Of course the converse is also true when the United States exports something: When Bangladesh buys a Boeing airplane, the American workers employed in producing that plane don’t have to look for other jobs. But the labor embodied in U.S. exports is very different from the labor employed in U.S. industries that compete with imports. We tend to export “skill-intensive” products like aircraft, supercomputers, and Hollywood movies; we tend to import “labor-intensive” goods like pants and toys. So U.S. trade with Third World countries reduces job opportunities for less-skilled American workers, while increasing demand for more-skilled workers. There’s no question that this widens the wage gap between the less-skilled and the more-skilled, contributing to increased inequality. And the rapid growth of trade with low-wage countries, especially Mexico and China, suggests that this effect has been increasing over the past fifteen years.

What’s really important to understand, however, is that skill-biased technological change, immigration, and growing international trade are, at best, explanations of a rising gap between less-educated and more-educated workers. And despite the claims of Lazear and many others, that’s only part of the tale of rising inequality. It’s true that the payoff to education has risen—but even the college educated have for the most part seen their wage gains lag behind rising productivity. For example, the median college-educated man has seen his real income rise only 17 percent since 1973.

That’s because the big gains in income have gone not to a broad group of well-paid workers but to a narrow group of extremely well-paid people. In general those who receive enormous incomes are also well educated, but their gains aren’t representative of the gains of educated workers as a whole. CEOs and schoolteachers both typically have master’s degrees, but schoolteachers have seen only modest gains since 1973, while CEOs have seen their income rise from about thirty times that of the average worker in 1970 to more than three hundred times as much today.

The observation that even highly educated Americans have, for the most part, seen their incomes fall behind the average, while a handful of people have done incredibly well, undercuts the case for skill-biased technological change as an explanation of inequality. It supports, instead, the argument that rising inequality is largely due to changes in institutions, such as the strength of labor unions, and in norms, such as the once powerful but now weak belief that having the boss make vastly more than the workers is bad for morale.

Institutions: The End of the Treaty of Detroit

The idea that changes in institutions and changes in norms, rather than anonymous skill-biased technical change, explain rising inequality has been gaining support among economists, for two main reasons. First, an institutions-and-norms explanation of rising inequality today links current events to the dramatic fall in inequality—the Great Compression—that took place in the 1930s and 1940s. Second, an institutions-and-norms story helps explain American exceptionalism: No other advanced country has seen the same kind of surge in inequality that has taken place here.

The Great Compression in itself—or more accurately, its persistence—makes a good case for the crucial role of social forces as opposed to the invisible hand in determining income distribution. As I discussed in chapter 3, the middle-class America baby boomers grew up in didn’t evolve gradually. It was constructed in a very short period by New Deal legislation, union activity, and wage controls during World War II. Yet the relatively flat income distribution imposed during the war lasted for decades after wartime control of the economy ended. This persistence makes a strong case that anonymous market forces are less decisive than Economics 101 teaches. As Piketty and Saez put it:

The compression of wages during the war can be explained by the wage controls of the war economy, but how can we explain the fact that high wage earners did not recover after the wage controls were removed? This evidence cannot be immediately reconciled with explanations of the reduction of inequality based solely on technical change…. We think that this pattern of evolution of inequality is additional indirect evidence that nonmarket mechanisms such as labor market institutions and social norms regarding inequality may play a role in setting compensation.[4]

The MIT economists Frank Levy and Peter Temin have led the way in explaining how those “labor market institutions and social norms” worked.[5] They point to a set of institutional arrangements they call the Treaty of Detroit—the name given by Fortune magazine to a landmark 1950 bargain struck between the United Auto Workers and General Motors. Under that agreement, UAW members were guaranteed wages that rose with productivity, as well as health and retirement benefits; what GM got in return was labor peace.

Levy and Temin appropriate the term to refer not only to the formal arrangement between the auto companies and their workers but also to the way that arrangement was emulated throughout the U.S. economy. Other unions based their bargaining demands on the standard set by the UAW, leading to the spread of wage-and-benefit packages that, while usually not as plush as what Walter Reuther managed to get, ensured that workers shared in the fruits of progress. And even nonunion workers were strongly affected, because the threat of union activity often led nonunionized employers to offer their workers more or less what their unionized counterparts were getting: The economy of the fifties and sixties was characterized by “pattern wages,” in which wage settlements of major unions and corporations established norms for the economy as a whole.

At the same time the existence of powerful unions acted as a restraint on the incomes of both management and stockholders. Top executives knew that if they paid themselves huge salaries, they would be inviting trouble with their workers; similarly corporations that made high profits while failing to raise wages were putting labor relations at risk.

The federal government was also an informal party to the Treaty of Detroit: It intervened, in various ways, to support workers’ bargaining positions and restrain perceived excess at the top. Workers’ productivity was substantially lower in the 1960s than it is today, but the minimum wage, adjusted for inflation, was considerably higher. Labor laws were interpreted and enforced in a way that favored unions. And there was often direct political pressure on large companies and top executives who were seen as stepping over the line. John F. Kennedy famously demanded that steel companies, which had just negotiated a modest wage settlement, rescind a price increase.

To see how different labor relations were under the Treaty of Detroit from their state today, compare two iconic corporations, one of the past, one of the present.

In the final years of the postwar boom General Motors was America’s largest private employer aside from the regulated telephone monopoly. Its CEO was, correspondingly, among America’s highest paid executives: James Roche’s 1969 salary was $795,000, about $4.3 million in today’s dollars—and that salary excited considerable comment. But ordinary GM workers were also paid well. In 1969 auto industry production workers earned on average almost $9,000, the equivalent of more than $40,000 today. GM workers, who also received excellent health and retirement benefits, were considered solidly in the middle class.

Today Wal-Mart is America’s largest corporation, with 800,000 employees. In 2005 Lee Scott, its CEO, was paid almost $23 million. That’s more than five times Roche’s inflation-adjusted salary, but Mr. Scott’s compensation excited relatively little comment, since it wasn’t exceptional for the CEO of a large corporation these days. The wages paid to Wal-Mart’s workers, on the other hand, do attract attention, because they are low even by current standards. On average Wal-Mart’s nonsupervisory employees are paid about $18,000 a year, less than half what GM workers were paid thirty-five years ago, adjusted for inflation. Wal-Mart is also notorious both for the low percentage of its workers who receive health benefits and for the stinginess of those scarce benefits.[6]
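The comparison rests on nothing more exotic than adjusting for inflation with the consumer price index. Here is a rough sketch of the arithmetic in Python; the index values are approximate assumptions (CPI-U, 1982-84 = 100), not figures from the text.

```python
# Rough CPI deflation behind the GM-versus-Wal-Mart comparison.
# The index values below are approximate assumptions (CPI-U, 1982-84 = 100),
# not figures taken from the text.
CPI_1969 = 36.7
CPI_MID_2000S = 200.0

def to_mid_2000s_dollars(amount_1969: float) -> float:
    """Convert a 1969 dollar amount into mid-2000s dollars."""
    return amount_1969 * CPI_MID_2000S / CPI_1969

gm_ceo_salary_1969 = 795_000
gm_worker_pay_1969 = 9_000
walmart_ceo_pay_2005 = 23_000_000
walmart_worker_pay_2005 = 18_000

print(round(to_mid_2000s_dollars(gm_ceo_salary_1969)))  # roughly 4.3 million
print(round(to_mid_2000s_dollars(gm_worker_pay_1969)))  # roughly 49,000

# Ratios quoted in the text, reproduced approximately:
print(walmart_ceo_pay_2005 / to_mid_2000s_dollars(gm_ceo_salary_1969))
# about 5.3: today's CEO makes more than five times as much
print(walmart_worker_pay_2005 / to_mid_2000s_dollars(gm_worker_pay_1969))
# about 0.37: today's worker makes less than half as much
```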

What Piketty and Saez, Levy and Temin, and a growing number of other economists argue is that the contrast between GM then and Wal-Mart now is representative of what has happened in the economy at large—that in the 1970s and after, the Treaty of Detroit was rescinded, the institutions and norms that had limited inequality after World War II went away, and inequality surged back to Gilded Age levels. In other words, the great divergence of incomes since the seventies is basically the Great Compression in reverse. In the 1930s and 1940s institutions were created and norms established that limited inequality; starting in the 1970s those institutions and norms were torn down, leading to rising inequality. The institutions-and-norms explanation integrates the rise and fall of middle-class America into a single story.

The institutions-and-norms explanation also correctly predicts how trends in inequality should differ among countries. Bear in mind that the forces of technological change and globalization have affected every advanced country: Europe has applied information technology almost as rapidly as we have, and cheap clothing in Europe is just as likely to be made in China as is cheap clothing in America. If technology and globalization are the driving forces behind rising inequality, then Europe should be experiencing the same rise in inequality as the United States. In terms of institutions and norms, however, things are very different among advanced nations: In Europe, for example, unions remain strong, and old norms condemning very high pay and emphasizing the entitlements of workers haven’t faded away. So if institutions are the story, we’d expect the U.S. experience of rising inequality to be exceptional, not echoed in Europe.

And on that comparison, an institutions-and-norms explanation wins: America is unique. The clearest evidence comes from income tax data, which allow a comparison of the share of income accruing to the economic elite. These data show that during World War II and its aftermath all advanced countries experienced a Great Compression, a sharp drop in inequality. In the United States this leveling was reversed beginning in the 1970s, and the effects of the Great Compression have now been fully eliminated. In Canada, which is closely linked to the U.S. economy, and in Britain, which had its own period of conservative dominance under Margaret Thatcher, there has been a more limited trend toward renewed inequality. But in Japan and France there has been very little change in inequality since 1980.[7]

There’s also spottier and less consistent information from surveys of household incomes. The picture there is fuzzier, but again the United States and, to a lesser extent, Britain stand out as countries where inequality sharply increased, while other advanced countries experienced either minor increases or no change at all.[8]

There is, in short, a strong circumstantial case for believing that institutions and norms, rather than technology or globalization, are the big sources of rising inequality in the United States. The obvious example of changing institutions is the collapse of the U.S. union movement. But what do I mean when I talk about changing norms?

Norms and Inequality: The Case of the Runaway CEOs

When economists talk about how changing norms have led to rising inequality, they often have one concrete example in mind: the runaway growth of executive pay. Although executives at major corporations aren’t the only big winners from rising inequality, their visibility makes them a good illustration of what is happening more broadly throughout the economy.

According to a Federal Reserve study, in the 1970s the chief executives at 102 major companies (those that were in the top 50 as measured by sales at some point over the period 1940–1990) were paid on average about $1.2 million in today’s dollars. That wasn’t hardship pay, to say the least. But it was only a bit more than CEOs were paid in the 1930s, and “only” 40 times what the average full-time worker in the U.S. economy as a whole was paid at the time. By the early years of this decade, however, CEO pay averaged more than $9 million a year, 367 times the pay of the average worker. Other top executives also saw huge increases in pay, though not as large as that of CEOs: The next two highest officers in major companies made 31 times the average worker’s salary in the seventies, but 169 times as much by the early 2000s.[9]

To make some sense of this extraordinary development, let’s start with an idealized story about the determinants of executive pay.[10] Imagine that the profitability of each company depends on the quality of its CEO, and that the bigger the company, the larger the CEO’s impact on profit. Imagine also that the quality of potential CEOs is observable: everyone knows who is the 100th best executive in America, the 99th best, and so on. In that case, there will be a competition for executives that ends up assigning the best executives to the biggest companies, where their contribution matters the most. And as a result of that competition, each executive’s pay will reflect his or her quality.

An immediate implication of this story is that at the top, even small differences in perceived executive quality will translate into big differences in salaries. The reason is competition: For a giant company the difference in profitability between having the 10th best executive and the 11th best executive may easily be tens of millions of dollars each year. In that sense the idealized model suggests that top executives might be earning their pay. And the idealized model also says that if executives are paid far more today than they were a generation ago, it must be because for some reason—more intense competition, higher stock prices, whatever—having the best man running a company matters more than it used to.
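Here is a minimal sketch of that idealized story in Python, with entirely made-up company sizes and executive-quality numbers. It illustrates the key implication: once the best executives are matched with the biggest companies, even a tiny quality edge is worth a great deal at a giant firm.

```python
# Toy version of the idealized story: the best executives are matched to
# the biggest companies, and a tiny quality edge is worth a lot at scale.
# Company sizes and quality numbers are entirely made up.

company_revenues = [50e9, 20e9, 10e9, 5e9, 2e9]          # dollars per year
executive_quality = [0.060, 0.059, 0.058, 0.057, 0.056]  # profit margin each
                                                          # executive delivers

# Assortative matching: pair the biggest company with the best executive.
pairs = zip(sorted(company_revenues, reverse=True),
            sorted(executive_quality, reverse=True))

for revenue, quality in pairs:
    profit = quality * revenue  # profit under this executive
    edge = 0.001 * revenue      # extra profit vs. the next-best candidate,
                                # whose quality is assumed 0.001 lower
    print(f"${revenue / 1e9:.0f}B company: profit ${profit / 1e9:.2f}B, "
          f"a 0.001 quality edge is worth ${edge / 1e6:.0f}M a year")
```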

But once we relax the idealized premises of the story, it’s not hard to see why executive pay is a lot less tied down by fundamental forces of supply and demand, and a lot more subject to changes in social norms and political power, than this story implies.

First, neither the quality of executives nor the extent to which that quality matters is a hard number. Assessing the productivity of corporate leaders isn’t like measuring how many bricks a worker can lay in an hour. You can’t even reliably evaluate managers by looking at the profitability of the companies they run, because profits depend on a lot of factors outside the chief executive’s control. Moreover profitability can, for extended periods, be in the eye of the beholder: Enron looked like a fabulously successful company to most of the world; Toll Brothers, the McMansion king, looked like a great success as long as the housing bubble was still inflating. So the question of how much to pay a top executive has a strong element of subjectivity, even fashion, to it. In the fifties and sixties big companies didn’t think it was important to have a famous, charismatic leader: CEOs rarely made the covers of business magazines, and companies tended to promote from within, stressing the virtues of being a team player. By contrast, in the eighties and thereafter CEOs became rock stars—they defined their companies as much as their companies defined them. Are corporate boards wiser now than they were when they chose solid insiders to run companies, or have they just been caught up in the culture of celebrity?

Second, even to the extent that corporate boards correctly judge both the quality of executives and the extent to which quality matters for profitability, the actual amount they end up paying their top executives depends a lot on what other companies do. Thus, in the corporate world of the 1960s and 1970s, companies rarely paid eye-popping salaries to perceived management superstars. In fact companies tended to see huge paychecks at the top as a possible source of reduced team spirit, as well as a potential source of labor problems. In that environment even a corporate board that did believe that hiring star executives was the way to go didn’t have to offer exorbitant pay to attract those stars. But today executive pay in the millions or tens of millions is the norm. And even corporate boards that aren’t smitten with the notion of superstar leadership end up paying high salaries, partly to attract executives whom they consider adequate, partly because the financial markets will be suspicious of a company whose CEO isn’t lavishly paid.

Finally, to the extent that there is a market for corporate talent, who, exactly, are the buyers? Who determines how good a CEO is, and how much he has to be paid to keep another company from poaching his entrepreneurial know-how? The answer, of course, is that corporate boards, largely selected by the CEO, hire compensation experts, almost always chosen by the CEO, to determine how much the CEO is worth. It is, shall we say, a situation conducive to overstatement both of the executive’s personal qualities and of how much those supposed personal qualities matter for the company’s bottom line.

What all this suggests is that incomes at the top—the paychecks of top executives and, by analogy, the incomes of many other income superstars—may depend a lot on “soft” factors such as social attitudes and the political background. Perhaps the strongest statement of this view comes from Lucian Bebchuk and Jesse Fried, authors of the 2004 book Pay Without Performance. Bebchuk and Fried argue that top executives in effect set their own paychecks, that neither the quality of the executives nor the marketplace for talent has any real bearing. The only thing that limits executive pay, they argue, is the “outrage constraint”: the concern that very high executive compensation will create a backlash from usually quiescent shareholders, workers, politicians, or the general public.[11]

To the extent that this view is correct, soaring incomes at the top can be seen as a social and political, rather than narrowly economic phenomenon: high incomes shot up not because of an increased demand for talent but because a variety of factors caused the death of outrage. News organizations that might once have condemned lavishly paid executives lauded their business genius instead; politicians who might once have led populist denunciations of corporate fat cats sought to flatter the people who provide campaign contributions; unions that might once have walked out to protest giant executive bonuses had been crushed by years of union busting. Oh, and one more thing. Because the top marginal tax rate has declined from 70 percent in the early 1970s to 35 percent today, there’s more incentive for a top executive to take advantage of his position: He gets to keep much more of his excess pay. And the result is an explosion of income inequality at the top of the scale.

The idea that rising pay at the top of the scale mainly reflects social and political change, rather than the invisible hand of the market, strikes some people as implausible—too much at odds with Economics 101. But it’s an idea that has some surprising supporters: Some of the most ardent defenders of the way modern executives are paid say almost the same thing.

Before I get to those defenders, let me give you a few words from someone who listened to what they said. From Gordon Gekko’s famous speech to the shareholders of Teldar Paper in the movie Wall Street:

Now, in the days of the free market, when our country was a top industrial power, there was accountability to the stockholder. The Carnegies, the Mellons, the men that built this great industrial empire, made sure of it because it was their money at stake. Today, management has no stake in the company!…The point is, ladies and gentlemen, that greed—for lack of a better word—is good. Greed is right. Greed works.

What those who watch the movie today may not realize is that the words Oliver Stone put in Gordon Gekko’s mouth were strikingly similar to what the leading theorists on executive pay were saying at the time. In 1990 Michael Jensen of the Harvard Business School and Kevin Murphy of the University of Rochester published an article in the Harvard Business Review, summarizing their already influential views on executive pay. The trouble with American business, they declared, is that “the compensation of top executives is virtually independent of performance. On average corporate America pays its most important leaders like bureaucrats. Is it any wonder then that so many CEOs act like bureaucrats rather than the value-maximizing entrepreneurs companies need to enhance their standing in world markets?” In other words, greed is good.[12]

Why, then, weren’t companies linking pay to performance? Because of social and political pressure:

Why don’t boards of directors link pay more closely to performance? Commentators offer many explanations, but nearly every analysis we’ve seen overlooks one powerful ingredient—the costs imposed by making executive salaries public. Government disclosure rules ensure that executive pay remains a visible and controversial topic. The benefits of disclosure are obvious; it provides safeguards against “looting” by managers in collusion with “captive” directors.

The costs of disclosure are less well appreciated but may well exceed the benefits. Managerial labor contracts are not a private matter between employers and employees. Third parties play an important role in the contracting process, and strong political forces operate inside and outside companies to shape executive pay. Moreover, authority over compensation decisions rests not with the shareholders but with compensation committees generally composed of outside directors. These committees are elected by shareholders but are not perfect agents for them. Public disclosure of “what the boss makes” gives ammunition to outside constituencies with their own special-interest agendas. Compensation committees typically react to the agitation over pay levels by capping—explicitly or implicitly—the amount of money the CEO earns.[13]

In other words Jensen and Murphy, writing at a time when executive pay was still low by today’s standards, believed that social norms, in the form of the outrage constraint, were holding executive paychecks down. Of course they saw this as a bad thing, not a good thing. They dismissed concerns about executive self-dealing, placing “looting” and “captive” in scare quotes. But their implicit explanation of trends in executive pay was the same as that of critics of high pay. Executive pay, they pointed out, had actually fallen in real terms between the late 1930s and the early 1980s, even as companies grew much bigger. The reason, they asserted, was public pressure. So they were arguing that social and political considerations, not narrowly economic forces, led to the sharp reduction in income differences between workers and bosses in the postwar era.

Today the idea that huge paychecks are part of a beneficial system in which executives are given an incentive to perform well has become something of a sick joke. A 2001 article in Fortune, “The Great CEO Pay Heist,”[14] encapsulated the cynicism: “You might have expected it to go like this: The stock isn’t moving, so the CEO shouldn’t be rewarded. But it was actually the opposite: The stock isn’t moving, so we’ve got to find some other basis for rewarding the CEO.” And the article quoted a somewhat repentant Michael Jensen: “I’ve generally worried these guys weren’t getting paid enough. But now even I’m troubled.”[15] But no matter: The doctrine that greed is good did its work, by helping to change social and political norms. Paychecks that would have made front-page news and created a furor a generation ago hardly rate mention today.

Not surprisingly, executive pay in European countries—which haven’t experienced the same change in norms and institutions—has lagged far behind. The CEO of BP, based in the United Kingdom, is paid less than half as much as the CEO of Chevron, a company half BP’s size, but based in America. As a European pay consultant put it, “There is no shame factor in the U.S. In Europe, there is more of a concern about the social impact.”[16]

To be fair, CEOs aren’t the only members of the economic elite who have seen their incomes soar since the 1970s. Some economists have long argued that certain kinds of technological change, such as the rise of the mass media, may be producing large salary gaps between people who seem, on the surface, to have similar qualifications.[17] Indeed the rise of the mass media may help explain why celebrities of various types make so much more than they used to. And it’s possible to argue that in a vague way technology may help explain why income gaps have widened among lawyers and other professionals: Maybe fax machines and the Internet let the top guns take on more of the work requiring that extra something, while less talented professionals are left with the routine drudge work. Still, the example of CEO pay shows how changes in institutions and norms can lead to rising inequality—and as we’ve already seen, international comparisons suggest that institutions, not technology, are at the heart of the changes over the past thirty years.

The Reason Why

Since the 1970s norms and institutions in the United States have changed in ways that either encouraged or permitted sharply higher inequality. Where, however, did the change in norms and institutions come from? The answer appears to be politics.

Consider, for example, the fate of the unions. Unions were once an important factor limiting income inequality, both because of their direct effect in raising their members’ wages and because the union pattern of wage settlements—which consistently raised the wages of less-well-paid workers more—was, in the fifties and sixties, reflected in the labor market as a whole. The decline of the unions has removed that moderating influence. But why did unions decline?

The conventional answer is that the decline of unions is a result of the changing structure of the workforce. According to this view, the American economy used to be dominated by manufacturing, which was also where the most powerful unions were—think of the UAW and the Steelworkers. Now we’re mostly a service economy, partly because of technological change, partly because we’re importing so many manufactured goods. Surely, then, deindustrialization must explain the decline of unions.

Except that it turns out that it doesn’t. Manufacturing has declined in importance, but most of the decline in union membership comes from a collapse of unionization within manufacturing, from 39 percent of workers in 1973 to 13 percent in 2005. Also, there’s no economic law saying that unionization has to be restricted to manufacturing. On the contrary, a company like Wal-Mart, which doesn’t face foreign competition, should be an even better target for unionization than are manufacturing companies. Think how that would change the shape of the U.S. economy: If Wal-Mart employees were part of a union that could demand higher wages and better benefits, retail prices might be slightly higher, but the retail giant wouldn’t go out of business—and the American middle class would have several hundred thousand additional members. Imagine extending that story to other retail giants, or better yet to the service sector as a whole, and you can get a sense of how the Great Compression happened under FDR.

Why, then, isn’t Wal-Mart unionized? Why, in general, did the union movement lose ground in manufacturing while failing to gain members in the rising service industries? The answer is simple and brutal: Business interests, which seemed to have reached an accommodation with the labor movement in the 1960s, went on the offensive against unions beginning in the 1970s. And we’re not talking about gentle persuasion, we’re talking about hardball tactics, often including the illegal firing of workers who tried to organize or supported union activity. During the late seventies and early eighties at least one in every twenty workers who voted for a union was illegally fired; some estimates put the number as high as one in eight.

The collapse of the U.S. union movement that took place beginning in the 1970s has no counterpart in any other Western nation. Table 5 shows a stark comparison between the United States and Canada. In the 1960s the U.S. workforce was, if anything, slightly more unionized than Canada’s workforce. By the end of the 1990s, however, U.S. unions had been all but driven out of the private sector, while Canada’s union movement was essentially intact. The difference, of course, was politics: America’s political climate turned favorable to union busting, while Canada’s didn’t.

I described in chapter 6 the centrality of antiunionism to Barry Goldwater’s rise, and the way opposition to unions played a key role in the consolidation of movement conservatism’s business base. By the second half of the 1970s, movement conservatives had enough political clout that businesses felt empowered to take on unions.

Table 5. Percentage of Unionized Wage and Salary Workers

Year   United States   Canada
1960   30.4            32.3
1999   13.5            32.6

Source: David Card, Thomas Lemieux, and W. Craig Riddell, Unionization and Wage Inequality: A Comparative Study of the U.S., the U.K., and Canada (National Bureau of Economic Research working paper no. 9473, Jan. 2003).

And once Ronald Reagan took office the campaign against unions was aided and abetted by political support at the highest levels. In particular, Reagan’s suppression of the air traffic controllers’ union was the signal for a broad assault on unions throughout the economy. The rollback of unions, which were once a powerful constraint on inequality, was political in the broadest sense. It was an exercise in the use of power, both within the government and in our society at large.

To understand the Great Divergence, then, we need to understand how it was that movement conservatism became such a powerful factor in American political life.
