Part II TRENDS AND ISSUES

Chapter 7 Historical Trends

Part I analyzed some more or less enduring features of various social processes, and their implications for the coordination of fragmented individual knowledge. Part II will analyze some of the historic changes which have occurred, and are occurring, in such processes — and the long-run implications of such changes. The next three chapters will deal with historic trends in specific economic, legal, and political processes. They will center on the American experience, for purposes of keeping the discussion specific and manageable, but many of these trends have been common in Western civilization and beyond, and some have in fact gone further in various other countries than in the United States. This chapter will briefly sketch a broader background picture of trends in social institutions and processes over the past century.

The twentieth century has brought so many changes across the face of the earth — in science, culture, demography, living standards, devastation — that it is difficult to disentangle purely institutional changes from this tapestry of human events. Indeed, it is impossible to fully do so, for at least one of the great world wars of this century grew out of a particular brand of totalitarian institution and its drive to conquer “today Germany, tomorrow the world.” In addition to the carnage of war, the twentieth century has seen the unprecedented horror of deliberate slaughter of millions of unarmed human beings because of their categorical classification: Jews, Kulaks, Ibos, etc. These events too have been intertwined with institutional change.

In terms of general trends in the social application of knowledge, there are a number of ways in which decision making has tended to gravitate away from those most immediately affected and toward institutions increasingly remote and insulated from feedback. The variety of institutional changes, even in a given country, presents an intricate, kaleidoscopic picture, which becomes still more complex when extended to international scale and interwoven with the fast-changing historical events of the century. Still, on a spectrum stretching from individual decision making at one end to totalitarian dictatorship at the other, the general direction of the drift is discernible. It is fairly obvious in the case of national changes from democratic to nondemocratic governments (as in various Eastern European and South American countries) or — among autocratic governments — from loosely controlling and removable autocrats to enduring and pervasive party totalitarianism (as in Russia and China). Even within democratic nations, the locus of decision making has drifted away from the individual, the family, and voluntary associations of various sorts, and toward government. And within government, it has moved away from elected officials subject to voter feedback, and toward more insulated governmental institutions, such as bureaucracies and the appointed judiciary. These trends have grave implications, not only for individual freedom, but also for the social ways in which knowledge is used, distorted, or made ineffective.

These institutional changes have been accompanied by social changes. Perhaps the most far-reaching social change in the past century — in the United States and elsewhere in the Western world — has been that vast numbers of people have ceased being residual claimant decision makers and become fixed claimant employees. When the bulk of the population consisted of farmers (whether owners, tenants, or sharecroppers), the options and constraints facing the economy as a whole were transmitted more or less directly to those individuals, in the form of varying rewards for their efforts, whether those rewards were in money or in produce. The connection between efforts and outcomes was clear, though not all-determining: the weather, blights, and other menaces to crops and livestock made risk also a very personally felt variable. The transformation of Western economies from agriculture to industry brought with it a reduction in the proportion of the population consisting of autonomous economic decision makers. However much “consumer sovereignty” they retained, as producers their role as fixed claimants to some extent insulated them from the direct consequences of their own decisions, largely by limiting the scope of their decision making itself. This was not necessarily a net increase in security, either objectively or subjectively. They might find their futures varying considerably from prosperity to privation — but largely as a result of decisions made by others. The immediate question here is not whether they were better or worse off on net balance, but rather what this meant for their knowledge of what was happening, and for the social consequences of that knowledge.

Parallel with these economic developments, the political expansion of the franchise meant that people with progressively less decision making experience in the economy were acquiring progressively more power to shape the economic sector through the political process. A price-coordinated economy, as such, can function without being understood by anyone. But insofar as it must function in a given legal and ultimately political structure, the extent or manner in which these latter structures allow it to function depends upon how others judge its results — or whether they choose to judge or control its processes instead.

Another historic change in the past century has been the rise of intellectuals to prominence, influence, and power. The expansion of mass education has meant an increase both in the supply of intellectuals and in the demand for their products. They have become a new elite and, almost by definition, competitors with existing elites. The very nature of their occupation makes them less inclined to consider opaque “results” than to examine processes, quite aside from such other incentives as may operate when publicly discussing their elite competition. Intellectuals have spearheaded criticisms of price-coordinated decision making under individually transferable property rights — i.e., “capitalism.” As far back as polls, surveys, or detailed voting records have been kept, Western intellectuals have been well to the political left of the general population.1

Another way of looking at all this is that there has been a political isolation of residual claimants to variable incomes as a small special class operating in response to incentives and constraints no longer generally felt throughout the society. Knowledge of changing economic options and constraints conveyed through price, investment, and employment decisions by this class (capitalists) has all the appearance of having originated with this class, and thus of serving the sole interests of this class. The extent to which this is true or false in particular instances is not the central point here. The point is that this appearance is necessarily pervasive — and politically important — regardless of what the particular facts may be. It is only after the conceptual separation of questions of causation from questions of communication (the slain-bearer-of-bad-news problem) that the factual issue can even be addressed.

Finally, no discussion of the trends of the past half century would be complete without considering one of the great socially traumatic episodes of this era, the Great Depression of the 1930s. Both in magnitude and duration it outstripped all other depressions in history. The unemployment rate reached 25 percent, and corporate profits in the United States as a whole were negative two years in a row. This depression was unique not only in its magnitude and duration, but in the degree of government intervention — episodic and enduring — occasioned by it. Although questions might be raised as to whether these three characteristics of the Depression were related, the popular explanation has been that it was a failure of the market economy and demonstrated the need for government economic activity. While this thesis can be, and has been, challenged on the basis of scholarly analysis,2 the point here is merely that this central economic episode of the past century reinforced other trends toward the political isolation of residual claimant decision makers and price-coordinated economic systems. To some extent, the Great Depression undermined political support for traditional Western values in general, including freedom and democracy — as shown by the rise of the Nazis in Germany, fascism in Spain and parts of Latin America, and the post-World War II spread of communism around the world.

The next three chapters deal in detail with specific developments in social institutions, and their consequences — especially as regards the crucial question of how any system coordinates its scattered and fragmented knowledge for optimal social effectiveness, and the even more momentous question of the implications for human freedom.

Chapter 8 Trends in Economics

Economic systems have been seen as institutional processes for weighing costs and benefits. Costs in turn are foregone alternative benefits. Costs and benefits are ultimately subjective, but that does not mean that they vary arbitrarily or that one way of weighing them is as rational as the next. The physical and psychic costs of digging a ditch are subjective to whoever digs one. However, the compensating inducement necessary to get A to dig a ditch is objective data to B. If B simply wants a ditch dug, and does not care who digs it, then the lowest of the various subjective costs of ditch digging — among A, C, D, E, etc. — becomes his necessary objective cost. Conversely, how much someone wants a ditch dug is subjective to him, but is objective data to anyone else considering doing such work.
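To make the point concrete, here is a minimal illustrative sketch in Python (the figures are invented, not drawn from the text): the lowest of the diggers’ subjective costs, expressed as asking prices, becomes the objective cost that B faces.

```python
# Hypothetical asking prices: the minimum compensation each potential digger
# would accept, reflecting his own subjective costs of doing the work.
asking_prices = {"A": 40, "C": 55, "D": 35, "E": 60}

# B does not care who digs, so the lowest of these subjective costs
# becomes B's objective cost of getting a ditch dug.
objective_cost_to_B = min(asking_prices.values())
print(objective_cost_to_B)  # 35: D's asking price sets the cost B faces
```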

Prices convey the experience and subjective feelings of some as effective knowledge to others; a price is implicit knowledge in the form of an explicit inducement. Price fluctuations convey knowledge of changing trade-offs among changing options as people weigh costs and benefits differently over time, with changes in tastes or technology. The totality of knowledge conveyed by the innumerable prices and their widely varying rates of change vastly exceeds what any individual can know or needs to know for his own purposes.

How accurately these prices convey knowledge depends on how freely they fluctuate. The use of force to limit those fluctuations or to change the relationship of one price to another means that knowledge is distorted to represent not the terms of cooperation possible between A and B, but the force exerted by C. Looked at another way, the array of options people are willing to offer each other is reduced when force is applied to limit the level or the fluctuation of prices, and the array can shrink all the way to the vanishing point when the price is specified by a third party, if his specification does not happen to coincide with trade-offs mutually acceptable to the parties contemplating transactions. Price fixing as a process cannot be defined by its hoped-for results — “a decent wage,” “reasonable farm prices,” “affordable housing.” Price fixing does not represent simply windfall gains and losses to particular groups according to whether the price happens to be set higher or lower than it would be otherwise. It represents a net loss to the economy as a whole to the extent that many transactions do not take place at all, because the mutually acceptable possibilities have been reduced. The set of options simultaneously acceptable to A and B is almost inevitably greater than the set of options simultaneously acceptable to A, B, and C — where C is the third party observer with force, typically the government.
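The shrinking set of options can be put in miniature. In this illustrative Python sketch (with invented price ranges), the prices acceptable to both A and B form a sizable set, which a third party’s specified price can reduce to nothing:

```python
# Hypothetical acceptable prices for a single transaction.
a_acceptable = set(range(50, 101))  # seller A accepts any price from 50 to 100
b_acceptable = set(range(0, 71))    # buyer B accepts any price from 0 to 70

mutually_acceptable = a_acceptable & b_acceptable
print(len(mutually_acceptable))     # 21 prices (50 through 70) on which
                                    # A and B could voluntarily agree

# A third party C, with force behind him, specifies the price at 80.
c_specified = {80}
print(len(mutually_acceptable & c_specified))  # 0: no transaction takes place
```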

The form in which force is applied to constrain price communication varies widely, including (1) establishing an upper limit beyond which force will be applied (fines, jail, confiscation, etc.) to anyone charging and/or paying such prices, (2) establishing a lower limit, (3) indirectly raising some prices by taxing particular items more heavily than others, and (4) indirectly lowering some prices by subsidizing a product with assets forcibly transferred from taxpayers, rather than having the product paid for only by assets voluntarily transferred by consumers of that product.

Direct price controls are not the only method of superseding the market. Other methods include forcibly controlling the characteristics (“quality”) of the product, forcibly restricting competition in the market, forcibly changing the structure of the market through antitrust laws, and comprehensive economic “planning” backed by force. Again, the use of force is emphasized here not simply because of the incidental unpleasantness of force, but because the essential communication of knowledge is distorted when what can be communicated is circumscribed. All these ways of distorting the free communication of knowledge (preferences and technological constraints) have been growing, but each has its own distinct characteristics.

CONTROLLING PRICES

FORCIBLY RAISING PRICES

Minimum wage laws and laws forbidding businesses from selling goods “below cost” are typical of government’s forcibly setting a lower limit to price fluctuations. Although minimum wage laws may be more extensive in their coverage, the laws against particular businesses’ selling “below cost” reveal more clearly the nature and the distortions of such processes.

It may seem strange — indeed, incomprehensible — that a business enterprise set up for the explicit purpose of making a profit would have to be forcibly prevented from selling at a loss, quite aside from the larger social question of whether such a prohibition benefits the economy as a whole. Yet much government regulation — of airlines, railroads, various agricultural markets, and of imported goods in general — limits how low prices will be allowed to go, whether in the explicit language of forbidding sales “below cost” or of preventing “ruinous competition,” “dumping,” “predatory pricing,” or more positively of “stabilizing the industry” or creating “orderly markets” or other euphonious synonyms for price fixing.

In addition to these direct prohibitions on lower prices, the administration and judicial interpretation of antitrust laws makes sales “below cost” damning evidence against a business. Moreover, the government’s required permission to enter various regulated industries or professions — transportation, broadcasting, medicine, etc. — is often denied or restricted to keep competition from forcing prices “too low” or “ruining” incumbents — often erroneously described as “the industry.”

The government is not behaving irrationally from a political standpoint. Neither are businesses behaving irrationally from an economic standpoint when they seem to be selling “below cost.” The costs of an industry are difficult — if not impossible — for third parties to determine. As we saw in Chapter 3, costs are foregone options — and options are always prospective. The past is irrevocably fixed, so all options are present or future. The objective data available to third parties refer to past actions taken in response to the prospective options subjectively foreseen as of that time. Those subjective forecasts themselves exist neither in the objective data of the past actions nor in the objective record of subsequent events, which may or may not have conformed to the forecasts. Apparently the foreseen costs were less than the foreseen benefits when Napoleon invaded Russia, or when the Ford Motor Company produced the Edsel.

Government regulation can never be based on these fleeting and subjective appraisals of alternatives which actually guide business decision makers. Even if businessmen could remember everything exactly and describe it precisely, the government would have no way of verifying it. Government regulations and their estimates of “cost” are based on objective statistical data on actual outlays. Therefore businesses which determine their prices on the basis of options facing them at a given time often price below objective cost as defined by past expenditures on production.

If the hypothetical Zingo Manufacturing Company is launched with the idea that the world will be eager to buy zingoes, it may spend great sums producing that product, only to discover after the fact that consumers are so uninterested that zingoes can be sold only at prices which cover half of the past costs incurred in producing them. The options at that point are to (1) sell the existing zingoes at this price, (2) incur additional costs by holding zingoes in inventory, in hopes of being able to drum up more consumer demand through advertising or other devices, or (3) declare bankruptcy and let it all become the creditors’ problem. Depending upon the capital reserves of the firm, selling “below cost” may allow it to minimize its losses on this product and survive as a firm producing some other product(s) in the future. But, regardless of which future option may be preferred, past “cost” data are irrelevant. As economists say, “sunk costs are sunk.” They are history, but they are not economics.
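The decision logic of the Zingo example can be sketched in a few lines of Python (the figures are invented for illustration): only prospective receipts and outlays enter the comparison, and the sunk cost appears under no option.

```python
# Hypothetical figures for the Zingo example. The past production outlay is
# sunk: it is the same under every option, so it cannot affect the choice.
sunk_cost = 100_000  # already spent; history, but not economics

prospective_net = {
    "sell_now": 50_000,                     # sale covers half the past costs
    "hold_and_advertise": 65_000 - 25_000,  # hoped-for revenue minus added
                                            # inventory and advertising outlays
    "bankruptcy": 0,                        # walk away; the creditors' problem
}

# The rational choice maximizes prospective net receipts, with no
# reference whatever to the sunk cost.
best = max(prospective_net, key=prospective_net.get)
print(best)  # sell_now: a sale "below cost" that minimizes the loss
```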

The general principle applies much more widely than in economic transactions. Once Napoleon realized that he was losing in Russia, it mattered not how many lives had been sacrificed for the goal of conquering the country, or in capturing the Russian territory currently held; if future prospects were not good, he had to pull the army out of Russia, and write off the whole operation as a loss. In retreating, Napoleon may well have been returning territory to the Russian armies “below cost” in terms of the lives originally sacrificed to capture it. In military terms, as in economic terms, a given physical thing does not represent a given value without regard to time or circumstances. Land which was prospectively valuable as a strategic area from which to attack the rest of the country may turn out in retrospect to be just so much impediment on a retreating army’s escape route.

Businesses sell “below cost” not only when they have mistakenly forecast the future, but also when their costs for a given decision under specific conditions are less than the usual costs under the usual conditions. As seen in Chapter 3, the use of otherwise idle equipment may involve far lower incremental costs than acquiring equipment to serve the same specific purpose. Pricing according to these incremental costs (“marginal cost pricing” in the jargon of the economists) may be rational for the seller and beneficial to the buyer but is often attacked, penalized, or forbidden by the government. Regulatory agencies have consistently opposed low prices based on low incremental costs, and have insisted that the regulated firms base their prices on average costs, including overhead. The extent to which regulatory agencies — the Interstate Commerce Commission, Federal Communications Commission, Civil Aeronautics Board, etc. — keep prices above the level preferred by individual firms remains largely unknown to the general public, to whom such agencies are depicted as “protecting” the public from high prices or “exploitation” by “powerful” businesses. However, the government agencies are not being irrational, nor are the businesses altruistic. High volume at low prices has been the source of more than one fortune. Each side is responding to the respective incentives faced.
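The arithmetic of incremental versus average cost can be sketched with invented figures (Python): a price below “average cost” can still exceed the cost that an extra sale actually adds.

```python
# Hypothetical cost structure of a firm with idle capacity.
overhead = 1_000_000           # fixed outlays, incurred in any case
units = 100_000                # annual output
variable_cost_per_unit = 2.00  # the only cost an extra unit actually adds

average_cost = variable_cost_per_unit + overhead / units  # 12.00
incremental_cost = variable_cost_per_unit                 # 2.00

offer = 5.00  # a buyer's offered price for additional units

print(offer < average_cost)      # True: "below cost" by the regulatory test
print(offer > incremental_cost)  # True: yet each extra sale contributes 3.00
                                 # toward overhead that is incurred regardless
```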

Low incremental costs are also no defense in antitrust prosecutions alleging sales “below cost” to “unfairly” drive out competitors. The U.S. Supreme Court, in a noted Sherman Antitrust Act case, ruled against a firm whose “price was less than its direct cost plus an allocation for overhead”1 even though overhead is not part of incremental cost. In this, as in many other antitrust cases, injury to an incumbent competitor was equated with injury to the competitive process, which the antitrust laws are supposed to protect.

Consumers are equally well protected against low prices based on low incremental costs in a number of other government-controlled areas, such as various agricultural markets. The government itself has an “almost universal avoidance”2 of incremental cost pricing for public goods and services, such as the Post Office or toll roads and bridges. Toll charges, in fact, typically are highest for those who create the least cost and lowest for those who create the most. The capacity of a highway or bridge is usually based on the volume of rush-hour traffic, so that the costs of building and expanding the facility are due to rush-hour users. The incremental cost of other people’s using it during nonrush hours, when it has idle capacity, is far less and perhaps virtually zero. Yet discount books of toll coupons are likely to be made available on terms which make them attractive only to regular rush-hour users, not to occasional users who are more likely to be nonrush-hour users. However economically perverse, this pricing method makes political sense to elected officials, because regular users are more easily organized into political pressure groups. That is, regular users’ costs of organization are spread over more units of benefit, so that a rational equation of their individual costs and benefits leads them to more political activity per person, as well as in the aggregate, compared to sporadic users.

The growth of regulatory agencies, the expansion of antitrust laws by legislative enactment and judicial interpretation, and increasing government control of pricing in a variety of ways and areas all put lower limits on price fluctuations, among many other effects that they have. The question is, what effect does this have on the transmission of knowledge? It overstates the actual cost of many goods and services, leading some consumers to do without, even though they are willing and able to pay enough to induce the producers to make more of those goods and services, if the producers were free to accept their offers. Knowledge is distorted in the transmission, due to the use of force by third parties — in this case, various organs of government.

While government actions inhibit or prevent the transmission of knowledge in the summarized form of price fluctuations, the government substitutes its own decisions in the form of more explicitly articulated knowledge, in either words or statistics. Articulation, however, can lose great amounts of knowledge. The continuously adjusting process of decision making through transient subjective estimates of prospects is not recorded or available in verifiable form to third parties. Retrospective data generated by this prospective process are fragmentary artifacts analogous to bits of broken pottery or remnants of clothing, from which an anthropologist tries to reconstruct the life process of prehistoric peoples. The anthropologist has no choice but to infer what he can from whatever he finds, but no one would prefer such inferences to the knowledge of someone who actually lived in prehistoric societies, if such people were available. A similar disparity of knowledge is involved when decisions are forcibly transferred from those who are part of an ongoing process to third-party observers of statistical artifacts. Such statistical artifacts are not merely incomplete but often positively misleading, by being cast in terms wholly different from those of the process they seek to depict. For example, we have already seen in Chapter 4 that the subjective “time horizon” is not indicated by objective data on remaining life span; babies have notoriously short time horizons. Similarly, the averaging of fixed “overhead” costs over output provides a categorical, retrospective picture of a prospective, incremental process of decision making. The social utilization of idle or only partly utilized resources — electricity generating capacity during off-peak hours, half empty airplanes, factories operating below capacity, etc. — is inhibited when effective knowledge of such low cost opportunities is distorted by forcibly preventing low prices from reflecting low incremental costs.

The element of force is crucial to the distortion. The knowledge transmitted by voluntarily chosen prices conveys the terms on which various forms of mutual cooperation are available. The knowledge transmitted under government price constraints reflects the desire to escape punishment, and the knowledge conveyed by such prices does not reflect the full array of options actually available to the economy. In particular it does not convey the cheapest options. For example, a large, far-flung corporation can communicate among its many plants either by using the already existing telephone network or by building its own telephone system connecting its plants. It may require far fewer of the economy’s resources to use the existing telephone network, but if these low incremental costs to the economy are forbidden to be conveyed by low prices, the corporation may find it cheaper (in its own financial terms) to build a socially redundant telephone network for itself rather than pay high prices reflecting the “average cost” of telephone service.

The crucial importance of force as a distorter of knowledge transmission is overlooked in abstract discussions of the merits and demerits of “marginal cost pricing.” Such discussions attempt to directly determine what should be done rather than decide who should make that determination. Such questions as the precision with which incremental (“marginal”) costs can be calculated,3 the cost of such precision,4 circumstantial variations in incremental costs,5 and the disparity between actual decision making variables and statistical artifacts,6 are serious social issues only in the context of forcibly “solving” economic “problems” directly from a unitary or godlike perspective, or as academic exercises. Where force is not involved, then whatever methods of coping with these difficulties emerge, the least cost methods among them will have a decisive competitive advantage in voluntary transactions, whether those methods result from intuitive insight, rationalistic expertise, or simply stumbling across something that happens to work. It does not depend upon the intentional modus operandi of businessmen,7 but on the systemic effects of competition.

Minimum wage laws likewise prevent transmission of knowledge of labor available at costs which would induce its employment. By misstating the cost of such labor, they cause some of that labor to be unemployed, even though the workers are perfectly willing to work for wages which others are perfectly willing to pay. The term “minimum wage law” defines the process by its hoped-for results. But the law itself does not guarantee that any wage will be paid, because employment remains a voluntary transaction. All that the law does is reduce the set of options available to both transactors. Once the law is defined by its characteristics as a process rather than by its hoped-for results, it is hardly surprising that there are fewer transactions (i.e., more unemployment) with reduced options. What is perhaps more surprising is the persistence and scope of the belief that people can be made better off by reducing their options. In the case of the so-called8 minimum wage law, the empirical evidence has been growing that it not only increases unemployment, but that it does so most among the most disadvantaged workers.9 This undermines some of the key assumptions of the price fixing approach.
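The employment logic can be put in miniature (Python, with invented names and figures): since hiring remains a voluntary transaction, a wage floor removes from the set of possible transactions precisely those workers whose value to an employer falls below it.

```python
# Hypothetical hourly value of each worker to a prospective employer.
worker_value = {"Ann": 12.00, "Bob": 6.50, "Cal": 4.00, "Dee": 3.00}

def employable(workers, wage_floor):
    # Employment is voluntary on both sides: a hire occurs only where the
    # worker's value to the employer at least covers the required wage.
    return {name for name, value in workers.items() if value >= wage_floor}

print(employable(worker_value, 0.00))  # all four can transact at some wage
print(employable(worker_value, 5.00))  # only Ann and Bob remain; Cal and
                                       # Dee, the least advantaged, are
                                       # priced out of the market entirely
```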

Some who might not support the general proposition that people are made better off by reducing their options may nevertheless believe that one party to a transaction or negotiation can be made better off by eliminating his “worst” options — that is, low wages for a worker, high rents for a tenant, or sales at a loss for a business firm. But, almost by definition, these are not their worst options. They could have no transactions at all (or fewer transactions) — that is, be unemployed, unhoused, or unable to sell. Third parties may be morally uplifted by saying, for example, that they would rather see people unemployed than working at “exploitation” wages, but the mere fact that people are voluntarily transacting as workers, tenants, or businessmen reveals their own very different preferences. Unless price-fixing laws are to be judged as moral consumer goods for observers, the revealed preference of the transactor is empirically decisive. The fact that the worst-off workers tend to be the most adversely affected by minimum wage laws suggests that what is typically involved is not unconscionable “exploitation” but the payment of wages commensurate with their desirability as employees. If the lowest paid workers were simply the most “underpaid” workers relative to their productivity, there would be more than the usual profit to be made by employing them, and a minimum wage law could simply transfer that extra profit to the workers without costing them their jobs.

The “exploitation” explanation of low wages tends to emphasize the intentional morality of the employer (“unconscionable”) rather than the systemic effects of competition. Nothing is more common in economics than the attraction of new competitors whenever and wherever there is a profit above the ordinary. If hiring low paid workers presented such an opportunity — that is, if “exploitation” had some substantive economic meaning — the competition attracted would bid their wages up and keep them more fully employed than others. In fact, however, their marginal desirability to employers is indicated by their precarious and intermittent employment patterns, and by their generally higher rates of unemployment. In short, for workers as for businesses, knowledge transmitted by low prices (wages) is generally accurate knowledge, and forbidding its transmission costs both the economy and the intended beneficiary of such price fixing. Were the facts themselves to be changed — by improving the job qualifications of low paid workers, for example — the effects would be quite different from those of merely forbidding or distorting the transmission of knowledge of existing facts. In a purely informational sense, the employer still knows which categories of workers have low productivity or high risk, but that only ensures that the lack of effective knowledge transmission through prices (wages) will lead to less employment of them.

There is no inherent reason why low-skill or high-risk employees are any less employable than high-skill, low-risk employees. Someone who is five times as valuable to an employer is no more or less employable than someone else who is one-fifth as valuable, when the pay differences reflect their differences in benefits to the employer. This is more than a theoretical point. Historically, lower skill levels did not prevent black males from having labor force participation rates higher than those of white males in every U.S. Census from 1890 through 1930.10 Since then, the general growth of wage fixing arrangements — minimum wage laws, labor unions, civil service pay scales, etc. — has reversed that and made more and more blacks “unemployable,” despite their rising levels of education and skill, absolutely and relative to whites. In short, no one is employable or unemployable absolutely, but only relative to a given pay scale. Increasingly, blacks have been priced out of the market. This is particularly apparent among the least experienced blacks — that is, black teenagers, who have astronomical unemployment rates.

The alternative explanation of high black teenage unemployment by “racism” collides with two very hard facts: (1) black teenage unemployment in the 1940s and early 1950s was only a fraction of what it was in the 1960s and 1970s (and was no different from white teenage unemployment during the earlier period), despite the obvious fact that there was certainly no less racism in the earlier period, and (2) unemployment rates among blacks in their mid-twenties drop sharply to a fraction of what they were in the teenage years, even though the workers have not changed color as they aged, but have only become more experienced. The intentional explanation — “racism” — may be more moralistically satisfying, but the systemic explanation fits the facts. A decade of rapid inflation after the federal minimum wage law of 1938 had virtually repealed the law as an economic factor by the late 1940s and early 1950s — before a series of amendments escalated the original minimum. During the late 1940s and early 1950s, when inflation and the exemption of many occupations from wage control made the minimum wage law relatively ineffective, black teenage unemployment was less than a third of what it was in the later period, after the minimum was raised to keep pace with inflation and the coverage of minimum wage laws was extended to virtually the entire economy. To give some idea of the magnitude of this effect, black teenage unemployment in the recession year of 1949 was lower than it was to be in any of the most prosperous years of the 1960s or 1970s. Moreover, even in countries with all-white labor forces, teenage unemployment has been similarly vulnerable to minimum wage laws.11 This is in keeping with the lesser work experience of teenagers, and therefore the greater distortion of knowledge involved when minimum wage laws misstate their value to the employer. Statistical data happen to be kept by age and race, but the more general point is that the forcible distortion of knowledge hurts most those for whom the distortion is greatest.

While the government is the central repository of force, it is by no means the sole repository of force. Labor unions often use force, threats, and harassment during strikes to stop or reduce the flow of customers or employees to the work place and/or the shipment of goods in or out from a struck business. Many major employers do not even attempt to operate during a strike, because of the high prospect of violence and the low prospect of effective law enforcement.12

This private use of force to prevent the effective transmission of prices reflecting economic options has very similar effects to those of governmental force in the form of minimum wage laws. The systemic effect of pricing the most disadvantaged workers out of a job is sometimes compounded by intentional effects of barring various minorities from unionized occupations, either explicitly or tacitly. Virtually every immigrant minority was the target of such union exclusions at one time or other during the nineteenth century, and “white only” clauses existed in many union contracts or constitutions in both the nineteenth and twentieth centuries, until civil rights legislation in the 1960s barred such words. However, such intentional discrimination is not necessary in order for unions to have adverse systemic effects on the employment opportunities of disadvantaged groups, similar to those of minimum wage laws which usually13 have no intentional discrimination at all. Whether by intentional or systemic effect, labor unions have historically had a devastating impact on the employment opportunities of blacks. Some occupations once dominated by blacks — railroad and construction occupations in the South, for example — became “white only” after unionization.14

The history of blacks in skilled occupations in the South and North graphically illustrates the difference between intentional and systemic variables. From an intentional point of view, the South would seem to be the most averse to the employment of blacks in skilled occupations, but in reality blacks remained in such positions longer in the South than in the North,15 because the systemic effects of labor unions and “liberal” or “progressive” wage-fixing legislation came much later to the South.

FORCIBLY LOWERING PRICES

Very similar principles are involved when prices are forcibly kept below the level they would reach if allowed to fluctuate freely. Rent control, interest rate ceilings, and general wage and price controls during wartime or under comprehensive “planning” are examples of forcibly limiting how high prices can go.

Since prices are simply knowledge of available terms of trade-off, to limit how high the price of A can go in trade-offs for B is economically the same as limiting how low the price of B can go in trade-offs for A. All that differs is the phrasing. It should not be surprising, therefore, when upper limits on rents lead to housing shortages just as lower limits on wages lead to unemployment. A mere change of phrasing shows that minimum wage laws limit how much labor can be offered for a given job, causing a shortage of jobs at that price, just as rent control limits how much rent can be offered for a given housing unit, causing a shortage of housing units at that price. All “shortages” and “surpluses” are at some given price, and not absolutely in terms of the scarcity or abundance of the item in quantitative terms. The severe housing shortage during World War II occurred with no significant change in either the amount of housing in the country or in the size of the population. Indeed, more than ten million people left the civilian population, and many left the country, during World War II. More housing was demanded by the remaining civilian population at rent-control prices. The effective knowledge conveyed by artificially low prices was of far more abundant housing than actually existed or had ever existed.
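The identity of the two phrasings is simple arithmetic, as this illustrative Python sketch (with invented figures) shows: a price is a ratio between the two things traded, so capping one side of the ratio is the same constraint as putting a floor under the other.

```python
market_rent = 400.0   # dollars per apartment-month in an uncontrolled market
rent_ceiling = 300.0  # the legal maximum rent

# The same numbers rephrased: how much housing must a dollar buy?
housing_per_dollar_market = 1 / market_rent  # apartment-months per dollar
housing_per_dollar_floor = 1 / rent_ceiling  # the ceiling on rent is a floor
                                             # on housing per dollar

print(rent_ceiling < market_rent)                            # True
print(housing_per_dollar_floor > housing_per_dollar_market)  # True: the same
                                                             # constraint, phrased
                                                             # the other way around
```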

There is no fixed relationship between the number of people and the amount of space “needed” to house them. Whether or to what extent children will share rooms or have their own individual rooms, the time at which young adults will move out to form their own households, and the extent to which single kinfolks or roomers live with families are all variable according to the price of housing and the incomes of the people making the decisions. Virtually every American ethnic group, for example, has at some point or other gone through a stage at which taking in roomers was a pervasive social phenomenon.16

Artificially low prices under rent control facilitate the disaggregation of existing families or living units into smaller groups of individuals with separate households, and facilitate the use of more space per person in existing households, so that very quickly “no vacancy” signs appear almost everywhere. After that point, people who find themselves having to move for compelling reasons may have to double up or live in garages or other makeshift, overcrowded housing, precisely because of the general use of more space per person in the country as a whole. While young couples with growing families may find themselves increasingly overcrowded in housing that was once adequate, older couples whose children have left home have little financial incentive to give up larger housing units that the family once needed, because rent control makes the larger unit affordable and leaves few alternative places to move into. In the absence of rent control, there is an incentive for a continuous interchange of different sized housing units among families at different stages of their life cycle. The growing young family trades off other things for housing incrementally, while the older family with children “leaving the nest” can trade off excess space for other things they want. Prices convey effective knowledge of these ever-changing trade-offs, directing each set of decision makers to where they can get the most satisfaction — from their own respective viewpoints — from their respective assets. Rent control distorts — or virtually eliminates — this flow of information. The same set of people and the same set of physical assets continue to exist, but the simple fact that they cannot redistribute themselves among the assets in accordance with their divergent and changing desires means that there is less satisfaction derived from a given housing stock. Though it is the same physical matter, its value is less.

The losses resulting from rent control are not losses of physical matter or of money. Both can exist in the same amounts as before — and therefore cannot be measured in “objective” statistical data based on the relevant transactions (renting). The reduction or nonexistence of desired transactions is precisely the loss and no numbers or expertise can objectively measure thwarted desires. The most that can be objectively documented are waiting lists, illegal payments to landlords, and other scattered artifacts analogous to the broken pottery and remnants of clothing available to anthropologists studying prehistoric peoples. In a longer time perspective, rent control prices convey distorted knowledge not only about the optimal allocation of existing housing but about the trade-offs people would be willing to make to get new housing. Renters are forbidden to convey the full urgency of their desire for new housing, in the form of financial incentives that would reach landlords, financial institutions, and builders. This urgency may be growing as the old housing continually deteriorates and wears out, but the effective signal received by builders may be that there are few resources available to be traded off for more housing. The effective signals received by landlords with old buildings may be that there is little available to be traded off to get the maintenance and repair needed to keep them going — even though the tenants might prefer paying more rent to seeing the building deteriorate or the landlord abandon it entirely, as has happened on a mass scale in New York City, where rent control has persisted long after World War II.

Rent control illustrates not only the ease with which political systems can distort the transmission of knowledge in an economic system; its history also illustrates how difficult it is for effective feedback to correct a political decision. Political decision making units are defined by geographic boundaries, not by the particular subsets of people who experience the consequences of given policies. Rent control laws passed decades ago to benefit “New Yorkers” or tenants in New York were initially judged through the political process by incumbent New Yorkers and incumbent tenants, on the basis of the prospective plausibility of such laws. A generation later, deaths, births, and normal migration in and out of the city mean that the electorate has turned over considerably, and very few of its members have personally experienced the effects of rent control from start to finish. Many of those who actually experienced the deterioration of housing under rent control in New York City are now living outside New York City, some as a direct result. Their experience does not feed back through the electoral process in the city. The current New York City electorate includes great numbers of people who arrived in the city — by birth or migration — when it was already experiencing the effects of rent control, so they have no “before” and “after” experience to compare. They do not know, for example, that the city once had a larger population, no housing shortage, and no masses of abandoned buildings. Their personal experience does not go back far enough to enable them to spot the fatal flaw in the argument that rent control cannot be safely repealed while there is still a housing shortage. Lacking this personal experience, they would have to be trained in economics to realize that a “shortage” is itself a price phenomenon, and so will persist as long as the rent control persists.

While time and complexity insulate many political decisions from effective feedback from the general electorate, some offsetting knowledge is furnished by groups with lower knowledge costs because they are more obviously affected adversely — the real estate lobby and landlord associations, in this case. In general, special interests have not only lower costs of knowledge of their own interests, but an incentive to invest in discovering how other groups’ interests are similarly affected, so as to acquire political allies. However, to the extent that special interest arguments are automatically discounted, this knowledge is ineffective or even counterproductive. Landlord and real estate interests, for example, provide pro-rent control forces with an enemy to fight, a sense of moral superiority in fighting, and a reassurance that they are acting in the interests of others who need protecting — though this last crucial point rests on an implicit conception of the economy as a zero-sum (or negative-sum) game. Once the economy is seen as a positive-sum game — that voluntary transactions are mutually beneficial or they would not occur — then the losses suffered when such transactions are forcibly restricted can also be mutual. The fact that the complaints issue first or exclusively from one party may reflect only his lower costs of knowledge of the effects on him.

More generally, to totally discount all special interest arguments is to implicitly assume that society is inherently a zero-sum game — which is difficult to reconcile with the fact that societies of some sort or other have existed among all peoples and ages.

The effects of rent control on the quality of housing illustrate a more general characteristic of price control and of the limits of articulation. Whatever price is forcibly set by an observer, he must define the product whose price is being controlled — and his articulation can seldom match the unarticulated experience of actual, voluntary transactors. The result is that prices set below the level that would have prevailed otherwise lead to quality deterioration. In the case of rent controlled apartments, maintenance, repair, painting, cleaning, heat, hot water, and general monitoring all decline. This is less damaging to brand new buildings than to older buildings which require more upkeep to avoid becoming slums. Since low income people are more likely to live in older buildings, they are most likely to find their homes becoming unheated slums with uncorrected building hazards. In the extreme, they may find the building totally abandoned by the landlord, once the cost of maintaining it at minimum legal levels exceeds the rent permitted. In New York City, such abandonments average about twenty-five thousand units per year.17

Rent control is not unique in affecting the quality of the product. General price controls during World War II brought on a proliferation of inferior off brands, some made by brand-name producers who did not want to damage the long-run reputation of their regular label. Sometimes the quality deterioration took the form of deteriorated service, leading to much contemporary comedy based on arrogant butchers, insolent salespeople, etc. In general, price control involves articulating not only a price — which is easy — but also articulating the characteristics of a product. Although it may seem easy to define a product such as an apartment or a can of peas, actual experience demonstrates the crudity of articulation as compared to unarticulated experience. An apartment is not simply a physical thing, but involves a multitude of associated services, changes in the quantity and quality of which affect operating cost, the vacancy rate, and the price that can be charged in an uncontrolled market. When rents are forcibly lowered by the government, costs are voluntarily lowered by the landlords through declines in the quantity and quality of service, so that the “product” itself changes. A perfect legal specification of a product, perfectly monitored, would make this impossible. But the pervasiveness of this deterioration — including total abandonment — indicates the limits of articulation and third party monitoring.

In the absence of rent control, tenants monitor changes themselves and communicate their reactions to the landlord not only verbally but — more convincingly — through changes in the vacancy rate. They can even monitor services of which they are generally unaware, in the sense that they might not list them if asked to articulate what they want in an apartment building. For example, many tenants might not articulate a concern for management’s monitoring of people who enter the building — and yet if the building becomes a hangout for loiterers, hoodlums, or addicts, the vacancy rate would rise. Conversely, if the management officiously screened all entering guests, the same negative reaction would occur. In other words, a service which is seldom articulated must not only be performed but performed within limits on either side, if the landlord is to minimize his vacancy rate and maximize his rental income. The multiplicity and importance of these auxiliary services is most dramatically seen, not in uncontrolled markets where they become routine, but by their absence in rent-control buildings and in government housing projects. Typically there is far more explicit articulation of housing rules in such places but far less effective monitoring.

Even a simple can of peas cannot be exhaustively defined and completely monitored under price control. The flavor, appearance, texture, and uniformity of peas within a can and from one can to the next, depend on the selection and control of crops and the sorting and processing of the peas. In an uncontrolled market, these are all adjusted according to the incremental cost of each improvement and the incremental value of the improvements as revealed by how high a price the consumer is willing to pay for brands which reliably supply the desired characteristics. If this price is forcibly set below the market level by a third party, the supplier has incentives to supply less of these qualities and thereby reduce his production costs.

Just as a price forcibly set below the market level tends to reduce the quality of the price controlled product, so a price forcibly set above the market level tends to increase the quality of the product. Minimum wage laws tend to cause employers to hire fewer but better qualified workers — that is, they make less skillful, less experienced, or otherwise less desirable workers “unemployable.” Higher quality workers and more “unemployability” in a given work force are the same things expressed in different words.

Interest rate ceilings — usury laws — tend similarly to reduce a major service performed by the lender (risk taking) by causing him to eliminate more borrowers as insufficiently good risks. When one considers that the risk of losing money considerably exceeds 50 percent when drilling an oil well (that is, a well whose hoped-for result is oil), it is clear that high risk alone will not deter capital suppliers if the rate of return is allowed to vary sufficiently to compensate the risk. But by forcibly restricting the rate of return on personal loans to what is “reasonable” in the experience of good-credit-risk, middle-class people who write such laws, credit is often denied or restricted to low income people who may be only slightly less dependable risks and would be able to get credit at only slightly higher interest rates. Instead, they are left with no other choice but to resort to illegal “loan sharks” whose interest rates are much higher and whose collection methods are much rougher. Like other forms of price controls, usury laws distort the communication of correct facts about credit risks without in any way changing those facts themselves.
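The arithmetic of risk compensation can be sketched with invented figures (Python): a lender who could earn a safe return elsewhere breaks even on a risky class of borrowers only at a rate that offsets the expected defaults, so a legal ceiling below that rate denies such borrowers credit entirely.

```python
def break_even_rate(repay_probability, safe_rate=0.05):
    # A lender who can earn the safe rate elsewhere breaks even on a risky
    # class of borrowers only if repay_probability * (1 + r) = 1 + safe_rate.
    return (1 + safe_rate) / repay_probability - 1

print(round(break_even_rate(0.99), 3))  # 0.061: slightly riskier borrowers,
                                        # slightly higher break-even rate
print(round(break_even_rate(0.85), 3))  # 0.235: this class needs about 23.5%

usury_ceiling = 0.18
print(break_even_rate(0.85) > usury_ceiling)  # True: at an 18% ceiling, this
                                              # class cannot legally be served
```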

One of the more dramatic recent examples of the effect of forcibly keeping prices below the market level has been the so-called “gasoline crisis” of 1979. Because of the complexities in long-standing government regulations controlling the price of gasoline, their full effects began to be felt in the spring of 1979. As in the case of rent control, the effects were not primarily on the quantity of the physically defined product — gallons of gasoline in this case — but on the auxiliary services not articulated in the law. Just as rent control tends to reduce such auxiliary services as maintenance, heat, and hot water, so controlling the price of gasoline reduced such auxiliary services as hours of service at filling stations, credit card acceptance, and checking under the hood. Indeed, what was called a “gasoline shortage” was primarily a shortage of hours of service at filling stations, and the traumatic effects of this indicate that unarticulated aspects of the physically defined product are by no means incidental. In New York City, for example, the average filling station was open 110 hours a week in September 1978 and only 27 hours a week in June 1979.18 The actual amount of gasoline pumped declined by only a few percent, while the hours of service declined 75 percent. That is, filling stations tried to recoup their losses from price control by reducing the man-hours of labor they paid for, while the motorists’ losses of man-hours waiting in gasoline lines went up by many times what the filling stations had saved. Moreover, the motorists suffered from increased risks in planning long distance trips, given the unpredictability of filling station hours en route. This prospective psychic loss to motorists was reflected in dramatically declining business at vacation resorts, for example, but retrospective data on the actual amount of gasoline sold showed only small percentage declines. In short, the real cost of the so-called gasoline shortage was not simply the small statistical change in the quantity of the physical product, but the large prospective change in the ability to get it when and where it was wanted. As in so many other cases, objective retrospective data do not capture the economic reality.

FORCIBLY CHANGING COSTS

Costs to the economy as a whole may be given at a given time under given technology. But, even so, costs as experienced by the decision making unit can be raised by special taxes or lowered by subsidies. Any tax represents force used to influence decisions, and subsidies represent taxes forcibly extracted from others. Both are forms of indirect price fixing. A special tax, over and above the normal tax on items of similar value, misstates the cost transmitted through the economic system. The extra money paid by the consumer is not a loss suffered by the economy as a whole. The higher price is just an internal transfer of wealth among individuals in the same system — making the system as a whole no richer or poorer. What makes the system as a whole poorer are the transactions that do not take place because of the artificially high price. Where a high price conveys an actual scarcity of material or a reluctance of people to do certain work, it accurately conveys information about the incremental cost to the economic system. But when the price is simply made higher by government fiat — whether by direct price fixing or by a special tax — it conveys a false picture of the cost, thereby causing potential consumers to forego the product even though others are perfectly willing to supply it at a price those consumers are willing to pay.
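The distinction between the transfer and the loss can be put in miniature (Python, with invented valuations): the tax collected merely changes hands within the system, while the transactions prevented by the artificially higher price are mutual gains that never come into existence.

```python
# Hypothetical buyers, each wanting one unit of a good that costs 10 to supply.
valuations = [25, 18, 14, 11]
cost = 10

def gains_from_trade(valuations, cost, tax):
    # Each sale where the buyer's valuation covers cost plus tax goes through.
    # The tax collected is an internal transfer, not a loss to the system,
    # so total gains are measured against the real cost alone.
    return sum(v - cost for v in valuations if v >= cost + tax)

print(gains_from_trade(valuations, cost, 0))  # 28: all four trades occur
print(gains_from_trade(valuations, cost, 5))  # 23: the 14 and 11 buyers drop
                                              # out; their foregone gains (5)
                                              # are the net loss to the economy
```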

Information about the availability of goods is distorted in the opposite direction when the government subsidizes goods. Some of the people consuming a subsidized good would be unwilling to pay the cost of it if that cost were accurately conveyed to them in the price. Instead, third parties are forced to pay part of the cost in taxes, regardless of their evaluation of the good and even regardless of whether they ever used it.

Sometimes subsidies are more subtly arranged, without explicit taxation. Where there is a government-run monopoly (such as the Post Office) or a government-regulated industry where competition is kept out by force of law (public utilities), then the prices that are set by government cause some users to subsidize other users. Force is applied, not to those users but to potential competitors, who are not allowed to enter the industry and offer lower prices to those consumers who are subsidizing others. Users of first-class mail pay more postage than is necessary to cover the cost of delivering such mail, while senders of “junk mail” pay less than its cost. The economic system in this case conveys distorted information, making junk mail seem cheaper to deliver than it is, and thereby causing more of it to be sent than if its true cost was conveyed to the senders in prices. Resources that would be more valuable to other people in other uses are used to move junk mail, because its bids for those resources include not only the assets voluntarily sacrificed on the basis of the value of that mail to the sender but also assets which nonsenders of junk mail had to surrender as the price of their own first-class mail — thereby becoming involuntary bidders for resources they neither want nor use.

In an ideally functioning political system with zero costs of knowledge, the extra-payers would have as much ability to end this cross-subsidy as the special interests have to create it. In the real world, however, special interests are — almost by definition — groups with lower costs of knowledge. They know individually what it is that they have in common so that they can contact and organize each other as people or organizations similarly affected by government policy. Their greater political weight then enables them to forcibly take economic resources from others.

As in all systems of price discrimination, cross-subsidy works only as long as competitors can be kept out, and usually only the government has sufficient force to do that effectively. Where price discrimination is attempted in a competitive market, those who are paying more than their own costs can be served more cheaply (and profitably) by firms charging each set of customers according to their own respective costs. Price discrimination under these conditions quickly becomes attempted price discrimination, as overcharged customers find other firms to transact with. This has happened, for example, in the railroad industry as it lost its original monopoly with the development of trucking and airlines. Those kinds of freight which had been overcharged to subsidize other kinds of freight simply began being shipped by trucks, planes, or barges.

Given that a monopolistic market is essential for cross-subsidizing (or other forms of price discrimination), it is not surprising that cross-subsidy prices are common in the postal service, public utilities, and other enterprises either run or directly controlled by the government. The cross-subsidization of mail occurs not only as between first-class and junk mail. It also occurs as between users in large cities and those in remote places. The huge volume of mail between New York and Chicago tends to make the cost per letter very low, while the low volume of mail to remote villages makes their cost per letter much higher. In an uncontrolled, competitive market, the respective prices would tend to reflect these large cost differences. In a government market, however, all the costs are lumped together and all the users pay the same postage without regard to how much each contributed to those costs. The knowledge conveyed by the uniform prices is therefore a distortion of the real costs in terms of the resources used up by the economy in directing mail to different places. To the extent that other government controlled prices similarly distort the cost of delivering electricity, water, and other services to rural locations, the whole cost of living in isolated towns or villages is understated to those who are deciding where to locate.
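The pooling arithmetic behind a uniform postage rate can be sketched with deliberately hypothetical numbers; the volumes and costs below are illustrative only, chosen to mirror the high-volume and low-volume routes described above:

```python
# Hedged sketch of cross-subsidy through uniform pricing.
# All volumes and costs are hypothetical; only the logic follows the text.
routes = {
    # route: (letters per year, total delivery cost in dollars)
    "New York-Chicago": (90_000_000, 9_000_000),  # $0.10 per letter
    "remote village":   (1_000_000,  1_000_000),  # $1.00 per letter
}

total_letters = sum(v for v, _ in routes.values())
total_cost = sum(c for _, c in routes.values())
uniform_price = total_cost / total_letters
print(f"Uniform postage: ${uniform_price:.3f} per letter")  # ~$0.110

for route, (letters, cost) in routes.items():
    true_cost = cost / letters
    print(f"{route}: true cost ${true_cost:.2f}, "
          f"gap vs. uniform price ${true_cost - uniform_price:+.2f}")

# Big-city mailers pay ~$0.11 for a $0.10 service; remote users pay
# ~$0.11 for a $1.00 service. The uniform price understates the real
# cost of directing mail (and, by extension, of living) in remote places.
```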

The history of American transportation, from municipal bus and streetcar lines to railroads and airlines, is a history of government-imposed cross-subsidies. Initially, municipal transit was privately owned by a number of firms operating streetcars along various routes. The creation of city-wide franchises — monopolies — was usually accompanied by fixed fares, regardless of distance traveled or transfers required. Short-distance passengers subsidized long-distance passengers. The effects were not only distributional but allocational. More resources were devoted to carrying people long distances than would have been if the true costs had been conveyed to those using the service. Therefore, the creation of suburbs and central business districts was subsidized, at the expense of people living in the city and of neighborhood enterprises. The question is not which of these residential or business arrangements is “better” in some categorical sense. The point is simply that cross-subsidy conveyed false economic information to those making decisions as to where to live or shop, and the fact that the subsidy never appeared in a government budget conveyed no information at all to the electorate.

Like most price discriminators, municipal transit was vulnerable to competitors who chose to serve the overcharged segment of its customers. Around 1914–1915, the mass production of the automobile led to the rise of owner-operated bus or taxi services costing five cents and therefore called "jitneys," the then-current slang for nickels:

The jitneys were owner-operated vehicles which essentially provided a competitive market in urban transportation with the usual characteristics of rapid entry and exit, quick adaptation to changes in demand, and, in particular, excellent adaptation to peak load demands. Some 60 percent of the jitneymen were part-time operators, many of whom simply carried passengers for a nickel on trips between home and work. Consequently, cities were criss-crossed with an infinity of home-to-work routes every rush hour.

The jitneys were put down in every American city to protect the street railways and, in particular, to perpetuate the cross-subsidization of the street railways’ citywide fare structures. As a result, the public moved to automobiles as private rather than common carriers…19

In short, the cross-subsidy scheme not only distorted the location of homes and businesses; it artificially increased the “need” for private automobiles by forcibly preventing or restricting the sharing of cars through the market.

Ironically, years later, some municipalities have tried to encourage car pools to reduce traffic congestion, but car-pooling through nonmarket mechanisms requires far more knowledge than through the market for jitneys, and conveys far less incentive for dependability and cooperation. Because car pools are advance agreements among particular small subsets of persons, rather than a systemic arrangement for all the cars and passengers in the whole set of travelers, enormous sorting and labeling costs are involved in car-pooling — determining specifically who is going where and discovering how dependable and punctual each other person in the subset happens to be. By contrast, the jitney owner made profits by picking up people (usually on his own way to work) and had every incentive to pick them up on time every day, or some other jitney owner would pick them up before he got there. But with nonmarket car pools, a particular set of riders is waiting for a particular car — and it remains illegal for other cars to sell their services to them without a city franchise as taxis. Under these constraints, car pools have done little to relieve traffic congestion, despite much exhortation.

The rush-hour traffic congestion caused by thousands of people going to work separately in individual automobiles has been denounced by social critics as “irrational” and explained by some mysterious psychological attraction of Americans to automobiles. It is, however, a perfectly rational response to the incentives and constraints conveyed. The actual costs and benefits of automobile-sharing are forcibly prevented from being conveyed by prices. As in other areas, claims of public irrationality are a prelude to arguments for a government-imposed rational “solution” to the “problem.” Also as in other areas, it is precisely the government’s use of force to prevent the accurate transmission of knowledge through prices that leads to the suboptimal systemic results which are articulated as irrational intentional results of a personified “society.”

Private force is also used to prevent prices from transmitting knowledge of the availability of drivers. Many unemployed people are perfectly capable of driving, but are prevented from competing for such work, either as employees or as owner-operators of vehicles. Labor unions are the private force. This is not metaphorical force, even though it may seldom have to be exercised: as in armed robberies, actual violence is rare precisely because both sides understand the situation. If an unemployed worker receives X dollars as unemployment compensation but would rather work at 2X, he will be prevented from doing so if the union wage is 3X.

It is not enough that the union have a contract for 3X with a given employer, such as a bus company or taxi fleet. The unemployed individual could work for 2X for himself or for a nonunion firm — if this were not prevented by union threats and/or government force applied directly to make these other options illegal.

Unions do not simply set the wages paid on a predestined number of jobs. The wage rate charged determines how a certain task will be performed — that is, how many “jobs” it will involve. In the case of municipal transit, high wage rates for bus drivers create incentives for large buses — the substitution of capital for labor in transporting a given number of passengers. A leading transportation economist estimates that about eight passengers per vehicle would be optimal in a system where prices were allowed to convey accurate costs of vehicles, drivers, and roads20 — in contrast to the usual forty- to fifty-passenger buses actually used. If only one fifth as many passengers were carried per bus, there would be five times as many small buses, meaning five times as many jobs for drivers and only one-fifth the waiting time between buses for passengers. It would also be possible to have a far greater variety of bus routes, as the jitneys had, rather than clogging a few main thoroughfares during rush hours and letting passengers off farther from their destinations than necessary, as at present. Under these conditions buses would also be a far more attractive alternative to private automobiles for many people.
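A sketch of the substitution arithmetic in the preceding paragraph may be useful; the eight-passenger figure is the cited estimate, while the bus size and headway numbers are illustrative assumptions:

```python
# Hedged sketch: capital-for-labor substitution in municipal transit.
big_bus_seats = 40   # typical bus actually used (40-50 in the text)
optimal_load = 8     # passengers per vehicle, per the estimate cited

vehicles_ratio = big_bus_seats / optimal_load
print(f"Small vehicles per big bus replaced: {vehicles_ratio:.0f}x")  # 5x

headway_big = 20     # hypothetical minutes between big buses
headway_small = headway_big / vehicles_ratio
print(f"Waiting time falls from {headway_big} to {headway_small:.0f} minutes")

# Five times as many vehicles means five times as many driver jobs and
# one-fifth the waiting time -- the trade-off that artificially high
# wage rates prevent from being made.
```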

Disastrous as the effects of political decision making have been in municipal transportation, it is by no means irrational politically. Indeed, the same set of policies has emerged in so many different cities across the country, and reappeared again and again in national transportation policy regarding passenger railroads and airline routes, that it is clearly a consistent effect, reflecting consistent causes rather than anything as random as "irrationality." Central to the decision making in this area has been the maintenance of incumbent transportation entities, which often implies the maintenance of incumbent technologies — i.e., subsidized obsolescence21 — resisting the phasing out of existing modes of operation as competing modes arise. On the contrary, competing modes with technological or organizational advantages are either penalized or prohibited (as in the case of the jitneys), to preserve incumbent organizations and technology. It is not even a pro-industry position but a pro-incumbent position, since there might well be a far more profitable industry (consisting of new firms), as well as one better serving the public, in the absence of such regulation. To be pro-industry would be an ideological position; to be pro-incumbent is a practical political position, since the incumbents are either organized or easily organizable into effective special interest groups. The same incumbent bias applies to labor — i.e., to unions of existing employees, at the expense of other workers whose job opportunities are sacrificed. For example, the federal mass transit subsidy program requires labor union approval of any major expenditure,22 thereby assuring that no changes will be made that adversely affect the incumbent union members.

There is nothing peculiar about transportation that brings about such results. The regulated communications industry shows the same patterns. As in transportation, there was once a plausible case for government intervention, when the alternative of free competition did not seem feasible under existing conditions. In the broadcasting industry, there are inherent technological limits to how many competitors can operate in a given area, because broadcast signals interfere with one another, and beyond some point such interference makes all broadcasts unintelligible. This was a clear case where the government creation of a property right — in this case the right to exclude others from broadcasting on a given station's wavelengths — was a social gain, not simply a gain for the property owners. But the government went beyond defining a property right to assigning a property right. The crucial difference between the two functions is apparent in the case of land, where there are elaborate laws on property rights in general, and elaborate government records on each piece of land, but the actual assignment of ownership occurs almost entirely through market transactions. The defining of a property right in broadcasting over certain wavelengths served the public interest, but the power to assign such rights to particular individuals or corporations served the interests of politicians. The regulatory process they created — and continue to influence through appointments and appropriations — had enormously valuable property rights to hand out at their discretion, with little more legal restriction than vague phrases about "the public interest." In exchange, politicians and their appointees were in a position to receive everything from simple obeisance23 through campaign contributions, favors to constituents and friends, and jobs in the regulated industry, to outright bribes.

In communications, as in transit, new technological developments threatened incumbent organizations and incumbent technology. Cable television made possible the transmission of an unlimited number of stations to any given point, unlike broadcasting through the air. The whole structure of the industry — networks, affiliates, advertising patterns — could have been undermined or destroyed by the new technological possibilities. So too would have been the existing regulatory apparatus, which was no longer needed once the industry ceased to be inherently monopolistic. But as in transportation after alternative modes (autos, airplanes) eliminated the railroad monopoly on which the I.C.C. was based, so in communications the response to the elimination of the initial rationale for regulation was to extend the regulation to encumber and contain the new, threatening technology.

Under this set of institutional incentives and constraints, it is hardly surprising that corruption scandals have plagued broadcasting regulation for decades,24 and surrounding the outright proven corruption is a large gray area of questionable financial windfalls to politicians, including the fortune of Lyndon B. Johnson.

Sometimes the political gains from regulation are more indirect but no less substantial and no less distorting to the use of resources in the economy. For example, the routes of federally subsidized passenger trains reflect the locations of the constituencies of key politicians, rather than the concentration of people requiring the service:

Because the Chairman of the House Commerce Committee and a prominent member of the ICC come from West Virginia, at various times three passenger trains have been run east and west through the state, which has limited demand for passenger service. Similarly Amtrak has had to provide two routes through Montana on the former Great Northern and Northern Pacific main lines because of the political strength of senators from Montana. Because members of Congress from Ohio have shown no special interest in transportation, that populous state receives a relatively small coverage of passenger trains: Cleveland was not served by Amtrak at all in the initial plan…25

Similar political considerations cause the federally financed highway system “to contain a large mileage of lightly utilized freeway, especially in the plains states, whereas the investment would have given society a greater return in the more populous areas of the country.”26 Again, the point is not simply its inconsistency as economic optimizing, but its perfect consistency as political optimizing. A more basic question might be why anyone would expect economic optimizing by people chosen politically, and operating under political incentives and constraints. Vague personifications of “society” and projections of government into that role may be the explanation.

Cross-subsidy is so widespread and so deeply ingrained in government controlled enterprises that a special term of opprobrium is used to describe the disturbance of such schemes by new firms entering to serve the previously overcharged segment of the market: “cream skimming.” Thus, when the United Parcel Service began delivering more packages — more cheaply, quickly, and safely than the Post Office — it was charged with skimming the cream of the market by serving urban and suburban areas rather than all the remote areas which are served by the Post Office. A private business has no incentives to subsidize one set of customers at the expense of another. Its individual incentive is to produce the maximum value at the least cost (the difference being its profit), and systemically that means getting the most possible from given resources at the least sacrifices of alternative uses of those resources.

An uncontrolled, competitive market for package deliveries would not mean that people in remote areas would have no packages delivered. It would mean that such deliveries would be less frequent, reflecting the higher cost. Those people in such areas who are able to stop by a post office or parcel service office in town during shopping trips, or when going to or from work, would pick up packages then rather than pay postage reflecting their true cost of delivery. There would also be some incremental substitution of local products for products shipped in. By contrast with market-induced economizing on the use of costly resources, a government enterprise whose residual claimants (taxpayers) are not its decision makers has an incentive to maximize its size and budget by extending the "need" for its service as far as possible — even when increasing incremental costs are greater than the incremental value to the customers. Considering the lack of incentives for internal efficiency in a tax-supported organization, it is also possible that all users of the service — in remote areas as well as large cities — pay more for mail delivered by the government than they would under private management, constrained by profit and loss considerations.

Airports sell monopoly rights to a taxi company, a restaurant, gift shops, and other concessionaires and use the proceeds to subsidize the prices they charge to planes for landing at the airport. Thus, even though economists estimate the cost of a landing at Kennedy Airport during the peak hours at about $2,000, the plane pays only $75.27 Distorting the knowledge of the true cost of the plane's landing this way means that the airlines make their decisions as if landing at Kennedy Airport were far cheaper to the economy than it really is. A given airline will, for example, fly numerous planes from a given city into Kennedy Airport at various times during the day — these planes sometimes carrying only a fifth or a tenth of the passengers that the seating capacity will hold. In addition, other airlines serving the same city will fly other planes in at similar times, with similarly few passengers per plane. The net result is an inflated "need" for airport facilities — calling (politically) for expansion of given airports and/or the construction of new and expensive airports. Cross-subsidy thus creates a "need" for a larger empire of staff, facilities, and appropriations, whether the particular governmental enterprise is an airport, a postal system, or whatever.

Objective statistics which apparently demonstrate the “need” for more service — the numbers of planes landing and taking off per hour, their waiting time in the air or on the ground, etc. — are completely misleading. There is no such thing as objective, quantitative “need.” Whether with airports or apartments or a thousand other things, how much is “needed” depends on the price charged. Just as artificially low prices under rent control caused the same population to “need” more apartments, so artificially low landing fees cause far more airplanes to be “needed” to transport a given number of passengers between two cities, in planes with many empty seats. With landing fees increased about twenty-five times, reflecting the true cost of landing a plane at Kennedy Airport, fewer flights per day would be made and a higher percentage of the seats would be filled on each flight. Few private planes with one or two passengers would be using up valuable landing space at major airports if they had to pay thousands of dollars per landing, though a little plane with only the pilot aboard may now choose to land at an enormously expensive airport, delaying thousands of other people circling around in a “stack,” because the price he is charged does not convey these alternative uses to him as effective knowledge that he must incorporate into his decision as to where to land.

The average commercial airliner in the United States flies with half its seats empty — which means that only half as many flights would be needed to transport the same number of passengers in existing planes. Actually, fewer flights than that would be needed, since (1) the planes idled by more effective scheduling would tend to be the smaller planes, and (2) future planes would average larger sizes if landing fees rose by the larger amounts reflecting the true economic cost of using major airports. Small private planes would have financial incentives to land at smaller airports, rather than add to the congestion at major airports serving a large volume of commercial air traffic. In short, under prices reflecting cost, the number of flights "needed" at the major urban airports would be less, with less noise to destroy millions of dollars' worth of residential property values in the vicinity of airports, and less "need" to confiscate more of such property to expand airport facilities.
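The magnitudes in the last few paragraphs can be checked directly; the $2,000 and $75 figures are those cited above, while the route and seating numbers are hypothetical:

```python
# Hedged sketch: underpriced landings and half-empty planes.
true_cost = 2000    # estimated peak-hour landing cost at Kennedy (cited)
fee_charged = 75    # actual landing fee cited
print(f"Underpricing factor: {true_cost / fee_charged:.0f}x")  # ~27x
# (the text rounds this to "about twenty-five times")

load_factor = 0.5   # average U.S. airliner flies half empty (cited)
flights_now = 10    # hypothetical daily flights on one route
seats = 100         # hypothetical seats per plane
passengers = flights_now * seats * load_factor
flights_needed = passengers / seats            # at full loads
print(f"Flights needed at full loads: {flights_needed:.0f}")   # 5

# Same passengers, half the flights: part of the "need" for runways
# and new airports is an artifact of the $75 price.
```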

The pattern of overuse through underpricing — including zero prices for many government services — is not a case of "irrationality." Its pervasiveness among the most diverse products and services, from airports to stamps, suggests a reason for it, not random caprice. It is completely rational from the standpoint of maximizing the well-being of the decision making unit (airport authorities, postal officials, TVA executives, etc.). Under an underpricing policy, more "need" can always be demonstrated "objectively" than under market pricing, which would convey knowledge causing more economical use of whatever is being sold.

Some idea of the complications insulating regulatory agencies from feedback from the affected public may be suggested by the fact that specialists studying federal regulatory agencies “cannot even agree on the number” of such agencies, although “it is thought to be over 100.”28 A senator critical of regulatory commissions claims that simple “common sense” is “rare” in many of them, and then characterizes them as “undemocratic, insulated, and mysterious to all but a few bureaucrats and lawyers.”29 Such criticism misses the point that the agencies’ own interests could hardly be better served than by being so incomprehensible to outsiders that even a United States senator with a staff at his disposal cannot find out precisely how many such agencies there are, much less exercise effective legislative oversight over their activities. The costs of regulation to the public — that is, its uneconomic effects as well as its administrative costs — have been estimated by the U.S. General Accounting Office at about $60 billion per year30 — about $1000 for every family in the United States. The regulatory decisions which impose such costs may seem to lack “common sense” as public policy, but such decisions often make perfect sense from the regulatory commission’s own viewpoint — especially in favoring such incumbent special interests as have enough at stake to pay the high knowledge costs of continuously monitoring a given agency’s activities.

FORCIBLE TRANSFERS OF RESOURCES

In addition to forcibly changing — distorting — the price signals that convey knowledge of scarcities and options, the government has also increasingly used force directly to transfer resources. Massive “urban renewal” programs, for example, have simply ordered people to give up their homes and businesses, in order that land may be cleared and something else built on the site. Similarly, the military draft has forcibly transferred people from one occupation to another. Less dramatically, but no less importantly, the government has also forcibly appropriated many property rights over the years, without appropriating the physical things to which these rights are attached. As noted earlier (Chapter 5), to appropriate 10 percent of the value of land is the same thing economically as appropriating 10 percent of the land itself. Politically, however, the two things are quite different. The cost of knowledge to the electorate is much higher when part of the value of land is appropriated by restricting the options as to its use than when an equivalent appropriation takes the obvious form of expropriating a portion of the land itself. The same principle is involved when the government forcibly changes the terms of contracts already voluntarily negotiated between private parties, as when it changes the so-called “retirement” age — i.e., the age at which one party’s obligation to employ the other ceases. Assets set aside for other purposes must legally be expended to retain unwanted services — thereby reducing the real value of given money assets by reducing the options as to their use, just as land that cannot legally be used in as many ways is less valuable than physically identical land unrestricted by entails, zoning, or lost mineral rights.

Much articulation goes into trying to demonstrate to third party observers that the forcible transfers lead to more beneficial results. Yet on general principle, it is not clear that articulation is the best mode for weighing alternative values or that third party observers are the best judges. When a given set of homes and businesses is destroyed to make way for a very different set of homes and businesses, as in "urban renewal," a truly greater value of the second set would have enabled its users (or financial intermediaries) to bid the land away from the original users through voluntary market competition, without the use of force by the government (especially since the second set of users almost invariably has higher incomes than the first).31 Voluntary transfers of land are so commonplace as to cast doubt on the "need" for force, if the second set of uses is in fact more valuable. Actually, force is used twice in urban renewal transfers — once to dispossess the original users and again to transfer assets from taxpayers to subsidize the second set of users. The issue here is not the unpleasantness of force so much as its implications for the claim that the transfer of resources was to a more valuable use.

The particular site of the "urban renewal" may be far more attractive afterwards than it was before, and this adds plausibility to the claim of social benefits. But any site, activity, or person can be made more attractive by expending resources. Whether the incremental costs experienced by those who pay them outweigh the incremental benefits experienced by those who receive them is the crucial question. When those who pay and those who benefit are the same, as in voluntary market transactions, then it is unnecessary for third parties to incur the costs of deciding on the basis of plausibility, much less pay the still higher costs of obtaining more solid knowledge. Where force must be used to effect the transfer, the incremental costs apparently exceed the incremental benefits of the change as experienced by those directly involved. "Objective" data showing that the people dispossessed moved to "better" housing elsewhere likewise have more plausibility than substance. That "better" housing was always an option before — at a price — and the rejection of that option indicates a trade-off of housing for other things more valued by those actually experiencing the options. Forcibly reducing any set of options available can lead to a new collection of results — some part of which is "better" than its counterpart in the old collection — but the real question is which whole collection was preferred by the chooser when he had the choice.

More generally, “urban renewal” has involved visible benefits concentrated on a particular site and costs diffused over a nation of taxpayers, as well as costs borne by dispersed former residents. In other words, the cost of knowledge of benefits is much lower than the cost of knowledge of losses — even when the losses exceed the benefits. Therefore, it is rational for political decision makers to continue such programs, even when irrational economically or socially.

The use of draftees by the army may similarly be rational from the standpoint of the army and irrational from the standpoint of the economy or society. There are no objectively quantifiable “needs” for manpower by the military, any more than by any other organization. At some set of prices, the number of soldiers, civilian employees, and equipment needed to achieve a given military effect will be one thing, and at a very different set of prices for each, the quantitative “needs” for each can be quite different. Even in an all-out war, most soldiers do not fight, but perform a variety of auxiliary services, many of which can be performed by civilian employees, since most of these services take place far from the scenes of battle. From the standpoint of the army as an economic decision making unit, it is rational to draft a chemist to sweep floors as long as his cost as a draftee is lower than the cost of hiring a civilian floor sweeper. From the standpoint of the economy as a whole, it is of course a waste of human resources. Again, the use of force is significant not simply because force is unpleasant, but because it distorts the effective knowledge of options.

The appropriation of physical objects or of human beings is more blatant than the appropriation of intangibles like property rights, but the principles and effects are similar. Neither "property" nor the value of property is a physical thing. Property is a set of defined options, some of which (mineral rights, for example) can be sold separately from others. It is that set of options which has economic value — which is why zoning law changes, for example, can drastically raise or lower the market value of the same physical land or buildings. It is the options, and not the physical things, which are the "property" — economically as well as legally. There are property rights in such intangibles as copyrighted music, trademarked names, stock options, and commodity futures. A contract is a property right in someone else's future behavior, and can be bought and sold in the market, as in the case of contracts with professional athletes or consumer credit contracts. But because the public tends to think of property as tangible, physical things, this opens the way politically for government confiscation of property by forcibly taking away options while leaving the physical objects untouched. This reduction of options can reduce the value of the property to zero or even below zero, as in the case of those rent-controlled apartment buildings in New York which are abandoned by landlords who can neither sell them nor give them away, because the combination of building codes and rent control makes their value negative. Had the government confiscated the building itself, the loss would have been less. The landlord in effect gives the building to the government by abandoning it. Indeed, he pays to get rid of it, because abandonment has additional costs in the form of legal liability if the landlord is ever located and convicted of abandoning the building, which is illegal.

Property rights which are not attached to any physical object are even more vulnerable politically. Contracts concerning future behavior have been virtually rewritten by legislation and/or court interpretation. These have included both prior restraints on the terms of contracts — interest rate ceilings, minimum wage laws, rent control, etc. — and subsequent nullification of existing contracts, as in laws against so-called “mandatory retirement.” Few, if any, contracts require anybody to retire, and about 40 percent of all persons above the so-called retirement age continued to work, even before this legislation was passed. The so-called “retirement” age was simply the age at which the employer’s obligation to employ individuals ended. The only thing “mandatory” was that contractual obligation — and it has been unilaterally extended by the government. Categorical, speculative articulation by third parties regarding the productive ability of the elderly as a group has superseded incremental judgments of each situation by the person actually employing each worker in question.

As in other cases, moving an asset or obligation backward or forward in time drastically alters its value or cost. Changing the retirement age a few years in either direction is the same as forcibly transferring billions of dollars from one group to another, since the costs of such commitments as life insurance, annuities, etc., depend crucially on time. One of the largest financial commitments arbitrarily changed by changing the retirement age is that of the government's own "Social Security" program — which saves billions of dollars by postponing its own payments to the retired by forcing employers to continue to employ them longer. But because these changes in massive financial obligations (on employers) and defaults (by government) take the outward form of "merely" changing a date, they are politically insulated by the cost of the knowledge required for voters to detect their full economic impact.
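The way "merely changing a date" moves wealth can be illustrated with standard present-value arithmetic; the payment, discount rate, and ages below are hypothetical, chosen only to show the mechanism:

```python
# Hedged sketch: why shifting a retirement date transfers billions.
# Standard present-value discounting with hypothetical numbers.
def present_value(payment, rate, years):
    """Discounted value today of a payment due `years` from now."""
    return payment / (1 + rate) ** years

pension = 10_000   # hypothetical annual obligation per retiree, dollars
rate = 0.08        # hypothetical discount rate

pv_at_65 = present_value(pension, rate, 10)  # worker now 55, retires at 65
pv_at_70 = present_value(pension, rate, 15)  # retirement age moved to 70
print(f"PV of one year's benefits paid at 65: ${pv_at_65:,.0f}")  # ~$4,632
print(f"PV of one year's benefits paid at 70: ${pv_at_70:,.0f}")  # ~$3,152
print(f"Shifted per retiree-year: ${pv_at_65 - pv_at_70:,.0f}")

# Multiplied across millions of workers and many years of benefits,
# a five-year change in one date is a multibillion-dollar transfer.
```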

CONTROLLING PRODUCERS AND SELLERS

Controlling the terms which individuals may offer each other is only one method of economic control. Other techniques include (1) controlling who can be included in or excluded from a particular economic activity, (2) controlling what characteristics will be permitted or forbidden in products, producers, or purchasers, and ultimately (3) comprehensive economic "planning" which controls economic activity in general on a national scale.

FORCIBLE RESTRICTION OF COMPETITION

While prices are crucial as conveyors of knowledge to decision makers, artificial prices which distort this knowledge can persist only insofar as competitors whose prices would convey the true knowledge are forcibly excluded.32 One reason for forcibly excluding competitors has already been noted — “external” effects, as in broadcast interference, which makes unrestricted competition unfeasible.33 There are also industries where the production costs are overwhelmingly fixed costs — and high fixed costs at that — so that the cost per unit of output is constantly declining over any range of output that is likely to be demanded. In this case, one producer can supply the market more cheaply than two or more, since more output means lower production costs. Examples include industries with huge investments in massive systems of conduits of one sort or another delivering water, gas, electricity, or telephone calls. These are what economists call “natural monopolies,” since it would cost more to get the same service through multiple producers than through one producer per given area. Therefore government regulation substitutes for competition as a means of preventing high monopolistic prices from being charged.

This is the idealized economic theory. The reality is something else. Once a rationale for regulation has been created, the actual behavior of regulatory agencies does not follow that rationale or its hoped-for results, but adjusts to the institutional incentives and constraints facing the agencies. For example, the scope of the regulation extends far beyond "natural monopolies," even where it was initially applied only to such firms. The broadcast-interference rationale for the creation of the Federal Communications Commission in no way explains why it extended its control to cable television. The "natural monopoly" that railroads possessed in some nineteenth century markets led to the creation of the Interstate Commerce Commission, but when trucks and buses began to compete in the twentieth century, the regulation was not discarded but extended to them. Airplanes have never been a "natural monopoly," but the Civil Aeronautics Board has followed policies completely parallel with the policies of other regulatory agencies. It has protected incumbents from newcomers, just as the FCC has protected broadcast networks from cable TV, as the ICC has tried to protect railroads from trucking, and as municipal regulatory commissions have protected existing transit lines from jitneys or other unrestricted automobile-sharing operations. As a leading authority has summarized CAB policy: "Despite a 4,000 percent increase in demand between 1938 and 1956, not a single new passenger trunk line carrier was allowed to enter the industry."34

Regulatory agencies in general have the legal right to exclude firms from entering the industry they regulate. This is a property right worth billions of dollars. The members of the commissions are not allowed to sell this right, but they can dispense it in ways that make their job easier, or their individual fortunes more secure as later employees of the firms they currently regulate. Favoritism to incumbents is a perfectly rational response to such incentives, however inconsistent with the public interest. The only legal guidelines are that entry of firms into the regulated industry must serve the “necessity and convenience” of the public. The regulatory agency determines how many firms are “needed” to serve the public. The idea is that there are quantitative, objective “needs” determinable by third party observers — as distinguished from the economic reality of varying quantities and qualities demanded according to varying costs. But the “need” for railroad service, for example, is “measured in physical rather than economic terms” so that “as long as existing carriers are physically capable of performing a particular service, prospective competitors are to be denied entry — even if their service is cheaper, better, and more efficient.”35 Similar policies are followed by other regulatory commissions.

Because the right to operate in a regulated industry is a valuable property right available at virtually zero cost, the claimants’ demand always exceeds the supply, even when only incumbents are allowed to compete. It is to the regulatory agency’s political advantage to satisfy, or at least appease, as many incumbents as possible — which is to say, to distribute these operating rights widely, and therefore thinly. Thus legal rights to engage in interstate trucking are spread so thin that they are often rights to operate in only one direction — a “carrier between the Pacific Northwest and Salt Lake City may haul commodities eastbound, but not westbound,”36 for example — thereby doubling the cost to consumers, who must pay enough freight charges to cover the cost of the truck both ways. Sometimes the right to carry goods between two points does not include the right to pick up and deliver at points in between, so that again the cost of the service is made artificially high by not allowing it to be shared by as many customers as possible. However economically costly this is to the country, it makes perfect political sense as a means of spreading a given amount of patronage as widely as possible to mollify as many constituents as possible.

Since the general public knows little or nothing about such regulatory agencies, their interests are a politically negligible consideration. Whatever the individual morality or intentions of regulatory commissioners, the systemic factors leading to such results are (1) the vast disparity in cost of knowledge per unit of benefit as between the public and special interest groups, and (2) the appointment rather than election of commissioners, so that no political competitor has a high personal or organizational stake in informing the public of incumbent commissioners' misdeeds. Political as well as economic competition has been restricted or eliminated. Mollifying as many constituents as possible means not only protecting incumbents from prospective competitors; it means protecting high-cost (inefficient) incumbents from unrestricted competition from low-cost (efficient) incumbents, who could otherwise undercut their prices, take away their customers, and drive them toward bankruptcy. Rather than quietly enter bankruptcy courts, such higher cost firms are more likely to noisily enter the political arena, probably through the congressional committee controlling the powers and appropriations of the regulatory commission in question. It is politically prudent for the commission to buy "insurance" against such problems — at costs externalized to the public — by maintaining a minimum level of prices designed to insure the survival of the highest-cost firms. Lower-cost firms therefore earn more profits per unit of sales but are prevented from completely destroying the high-cost firms. In short, there is something for everybody, which is a politically more viable situation than the "cutthroat" or "ruinous" competition which regulatory agencies constantly guard against.

Insofar as the public is interested in, and able to monitor, the results of the regulatory process, it is usually in terms of product prices or the profit rate of the industry. Almost by definition, the public has nothing to compare the prices with — there being no unregulated firm producing the same good or service, in most cases. This leaves the profit rate of a regulated firm as the criterion. The regulatory agency therefore appeases the public by keeping this profit rate "low" in comparison with unregulated firms. That is wholly different from keeping the prices low. A low profit rate on a truck delivery that costs twice as much as necessary, because the truck returns empty for lack of legal authority to do business the other way, may still mean a price almost double what it would have been under unregulated competition. Passenger fares may also be double what they would be without regulation when commercial airlines fly half empty — which is the rule. In the latter case, some comparison is possible, because large states like Texas and California have purely intrastate airlines which thereby escape federal regulation. Pacific Southwest Airlines, for example, flies between Los Angeles and San Francisco at far lower fares — and higher profits — than federally-regulated airlines flying between Washington and Boston, which is the same distance.37 They simply fly with more of the seats filled,38 partly because there is no CAB to stop them from charging low fares. In the words of economists studying airline prices, the "substantial traffic gains of the intrastate carriers have more than offset the lower revenue yields per passenger…"39 Indeed, low markups and high volume have been the secret of many profitable businesses in many fields.

Although pious words about the "public interest" may abound in regulatory legislation and regulatory rulings, there is no institutional mechanism to compel, induce, or reward commissions for weighing the costs and benefits to the public when they make their decisions. In particular, there are no incentives to keep costs down — and costs make up a far higher percentage of the price of most goods than does profit. A small inefficiency can raise the price of a good by much more than a doubling of the profit rate would. The average profit rate in the United States is about 10 percent, and a 20 percent rate for any firm is considered enormous. Yet if Firm A has only 10 percent higher costs than Firm B, its price would tend to rise as much as if its profit rate had doubled. The political visibility of profit rates results in much regulatory time, energy, and controversy going into determining whether a "reasonable" rate of return is 6 percent, 7 percent, or 8 percent — differences which may mean very little to the average consumer in dollars and cents. Much less effort goes into determining whether costs of production are higher than they need be, even though production costs may have far more effect on prices. This is partly because of both legal and common-sense limits on how far a regulatory agency can go into the actual management of a firm.
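The comparison in that paragraph can be verified with the round numbers it uses, treating the profit rate, for illustration, as a 10 percent margin on the sale price:

```python
# Hedged sketch: a 10% cost rise vs. a doubled profit rate.
# Profit rate treated (as an assumption) as a margin on the sale price.
cost = 90.0      # production cost per unit, dollars
margin = 0.10    # 10% profit rate on sales, as in the text

price = cost / (1 - margin)                       # baseline: $100.00
price_doubled_margin = cost / (1 - 2 * margin)    # margin doubles to 20%
price_higher_cost = (cost * 1.10) / (1 - margin)  # costs rise 10%

print(f"Baseline price:           ${price:.2f}")                # 100.00
print(f"Price if profit doubles:  ${price_doubled_margin:.2f}") # 112.50
print(f"Price if costs rise 10%:  ${price_higher_cost:.2f}")    # 110.00

# A 10% cost inefficiency moves the price about as much as doubling the
# politically visible profit rate -- yet regulators scrutinize the
# profit rate and largely leave the cost side alone.
```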

Regulated firms whose explicit financial profit rate is restricted have every incentive to allow costs to rise, taking various benefits in nonpecuniary forms, such as fringe benefits (especially for management), more relaxed (inefficient) management, less innovative activity and the headaches it brings, and less unpleasantness, such as firing people or hiring associates who are offensive in manner, race, or sex.40 In addition, the more costs the regulated firm can accumulate — and get the regulatory agency to accept as valid — the higher its total profits at a given rate of profit.41 In short, there is little incentive for regulated firms to keep down costs, and much incentive to let them rise, especially in ways that make the management of such firms easier. For example, high wage demands by unions in regulated industries need not be resisted (and strikes risked) as strongly as in unregulated industries, because wage increases become part of the cost on which the regulatory agency sets prices. Some of the highest paid workers in America are railroad workers and municipal transit workers, despite the dire conditions of both industries and the frequent transfusions of taxpayers' money they require.

Many of the most extreme examples of employing unnecessary labor — “featherbedding” — are found in regulated industries. Duplicate crews for handling trains on the road and handling the same trains when they enter the railroad yard, retention of coal-shovellers or “firemen” after locomotives stopped using coal, and elaborate “full crew” laws and practices are among the many financial drains on the American railroad industry, which is financially unable to keep its tracks repaired or maintained in sufficiently safe conditions to prevent numerous derailments per year and the spread of noxious or lethal chemicals which often accompany such accidents. The managements of such financially depleted railroads have likewise enjoyed extraordinary financial benefits, including many of questionable legality. To explain this by individual intentions — “greed” — is to miss the central systemic question: Why can such greed on the part of both labor and management be satisfied so much more in this industry than in others? The incentives and constraints of regulation, compared to those of competition, are a major part of the answer.

Regulation spreads not only because more regulatory agencies are created to regulate more industries, but also because existing regulatory agencies reach out to regulate more firms which have an impact on their existing regulated industry. The FCC’s reaching out to include cable TV or the ICC’s reaching out to include trucking are classic examples of regulatory extension of the original mandate based on the original rationale to include things neither contemplated nor covered by that rationale. The tenacity with which regulatory agencies hang onto existing regulated activity is indicated by the ICC’s reaction to the exemption of agricultural produce from its regulatory scope. It ruled that chickens whose feathers had been plucked were no longer agricultural but “manufactured” products — as were nuts whose shells had been removed or frozen vegetables.42

Competition may be restricted not only by direct control of the necessary legal papers required to enter a given industry but also by control of subsidies in an industry whose whole price structure requires subsidy for firms to survive. The American maritime industry, for example, has such high wages and inefficient union rules that its firms cannot survive without massive government subsidy. A firm which is denied such subsidy simply cannot compete with the other firms that have it, because it will have to charge its customers far more than the subsidized firms charge. The Federal Maritime Board determines who gets how much subsidy on which routes, on the basis of its decisions about the “essential” nature (“need”) for those routes.43 Both the maritime industry and the maritime unions are heavy contributors to both political parties, insuring the continuance of such arrangements regardless of the outcome of elections.

Not all governmental restrictions on competition take the form of regulation in the classic public utilities sense. There is much regulation of particular markets such as various agricultural and dairy-product markets, under a variety of rationales having nothing to do with “natural monopoly” or consumer protection. The usual effect of such restrictions is to raise product prices, and in many cases it is rather transparent that that was the intention as well. Sometimes these government interventions go beyond generalized price fixing to, for example, setting a different price for milk for each of its various uses. The terms of the trade-offs of yogurt for cheese or ice cream, etc., are not allowed to be conveyed by prices that fluctuate with consumer demand or technological change, but are fixed politically and therefore distort knowledge of economic alternatives.

Occupational licensing laws are another, very different, form of economic regulation which nevertheless shares many of the political characteristics common in commissions regulating public utilities or common carriers. First, there is an enormous bias toward incumbents. Escalating qualification standards in the licensed occupation almost invariably exempt existing practitioners, who thereby reap increased earnings from the contrived scarcity without having to pay the costs they impose on new entrants in the form of longer schooling, tougher qualifying examinations, or more extended apprenticeship.44 Second, the prices of the services are artificially raised, and the undercutting of price is either forbidden (taxi rides) or rendered uneconomic by forbidding price advertising (lawyers, doctors, optometrists). Although "the public interest" is a prominent rhetorical feature of occupational licensing laws and pronouncements, historically the impetus for such licensing has come almost invariably from practitioners rather than the public; it almost invariably reduces the number of new practitioners through various restrictive devices, and the net result is higher prices.

Some idea of the magnitude of the effect of occupational licensing may be obtained from the prices of such licenses as are transferrable through market sales. A taxi license in many American cities costs thousands of dollars — up to $50,000 in New York City.45 Where licenses are nontransferrable, as in medicine, the effect of the restrictive practices can be indicated by the incomes of doctors — which were below those of lawyers in the 1930s but are now more than double the incomes of lawyers, as a result of restrictive practices by the American Medical Association, which possesses far more control over medical school admissions and hospital staffing than the American Bar Association possesses over corresponding legal institutions.

Another area in which the government restricts competition is the application of laws on land use — including municipal land use and recreational land policy for wilderness areas. Restrictions on the use of land forcibly prevent bidding for it by certain users — notably middlemen ("developers") selling or renting to working-class people. The political impracticality of openly admitting that government force is being summoned to keep out the poor leads to much vague and lofty discussion in which people fade from the picture entirely and such impersonal entities as "valuable open space"46 and "fragile areas"47 dominate discussion about the need to "protect the environment"48 under "rational and comprehensive"49 allocation of the land through political processes. But the strong class bias is evident in such things as (1) the heavily upper income occupations (executives, doctors, engineers, academics) of members of the Sierra Club, which spearheads much "environmental" political activity,50 (2) strong working-class voter opposition to zoning and strong upper-class support for it,51 (3) expensive home building "requirements" having nothing to do with the "environment" or "ecology" but having much to do with pricing the poor out of the market,52 and (4) the limiting of cheap and fast access to wilderness recreation areas in favor of time-consuming access usable only by those with substantial leisure.53 A student of the so-called "environmental controversy" finds "an ugly strain of narrow class interests involved in the wilderness issue," an "attempt by the prosperous to bar the rabble" and efforts by those who "already have vacation colonies on secluded lakes" to keep out "developments that cater to the masses."54

Defending class privileges in the name of the public interest has required constant alarms and misleading statistics. For example, a picture of spreading and pervasive urbanization is projected by using the Census definition of "urban" as any place with 2,500 inhabitants or more. This technique conjures up a "megalopolis" extending "from southern New Hampshire to northern Virginia and from the Atlantic shore to the Appalachian foothills."55 In fact, however, the average density of most of that area is about one house for every twelve acres. A few high density areas like New York and other eastern cities contain most of the people (87 percent) in the supposed "megalopolis," most of which is covered with greenery rather than concrete.56 Zoning law proponents likewise invoke fears of factories and gas stations in residential neighborhoods. But in cities without zoning — notably Houston — no such dire things happen. Middle-class neighborhoods there look like middle-class neighborhoods elsewhere. In lower income neighborhoods, there are sometimes auto repair shops and other such local conveniences — but it is precisely in these neighborhoods with automobile repair shops that zoning is overwhelmingly rejected by the voters.57 Apparently the trade-off between convenience and aesthetics is different for those with less money and older cars. Looked at another way, zoning allows some people to impose their values and life-style on others who may not share the values or be able to afford the life-style.

ANTITRUST

Markets may be controlled by private parties as well as by the government, and the antitrust laws are in general aimed at preventing monopoly and related market distortions. However, the major antitrust laws were passed at widely varying times and represent varying concepts and conflicting goals. The Sherman Antitrust Act of 1890 is the oldest and most important of the federal statutes, carrying the heaviest penalties, which can range up through millions of dollars in civil damages to dissolution of a firm and/or jail for its executives. The Sherman Act forbids anyone to "monopolize, or attempt to monopolize," or to engage in "restraint of trade." The Clayton Act of 1914 forbade certain actions incident to monopolistic behavior, such as price discrimination, and the Federal Trade Commission Act of the same year established an organization to monitor and issue orders against a variety of undesired ("unfair") business practices. The most enigmatic and controversial of the antitrust laws is the Robinson-Patman Act of 1936, ostensibly strengthening the Clayton Act's ban on price discrimination, but in practice creating legal risks and uncertainties for firms engaging in vigorous price competition. The Celler-Kefauver Act of 1950 amended the Clayton Act to create new legal obstacles to the merger of firms.

The legal problem of reconciling these overlapping statutes is complicated by the overlapping jurisdiction of the Justice Department and the Federal Trade Commission in antitrust cases, and by the full or partial exemption from antitrust laws of some economic activities, including regulated public utilities and labor unions. Moreover, the vague language of the law leaves ample room for judicial and bureaucratic interpretations which have caused some of the leading economic and legal scholars to claim that the antitrust laws have had the opposite effects from their intentions.58

Among the central concerns of the antitrust laws are market structures, price fixing, and price discrimination. A monopoly would not accurately transmit costs through its prices because those prices would be set above a level that could persist with competitors. Competitive businesses set prices reflecting costs of production only because they stand to lose too many sales at prices that exceed what is necessary to compensate others for supplying the same product. It is neither greed nor altruism that explains price differences but rather the systemic differences between competitive and noncompetitive markets. Price discrimination is both a symptom of a noncompetitive market and a further distortion of economic knowledge, as it conveys different information about the relative scarcity of the same product to different users — causing them to economize differently, and thus at least one of them wrongly.

Antitrust laws, like all forms of third-party monitoring, depend for their social effectiveness on the articulation of characteristics objectively observable in retrospect, which may or may not capture the decision-making process as it appeared prospectively to the agents involved. There is usually nothing in antitrust cases comparable to finding someone standing over the corpse with a smoking pistol in his hand. Objective statistical data abound, but their interpretation depends crucially on the definitions and theories used to infer the nature of the prospective process which left behind that particular residue of retrospective numbers. For example, merely defining the product often opens a bottomless pit of complexities. Cellophane is either a monopoly — if the product is defined to include the trademarked name, which only Dupont has a legal right to use — or has varying numbers of competing substitutes, depending on how transparent and how flexible some other brand of wrapping material must be in order to be considered the same or comparable. Under some definitions or demarcations of transparency and flexibility, cellophane is monopolistic for lack of sufficient substitutes. But by other definitions it is in a highly competitive market with innumerable substitutes. The controversies following the Supreme Court’s decision as to whether cellophane was a “monopoly” (no) suggest that there were other definitions which some (but not all) legal and economic experts found preferable. The point here is that there is no objective and compelling reason to take one definition rather than another, though the whole issue often turns on which definition is chosen. In more complicated products, there are often numerous variations on the same goods, and which of these are lumped together as “the same” product determines what the market is and how large the producer’s share of that market is, as variously defined. For example, Smith-Corona has a smaller share of total American typewriter sales than of electric typewriters sold in the United States, or of all portable electric typewriters made by American manufacturers. For many products, so much is imported that a firm’s share of American production is economically meaningless: any American producer of single-lens reflex cameras would by definition have a “monopoly” of domestic production, since all such cameras are currently imported. But the purely definitional monopoly has no effect on economic behavior, in the face of dozens of foreign competitors.

What is involved here is not a technicality of antitrust law but a far broader question about the use of knowledge, and the role of articulation. The basic problem in these definition-of-product issues is that substitutability is ultimately subjective and prospective, while attempts to define it must be objective and retrospective.

Even where a product seems unambiguously definable in some plain sense — a tangerine, for example — a question may still arise as to the economic significance of such a definition. If a worldwide cartel were to gain control of every tangerine on the planet, it could still not double the price of the monopolized product without ending up with millions of unsaleable and spoiling tangerines in its warehouses, while consumers switched to oranges, tangelos, and the like. In short, even where the physical demarcation of a product seems obvious and unambiguous, its economic demarcation may be difficult or impossible. The extent to which the price of one product affects the sales of another product is what is economically important. As a practical matter, sellers can acquire an unarticulated “feel” for this in an ongoing trial-and-error process, but that is very different from third-party observers of retrospective statistics being able to objectively document irrefutable results to courts. For one thing, the discrete time units in which data are collected by observers may be far longer than the almost continuous time dimensions of the actual transactors’ ongoing experience, so that the observers’ data are more likely to represent an amalgamation of highly disparate price and sales fluctuations during the time interval studied.

Discussions of the systemic effects of monopoly tend to center on the intentions or behavior of monopolists, when what is crucial is the exclusion of competitors who would offer different terms to its customers. This exclusion of competitors is of course the defining characteristic of monopoly, so its explicit statement may seem unnecessary. However, where governmental exclusion is not involved, a real monopoly is quite rare, and in practice antitrust suits claiming “monopolization” or attempting to prevent mergers or to break up existing large firms usually involve industries where there are not one, but a small number, of firms producing the bulk of a given industry’s output. A treacherous analogy or extension is then made from the situation of one seller (monopoly) producing all of an industry’s output to the situation of a few sellers producing most of an industry’s output — the two being implicitly taken to be very similar. But it becomes crucial to recall that the systemic economic effect is not due to what the producer(s) can do but to what the producer(s) can prevent others from doing.

An industry with four firms producing 80 percent of its output may seem to be a quasi-monopoly, but if there are dozens of other firms producing the other 20 percent, then it has failed to achieve the exclusion which is crucial. Any artificial raising of prices above competitive levels by collusion among the four firms risks the fate of the tangerine cartel in our earlier hypothetical example. Customers can start buying from the dozens of other producers. The retrospective statistic that four firms sold 80 percent of the industry output during a given time span does not mean that there is anything fixed or prospective about that number. Antitrust proponents have scored a verbal coup by constantly terming such percentages the “share” of the market controlled by certain firms, as if they were discussing prospective behavior rather than retrospective numbers. Such insinuations of exclusionary power or intimidation require no evidence but instead rely on the time-tested principle of repetition. But historically, market shares have changed over time — some drastically — and in some cases the so-called “dominant” firm has disappeared entirely. Life magazine and the Graflex Corporation are recent examples. At one time the Graflex Corporation sold virtually all the cameras used by newspaper photographers. But it “controlled” nothing; there were always many other domestic and foreign producers of press cameras, and almost all of them disappeared along with Graflex when improvements in smaller-sized cameras made the latter effective substitutes.

The intellectual state of antitrust doctrine may be suggested by the fact that some of the leading authorities in this field refer to these prevailing doctrines in such terms as “a secular religion,”59 consider them analogous to “evangelical theory,”60 or simply “wild and woolly.”61 Even a Supreme Court Justice observed that in certain kinds of antitrust cases the “sole consistency” is that “the government always wins.”62 It is therefore especially important to systematically spell out the specifics behind some of the many vague and tendentious terms used in antitrust doctrines (“control,” “predatory pricing,” “foreclosing” the market, “incipient” monopoly, etc.).

There are two fairly obvious alternative explanations of why one firm or a few firms sell the bulk of the output in a given industry. One is that they in some way exercise “control” over others — either by being able to exclude potential competitors or by intimidating them out of competitive pricing by threats to ruin them financially. An opposing explanation is that firms differ in efficiency — whether in production, in the quality of the product, in shipping costs, or in the general quality of their respective managements. Those who argue that concentrated industries represent monopolistic control must, in some sense, deny production efficiencies, product quality differences, or differences in management. For example, management quality differences are simply assumed away in analyses which proceed as if each firm or plant represents the “best current practice” in its production,63 or as if “managerial competence” can be “held equal”64 by observers. Economies of scale are sometimes defined narrowly as individual plant economies — ignoring managerial differences among multiplant corporations, as expressed in such things as how wisely each plant is located, so as to minimize the shipping costs of raw materials and finished products and to secure an efficient labor supply, a favorable economic and political climate, etc. Economies are simply pronounced to be negligible with such phrases as “only 2.7 percent” of production and transportation costs.65 But given an average profit rate of 10 percent, a relatively small difference in such costs can translate into the difference between a profit rate that keeps the business viable and one low enough to reduce stockholders’ return to less than they could get by depositing their money in an insured savings and loan association — obviously not a situation that can continue in the long run. Observers are the last people who can declare what is negligible with someone else’s money.
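
A back-of-the-envelope calculation shows why “only 2.7 percent” of costs is anything but negligible. A minimal sketch in Python, with assumed round numbers (a 10 percent profit rate on sales, as in the text; nothing here is taken from the studies cited):

    # Why a "negligible" cost difference is not negligible to stockholders.
    # Assumed round numbers for illustration only.
    sales = 100.0
    profit_rate = 0.10                  # 10 percent of sales, as in the text
    costs = sales * (1 - profit_rate)   # production and transportation costs

    cost_disadvantage = 0.027           # "only 2.7 percent" of those costs
    extra_costs = costs * cost_disadvantage

    profit_before = sales - costs                # 10.0
    profit_after = profit_before - extra_costs   # ~7.6
    print(f"profit falls from {profit_before:.1f} to {profit_after:.2f} percent of sales")
    # Roughly a quarter of the stockholders' return disappears -- easily the
    # margin between a viable firm and one that cannot attract capital.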

The alternative hypothesis is that some industries are concentrated because some firms’ products are simply preferred by consumers, whether because of their quality, price, convenience, or other appeal. If this is true, then the slightly greater profitability of industries with few sellers is not because the whole industry is more profitable (as it would be under collusion), but because some particular firms have a higher profit rate, which arithmetically brings up the average while economically doing nothing to make the rest of the industry more profitable than it would be under competitive conditions. The data in fact show no profit advantage to a firm of a given size in being in a “concentrated” versus a nonconcentrated industry.66
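
The arithmetic of that distinction is easily made concrete. A minimal sketch in Python, with invented profit rates for a hypothetical five-firm industry (unweighted by market share, purely for illustration):

    # How one preferred firm raises an industry's average profit rate
    # without making any other firm more profitable. Assumed numbers.
    competitive_rate = 0.10
    profit_rates = [0.18] + [competitive_rate] * 4   # one standout, four others

    average = sum(profit_rates) / len(profit_rates)
    print(f"industry average: {average:.1%}")        # 11.6%
    # The "concentrated" industry looks more profitable on average, yet four
    # of the five firms earn exactly the competitive rate.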

The weakness of the case for believing that industries with few sellers have monopolistic practices or results is indicated by (1) the absence of any evidence generally accepted as convincing by either the legal or the economics profession, (2) the arbitrary definitions and sweeping assumptions included in such evidence as is offered, and (3) the policy position of “deconcentration” advocates that the burden of proof must be put on defendants in concentrated industries to show that they are not harmful to the economy.67

Much of the legal and economic analysis of industries where one or a few firms produce and sell most of the output gives great weight to the supposed homogeneity of the product, which should presumably preclude any rational basis for a consumer preference that would lead to such disproportionate market shares. However, on closer scrutiny this supposed homogeneity usually turns out to mean that brand-new, perfect specimens of each product, once already located, are identical or similar. The difference between “similar” and “identical” can involve substantial costs of knowledge, as can the process of locating the product. Among the major ways in which apparently similar products differ is in their durability — that is, their performance long after they have ceased to be brand new — and in their respective quality control, which determines what percentage of the specimens will have flaws, as well as in their distributional availability to the consumer in convenient retail outlets.

In such cases, so-called “expert” testimony can be the most misleading kind of testimony. The expert has, by definition, already paid more cost for knowledge than the average consumer, and so has far lower present or prospective incremental knowledge costs than the consumer. The mere fact that he can render a judgment on the product means that he has already located a place from which to obtain a specimen. That he knows how to produce equivalent results from “similar” products means that he has sufficient knowledge of both products to make them interchangeable to him, although not necessarily to a consumer familiar with only one, and who may perhaps have substantial prospective knowledge costs in changing to the use of the other.

Examples abound. In a famous antitrust case involving Clorox, the Supreme Court said that “all liquid bleach is identical.”68 But the factual finding in the very same case was that “Clorox employed superior quality controls” and that some brands of liquid bleach “varied in strength” from one to another69 — a fact of no small importance to users considering how much is enough and how much will ruin their clothes. It may well be that there are other brands of liquid bleach absolutely identical to Clorox, but the knowledge of which ones they are is not a free good, and whether the uncertainty of a variation is worth the price difference is not a question that must be settled once and for all by third-party observers, since consumers find various brands sitting side by side on supermarket shelves. In another well-known antitrust case, competing pies were considered by the Supreme Court as being “of like grade and quality” despite one pie company’s “unwillingness to install quality control equipment” to meet the competition of its more successful rival.70 Undoubtedly a photograph taken with a press camera produced by the Graflex Corporation, which dominated that market, would have been wholly indistinguishable from a photograph taken with any number of other cheaper press cameras, as of the date both were purchased brand new. However, since its cameras were usually purchased by professional photographers, and especially by the photographic departments of newspapers, the strong preference for Graflex press cameras could not be attributed to technical ignorance, “irrationality,” or the caprice or psychological susceptibilities of uninformed consumers. Experience had simply established the ruggedness of this particular brand of press camera in the rough usage to which it was subjected in crowds, on sports fields, and in wartime combat situations.

Sometimes the difference in consumer preference as between products is due not so much to the characteristics of the products as to differences in the cost of knowing other products’ characteristics. Photographic experts have determined that a number of films manufactured by Ilford, Inc. produce results virtually indistinguishable from those produced by films manufactured by Eastman Kodak, which dominates that market. That is, a photographic technician equally familiar with the processing of both brands of film can produce the same end results from either. Nor are the Ilford processing requirements any more difficult than those of Kodak. They are simply not as well known, just as the characteristics of Ilford film are not as well known. Nor are all brands comparable to these two. Even the singling out of Ilford as one brand among many others that is comparable to Kodak requires a prior knowledge and sorting of little-known brands. Note that what is involved here is not “taking advantage” of consumers’ ignorance. A professional photographer, well aware of the similarity, may nevertheless continue to purchase the one familiar brand rather than exert himself to stock or refer to two different sets of developing data. There is also much to be gained by using one brand to (1) free one’s picture-taking attention for aesthetic concerns rather than technical considerations, and (2) be able to buy new film identical to the old wherever one happens to be on assignment — which is to say, not having to worry because one company’s dealer outlets are not as numerous as another’s.

Third-party observers may dismiss product differences as negligible, just as they dismiss production cost differences as negligible. However, there is no “objective” measure of what is negligible. Something is negligible or not negligible to someone. In baseball, for example, the difference between a .250 hitter and a .350 hitter is only about one hit out of every three games, which might seem negligible to a casual onlooker, but that can be the difference between being sent back to the minor leagues and ending up in the Hall of Fame. Customers or stockholders may differ greatly from third-party observers as to what is or is not negligible. Products sold to professional photographers and photographic organizations exhibit the same strong customer preference patterns and attendant “market concentration” as products sold to the supposedly “irrational” general public. What is repeatedly ignored in attempts to discount buyer preferences is the cost of knowledge — knowledge of where to buy a product, knowledge of its characteristics and of ways of using it, and knowledge of the way quality varies from specimen to specimen. To approach this from the standpoint of whether the producer “deserves” such a large market share is to dismiss consumers’ interests. To say that a firm’s reputation gives it an advantage — presumably an unfair advantage — in competition71 is to say that consumers economize on knowledge by sorting and labeling only to the firm level, in cases where a company’s history of product reliability makes finer sorting not incrementally worth the cost. The issue is not so much the retrospective justice of rewarding a firm for establishing a reputation for reliability. What is more important socially is the prospective incentive to all companies to acquire or maintain such a reputation — that is, from a social point of view, to localize monitoring incentives where they can be most effectively carried out.
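
The baseball arithmetic is worth spelling out. A minimal sketch in Python, assuming a typical three to four at-bats per game (the at-bat figure is an assumption, not from the text):

    # The "negligible" .100 difference in batting averages, restated per game.
    avg_difference = 0.350 - 0.250    # 0.100
    at_bats_per_game = 3.5            # assumed typical figure

    extra_hits_per_game = avg_difference * at_bats_per_game   # 0.35
    games_per_extra_hit = 1 / extra_hits_per_game             # ~2.9
    print(f"about one extra hit every {games_per_extra_hit:.1f} games")
    # Negligible to a casual onlooker; the difference between the minor
    # leagues and the Hall of Fame to the player and his team.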

Preoccupation with the firm’s market share has led to adverse antitrust decisions even when there were no adverse economic effects discernible by the courts. In the celebrated antitrust case against the Aluminum Company of America — one of the very few privately created monopolies on record — it was found that the profit rate averaged only about 10 percent,72 like firms in competitive industries. Nor did the Court find any negative effects on the economy — but Alcoa still lost. Its “exclusion” of competitors consisted solely of building plant capacity in anticipation of the growing demand for aluminum.73 The chilling effect of this finding could be seen in the later history of cellophane, which was in chronic shortage because Dupont refused to build plant capacity ahead of the growing demand, for fear of antitrust suits.

Most antitrust cases involve legal actions against individual firms having nowhere near monopoly proportions of output or sales. In the celebrated case of Brown Shoe Company v. United States, a merger which gave the combined firms a total of 5½ percent of American shoe store sales was found to be in violation of the antitrust laws.74 Another merger which gave the Pabst Brewing Company 4½ percent of the nation’s beer sales was also broken up as a violation of the antitrust laws.75 In yet another well-known case, the Supreme Court broke up a merger between two local grocery chains in Los Angeles which together had only 7½ percent of the grocery sales in that city.76 “Secular religion” may not be too strong a characterization for antitrust doctrines which dismember firms that are that far from “monopolistic” control, in industries with sometimes hundreds of competitors. However, the processing of such cases by governmental agencies is by no means irrational as institutional policy. Agencies with a mandate to fight monopoly have every incentive to define the term as broadly as they can, to see “incipient” monopoly in as many places as possible — and especially so in an economy where private monopolies are rare. To restrict themselves to fighting real monopolies or significant monopoly threats could mean losing the bulk of their staff, appropriations, and power. A more basic social question is how they find the outside support that is politically necessary to continue such activities into the region of diminishing (or negative) returns. This has to do with the intellectual climate, and so will be discussed in Chapter 10.

Despite the original thrust of antitrust legislation toward preventing high prices from being charged by monopolistic firms, it has increasingly been used to prevent low prices from being charged. A landmark in this development was the passage of the Robinson-Patman Act in 1936. The ostensible purpose of this act was to prevent price discrimination of a kind that would “substantially lessen competition.” The immediate political impetus behind the law was the growth of high-volume, low-markup retail chains which bought from wholesalers in huge quantities at discount prices and then undersold the smaller merchants with whom they competed for retail sales to the public. Some cynics called it the anti-Sears, Roebuck law. Price discrimination complaints under the Robinson-Patman Act are usually made in transactions involving wholesalers.

Robinson-Patman Act cases, which depend on how competition is affected by a given action, provide especially dramatic examples of the ambiguity involved, throughout the antitrust laws, between (1) the systemic characteristics which constitute “competition” and (2) the incumbent firms which at any given time constitute the competitors of a defendant. Innumerable economists have complained that the administrative agencies and the courts have protected competitors instead of protecting competition. Courts have recognized such distinctions verbally,77 but in case after case the issue has been whether the defendant’s low price adversely affected some competitor(s). Wholesalers’ discounts for very large purchases have been declared illegal because smaller retailers “suffered actual financial losses,” which were equated with “injury to competition.”78 So were reduced “competitive opportunities of certain merchants who are injured” by having to pay ten cents a case more for table salt bought in amounts of less than a railroad carload.79 Theoretically, price differences are legally permissible when they can be proved to represent cost differences in serving different customers. However, retrospective cost statistics are subject to highly variable interpretation, so that in practice a seller usually cannot prove anything — and the burden of proof is on the defendant, once it is established that he charged different prices to different customers. The Supreme Court itself has acknowledged that “too often no one can ascertain whether a price is cost justified.”80

The Supreme Court has included fixed overhead costs in claiming that a wholesaler was selling below cost (“suffered substantial losses”)81 which changed “market shares”82 — from 1.8 percent of sales in a local market to 8.3 percent!83 Moreover, the Federal Trade Commission has the power to put a limit on quantity discounts, regardless of cost justifications.84 In addition, the courts have not allowed wholesalers to charge different prices to different categories of buyers — such as supermarket chains versus individual “mom and pop” grocery stores — even though the supermarkets are cheaper to serve, unless there is “such self-sameness” among all those in each category as to carry the burden of proof.85 Even though the Court acknowledged that “a large majority” of independent stores required services that supermarkets perform for themselves, “it was not shown that all independents received these services.”86 In short, sorting-and-labeling costs were ignored by insisting that every store be considered individually and only afterwards classified among those sufficiently similar — as this might be subsequently determined by a court.

The government does not “always win” in Robinson-Patman cases, but the cases where the defendant wins reveal very much the same pattern of economic (or noneconomic) reasoning. Despite the usual verbal obeisance to the idea of protecting competition as a systemic condition, the defendants who escape legal penalties do so because — in the Court’s words — they showed “proper restraint”87 in their price cutting, evidencing no “predatoriness”88 toward competitors, whose prices they chose to “exactly meet” instead of undercutting.89 This is in keeping with the legislative history of the Robinson-Patman Act, whose philosophy Congressman Patman expressed as one of “live and let live” and “everybody is entitled to a living”90 — presumably at the consumer’s expense.

One of the theories used to justify the Robinson-Patman Act is that big producers would otherwise temporarily cut prices, driving out small competitors, and later raise prices to monopolistic levels. Concrete examples have been notable by their scarcity (or nonexistence),91 even though the country existed for 160 years before the Robinson-Patman Act was passed. Even as economic theory, the argument has serious problems, because the only certainty would be the short-run losses sustained to drive out smaller competitors, while the longer-run profits needed to recoup those losses are highly problematical, because of the innumerable ways that new competition can arise — including buying up the assets of the bankrupted firms at bargain prices and then profitably underselling the would-be monopolists. Actually, neither the empirical nor the theoretical case is made in specific antitrust prosecutions under the Robinson-Patman Act. It is the defendant who must rebut the prima facie case, and the sinister theories merely hover in the background as unarticulated presumptions.
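
The asymmetry between certain losses and problematical gains can be given rough numbers. A minimal sketch in Python of a discounted expected-value calculation — every figure here (loss and profit flows, time spans, entry probability, discount rate) is an assumption invented for illustration, not drawn from any case cited:

    # Why predatory price-cutting is a dubious investment: the losses are
    # certain and immediate, the hoped-for recoupment uncertain and deferred.
    predation_loss_per_year = 10.0   # certain losses while underselling rivals
    predation_years = 3
    monopoly_profit_per_year = 8.0   # extra profit hoped for afterward
    recoupment_years = 5
    p_no_new_entry = 0.5             # chance that no new competitor appears
    discount_rate = 0.10

    def present_value(cash, years, rate, start=0):
        return sum(cash / (1 + rate) ** (start + t) for t in range(1, years + 1))

    losses = present_value(predation_loss_per_year, predation_years, discount_rate)
    gains = p_no_new_entry * present_value(monopoly_profit_per_year,
                                           recoupment_years, discount_rate,
                                           start=predation_years)
    print(f"expected net value of predation: {gains - losses:+.1f}")  # negative here

With these assumed numbers the scheme loses money; the would-be predator must hope that entry is far less likely, and recoupment far longer, than anything the historical record suggests.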

From the standpoint of the social consequences of social knowledge, what restrictions on price competition do is to prevent information about the cheapest ways of doing things from being effectively communicated through prices. It is cheaper to deliver 100 boxes of cereal to a supermarket than to deliver ten boxes of cereal to each of ten different “mom and pop” stores. This is effectively communicated when the wholesaler shaves the price of goods sold in large quantity. If he is either forbidden to do so, or is put through costly processes to justify it in finely meshed sorting-and-labeling categories, that knowledge does not guide economic decision making. Burdens of proof on defendants, in areas where irrefutable proof is virtually impossible, amount either to a de facto prohibition or to the economic equivalent of a large fine (legal costs) for engaging in the activity, without any evidence of its social harmfulness.

As in other areas of law, antitrust decisions have impact far beyond the particular parties involved, and in ways never intended by the law. For example, many grocery wholesalers have their own trucks which deliver to retailers and return empty, while other trucks bring grocery items from factories or processors to those same wholesalers’ warehouses and also return empty. From a social point of view, it would obviously make more sense to have the wholesalers’ trucks stop by the processors’ plants and pick up grocery stock on their way back to the warehouses. The present system is estimated to waste 100 million gallons of gasoline annually — enough to drive 140,000 automobiles for a year,92 not to mention the excess inventory of trucks, the wasted labor of the drivers, or the needless air pollution.
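
As a quick consistency check on those two figures (the mileage and fuel-economy comparisons are assumptions of mine, not the source’s):

    # Do 100 million wasted gallons really equal 140,000 cars' annual use?
    wasted_gallons = 100_000_000
    cars_equivalent = 140_000

    gallons_per_car_per_year = wasted_gallons / cars_equivalent
    print(f"{gallons_per_car_per_year:.0f} gallons per car per year")  # ~714
    # At an assumed 10,000 miles per year, that implies about 14 miles per
    # gallon -- plausible for automobiles of the period, so the two numbers
    # in the estimate are at least mutually consistent.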

As mere information, this is easy to understand, but it is not socially effective knowledge because the prices that might transmit it are forcibly constrained by the Federal Trade Commission’s interpretations of the Robinson-Patman Act. Ordinarily, food processors would charge lower prices to those buyers who pick up their own shipments than to buyers who require delivery, and this would become an incentive for wholesalers to have their empty trucks stop by on their way back to the warehouse to pick up some more stock. But the FTC has issued advisories that such price differences could be interpreted as violating the Robinson-Patman Act’s prohibition against “price discrimination.” Therefore the uniform prices that are charged reflect the threat of force rather than the relative costs, and the wholesalers respond to those prices as if it were no cheaper to pick up groceries in empty trucks than to have another truck deliver them — because that is financially true, according to the knowledge conveyed to them by the legally constrained prices. It is, of course, distorted knowledge from a social point of view, but both its transmission and its reception are rational within the legal incentives created by the Robinson-Patman Act. The social rationality of the act itself is another matter.

Large costs are also created by the uncertainties surrounding the interpretations of vague antitrust laws — especially the Robinson-Patman Act, which a leading expert on that act refers to as a “miasma of legal uncertainty,”93 and which even a Supreme Court Justice has called a “singularly opaque and elusive statute.”94

Antitrust policy, like utility regulation, exhibits a strong bias toward incumbents — toward protecting competitors rather than competition. This is readily understandable as institutional policy: Competitors bring legal complaints; competition as an abstract process cannot. Competitors supply administrative agencies such as the Federal Trade Commission with a political constituency; competition as an abstraction cannot. It is only when governmental agencies are seen as decision makers controlled by people with their own individual career and institutional goals that many apparently “irrational” antitrust policies make sense. For example, although antitrust laws are ostensibly aimed at monopolistic practices, the actual administration of such laws — and especially the Robinson-Patman Act — has involved prosecuting primarily small businesses, most of which are not even listed in Moody’s Industrials and very few of which are among Fortune’s list of giant corporations.95 The institutional reason is simple: A case against a small firm is more likely to be successful, because small firms do not have the money or the legal departments that large corporations have. A major antitrust case against a giant corporation can go on for a decade or more. A prosecution against a small business can be concluded — probably successfully — within a period that fits the time horizon of both the governmental agencies and their lawyers’ career goals.

The “rebuttable presumption” of guilt after a prima facie showing by the government facilitates successful prosecutions, especially on complex matters subject to such different retrospective interpretations that no one can conclusively prove anything. In one well-known case, an employer with only 19 employees, who had about seventy competitors in his own city alone, had to prove that his actions did not “substantially lessen competition” — and he lost the case.96 Such cases confirm the wisdom of putting the burden of proof on the government in most other kinds of prosecutions.

In general, the public image of antitrust laws and policy is of a way of keeping giant monopolies from raising prices, but most major antitrust cases are against businesses that lower prices — and most of the businesses involved are small businesses.

ECONOMIC “PLANNING”

Economic “planning” is one of many politically misleading expressions. Every economic activity under every conceivable form of society has been planned. What differs are the decision making units that do the planning — which range from children saving their allowances to buy toys to multinational corporations exploring for oil to the central planning commission of a communist state. What is politically defined as economic “planning” is the forcible superseding of other people’s plans by government officials. The merits and demerits of this mode of economic decision making can be discussed in general or in particular, but the issue is not between literal planning on the one hand versus letting things happen randomly, on the other. This obvious point needs to be emphasized and insisted upon, not only because of the general tendentiousness of the word “planning,” but also because of specific laments about how “accident,” “chance,” or “uncoordinated” institutions97 lead to “helplessness” as the economy “drifts.”98

We have already examined particular examples of the government’s superseding of other people’s plans, as in various forms of price control, control of particular markets, or direct or indirect transfers of resources. What remains to be examined is comprehensive economic “planning” — the subordination of nongovernmental economic decisions in general to a design imposed on the whole economy. This can take place while private ownership of physical or financial assets is retained (capitalism), as happened under fascist regimes; government ownership of the means of production (socialism) may accompany comprehensive “planning”; or such government ownership may coexist with market pricing mechanisms instead of “planning,” as in so-called “market socialism” (Yugoslavia being an example). There are also welfare states (such as Sweden) which may call themselves “socialist” but which operate largely through tax transfers of income earned in a private economy, rather than through comprehensive government control of production decisions. The focus of the analysis here will be comprehensive economic “planning” in general, rather than its particular political or ideological accompaniments. That is, the analysis will be in terms of institutional characteristics rather than hoped-for results.

Comprehensive economic “planning” faces many of the same problems already noted in particular kinds of governmental direction of economic activities — essentially, problems of knowledge, articulation, and motivation.

ARTICULATION

In an economy directed by national governmental authorities (“central planners”), the directives that are issued must articulate the characteristics of the products to be produced. Earlier discussions of rent control or price control in general have noted (1) the difficulties of defining even such apparently simple things as an apartment or a can of peas, and (2) the tendency of products — or labor — to change in quality in perverse ways in response to price or wage controls. Both problems are pervasive under comprehensive central direction of an economy.

Examples abound in the Soviet press, where economists and others decry particularly glaring instances and demand “better” specification — rather than raising the more politically dangerous question of whether any articulated specification by central planners can substitute for monitoring by actual users, as in price-coordinated economies. For example, when Soviet nail factories had their output measured by weight, they tended to make big, heavy nails, even if many of these big nails sat unsold on the shelves while the country was “crying for small nails.”99 When output is measured in value terms, the individual firm tends to produce fewer and more expensive units — whether clothing or steel,100 and regardless of the users’ preferences. Where the articulated measurements are in units of gross output, the firm tends to buy unnecessarily large amounts of parts from other firms,101 receiving credit in its final product statistics for things produced by others; where the articulated measurements are in units of net output, the firm tends to make as much as possible itself, even where the cost of parts produced by specialized subcontractors is lower.102 All of these are perfectly rational decisions from the standpoint of the individual Soviet firm, maximizing its own well-being, however perverse the results may be from the standpoint of the Soviet economy. Even terror under Stalin did not make the individual producer adopt the economy-wide viewpoint. On the contrary, where imprisonment or even death was among the penalties for failure to fulfill the task assigned by the central planners in Moscow, the individual firm manager was even more prone to fulfill the letter of the law, without regard to larger economic considerations. In one tragicomic episode, badly needed mining equipment was produced but not delivered to the mines because the equipment was supposed to be painted with red, oil-resistant paint — and the equipment manufacturer had on hand only oil-resistant green paint and non-oil-resistant red paint. The unpainted equipment continued to pile up in the factory despite the desperate need in the mines, because — in the producer’s words — “I don’t want to get eight years.”103 To the actual users, the color of the paint made no difference, but in the articulated specifications that incidental characteristic carried as much weight as the most important technical requirement.

These are not peculiarities of Russians or of the Soviet economic or political system. They reflect inherent limitations of articulation. The American political demand for more high school graduates — in the academic paradigm, a solution to the “dropout” problem — led to more of that product being produced, by whatever lowering of standards was necessary. It is easy to articulate what is meant by a high school graduate — someone who receives a certain embossed piece of paper from an authorized agency — but it is much harder to articulate in operational terms what education that is supposed to represent.

In price-coordinated decision making, the user can monitor results with little or no articulation by either himself or the producers. The kinds of nails that are incrementally preferable will become more saleable, or saleable at a higher price, and the producer will automatically emphasize their production, even if he has not the faintest idea why they are more in demand. If a certain color of paint makes mining equipment more saleable, the producer will tend to use that color of paint, but he will hardly forgo, or needlessly postpone, sales until he can get the particular color of paint, if the demand for the equipment is such that it sells almost as fast with a different color. Where price-coordinated education (private school) is a feasible individual option, parents who have never sat down and articulated a list of educational criteria can nevertheless judge educational results in a given school, compare them with results available from other private schools or public schools, and determine whether the differences in results are worth the differences in cost.

Where prices are set by government fiat, they convey no information as to ever-changing economic trade-offs which reflect changing technology, tastes, and diminishing returns in both production and consumption. Price changes are virtually instantaneous, while statistics available to planners necessarily lag behind. As a student of British economic planning has noted: “The ceaseless changes in conditions affecting the daily demand and supply of countless goods and services must render the best statistics out-of-date before they can be collected.”104 Using a relatively few “stale statistics” to “guide a complex and ever-changing economy” means “in practice falling back on ad hoc interventions interspersed with endless exhortation ‘in the public interest’…”105 Nazi Germany had similar economic problems in basing prospective decisions on retrospective statistics.106 The problem is inherent in the circumstances, and not peculiar to a given ideology, though some ideologies are more insistent on maintaining such circumstances than are others.

Another way of looking at the vicissitudes of articulation is that one cannot articulate what does not exist — namely, an objective set of characteristics which determines an objective scale of economic priorities. All values are ultimately subjective and incrementally variable. No single social group, or scale of priorities, can define the varying importance of multifaceted characteristics, either to disparate consumers or to equally disparate producers. The millions of users of millions of products can judge incremental trade-offs when confronted with them, but no third party can capture these changing trade-offs in a fixed definition articulated to producers in advance. When user monitoring, conveyed through prices and sales, is replaced by third-party articulation, in words or numbers, vast amounts of knowledge are lost in the process. In the absence of user monitoring of producer output through a market, there must be third-party specification of what the output shall consist of, and this runs into the inherent limitations of articulation.

However many limitations and distortions articulation may have as a means of communicating economic knowledge, its political appeal is as widespread as the belief that order requires design, that the alternative to chaos is explicit intention, and that there are not merely incremental trade-offs but objectively specifiable, quantifiable and categorical “needs.” From this perspective, one must “understand the relationship”107 — which is to say, articulate the relationship — among economic sectors in order for them to coordinate. Price-coordination simply vanishes as an alternative within the framework of such beliefs. There must be “priorities” and a “time frame” articulated.108 Indeed, “we need a full presentation of the items we can choose among,” which “a completely automatic free market” would not articulate — which is why we “do not accept that approach.”109 Instead we “must be able to see” articulated alternatives in order to “make an intelligent choice.”110 Under the assumption of objectively definable, quantifiable “needs,” efficiency is merely an engineering problem rather than a reconciling of conflicting human desires, so that social policy can be analogized to such fixed-objective activities as putting a man on the moon,111 and even “planning” is simply a matter of “technical coordination” by “experts”112 using “systematic analysis.”113 In such a framework, even “the public interest”114 can be confidently discussed as an empirically meaningful notion, along with “objective analysis… of what is really desirable.”115 These quoted statements are not the glib remarks of sophomores, but the pronouncements of one of the most famous American senators and one of the most famous American economists — Hubert Humphrey and Wassily Leontief, respectively. They are by no means alone.

KNOWLEDGE TRANSFER

The limitations and distortions of articulation revolve around the simple fact that third-party central planners cannot know what users want, whether those users be consumers or other producers acquiring raw material, component parts, or production-line machinery. Complex trade-offs among a given product’s characteristics, and between one complex product and another, cannot be captured in a fixed definition, however detailed. Indeed, the amount of detail itself involves trade-offs, for beyond some point the detail becomes counterproductive, as in the case of Soviet mining equipment that was supposed to have a particular kind of paint.

It is not merely the enormous amount of data that exceeds the capacity of the human mind. Conceivably, this data might be stored in a computer with sufficient capacity. The real problem is that the knowledge needed is a knowledge of subjective patterns of trade-off that are nowhere articulated, not even to the individual himself. I might think that, if faced with the stark prospect of bankruptcy, I would rather sell my automobile than my furniture, or sacrifice the refrigerator rather than the stove, but unless and until such a moment comes, I will never know even my own trade-offs, much less anybody else’s. There is no way for such information to be fed into a computer, when no one has such information in the first place.

Market transactions do not require any such knowledge in advance. When actually faced with either an escalating price for a good which one normally purchases, or a real bargain on something one normally does not purchase, then and only then does a decision between the two goods have to be made — and it is not uncommon for persons in such situations to make decisions that they would not have expected of themselves, even if the results are sufficiently good to cause a permanent change of consumption patterns. Most of us need not think about what our choice would be as between owning a yacht and an airplane, much less an incremental choice between a longer yacht versus a higher-powered airplane. In a market economy, one individual or decision making unit need be concerned with only a minute fraction of the trade-offs in the economy. Under central planning, somebody has to try to reconcile them all simultaneously. In a market economy, even a manufacturer of yachts or a manufacturer of airplanes need not concern himself with the trade-offs between the two products, much less trade-offs between these and numerous other products which compete for the same metal, glass, fuel, storage space, worker skills, etc. Each producer need concern himself only with the trade-off between his own product and money — a fungible medium in which other people measure the trade-offs for their respective products. As a figure of speech, it may be said that the economy trades off one use for another through this medium. This is not only true, but an important truth, for it helps explain why knowledge is economized through price allocation. Another way of saying the same thing is that central planning would require far more knowledge to be actually known by the central planners to achieve the same net result.

Although it may be empirically true that different ideologies generally regard central planning in different ways, it is not, in principle, ultimately an ideological question. Marx and Engels were unsparing in their criticisms of their fellow socialists and fellow communists who wanted to replace price coordination with central planning. Proudhon’s theory that the government should fix prices according to the labor time required to produce each commodity was blasted by Marx in the first chapter of The Poverty of Philosophy:

Let M. Proudhon take it upon himself to formulate and lay down such a law, and we shall relieve him of the necessity of giving proofs. If, on the other hand, he insists on justifying his theory, not as a legislator, but as an economist, he will have to prove that the time needed to create a commodity indicates exactly the degree of its utility and marks its proportional relation to the demand, and in consequence, to the total amount of wealth.116

It was clear from the rest of the chapter that he expected Proudhon could do no such thing. Thirty years later, Engels denounced another socialist theoretician who wanted to abolish markets:

Only through the undervaluation or overvaluation of products is it forcibly brought home to the individual commodity producers what things and what quantity of them society requires or does not require. But it is just this sole regulator that the utopia in which Rodbertus also shares would abolish. And if we then ask what guarantee we have that necessary quantity and not more of each product will be produced, that we shall not go hungry in regard to corn and meat while we are choked in beet sugar and drowned in potato spirit, that we shall not lack trousers to cover our nakedness while trouser buttons flood us in millions — Rodbertus triumphantly shows us his famous calculation, according to which the correct certificate has been handed out for every superfluous pound of sugar, for every unsold barrel of spirit, for every unusable trouser button…117

Some modern socialist theoreticians have followed up on Marx and Engels’ ideas by constructing models of price-coordinated socialist economies.118 This goes to the heart of the purpose of socialism or “planning” in general. If the purpose is to give better economic expression to the desires of the people at large — overcoming the externalities of capitalism, for example — then such market socialism schemes have more appeal than if the purpose is to supersede the preferences of the people by the preferences of those who believe that third parties (especially themselves) can define objective “needs” (or its converse, “waste”). The prevalence of central planning over market socialism — both in theory and in practice — suggests something about the purpose or vision being pursued. Even where some elements of market socialism have been introduced, it has usually been after first attempting central planning and finding the results intolerable. Local Soviet agricultural and dairy markets, for example, have been allowed a measure of autonomy and coordination by uncontrolled prices after food shortages and even famines followed earlier attempts at the complete “planning” of agriculture. Private agricultural plots account for about 3 percent of the total arable land of the USSR, and about one third of the agricultural output.119

The difficulties of understanding other people’s complex trade-offs and successfully articulating them to producers are compounded by the difficulties of knowing how to produce what is wanted. It was noted in Chapter 1 that no one really understands completely how to make even a simple lead pencil. The task facing central planners is far more complex than that, involving not only far more complex products, but far more complex trade-offs among the millions of products using the same or substitutable inputs. For example, the Soviet machine tool industry alone produces about 125,000 products, involving an estimated “15,000,000,000 possible relations.”120 Even if the central planners were to assemble all the experts on the production of each of the products in the economy — which would amount to a stadium full of people — the trade-offs between products competing for the same inputs would still remain an unsolved problem. In short, central planners cannot know what the trade-off patterns are in production any more than in consumption. Others may know — each for his own minute segment of the economy — but the transfer of that knowledge intact to a central decision making unit is a costly and chancy matter.
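
The scale of the planners’ problem is easy to reproduce arithmetically. A minimal sketch in Python — the 125,000-product figure is from the text; the counting convention behind the published “15,000,000,000” estimate is an assumption (it matches the count of ordered pairs):

    # With n products, possible pairwise relations grow as n squared, while
    # the number of prices a market needs grows only as n.
    n = 125_000   # products of the Soviet machine tool industry (from text)

    unordered_pairs = n * (n - 1) // 2   # ~7.8 billion
    ordered_pairs = n * n                # ~15.6 billion, matching the estimate
    print(f"{unordered_pairs:,} unordered / {ordered_pairs:,} ordered relations")
    print(f"versus {n:,} prices under market coordination")
    # Either counting convention lands in the billions -- knowledge a central
    # planner must somehow reconcile, but which no single market participant
    # ever needs to hold.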

Much depends on the incentives and constraints facing the individual on the spot who is supposed to transfer his knowledge to the central planners. A Soviet plant manager knows what his plant can and cannot do better than anyone in Moscow — just as settlers in colonial America knew what was and was not economically feasible under local conditions better than anyone in London, and just as slaves knew what they could and could not do better than any overseer or slave owner. The basic problem is the separation of knowledge and power. Incentives can be contrived by those with power to elicit the knowledge, but such incentives are themselves constrained by the need to preserve the basic relationship — central planning, colonialism, and slavery, in these examples.

Because the central planners’ estimates of each plant’s capacity become the basis for subsequently judging each plant manager’s success, Soviet managers, in transmitting information to the central planners, consistently “understate what they can do and overstate what they need.”121 The central planners know that they are being lied to, but cannot know by how much, for that would require them to have the very knowledge that is missing. One way of trying to get performance based on true potential rather than on articulated transmissions is a system of graduated incentive payments for “overfulfillment” of the assigned tasks. Soviet managers, in turn, are of course well aware that much higher production will lead to upward revisions of their assigned tasks, so that a prudent manager is said to “overfulfill” his assignment by 5 percent, but not by 25 percent.122 In short, a “mutual attempt at outguessing the other”123 goes on between Soviet managers and central planners. Knowledge is not transmitted intact.
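
The 5-percent-but-not-25-percent rule is intelligible as a response to target ratcheting. A stylized simulation in Python — every parameter here (the hidden capacity, bonus and penalty rates, and the rule that planners extrapolate whatever growth rate a manager reveals) is an assumption for illustration, not a description of actual Soviet practice:

    # Stylized "ratchet effect": revealing high capacity raises future
    # targets, and shortfalls are punished more heavily than overfulfillment
    # is rewarded. All parameters assumed.
    def career_payoff(over_rate, capacity=160.0, years=6, bonus=1.0, penalty=3.0):
        target, total = 100.0, 0.0
        for _ in range(years):
            output = min(target * (1 + over_rate), capacity)  # hidden limit binds
            if output >= target:
                total += bonus * (output - target)     # reward for excess output
            else:
                total -= penalty * (target - output)   # punishment for shortfall
            target *= 1 + over_rate   # planners extrapolate the revealed rate
        return total

    for rate in (0.05, 0.25):
        print(f"overfulfill by {rate:.0%}: career payoff {career_payoff(rate):+.1f}")
    # The 25% manager soon faces targets beyond his plant's real capacity;
    # the 5% manager keeps quietly collecting modest bonuses for years.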

The distortion of knowledge is far more serious when the whole economy is coordinated on the basis of such articulation, supplemented by central planners’ guesses. In a market economy, decisions are made through an entirely different process. The individual enterprise that wants raw material, capital equipment, etc., goes into the market to bid for them on the basis of its own best estimate of what it can achieve with them. Competition with other potential users of the same inputs forces it to bid as high as it can afford to, in the light of its own on-the-spot knowledge of the enterprise and its customers. It is not a question of articulating anything to anybody, but of conveying knowledge implicitly through the prices bid. Similarly, there is no point overstating production costs to the customer, when competitors will undercut the price and take customers away. In short, the unarticulated knowledge conveyed implicitly through prices has more reason to be accurate than the explicitly articulated knowledge conveyed to central planners.

The special disadvantages of central planning in agriculture — symbolized by massive importations of American grain by the Soviet Union — are due to special problems of transmitting knowledge. There is great variability in agricultural inputs and in agricultural output, so that the volume of knowledge that would be needed for central planning on the same scale as in industry would be even more staggering. For example, land varies considerably — even within a few hundred yards — in rockiness, chemical composition, physical contours, and proximity to water (horizontally and vertically), all of which affect what can be grown at what cost. The output varies, often literally from unit to unit, and its freshness, nutritional value, and perishability also vary, from day to day and sometimes from hour to hour. All this is in marked contrast to steel production, for example, where a given combination of iron ore and coal in a given furnace produces a given product, whether in Moscow or Vladivostok, and the product can remain in its original condition for years.

The Soviets themselves have long recognized “the very varied conditions which always exist in agriculture.”124 But there is a big gap between such recognition and being able to construct incentives to deal with it, while at the same time not abandoning the political and economic structure of the country. Innumerable “reforms” have swept over Soviet agriculture in succession, trying to cope with that inherent constraint. Many sound agricultural policies originating with the central planners — crop rotation, planting systems, etc. — have been applied categorically “regardless of local conditions” and over the opposition of local agronomists, in places where the environment necessary to make them successful was not present.125 Sometimes this was due to following orders from above, but even when the Soviet Premier warned against “excesses,” many local authorities found it safer to follow general official policy rather than risk a personal deviation which might or might not work.126

While there is much modern literature on the vicissitudes of Soviet planners, the point here is not that the Soviets are inefficient or that “planning” has difficulties. All human life has difficulties. The point is that a particular kind of institutional incentive structure has a specific set of difficulties, traceable to the articulation and transmission of knowledge. The point is reinforced by the appearance of the same kind of difficulties with the same incentive structures under entirely different historical and ideological conditions.

In colonial America, Georgia was the most elaborately “planned” colony, directed and heavily subsidized from London for twenty years by a nonprofit group of philanthropists, to whom the British government had entrusted the governance of that colony. They issued rations, appropriated funds for teachers and midwives, as well as for cooking utensils and items of clothing — all for people living 3,000 miles away in a land the London trustees had never seen.127 No other colony had the benefit of so much “planning” or central direction. Yet Georgia ended up “the least prosperous and the least populous of the colonies.”128

Its problems were the classic problems of planning. Initial miscalculations based upon the inadequate knowledge of the distant planners were not readily correctable by feedback based on the knowledge possessed or acquired by the experience of those actually on the scene. For example, property rights were not freely transferable, so that the London trustees’ initial estimates of the amount of Georgia land necessary or optimal for farming became frozen into colonial practice. Their articulated decisions were in terms of “land” — as if it were a homogeneous resource — but, as already noted, land varies in chemistry, topography, and all the other variables which affect its output potential. Equal rations of land surface were not equal rations of these economically relevant variables, nor was there any way to trade off these characteristics except through actual trades between those on the scene — people familiar both with the nature of the land and with their own capabilities as farmers, the interaction of which would determine “fertility.” In short, the distortions of planning involved not merely inequities, but inefficiencies. Had the initial allotments been freely transferable, the inefficiency at least could have been corrected.129

Under the rule that farms must be entailed to a male heir, those settlers with an allotment and no male heir to leave it to had an asset with a shorter time horizon than others — and therefore had less incentive to make long-run improvements, since it could not be sold in the market.130 The discontents and neglects to which this incentive system led eventually forced the London trustees to relax some of their control over the transfer of land, each concession being made grudgingly “as if it were a sacrifice of principle.”131

The London planners’ lack of knowledge was also reflected in their choice of economic activities to promote. Because Georgia had mulberry trees, it was decided that it would be a good place for silkworms and therefore for a silk industry. As often happens, “expert” testimony (from an expert on the Italian silk industry) was enlisted to promote the project, leading to a report “as rich in enthusiasm as it was poor in firsthand knowledge…”132 A crucial piece of firsthand knowledge that was lacking was that the particular variety of mulberry tree in Georgia was different from the kind of mulberry tree used by silkworms in the Orient. Nor was the climate the same, and half the silkworms in Savannah died as a result.133 Nor was the labor available in Georgia the same as that in the Orient in skill, diligence, or low pay. Still, there was a favorable “demonstration project” — a gown of silk produced in Georgia for the Queen — though Georgian silk never became commercially successful.134

Over a period of twenty years, the British government poured more than £130,000 into Georgia, supplemented by church and private donations, including over £90,000 from one of the trustees. Such massive subsidies made it unnecessary for the settlers in Georgia to pay taxes, and therefore made it unnecessary to have any representative local government to raise taxes — thereby eliminating the need for institutions which could have provided political feedback modifying the distant trustees’ plans. The sum spent by the British government was more than it had ever spent on any other nonmilitary project. Meanwhile, the beneficiaries of all this largesse were leaving Georgia for other colonies, less well subsidized but also less controlled. Eventually, even massive subsidies were unable to keep the planning project going, and in 1751 the trustees returned the colony to the British government. Like later “planners,” they blamed failure not on their own decisions or on the inherent limitations of planning, but on lack of enough additional financial support!135

NON-ECONOMIC RATIONALES

There are moral and political, as well as economic, reasons for preferring governmental direction of the economy (“planning”) to decentralized price coordination (“capitalism”). Perhaps the most common reason for preferring “planning” in general and socialist “planning” in particular is a sense of the moral inadequacy of capitalism — either (1) outright “exploitation” of one group by another, domestically or internationally, or (2) a selfish, every-man-for-himself amorality, or (3) a “meritocracy” which ignores our common cultural inheritance and our common humanity. More narrowly economic reasons for preferring governmental direction to decentralized price coordination include the possibility of internalizing external costs, taking a longer-run view of the consequences of economic decision making, and eliminating monopolistic practices which reduce the efficiency of a price-coordinated economy. Politically, one of the major objections to the price-coordination systems of Western society as they have emerged historically is the inequality in wealth and power among people and organizations, and the distortions which this inequality introduces into both political and economic processes.

Capitalist middlemen are often depicted as “mere interceptors and parasites”136 and profit as simply “overcharge.”137 While episodic interception of goods on their way from producer to consumer might seem plausible, the repeated and persistent choice of producers and consumers to route their sales and purchases through a middleman is difficult to explain unless they each find this less costly than dealing directly with one another. Consumers would not have to go to the factories, with all the inconveniences (and sometimes dangers) that might involve. Producers could own their own retail outlets, as some do. However, the rarity of this — even when producers have ample capital available to finance it — suggests that different skills are necessary for different functions, so that firms which are successful in one stage of the economic process find it cheaper at some point to turn their output over to other firms which have lower costs of carrying out the next phase. If the next firm were not cheaper or better at conveying the products to the consumer, the producer would have no incentive to incur the bother and the cost of negotiating with middlemen, shipping his goods to them, and going through the financial problems of collecting payments from them. Perhaps even weightier evidence of the economic advantages of middleman functions is that even the “planned” Soviet economy — ideologically opposed to middlemen — has found itself driven to setting up similar organizations, not only for consumer goods but also for equipment and supplies used by producers.138

In any kind of economic system, inventories are a substitute for knowledge. The two are incrementally traded off for one another according to their respective costs. If a housewife knew exactly what her family was going to eat and in what amounts, neither her refrigerator nor her pantry would have to contain as large or varied an inventory as it does, nor would there be as much “waste” of food as there is. Like so much other retrospective measurement of “waste,” this is based on an implicit standard of prospective omniscience or zero knowledge cost. To trace in retrospect the path of a particular unit of a particular product is often to discover “overcharge” or an “exorbitant” markup for that item considered in isolation. But the whole reason for anyone — housewife or multinational corporation — to maintain an inventory is the cost of prospective knowledge, so that a whole aggregation of items is stocked precisely because no one can know in advance which one will be wanted at a given time, and the costs of stocking items which later turn out to be unwanted are covered by (are part of) the cost charged for the particular items which turn out to be in demand. This is most obvious in areas of greatest uncertainty (highest knowledge cost), notably perishable agricultural products. If one-third of all peaches have to be discarded somewhere on the way from producer to consumer, then the cost of eating 200 peaches is the cost of producing 300 peaches. To trace in retrospect the cost of the particular 200 peaches actually eaten would be to discover a 50 percent “overcharge” even if no one made a cent of profit. Similarly, to ask how much the original farmer was paid per peach compared to how much the consumer paid per peach would be to discover a substantial gap, even if all transportation, storage, and other middleman costs were zero, in addition to a zero profit.
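
To make the arithmetic of the peach example explicit — a minimal sketch, with c standing for the cost of producing one peach (an illustrative symbol, not a figure from any actual market) — the amount that must be recovered on each peach actually sold is

$$\frac{300c}{200} = 1.5c$$

so a retrospective comparison of 1.5c with c “discovers” a 50 percent markup even when every middleman earns zero profit and incurs zero handling or transportation costs.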

Given that middleman functions serve some economic purpose, and have inherent costs, what is to prevent middlemen from charging more for their services than they cost or are worth? Only what inhibits everyone else performing any kind of function anywhere in the economy or society from doing the same thing. Costs, as noted earlier, are ultimately foregone alternatives. It is these alternatives open to competitors which determine how much any given seller can successfully demand. If some existing seller(s) charged more than enough to cover the costs involved — that is, more than the risks and efforts are worth to alternative producers — those alternative producers would displace him by underpricing him. Sellers are, after all, more concerned with increasing their total profits than with maximizing profits per unit of sale, and whole retail empires have been built on shaving a few cents off the price of various kinds of merchandise. Indeed, the constant efforts to prevent this with “fair trade” laws and the Robinson-Patman Act are some measure of how pervasive the incentives are for price cutting. The desire of businessmen for profits is what drives prices down, unless they are forcibly prevented from engaging in price competition, usually by governmental activity. Even Karl Marx recognized that when one capitalist introduces a cost saving, the others have no choice but to follow.139

All prices — whether called wages, profits, interest, fees, or whatever — are constrained only by the competition of other suppliers. Profits are no different in principle, except for being residual and variable rather than contractually fixed. Sometimes profits are regarded as special in representing the “exploitation” of other inputs — notably labor — rather than (or in addition to) the consumer. One reason for believing this is simply an emphasis on the physical production process as the source of economic value, and the exclusion of those not taking part in that physical process from any contribution to the economic end result, so that anything that they receive for their nonexistent “contribution” is exploitation.

The most elaborate vision of this sort is the Marxian theory of “surplus value” — or rather, Marx’s definition of surplus value as the difference between the wages of the working class and total output.140 Like so many emotionally powerful visions, the Marxian vision is not a testable hypothesis but an axiomatic construction. Output per unit of labor is simply called “labor’s output,” a practice common far beyond the circle of Marxists. Obviously output can be divided by any input, just as any numerator can be divided by any denominator. Instead of output per man-hour we can arbitrarily divide automobiles by ounces of hand lotion. The mere fact that one number is upstairs in a fraction and the other number downstairs does not establish any causal relationship between the two things. The implied connection between automobiles and hand lotion is one we would see through immediately. But once we begin with two things which are plausibly connected, we can add the appearance of proof or precision to that plausibility by making fractions out of them. Businessmen often ask for tax reductions on grounds that they need X number of dollars of investment per job, so that increasing employment will result from the tax cut. That investment and employment are connected seems reasonable and plausible in general, but proof or precision by fractions is spurious. Quite aside from the possibility of distributing a business tax cut as dividends or higher executive salaries, even if it all goes into investment, this investment can just as easily go into displacing existing employees with machinery as into hiring new employees. It all depends on the relative prices, the state of the market for the output, and technological developments. None of these prospective variables is captured by retrospective data on total investment divided by total employees.
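
The spuriousness of proof-by-fractions can be illustrated with figures chosen purely for arithmetic convenience (they are assumptions of this example, not data from any actual industry). If businessmen claim to need $50,000 of investment per job, a tax cut freeing $1,000,000 for investment appears to “create”

$$\frac{\$1{,}000{,}000}{\$50{,}000\ \text{per job}} = 20\ \text{jobs}$$

yet the identical $1,000,000, spent instead on labor-saving machinery, could displace twenty (or more) existing employees. The fraction is equally computable in either case; it is the prospective variables, not the ratio, that determine which outcome occurs.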

The Marxian argument is the same game played with a different deck of cards. Output per unit of labor becomes labor’s output by definition — indeed by a whole system of subsidiary definitions based on the same arbitrary postulate.141 The same doctrine expressed as a testable hypothesis would collapse like a house of cards. If labor were the sole — or even the main — source of value, then output per capita, and therefore real income, would be higher in those economies where there is more labor input and less nonlabor input. The opposite is blatantly the case. In the most desperately poor countries, people work longer and harder for subsistence than in more elaborate and prosperous economies where most people never touch physical goods during the production process. Indeed, it is only in the latter countries that subsistence is sufficiently easy to achieve that it is taken for granted, and that there is time and money to spend on books on the “exploitation” or “alienation” of labor.

Attempts to salvage the exploitation theory sometimes use an international framework to claim that prosperous “capitalist” nations are guilty of “robbery of the rest of the world” through “imperialism.”142 Sometimes this is based on nothing more than the verbal arbitrariness of referring to a prosperous country’s consumption of its own output as its disproportionate consumption of “the world’s” output or “the world’s” resources. This is a particularly misleading procedure as regards the United States, whose total international economic transactions are an insignificant fraction of its domestic economic activity. Moreover, American international activity is disproportionately concentrated in other industrial nations rather than in the Third World which is supposedly the source of American prosperity. The United States has more invested in Canada than in all of Latin America, or in Asia and Africa put together. American investments in Western Europe are even higher than in Canada.143 Even the data in Lenin’s classic, Imperialism, show industrialized nations investing their money in each other’s economies more than in any underdeveloped areas,144 even though the words in the text claim that capitalism has escaped its economic self-destruction only by exporting capital to noncapitalist nations. When all else fails, believers in this vision point to specific instances in which capitalist nations have behaved in ways regarded as morally wrong. Whatever the merits of their arguments in particular cases, the abuse of power is too universal an historical phenomenon to be made a defining characteristic of capitalism. It seems especially inappropriate as part of an argument for alternative systems with more concentration of power.

SUMMARY AND IMPLICATIONS

The twentieth century has seen a definite trend toward third-party economic decision making, under a variety of political or ideological banners, and in many different economic forms. Sometimes it has imposed third-party decision making on a given kind of economic transaction, as in rent control or minimum wage laws. Sometimes it has been a more arbitrary attempt to control prices in general, or to regulate particular industries such as transportation or communication. In some countries, it has gone as far as attempting to control the whole economy.

The results of modern “planning” have followed a pattern seen centuries ago in different circumstances and with entirely different ideologies and rhetoric. The results of comprehensive “planning” in colonial Georgia parallel the results of Soviet planning, just as various modern schemes of price control have produced results virtually identical to those produced by price control in Hammurabi’s Code or in the Roman Empire under the Emperor Diocletian.145 There is a special irony in this, for much of modern “planning” emphasizes its revolutionary newness — implying, presumably, some exemption from being tested by old-fashioned analytic methods or judged by old-fashioned moral standards. In fact nothing is older than the idea that human wisdom is concentrated in a select few (present party always included), who must impose it on the ignorant many. Repeated attempts to apply this doctrine rigorously, in a wide range of historical settings, suggest that it is less likely to survive as an hypothesis than as an axiom or an ideology.

Chapter 9 Trends in Law

Legal institutions in the United States are anchored in a Constitution that is nearly two hundred years old, and which has changed relatively little in its basic philosophy in that time. Most of the later amendments follow the spirit of the original document and its Bill of Rights. Yet despite this, American legal institutions have undergone a revolution within the past generation — a revolution which coincided not only in time, but also in spirit and direction, with changes in economic and political institutions. The centralization of decision making is a pattern that runs through landmark court cases, ranging from antitrust to civil liberties to racial policy to the reapportionment of state legislatures. The net result of these legal developments has been an enlargement of the powers of courts and administrative agencies — institutions least subject to feedback from the public, and therefore most susceptible to continuing on a given course, once captured by an idea or a clique. This represents an historic shift in both the location of decision making and in the mode of decision making. Decisions once weighed in an incremental and fungible medium like emotions or money, with low-cost knowledge readily conducted through informal mechanisms, are increasingly weighed in the medium of articulation, in more categorical terms, and with higher costs of transmitting knowledge through rules of evidence documentable to third parties. The predilections or susceptibilities of those third parties also become more important than was ever contemplated for a constitutional or a democratic society.

Along with historic changes within the law has come an enormous expansion of the sheer numbers of lawyers, judges, and cases. The number of lawyers and judges per capita increased by 50 percent from 1970 to 1977.1 California alone has a larger judicial system than any nation besides the United States.2

The quantitative and qualitative aspects of trends in the law are not independent of one another. As courts have expanded the kinds of questions they would adjudicate — including the internal rules of voluntary organizations, and the restructuring of political entities — more and more people have sought to win in court what they could not achieve in other institutions, or have appealed trial results on more and more tenuous grounds. A 1977 survey reported: “Appellate judges estimate that 80 percent of all appeals are frivolous.”3 The cost of all this is not simply the salaries of judges and lawyers. As in other areas, the real costs are the foregone alternatives — notably speedy trials to clear the innocent and convict the guilty, so that the public is not prey to criminals walking the streets while legal processes drag on. In civil cases, the costs of delay are obvious in cases with large economic resources idled by legal uncertainties, but they are no less real in cases where child custody or other emotionally devastating matters drag on. In short, there is a social trade-off between the costs and the benefits of increased litigation or increasingly elaborate litigation. The institutional question is, how are these social costs and benefits conveyed to the individual decision makers: the parties, the lawyers, and the judges?

To some parties the costs of litigation are not conveyed at all, but are paid by the taxpayers, as in most criminal cases, where trial lawyers, appeals, and prison law libraries in which to prepare appeals are at taxpayer expense. The more deadly costs of having criminals at large while awaiting trial or appeal are also paid by the public. All these costs have been increased within recent decades by court decisions. Lawyers, of course, do not pay costs but instead reap benefits as the law becomes more intricate and time-consuming — and lawyers have in fact opposed attempts at simplification, such as “no-fault” automobile insurance. Lawyers’ benefits have increased in recent years as payments from clients have been supplemented by payments from others — not only taxpayers but also donors to “causes,” through institutional arrangements popularly defined by their hoped-for results as “public interest” law firms. Insofar as the tax money is payable only for particular kinds of cases and the donors have a special focus — as with the “environmentalists” or contributors to the NAACP — lawyers and legal institutions paid by third parties have every incentive to pursue such cases well past the point of diminishing returns or even negative returns to society at large.

Because the American judicial system of trial courts and appellate courts reaches an apex in the Supreme Court of the United States, the trends there are crucial for the behavior of the whole legal system. Within the past three decades — and especially in the controversial “Warren Court” era — there has been an expansion of the issues which the Supreme Court will adjudicate, and of the extent to which the court will go beyond defining the boundaries of other institutions’ discretion to reviewing the specific decisions made. Some degree of this is inherent in any appellate court’s functioning — a guilty verdict by a jury in a courtroom surrounded by a raging lynch mob cannot be allowed to stand merely because formal procedure was followed — but neither are appellate courts supposed to re-try issues rather than determine the constitutionality of trials and legislation. Otherwise, in the words of an appellate judge, “Law becomes the subjective preference of the reviewing court.”4

The U.S. Supreme Court was increasingly surrounded by controversy after Earl Warren became its Chief Justice in 1953. In the early stages of these controversies, those who accused the court of going beyond the legitimate bounds of constitutional interpretation into the dangerous area of judicial policy making tended to be those opposed to the particular social or political substance of the decisions made, while those who defended the court tended to be those in favor of the social and political impacts achieved or expected. It is unnecessary at this point to enter the specifics of these early controversies. As the Supreme Court continued along a path that involved increased judicial activism at all levels and in a variety of issues — lower courts running school systems, ordering prisons to be built, or even ordering a state legislature to pass a tax bill — the nature of the defense of the Court also began to change. Many of those in favor of the social or political results of Supreme Court decisions began to question whether there was any legal or constitutional basis for those decisions. Some argued that a constitutional case could be made for the decisions, though the court had not effectively made it.5 Others lamented that we had simply reached judicial policy making.6 Still others welcomed the judicial activism and lamented only its concealment — the “masking” of “decisions on the merits” and the court’s use of legal formalisms to “hide the reasoning behind its decision.”7 According to this line of thought, the court should not be restricted to the narrow role of interpreting the Constitution as a set of rules but should aspire to the role of applying the Constitution as a set of “values.”8 In short, both friend and foe alike came ultimately to see the Supreme Court as going beyond the previous bounds of constitutional interpretation, and into the realm of judicial policy making.

Trends in American legal institutions will be considered in four broad areas, those dealing with (1) administrative law-making, (2) free speech, (3) race, and (4) crime.

ADMINISTRATIVE AGENCIES

Along with the expansion of traditional legal institutions, operating under traditional constitutional constraints, has come the emergence and proliferation of a new hybrid institution — the administrative commission, combining legislative, judicial, and executive functions, in defiance of the separation-of-powers principle, and constrained in its exercise of power only by sporadic reversals of its decisions by appellate courts or, even more rarely, by congressional legislation. These institutions are a development within the past century — the first, the Interstate Commerce Commission, was founded in 1887 — but their rapid proliferation began with the New Deal of the 1930s, which created many so-called “alphabet agencies”: the SEC, NLRB, FPC, etc. These administrative commissions are headed by presidential appointees with fixed and staggered terms which overlap one another and also overlap the term of office of any given administration, in order to promote independent decision making. Members of the commissions or boards heading these agencies are removable only by impeachment, and their regulations, which have the force of law, require neither presidential nor congressional approval, but go into effect automatically after having been published in the Federal Register. In addition to making law in this way, the same administrative commissions also act as judge and jury for anyone accused of violating their regulations. They also administer staffs and bureaus which research, advise, and generally patrol their domain. Some of their economic effects have been noted in Chapter 8. Here the concern is with the broader legal and social questions they raise.

The importance of these regulatory commissions is out of all proportion to their public visibility or political accountability. They create more law than Congress. Each year federal administrative agencies issue ten thousand or more new regulations.9 By contrast, it is rare for Congress to pass a thousand bills in one session.10 Until recent years, administrative agency regulations were directed largely toward limited segments of the economy or society. But while the scope of earlier administrative commissions was generally limited to particular industries such as railroads (ICC), merchant shipping (NMC), or broadcasting (FCC), the newer commissions regulate activities which cut across industry lines and reach directly into virtually every business, school, farm, or other social institution. They prescribe employment procedures and results under “affirmative action” policies, set and administer “environmental” standards, issue occupational health and safety regulations, define the racial distribution of schools’ pupil populations, teachers, and administrators — all largely as they see fit, limited only by such attention as appellate courts can give them amid the courts’ many other concerns.

Sometimes called a “fourth branch of government,” the administrative commissions from the outset faced grave challenges to their legality under a constitution that prescribed only three branches of government — and which carefully separated powers at that. The constitutional issue was settled in favor of the agencies, at a time when they were a peripheral factor in government decision making and national life, but that categorical decision remained in effect as the number and scope of such agencies expanded enormously over the decades. This is hardly a criticism of the Supreme Court, for once the incremental growth of regulatory commissions passed a certain point, any reconsideration or reversal of their constitutionality would have undermined a major part of the existing legal system of the country and whole sections of the economy and society dependent upon that set of regulatory “laws.” This does, however, once more illustrate the momentous impact of categorical decision making — in this case a stark dichotomy between “constitutional” and “unconstitutional” — and the high costs of subsequently attempting to bring to bear effective knowledge of its consequences.

Administrative agencies enforce their decisions in ways which escape the constraints of the Constitution or of Anglo-Saxon legal traditions in general. American laws are prospective — that is, they describe in advance what the citizen can and cannot do. The citizen cannot simply be punished because his actions prove in retrospect to be displeasing to the government. In addition, the burden of proof is on the government, or on the plaintiff in general. Moreover, the citizen cannot be forced to incriminate himself, under the Fifth Amendment. All these safeguards are readily circumvented by administrative agencies. As noted in Chapter 8, the National Maritime Commission has a financial life-and-death power over merchant shippers by its choice of when and where to grant or withhold the subsidies made necessary by costly, government-prescribed practices which would bankrupt any American shipping company solely dependent on revenue from customers. Legally, these subsidies are not a right, and so the denial of them is not a punishment subject to constitutional constraints. Economically, however, massive government subsidies to one’s competitors are the same as a discriminatory fine for having displeased the government — but legally no punishment has occurred, and so no constitutional violation can be charged. The maritime industry has no constitutionally mandated right to a subsidy, and indeed many economists find the whole scheme ridiculous, but the point here is that once the industry as a whole is being subsidized, the loss of that subsidy does not restore an individual competitor to the position of being in an ordinary competitive industry. On the contrary, it is a discriminatory fine for having displeased the National Maritime Commission.

The principle is far more general than the maritime industry, and affects federal revenue sharing, “affirmative action” contract compliance procedures and other administrative activity in which the federal government makes benefits available to other entities on condition that those other entities follow policies which the government has no existing legal power to directly force them to follow otherwise. As a matter of incentives and constraints, it makes no difference whether (1) someone pays X dollars in taxes and is then fined Y dollars for displeasing the government, or (2) pays X + Y dollars in taxes and receives Y dollars back for pleasing the government. Legally, however, it matters crucially. The constitutional safeguards which apply in the first approach are circumvented by using the second approach. There is no prospective law on the books allowing the government to control the racial, sex, or other composition of university faculties, but only such universities as please the government in that regard are eligible for the mass federal subsidies which make up much of the revenue of the leading “private” universities. Universities as a group have no constitutional right to the subsidies, but once most of Harvard’s revenue comes from the federal government, Yale cannot survive as a competitor if it displeases administrators who control its eligibility for federal money. Similarly, the federal government can require state and local governments to follow various policies on highways, schools, or welfare, not because the federal government has either constitutional or statutory authority to control such things, but because administrators of various funds can unilaterally make those requirements a precondition for receiving the funds. Again, it is the general availability of the subsidies which puts the individual competitor to whom they are denied in a worse position than if the subsidy had never existed. The glib doctrine, “to get the government off your back, get your hands out of the government’s pocket,”11 misses the point entirely. To an industry or sector (such as universities or local governments) that doctrine would make sense — if whole industries or sectors were decision making units. The real objection, however, is not the vicissitudes of particular claimants but the growth of extralegal powers of the federal government — powers never granted by the Constitution nor by legislation, and never voted on by the public, but as real as any law passed by Congress, and often carrying heavier penalties, including the total destruction of institutions by massive subsidies to their competitors.
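
The equivalence asserted here is simple arithmetic, written out with X and Y as in the text:

$$\text{(1) fine:}\ \begin{cases}\text{pleasing: } X\\ \text{displeasing: } X + Y\end{cases}\qquad\text{(2) rebate:}\ \begin{cases}\text{pleasing: } (X+Y) - Y = X\\ \text{displeasing: } X + Y\end{cases}$$

Under either scheme the net cost of displeasing the government is exactly Y; only the legal characterization — a punishment in the first case, a withheld benefit in the second — differs.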

Another practice counter to American legal tradition is putting the burden of proof on the defendant. As noted in Chapter 8, the Robinson-Patman Act makes mere price differences to different customers prima facie evidence creating a “rebuttable presumption” of illegal price discrimination. In practice, the many possible interpretations of given cost statistics make such rebuttal virtually impossible, and the Supreme Court’s conception of classifying customers can make it too costly to attempt. Moreover, the cost justification must first be made to the Federal Trade Commission, which has every incentive not to accept it. Like a justice-of-the-peace who is paid out of the fines he imposes, the FTC is affected economically by its own judicial decisions, since the size of the appropriations and staff which it can ask of Congress in its executive role, and the scope of the power it can exercise in its legislative role, depend on how much of a problem it finds in its judicial role. In view of these institutional incentives and constraints, it is hardly surprising that the FTC has almost invariably gone further than the courts in the stringency with which it has applied the Robinson-Patman Act.12 This is, however, neither peculiar to the FTC nor to the area of its jurisdiction.

Very similar principles and results are found in the very different jurisdiction of the Equal Employment Opportunity Commission (EEOC). Here an employer’s proportion of minority or female employees must, in retrospect, match the expectations of the EEOC, or he faces a rebuttable presumption of discrimination, under guidelines legislated and administered by the EEOC. He must rebut this presumption before the EEOC, acting in its judicial capacity. Again, the EEOC, like the FTC, has consistently applied the law more stringently than the courts.13 It is not the prospective use of law but the retrospective punishment of results displeasing to the EEOC. But because the punishment consists largely of liability to have federal money stopped, it is not legally classified as punishment, and so escapes constitutional bans on retrospective punishments for acts not specified in advance. Also contrary to the principles behind the Fifth Amendment, employers are forced to confess in advance to “under-utilization” of minority and female employees whenever their employment numbers do not meet EEOC expectations, as a precondition for being eligible for federal money. The Fifth Amendment protects Nazis, Communists, and criminals but not businessmen in this situation, because technically the latter are not being punished or subjected to criminal penalties — even though they may be subject to heavier losses than the fines imposed in criminal cases.

In short, administrative agencies have become a major part of the American legal system, and a part not merely outside the original vision of the Constitution, but also able in practice to enact and enforce laws in ways forbidden to other organs of government by the Constitution. Despite their formal subordination to legislative correction by Congress and judicial review by the appellate courts, regulatory commissions are insulated from effective control by their sheer numbers, by the intricacies and arcane language of their regulations, and by the multitude of other claims on the time of Congress and the courts. Effective feedback comes largely from special interests, each with a sufficient stake to monitor its respective agency, to shoulder the cost of appeals, and to lobby before the appropriate committee of Congress. But a criminal can challenge the verdict of a trial court much more cheaply than an ordinary citizen can challenge the ruling of an administrative agency. Moreover, the kind of personal bias which would disqualify a judge is considered acceptable, or even desirable, in members of a regulatory commission. That advocates of recreational interests (“environmentalists”) should dominate commissions concerned with environmental matters is considered as natural as that “minority” activists should dominate the EEOC. This would be a questionable departure from legal tradition, even in cases not dependent upon “rebuttable presumptions,” to be rebutted to the satisfaction of such officials.

Costs are a crucial factor in all forms of legal proceedings. A legal right worth X (in money or otherwise) is not in fact a right if it costs 2X to exercise it. This is obvious enough when the rights and the costs can be reduced to money. The principle is no less true in cases where the values are nonfinancial. For example, a woman’s right to prosecute a rapist can be drastically reduced — for some women, obliterated — by allowing the defense attorney to put her through an additional trauma on the witness stand with wide-ranging questions and observations, publicly humiliating her but having little or nothing to do with the guilt or innocence of his client. There is some belated recognition of this cost in some places with changed trial rules in rape cases, but this is usually seen as a special problem in a special situation, rather than a general problem of costs in legal procedures. Where a right is so defined, in legislation or by judicial interpretation, that either the plaintiffs or the defendants can impose large costs on the others at little or no cost to themselves, then the law may be so lopsided in its impact that the right can be reduced to meaninglessness or expanded far beyond its original scope or purpose. In the case of rape, it is the defendant who can impose disproportionate costs — reaching prohibitive levels for many women. In other kinds of cases and rights, it is the plaintiff who can create huge costs for the defendant at little or no cost for himself. For example, recreational interests (“environmentalists”) can impose large costs on builders of everything from bicycle paths to power dams by demanding that they file “environmental impact” statements, in effect putting the burden of proof on the accused. Although such statements are officially defined by their hoped-for results, they have virtually no demonstrated effectiveness for predicting how any environment will in fact be affected.14 They are, however, very effective in imposing both direct financial costs and costs associated with delay. For projects requiring large investments, the mere delay can cost millions of dollars and doom the project, since the value of a given physical thing varies with the time at which it becomes available. That is, so-called “environmental impact” requirements impose high costs on one party at low cost to the other party, regardless of the legal outcome of the case.

The law and legal critics are both so preoccupied with the ultimate disposition of cases that the costs of the process itself tend to fade into the background. Yet these process costs may determine the whole issue at stake. To be totally vindicated after years of filing reports and attending administrative hearings, trials, and appeals is, for some parties, a meaningless victory. Under environmental impact laws, the case which the plaintiff must make to keep a costly legal process going may be nil, or may consist solely of speculation. He does not bear the burden of proof.

Although adversary legal systems put much emphasis on litigants, or at most on the categories of people they represent, all legal systems are ultimately social processes serving social purposes, including transmitting knowledge for social decisions based on costs entailed by alternative social behavior. When the legal system causes the trade-offs between opposing private interests, or opposing social concerns, to take place in ways that put more costs on one side than on the other, this affects much more than the justice or logic of the final decision in those cases that are adjudicated. In legal as in economic processes, the transactions that do not take place at all may represent the largest cost to the public. The electric generating capacity that is not built, and the traumatic blackouts that result from overtaxed electric generating capacity, may far outweigh the annoyance of a handful of lakeside resort owners or the Sierra Club — if the costs of the two results could be equally accurately conveyed through either the economic system or the legal system. Where the costs of transmitting one set of knowledge (the demand for electricity, in this case) are artificially made greater than the costs of conveying the other set of knowledge (recreational demands), the distortion of knowledge can lead to results which neither the economic nor the legal decision makers would have reached had accurate knowledge been equally transmittable from opposing sides at equal cost. In the criminal law as well, the real costs of the legal system are not the financial costs of such transactions as happen to take place, but are primarily the social costs of those transactions that do not take place — the cases that are not tried but dropped or plea bargained because of the prohibitive cost of doing otherwise.

FREE SPEECH

It is not merely as an individual benefit but as a systemic requirement that free speech is integral to democratic political processes. The systemic value of free speech depends upon the high individual cost of knowledge — that is, lack of omniscience. “Persecution for the expression of opinions” may be “perfectly logical,” according to Justice Oliver Wendell Holmes, when “you have no doubt of your premises.” He continued:

But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas — that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution. It is an experiment, as all life is an experiment. Every year if not every day we have to wager our salvation upon some prophecy based upon imperfect knowledge. While that experiment is part of our system I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death, unless they so imminently threaten immediate interference with the lawful and pressing purposes of the law that an immediate check is required to save the country.15

This faith in systemic processes rather than individual intentions or individual wisdom meant that even “a silly leaflet by an unknown man”16 required constitutional protection, not for its individual merits, nor as an act of benevolence or patronage, nor as recognition of an opaque “sacred” character of an individual’s endowment of “rights,” but as a matter of social expediency in a long-run, systemic sense. For that very reason, it was not a categorical right but one subordinated to that social expediency which justified it in the first place, and therefore revocable whenever it presented a “clear and present danger”17 to the continuation of that systemic process itself or to the people and government in whom that process is expressed. In short, the right of free speech is not an opaque “sacred” right of an individual, any more than other rights such as property rights are “sacred” individual possessions. All are justified (or not) by the litmus test of their social expediency — not in the sense that any individual or group rash enough to imagine themselves capable of following the specific ramifications of each particular statute or privilege in its social impact may centrally control all words or equipment — but in the larger and longer-run sense that we can judge the historic benefits of systemic interplay better than we can determine individual wisdom in word or deed in process. Adam Smith’s systemic defense of laissez faire, despite his obvious and pervasive disgust with businessmen,18 paralleled Holmes’ systemic justification for freedom for opinions he regarded as harmful or contemptible. Both amount, ultimately, to recognition of different costs of knowledge in judging overall results rather than judging individual parts of a process.

Complications arise with the very meaning of “free” and of “speech.” The basic conception of freedom of speech — that the substantive content of individual communication be uncontrolled by government — has been judicially supplemented or extended by considering the economic cost of communication. If the content of speech remains unconstrained by government, but the modalities of its delivery are restricted (e.g., bans on sound trucks at 2:00 a.m.), then beyond some point in such restrictions, the alternative costs of other modes of communication could conceivably price the speaker out of the market. “Freedom” of speech has therefore, in recent decades, come to include concern for the cost of communication — almost as if “free” had an economic rather than a political meaning. “Speech” has also been judicially expanded to include various forms of articulation (picketing, for example) and even inarticulate symbolism (flag burning). Extensions of the concept of “speech” to other activities place other aspects of these activities — harassment and intimidation, for example — under constitutional protection intended only for communication. Similarly, extending the freedom of the press can mean allowing newspapers to be used as protected conduits for threats or ransom demands by individuals or groups who communicate with victims or their families or the authorities via newspaper stories phoned to reporters.

In the 1940 case of Thornhill v. Alabama the Supreme Court declared a state ban on picketing unconstitutional as a violation of free speech.19 The broadness of the ban and the corresponding broadness of the affirmation of the right of free speech as applied to pickets led to subsequent challenges to other picketing restrictions of a more limited sort. Here the court recognized the nonspeech aspects of picketing as subjecting the whole activity to some state control, such as when “the momentum of fear generated by past violence would survive even though future picketing might be wholly peaceful.”20 Moreover, picketing by an organized group “is more than free speech” because the presence of its picket line “may induce action of one kind or another, quite irrespective of the nature of the ideas which are being disseminated.”21 Despite these reservations as to the legal immunization of nonspeech activities by the “freedom of speech” provisions of the Constitution, over the years the courts have generally expanded the scope of activities deemed to be protected by the First Amendment, and extended the constitutional restrictions to organizations not part of the governmental apparatus. The First Amendment begins “Congress shall make no law…,” but by interpreting the Fourteenth Amendment as bringing the states under federal constitutional restrictions, the Supreme Court applied the rest of the earlier amendments to state governments.22 Then, in a series of cases, it extended the constitutional restrictions to various private organizations as well.

In the landmark case of Marsh v. Alabama (1946) the Supreme Court ruled that the state could not prosecute for trespass a woman who distributed religious leaflets in a privately owned suburb where such distribution was forbidden by the owner. Although the state was not forbidding distribution of leaflets, the state’s enforcement of the property owner’s rights against trespass was held to be sufficient to transform the property owner’s ban into “state action” in violation of a constitutional right. The court said: “When we balance the constitutional rights of owners of property against those of the people to enjoy freedom of press and religion, as we must here, we remain mindful of the fact that the latter occupy a preferred position.”23

The fact that different costs and benefits must be balanced does not in itself imply who must balance them — or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved. Each individual who chooses whether or not to live, work, or shop in a privately owned development can balance the costs of those rules against the benefits of living, working, or shopping there, just as people individually balance the costs of participating in other activities under privately prescribed rules (e.g., eating in a restaurant that requires a coat and tie, attending a stage performance where cameras are forbidden, living in an apartment building that bans pets). The court here went beyond the function of carving out boundaries, within which other institutions could make specific decisions, to making the substance of the decision itself. In doing so, it transformed an individual incremental decision into a categorical decision, confiscated a portion of one party’s assets and transferred them to another (a transfer recognized as such by the author of the decision24), and substituted its evaluation of the costs and benefits of access to communications for the evaluations of those living, working, and shopping where the leaflets were being distributed.

From a social decision-making point of view, it is a misstatement of the issue to represent the opposing interests as being the property owner and the leaflet dispenser. The owner of a development is a middleman, whose own direct interest is in seeking profit, and whose specific actions in his role as middleman represent transmissions of the perceived preferences of other people — tenants and shoppers — who are the sources of his profits. The real balance is between one individual’s desire for an audience and the prospective audience’s willingness to play that role. How important another channel of communication is to the audience is incrementally variable, according to each individual’s already existing access to television, newspapers, magazines, mail advertisements, lectures, rallies — and other places and times where leaflets can be handed out and received.

The prospective audience’s incremental preference for tranquility where they live or shop — undisturbed by messages or solicitations to read messages — may be of greater value to them than any losses they suffer from not receiving such messages at this particular time and place, or the value to the soliciting party of reaching them at this time and place, or even the social value of “free speech” as an input into political and other decision-making processes. But no such balancing takes place through legal processes conferring “rights” to uncompensated transfers of benefits.

Both the solicitor and the solicited have alternative channels of communication. To claim that the costs of some alternative channels are “prohibitive” is to miss the whole point of costs — which is precisely to be prohibitive. Costs transmit inherent limitations of resources compared to the desires for them, but do not create this fundamental disproportionality. All costs are prohibitive to some degree, and virtually no costs are prohibitive absolutely.25 Clearly, the costs of passing out leaflets would pay for direct mailing instead, or for newspaper ads, telephone solicitation, public gatherings, etc.

“Free speech” in the sense of speech free of governmental control does not imply inexpensive message transmission, any more than the right of privacy implies subsidized window shades. It is especially grotesque when the subsidy to message-senders takes the form of forcing others to be an unwilling audience, and where the small number of solicitors are called “the people” while the large number of those solicited are summarized through their intermediary as “the property owner.” Even the dissenters in Marsh v. Alabama posed the issue in those terms.26

More basic than the question of the probable desires of a prospective audience is the question of who shall decide what those desires are, either absolutely or relative to the desires of message senders. That is, what decision-making process can best make that assessment — and revise it if necessary? Apparently some people were presumed to be receptive, or the leaflet distribution would not have been undertaken. By the same token, others were presumed to want to be left alone, or the solicitation ban and the lawsuit to enforce it would not have been undertaken. Therefore, there is a question not only of the estimated numbers and respective social costs of one course of action versus another, but also a fundamental question of how an initially-mistaken perception either way would be corrected by feedback under various institutional processes.

Under informal or noninstitutionalized decision-making processes, with neither the government nor the developer involved, the leaflet distributor would have no incentive to take account of the external costs imposed on people who prefer undisturbed coming and going to receiving his message. Even if a large majority of his potential audience preferred being left alone, as long as this desire was conveyed in civil terms, short of abuse or violence, it might receive little or no weight in the distributor’s own balancing of costs and benefits. The distribution would continue, regardless of how little benefit a handful of passers-by felt they received and however much annoyance the others might feel — and regardless of how mistaken the leaflet distributor might be about either of these things.

Formal economic institutions translate the pleasure or displeasure of tenants, shoppers, or other users of a private development into a higher or lower financial value for a given set of physical structures. The property owner, even if he lives elsewhere, or is personally indifferent about leaflets, has an incentive to produce whatever degree of privacy or tranquility is desired, as long as its cost of production to him does not exceed its value to those who want it, as revealed by their willingness to pay for it.27 More importantly, those property owners who are mistaken as to the nature and magnitude of other people’s desires for privacy or tranquility find the value of their property less than anticipated, and therefore have an incentive to strengthen, loosen, or otherwise modify their rules of access.

Formal political institutions might reach similar results if constitutionally permitted. Such institutions could, in this case, take the form of a tenants or merchants association or an ordinary municipality. The problem with voting on an issue like this is that the vote of an individual who feels benefited to a minor extent counts the same as the vote of another individual who feels seriously harassed. By contrast, economic “voting” through the market reflects magnitudes of feelings as well as directions. Unfortunately, economic voting may also reflect substantial differences in income, but in general this effect is minimized by the variety of income levels on both sides of a given competition. Wealth distortions seem even less of a practical problem among tenants and shoppers in a given, privately owned development, which would tend to attract its own clientele, less socioeconomically diverse than the whole society. Economic decision-making processes also permit minority representation — in this case by transmitting the desires of whichever side is financially “outvoted” in a given development into a demand for other developments run by opposite rules. Such processes are not bound by the uniformity required of legislation nor by judicial concern for precedent. If a hundred developments adopt rule A, that in no way hinders the 101st development from adopting rule B to attract those economically “outvoted” elsewhere.
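
The contrast between the two kinds of “voting” can be put in numbers — purely illustrative ones, assumed for the sake of the example. Suppose ninety-nine shoppers each value receiving leaflets at the equivalent of one dollar of benefit, while one shopper suffers the equivalent of two hundred dollars’ worth of harassment:

$$\text{ballots: } 99\ \text{to}\ 1\ \text{in favor}\qquad\text{willingness to pay: } 99(\$1) - \$200 = -\$101$$

Neither tally is definitively “correct,” but only the second registers how strongly each side feels — magnitudes as well as directions.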

Judicial decision making on the substance of such issues loses many of the advantages of either economic or political institutions. Neither the initial court decision nor any subsequent modifications of it are the result of knowing the actual desires of the people involved, as distinguished from the parties in court. Nor, if those desires were known, would they provide any compelling incentive for the court to rule in accordance with them.

The balancing of costs and benefits includes not only tenants and shoppers with varying preferences but the leaflet distributors as well. The property owner’s legal right to exclude leaflet distributors as trespassers does not mean that he will in fact do so. They can purchase access, just as individual residential and business tenants do. The solicitors would have to pay enough to counterbalance any net reduction in the value of the property caused by its being less desired by existing and prospective tenants as a result of its reduced privacy or tranquility. Not only would leaflet distributors’ interests be weighed through the economic process against other people’s interests; there would be automatic incentives for them to modify the place, manner and frequency of their solicitations, so as to minimize the annoyance to others, and so minimize the price they would have to pay for access. Economic processes are not mere zero-sum games involving transfers of money among people. They are positive-sum decision-making processes for mutual accommodation.
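
The pricing logic in this paragraph can be illustrated with a minimal sketch in Python. All property values below are hypothetical, chosen only to show the mechanism: the owner's minimum access fee is the capitalized loss in property value from reduced tranquility, so anything the distributors do to reduce the annoyance lowers the price of access.

    # A minimal sketch of the access-pricing logic described above.
    # All property values are hypothetical, chosen only for illustration.

    def minimum_access_fee(value_without_leafleting, value_with_leafleting):
        # The owner is indifferent when the fee equals the capitalized
        # loss in property value caused by admitting the distributors.
        return value_without_leafleting - value_with_leafleting

    # Hypothetical property values under each rule of access.
    print("minimum fee:", minimum_access_fee(1_000_000, 990_000))  # 10000

    # If distributors adjust the place, manner, and frequency of their
    # solicitations so that the value loss shrinks, the required fee
    # falls accordingly -- the incentive for mutual accommodation.
    print("after accommodation:", minimum_access_fee(1_000_000, 996_000))  # 4000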

The Supreme Court could not, of course, “fine-tune” its decision as an economic process would, much less make it automatically adjustable in accordance with the successively revealed (and perhaps continuously changing) preferences of the people affected. Its decision was both categorical and precedential — a “package deal” in space and time. If this is what the Constitution commanded the court to do, discussions of alternatives might be pointless. But even the defenders of the court’s decisions in the “state action” cases justify those decisions on policy grounds as judicial improvisations — “sound results” without “unifying doctrines,”28 affirmation of the basic principles of a “free society” with a “poverty of principled articulation” of the legal basis for the conclusions,29 etc. The court has neither obeyed a constitutional compulsion nor filled an institutional vacuum; it has chosen to supersede other decision-making processes.

The legal basis of the Marsh decision was that the privately owned development prohibited activities which “an ordinary town” could not constitutionally prohibit, and that “there is nothing to distinguish” this suburban development from ordinary municipalities “except that the title belongs to a private corporation.”30 Similarly, there is nothing to distinguish the Supreme Court from any nine other men of similar appearance except that they have legally certified titles to act as they do. In neither instance can the elaborate social processes or weighty commitments involved be waved aside by denigrating the pieces of paper on which the end-results are summarized. If parallel appearance or parallel function is sufficient to subject a privately purchased asset to constitutional limitations not applicable to the same asset when in alternative uses, then the economic value of assets in general is reduced as their particular uses approach those of state-run organizations in form or function. Economically, this is an additional (discriminatory) implicit tax on performing functions paralleling those of state agencies. The social consequences of discouraging alternatives to services provided by government seem especially questionable in a pluralistic society, founded on rejection of over-reaching government.

What distinguishes the economic relationships surrounding private property from the political relationships subject to constitutional state action constraints is nothing as gross as outward appearance or day-to-day functioning. The administrative routine in the headquarters of the Red Cross might well resemble the administrative routine in the headquarters of a Nazi death camp, but that would hardly make the two organizations similar in any socially meaningful way. In the case of economic relationships what is involved is voluntary association, modifiable by mutual agreement and terminable by either party. In the case of governmental relationships, what is involved is coercive power, overwhelming to the individual, and pervasive throughout a given geographic entity, however democratically selected the wielders of that power might be. The constitutional limitations on governmental power carve out areas of exemption from it, in order that individuals may voluntarily create their own preferred order within their own boundaries of discretion. The outward form of that voluntarily-created order may in some instances strikingly resemble governmental processes, but its voluntariness makes it fundamentally different in meaning, and in the ultimate control of its human results. The appellate courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish between those who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

The voluntariness of many actions — i.e., personal freedom — is valued by many simply for its own sake. In addition, however, voluntary decision-making processes have many advantages which are lost when courts attempt to prescribe results rather than define decision-making boundaries.

The Marsh decision set a precedent which was not only followed but extended. If a private development was functionally similar to a municipality, a shopping center was “the functional equivalent” of part of a municipality.31 Therefore pickets could not be considered as trespassers in the shopping center.32 Again, the issue was posed in terms of the free speech rights of the many against the property rights of the few.33 The right of the public to be undisturbed, and the intermediary role of the property owners as communicators and defenders of that right, out of financial self-interest, were not allowed to disturb this tableau. In the case of Food Employees Union v. Logan Valley Plaza (1968), the few were described in terms of the much larger entities of which they were a part (“workers”) and in terms of other large entities, some few of whom might also wish to do similar things (“consumers,” “minority groups”), while the contrary interests of the many were described in impersonal terms as property rights or summarized through a handful of intermediaries (“business enterprises”).34 As in the earlier decision, the dissenting opinions accepted much of the same framework and complained primarily of the extent to which the functional analogy to “state action” had been stretched.

In a subsequent case, Lloyd Corporation v. Tanner (1972), the Supreme Court pulled back, in a five-to-four decision which emphasized that the leaflets were being distributed in a shopping plaza that was not a “functional equivalent” because it was not in a “large private enclave” like Logan Valley Plaza, where “no other reasonable opportunity” to convey a message existed.35 In short, once more political freedom from governmental prohibitions was confused with economic inexpensiveness in message sending. The dissenting opinion also leaned heavily on the expensiveness of message sending, but simply estimated the costs differently: “If speech is to reach these people, it must reach them in Logan Center.”36 There is, presumably, a right to an audience, regardless of the audience’s wishes.

Later Supreme Court rejections of the application of “state action” constraints were similarly based on how far “this process of analogy might be spun out to reach… a host of functions commonly regarded as nongovernmental though paralleling fields of government activity.”37 But the basic belief that such functional parallelism was the determining factor was not rejected. Again, the majority differed from the dissenters only in how far they were prepared to carry the analogy, not on its validity in principle.

In a still later case of a privately owned public utility that discontinued service without “due process,” the failure to invoke “state action” constraints was based on an assessment of insufficient parallelism in function, whereas from the point of view of state power, the consumer had no other choice of electric company precisely because the state forbade competition when it licensed this producer.38 Even if one accepts the “natural monopoly” theory of public utilities,39 it is not economically inevitable that a particular state-selected firm be that monopoly, regardless of how it treats customers. Natural monopolies exist in some fields because of cost advantages, but cost advantages are never absolute — and sufficiently bad treatment of customers creates opportunities for competitors — except where the state prevents this economic feedback mechanism from acting as “checks and balances.” To lose the economic checks and balances without any offsetting political checks and balances is to combine the worst features of both institutional processes.

Neither the dissents nor the pullbacks of the whole court in the “state action” area were based on recognition of a different constitutional principle, or on recognition of the relative advantages of other decision-making processes for balancing the interests at issue.

RACE

The Constitution, as originally adopted, contained no explicit reference to slavery or to the enslaved race, though “free persons” and “other persons” were distinguished for purposes of congressional apportionment. Race and slavery entered the Constitution openly for the first time in 1865, when the Thirteenth Amendment banned slavery, and in 1870, when the Fifteenth Amendment asserted the right to vote without regard to “race, color or previous condition of servitude.” Sandwiched between them is the momentous Fourteenth Amendment, which guarantees “equal protection of the laws” to “all persons.” It has been estimated that the Fourteenth Amendment is the largest source of the Supreme Court’s work. Its ramifications reach beyond the area of race, though it is one of the three amendments transforming race relations in the United States.

Three main strands of legal trends involving race will be considered here: (1) state actions affecting race, struck down by the Supreme Court as unconstitutional, (2) “affirmative action” policies and practices of the 1960s and 1970s, as developed by courts and administrative agencies, and (3) the racial integration of schools, as conceived in the landmark case of Brown v. Board of Education in 1954 and as it has legally and socially evolved over more than two decades since then.

STATE ACTION

Before the Fourteenth Amendment was passed in 1868, numerous laws in both the North and the South specified different treatment for black and white citizens. More such laws were passed in the South after the Civil War and — particularly in the case of sweeping “vagrancy” laws — virtually reenslaved the emancipated Negro. Other laws had existed even before the Civil War to control the half million “free persons of color” and to deny them such fundamental rights as the right to testify in court (except against other blacks), to move freely from place to place, or even to educate their own children at their own expense.40 The sweeping and extreme nature of these denials of the most ordinary and basic rights must be understood as a background to the words of the Fourteenth Amendment. The “equal protection of the laws” had a very plain and simple meaning — and a very limited meaning, falling far short of a social revolution. So too did the ban on any state action to deprive anyone of “life, liberty, or property” without “due process.” The writers of these words explicitly, repeatedly, and even vehemently denied any interpretations going beyond prohibition of the gross abuses all too evident around them.41 Even voting rights were not included.42

The nineteenth-century Supreme Court decisions under the Fourteenth Amendment followed the limited scope and intentions of its authors. The Court declared that it was only “state action of a particular character that is prohibited”; “Individual invasion of individual rights is not the subject matter of the amendment.”43 Public accommodation laws were therefore held invalid.44 Even lynchings of prisoners in state custody were ruled beyond the scope of the Amendment.45

In the twentieth century, the Supreme Court began to expand the meaning of “state action” in a series of cases (beginning in the 1920s) revolving around white-only primaries in the South, where the Democratic primary was tantamount to election, and where “state delegation” of its power to set voter qualifications to the Democratic party was a transparent subterfuge to prevent blacks from voting.46 In these cases, governmental bodies took the initiative and made the decisions which denied citizens equal treatment.

A very different series of “state action” cases began in the 1940s. In these new cases, both the initiative and the decisions to treat individuals unequally by race were private. The state became involved only subsequently in protecting the legal rights of those private individuals and organizations to make whatever decisions they chose as regards contracts (restrictive covenants) and the use of their own property (trespass laws). In short, the state in these cases simply decided who had the right to decide, as defined in contracts and trespass laws. State power was involved in enforcing contracts and laws, but state decision making was not.

The Supreme Court conceded that the Fourteenth Amendment “erects no shield against merely private conduct, however discriminatory or wrongful.” But state “enforcement” of restrictive covenants was deemed tantamount to “participation” by the state.47 This was called state action “in the full and complete sense of the phrase.”48 Similarly, state enforcement of trespass laws against sit-in demonstrators seeking the desegregation of privately owned businesses serving the public was invalidated as “state action” in violation of the Fourteenth Amendment.49 Perhaps the furthest extreme of this concept of “state action” was a five-to-four Supreme Court decision in Reitman v. Mulkey (1967) that repeal of a California “fair housing” law was a violation of the Fourteenth Amendment because the state was thereby guilty of “encouraging” private discrimination.50

In other cases, private discriminatory decisions were classified as “state action” because some governmental body was financially, administratively, or otherwise involved with the private party — as in Burton v. Wilmington Parking Authority (1961), where a restaurant leased in a government facility was racially discriminatory. The question of how much government involvement with a private party was necessary to make private decisions “state action” for legal purposes was never resolved. The Supreme Court deemed the fashioning of a “precise formula” to be “an impossible task” which “this Court has never attempted.”51 In other cases, however, state licensing — even when restrictive52 or monopolistic53 — was not sufficient to convert the licensees’ decisions into “state action.” As the dissenters in Burton observed, the lack of clear principle “leaves completely at sea” what was and was not “state action.”54 What was left unresolved was not merely the question of where to draw the line — a “precise formula” — but on what principle.

In place of principle, a miscellany of ad hoc reasons is sprinkled through “state action” cases: functional parallelism of private to public activity,55 state receipt of benefits from a private activity,56 the “publicness” of the activity,57 or even the fact that the state “could have” acted in an area but chose to “abdicate” instead.58

The Civil Rights Act of 1964 made many distinctions between private and state decision making legally unnecessary, since private operators of various public accommodations were statutorily prohibited from racial discrimination, just as the state was constitutionally prohibited. Subsequent cases show the Supreme Court pulling back in the “state action” area — not only on the question of where to draw the line, but more fundamentally on the principle involved in drawing it: “Respondent’s exercise of the choice allowed by state law where the initiative comes from it and not from the state, does not make its action in doing so ‘state action’ for purposes of the Fourteenth Amendment.”59 This distinction between state authorization of an area of private discretion and direct state decision making would annihilate the rationale for most of the prior series of landmark “state action” decisions, beginning with restrictive covenants and ending with repeal of California’s “fair housing” law. Although this principle was announced in a nonracial discrimination case, presumably the definition of constitutional principles does not depend on who is involved. Neither in the “free speech” cases like Marsh nor in such racial cases as Burton did the state initiate the decisions which led to the legal activity. All that the state did was enforce private individuals’ general (nonracial) right to exclude. Yet the inconsistencies throughout this series of cases raise disturbing questions about whether this was simply another “results”-oriented area, for which the Supreme Court became known in the Warren era.60 If so, the underlying consistency of the cases may lie in the social policy preferred by the court in the racial area, and in the greater ease of achieving those results, after the Civil Rights Act of 1964, without strained and shaky reasonings about “state action.”

AFFIRMATIVE ACTION

The phrase “affirmative action” is ambiguous. It refers both to a general approach and to highly specific policies. The general approach is that to “cease and desist” from some harmful activity may be insufficient to undo the harm already done, or even to prevent additional harm in the future from a pattern of events set in motion in the past. This idea antedates the civil rights issues of the 1960s. The 1935 Wagner Act prescribed that “affirmative action”61 be taken by employers found guilty of intimidating unionized employees — for example, posting notices of changed policies and/or reinstating discharged workers with back pay.62

Racial discrimination is another area where simply to cease and desist would not prevent future harm from past actions. The widespread practice of hiring new employees by word-of-mouth referrals from existing employees means that a racially discriminatory employer with an all-white labor force is likely to continue having an all-white labor force long after he ceases discriminating, because he will be hiring the relatives and friends of incumbent employees. Opponents of racial discrimination therefore urged that “affirmative action” be taken to break up or supersede hiring patterns and practices which left racial or ethnic minorities largely outside the usual hiring channels. This might include advertising in newspapers or in broadcast media more likely to reach minority workers, or a variety of other ways of creating equalized access to apply for employment, college admissions, etc.

The first official use of the phrase “affirmative action” in a racial or ethnic context was in an Executive Order issued by President Kennedy, requiring that government contractors act affirmatively to recruit workers on a nondiscriminatory basis.63 Another equally general Executive Order was issued by President Johnson, requiring affirmative action to insure that workers be hired “without regard to their race, creed, color, or national origin.”64 The Civil Rights Act of 1964 likewise repeatedly required in its various sections that hiring and other decisions be made without regard to race or ethnicity.65 In short, special efforts were to be made to include previously excluded racial or ethnic groups in the pools of applicants, though the actual decisions among applicants were then to be made without regard to race or ethnicity. This was the initial thrust of “affirmative action.”

Both the presidential orders and the congressional legislation required various administrative agencies — existing and newly created — to carry out and formulate more specific policy on a day-to-day basis. It was here that “affirmative action” was transformed from a doctrine of prospective equal opportunity to a doctrine of retrospective statistical “representation” or quotas. This transformation was all the more remarkable in the light of the explicit language and legislative history of the Civil Rights Act of 1964, which expressly repudiated the statistical representation approach. While steering this legislation through the Senate, Senator Hubert Humphrey pointed out that it “does not require an employer to achieve any kind of racial balance in his work force by giving any kind of preferential treatment to any individual or group.”66 There was an “express requirement of intent” before an employer could be found to be guilty of discrimination.67 Ability tests would continue to be legal, even if different proportions of different groups passed them.68 Another supporter, Senator Joseph Clark, pointed out that the burden of proof would be on the government to show discrimination under the Civil Rights Act.69 Still another supporter, Senator Williams of Delaware, declared that an employer with an all-white work force could continue to hire “only the best qualified persons even if they were all white.”70 All these assurances are consistent with the language of the Civil Rights Act71 but not with the actual policies subsequently followed by administrative agencies.

A series of Labor Department “guidelines” for government contractors began in 1968 with requirements for “specific goals and timetables” involving the “utilization of minority group personnel.” By degrees, these evolved into “result-oriented” efforts (1970) and finally (1971) into placing the burden of proof on the employer in cases of “under-utilization” of minorities and women, now explicitly defined as “fewer minorities and women in a particular job classification than would be expected by their availability…”72 These so-called guidelines had the force of law, and given the large role of the federal government in the economy, the affected government contractors and subcontractors included a substantial proportion of all major employers. The “availability” of minorities and women, as judged by administrative agencies, often meant nothing more or less than their percentage in the population.

“Representation” based on population disregards huge differences in age distribution among American ethnic groups, due to differences in the number of children per family. Half of all Hispanics in the United States are either infants, children, or teenagers. Their median age is about a decade younger than that of the U.S. population as a whole, two decades younger than the Irish or Italians, and about a quarter of a century younger than the Jews.73 Such demographic facts are simply ignored in statistics based on “representation” in the population, which includes infants as well as adults. The high-level positions on which “affirmative action” attention is especially focused are positions usually held by persons with many years of experience and/or education — which is to say, persons more likely to be in their forties than in their twenties. The purely demographic disparities among groups in these age brackets can be extreme. Half of all Jewish-Americans are forty-five years old or older, while only 12 percent of Puerto Ricans are that old. Even a totally nondiscriminatory society would have gross “underrepresentation” of Puerto Ricans in the kinds of jobs held by people of that age. More generally, American ethnic groups are not randomly distributed with respect to either age, education, region, or other variables having substantial impact on incomes and occupations.74
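
The demographic arithmetic here can be made concrete with a minimal sketch in Python. The age shares and population sizes below are invented for illustration, not census figures: if senior positions are filled solely from older age brackets, two equally sized groups with different age profiles will show very different “representation” even under a perfectly nondiscriminatory process.

    # A minimal sketch of how age distributions alone produce statistical
    # "underrepresentation" in senior jobs. All numbers are hypothetical.

    # Invented shares of each group's population in each age bracket.
    age_shares = {
        "Group A": {"0-19": 0.45, "20-39": 0.35, "40-64": 0.20},  # younger group
        "Group B": {"0-19": 0.20, "20-39": 0.30, "40-64": 0.50},  # older group
    }
    population = {"Group A": 1_000_000, "Group B": 1_000_000}

    # Suppose senior positions are filled only from the 40-64 bracket,
    # strictly in proportion to each group's numbers in that bracket.
    for group, shares in age_shares.items():
        eligible = int(population[group] * shares["40-64"])
        print(group, "eligible for senior positions:", eligible)

    # Group A supplies 200,000 eligible adults and Group B 500,000, so even
    # a zero-discrimination process yields 2:5 "representation" between two
    # groups of identical total size.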

The qualitative dimensions of “availability” have also been stretched in affirmative action concepts. The barely “qualified” are counted as fully as the well qualified or the highly qualified. Indeed, the Equal Employment Opportunity Commission (EEOC) has stretched the concept of a qualified applicant to mean “qualified people to train”75 — that is, people lacking the necessary qualifications, whose hiring would entail more expense to an employer than if he hired someone already qualified. Applicants or employees cannot be denied a job even for serious crimes. The EEOC ruled that because “a substantially disproportionate percentage of persons convicted of ‘serious crimes’ are minority group persons” an employer’s policy against employing anyone with a conviction for a serious crime “discriminates against Negroes.”76 Employers could retain this practice only if they could bear the burden of proof of the “job-relatedness of the conviction” and, in addition, take into account the employee’s “recent” past employment history — to the ex post satisfaction of the EEOC.77

The EEOC defined which groups were “minorities” for legal purposes: Negroes, Indians, Orientals, and Hispanics.78 Because this was an unconstrained choice by an unelected commission, it did not have to justify this selection to anyone, even though Orientals were included despite having higher incomes than other ethnic groups not included (such as Germans, Irish, Italians, or Polish-Americans79) — and, in fact, higher incomes and occupations than the average American.80 The other officially designated ethnic minorities all have lower average ages and educational levels than the general population — a fact generally ignored in “representation” discussions. With the addition of women to the groups entitled to preferential (or “remedial”) treatment, all the persons so entitled constitute about two-thirds of the total population of the United States. Looked at another way, discrimination is legally authorized against one-third of the U.S. population (Jewish, Italian, Irish, etc., males) — and for government contractors and subcontractors, it is not merely authorized but required.

The shifting of the burden of proof to the employer after a prima facie showing of statistical “underrepresentation” (as administratively defined) was paralleled by a shifting of the burden of proof to the employer whenever a test had differential impact on the officially designated minorities.81 The apparently reasonable requirement that such tests be “validated” is in practice a virtual ban on tests for many employers, because the cost of such validation has been estimated by professional testers as “between $40,000 and $50,000 under favorable circumstances,”82 and many employers simply do not have large enough numbers of employees in each job classification to achieve statistically significant results in any case, even if they were willing and able to spend the money. The EEOC has even gone beyond requiring “validation” to requiring differential validation for each ethnic group — still more costly where possible, and possible in fewer instances. The importance of costs and of placing the burden of proof on the government in legal transactions is amply illustrated by the results in the exceptional area of administrative law, where the accused can be presumed guilty after a meager prima facie case. Under “affirmative action,” as administratively evolved, the prima facie case consists simply of systemic results (“underrepresentation”) legally equated with intentional behavior (“discrimination”). As a well-known scholar in this area has observed: “One may review these enormous governmental reports and legal cases at length and find scarcely a single reference to any act of discrimination against an individual.”83

However much “affirmative action” has come to mean quotas, administrative agencies cannot explicitly assign quotas, since the Civil Rights Act forbids that. What is done instead is to force an employer to confess to “under-utilization” and to design his own specific “affirmative action” plan as a precondition for retaining his eligibility for federal contracts or for doing subcontracting for anyone else receiving federal money. The agency does not tell him what numbers or percentages to hire from each group, but can only disapprove his particular mechanisms and goals until agency and employer agree. This raises the cost of communicating knowledge for the agency, the employer, and the economy. These costs are compounded by the overlapping jurisdictions of various federal agencies involved — the EEOC, the Justice Department, HEW, and the Labor Department. An “affirmative action” plan that is acceptable to one agency may not be acceptable to another agency, and even if it is acceptable to all the agencies simultaneously, an individual employee can still sue the employer for “reverse discrimination.” Indeed, federal agencies have sued each other under the Civil Rights Act.84 In short, the policy fails to clearly prescribe in advance what an individual can and cannot do. Part of this ambiguity is inherent in administrative agencies’ covert pursuit of policies that they are legally forbidden to follow.

The flouting of congressional intent brought attempts to return to the initial meaning of “affirmative action” as an attempt to “expand the pool of applicants.”85 This attempt to amend the law failed,86 and its failure illustrates temporal bias as it affects special interest groups. Laws do not simply respond to pre-existing special interests. Laws also create special interests which then affect what is subsequently politically possible. As noted before, special interests are essentially people who have lower costs of knowledge of their own stake in government policy, and in this sense special interests include governmental personnel whose jobs and powers were created by given legislation. The “affirmative action” policy followed had enormous impact on the agencies administering such policies. For example, within a period of three years the EEOC’s staff of lawyers increased tenfold.87 The impact on minority employment has been found to be relatively minor.88 Blacks have rejected preferential treatment 64 percent to 27 percent. Four-fifths of women also reject it. Indeed, no racial, regional, sex, income or educational group studied by the Gallup Poll favors preferential treatment.89 Yet the drive of the administering agencies and the general acquiescence of the courts have been enough to continue policies never authorized by Congress and contrary to its plainly expressed legislative intent.

The insulation of administrative processes from political control is illustrated by the fact that administrative agencies (1) went beyond what was authorized by the two Democratic presidents (Kennedy and Johnson) who first ordered “affirmative action” in a sense limited to decisions made without regard to group identity, and (2) continued to do so under the two Republican presidents (Nixon and Ford) who followed, who were positively opposed to the trends in agencies formally under their control as parts of the executive branch of government. This political insulation is illustrated even by the first major setback for “affirmative action,” which came from another nonelected branch of government: after more than a decade of support for “affirmative action,” the Supreme Court was able to put a brake on a policy which neither the public nor its elected representatives could reverse.

In a five to four decision, with fragmented partial concurrences and partial dissents, the Supreme Court ruled in the Bakke case (1978) that a university cannot establish minority admissions quotas which have the effect of “insulating each category of applicants… from competition with all other applicants.”90 It did not categorically forbid the voluntary use of race as a consideration, where this “does not insulate the individual from comparison with all other candidates,”94 but emphasized that any uses of racial designations by the state were “inherently suspect and thus call for the most exacting judicial examination” under the Fourteenth Amendment.91 The Supreme Court rejected the idea of group compensation for generalized “societal” wrongs — as distinguished from demonstrated discrimination by a given decision-making unit.92 It pointed out that the Fourteenth Amendment grants “equal rights” to individuals — not group rights, and certainly not special rights to one group historically connected with the origin of the Amendment.93 After more than a century of litigation under the Fourteenth Amendment, it is “far too late to argue that the guarantee of equal protection to all persons permits the recognition of special wards entitled to a degree of protection greater than that accorded others.”94 In a multi-ethnic society like the United States, the courts cannot assume the task of evaluating the historic “prejudice and consequent harm suffered by various minority groups.”95 Indeed, the very concepts “majority” and “minority” were challenged, since “the white ‘majority’ itself is composed of various ethnic groups, most of which can lay claim to a history of previous discrimination at the hands of the state and private individuals.”96 Any group rankings by harm suffered and remedies available would be transient, requiring repeated incremental adjustment as the judicial remedies take effect, and the “variable sociological and political analysis” necessary for this “simply does not lie within the judicial competence” — even if it were otherwise politically feasible and socially desirable.97

While the court’s decision in Bakke, written by Justice Lewis F. Powell, directly addressed most of the major issues raised by “affirmative action” policies, the closeness of the vote and the partial concurrences that created different sets of majorities for different sections of the decision make the Bakke case less of a precedential landmark than it might otherwise be. That highly diverse and opposing groups greeted the decision as a victory for their particular viewpoints is further evidence of this. Moreover, the four justices who concurred with Powell in striking down the special minority admissions program refused to concur in anything else in his official opinion for the court,98 and observed that “only a majority can speak for the court or determine what is the ‘central meaning’ of any judgment of the court.”99 The narrowly limited basis of the concurrence prevented any majority from existing over the range of issues addressed by Powell. The future legal implications of the Bakke decision were further clouded by the four dissenters, who tellingly pointed out how far the Supreme Court had already gone in the direction it was now rejecting.100 The narrowness and tenuousness of the decision in the Bakke case was underscored by an opposite decision in the Weber case just one year later.

After striking down admissions quotas at the University of California, the U.S. Supreme Court upheld job training quotas at a Kaiser Corporation plant in Louisiana. When the Office of Federal Contract Compliance criticized their employment patterns, threatening loss of government contracts, Kaiser and the United Steelworkers Union jointly prepared an “affirmative action” plan modeled after a plan imposed on the steel industry by the government in a consent decree. Half of all places in the training program were reserved for blacks. One of the white workers excluded from the training program in favor of blacks with less seniority was Brian F. Weber, who instituted a lawsuit charging discrimination. Weber won in the trial court and in the Court of Appeals, but lost in a five-to-two decision by the Supreme Court. The four dissenting Justices in the Bakke case (Brennan, Marshall, White, and Blackmun) were joined by Justice Potter Stewart to form the new majority in the Weber case.

In Weber as in Bakke, the majority decision was based on the relevant statutory law — the Civil Rights Act of 1964 — rather than on the Constitution. This meant that both cases avoided the establishment of a broad legal principle. Both cases also construed the applicability of even the statutory law very narrowly. In Bakke, the four concurring Justices declared:

This is not a class action. The controversy is between two specific litigants.101

In Weber a very different majority likewise announced:

We emphasize at the outset the narrowness of our inquiry. Since the Kaiser-USWA plan does not involve state action, this case does not present an alleged violation of the Equal Protection Clause of the Constitution.102

The traditional avoidance of unnecessary Constitutional decisions, when statutory law is sufficient, was in both cases carried to extremes by (1) ignoring government involvement in the substance of both quota decisions and (2) ignoring, and even boldly misstating, Congressional intent in the Civil Rights Act. Bakke had applied to a state-run medical school, and Weber had applied to a training program established under pressure from the Office of Federal Contract Compliance. Yet only Justice Powell addressed the issue of the Constitution’s requirement that government provide “equal protection of the laws.”

As for Congressional intent, the four concurring Justices in Bakke asserted that “Congress was not directly concerned with the legality of ‘reverse discrimination’ or ‘affirmative action’ programs”103 when it was debating the Civil Rights Act of 1964. Yet one of those very same Justices (Rehnquist) later reported at great length, in a Weber case dissent, the numerous Congressional discussions of quotas and preferences, which were repeatedly, decisively, and emphatically rejected by Congress while writing the Civil Rights Act.104 Why, then, the fictitious legislative history in Bakke? Its only effect was to provide a basis for judicial exegesis on a point allegedly neglected by Congress — in this case, creating a right to sue under the Civil Rights Act on a point for which no such right was provided in the Act itself.105 This newly created right to sue made a statutory resolution of the issues possible, avoiding a constitutional precedent. Equally fictitious legislative history was invoked by a different set of Justices in the Weber case as a counterpoise to “a literal interpretation”106 of what Congress had written in the Civil Rights Act, forbidding preferential treatment. Taking instead the “spirit” of that law and its “primary concern” for “the plight of the Negro in our economy,”107 the Weber majority upheld the Kaiser quota which it repeatedly described as “voluntary,” despite the obvious pressure from the OFCC noted by both the trial court and the Court of Appeals.108 The Kaiser quota system was in fact simply the government’s quota system imposed on a contractor.

In short, eight out of nine Justices — in two different cases before the identical court — chose to preserve the Court’s options to pick and choose “affirmative action” plans it liked or disliked, even at the cost of (1) pretending to enforce Congressional intentions it was directly countering, and (2) ignoring government involvement in the creation of the programs at issue. This is a very consistent pattern underlying these differently decided cases, and may have more momentous implications than the actual decision in either case.

The central presumption behind “affirmative action” quotas has not been addressed directly by the courts or by the administrative agencies. That presumption is that systemic patterns (“representation”) show either intentional actions (“discrimination”) or, at the very least, the consequences of behavior by “society” at large — rather than actions for which the group in question may be in any way or to any degree responsible, or patterns arising from demographic or cultural causes, or statistical artifacts. The issue is not the categorical dichotomy between “blaming the victim” and blaming “society.” It is an incremental question of multiple causation and perhaps multiple policy response.

More generally, the presumptive randomness of results selected as a baseline from which to measure discrimination is itself nowhere either empirically or logically demonstrated, and in many instances it is falsified. For example, even actions wholly within the discretion and control of each individual — choice of television programs to watch, card games to play, opinions to express to poll takers — show patterns that vary considerably by ethnicity, sex, region, educational level, etc. It is wholly arbitrary to exclude variations which originate within the group from any influence on results for the group.109 It is equally arbitrary to assume that those variables that are morally most important are causally most important.

A major nonmoral, nonsocietal variable that is routinely ignored is age. As already noted, median age differences among American ethnic groups range up to decades. The median age of American Indians is only one-half that of Polish-Americans (twenty versus forty); the median age of blacks is a little less than half that of Jews (twenty-two versus forty-six).110 These differences affect everything from incomes and occupations to unemployment rates, fertility rates, crime rates, and death rates.111 For example, Cuban-Americans average a higher income than Mexican-Americans, who are a decade younger, but in the same age brackets it is the Mexican-Americans who earn more.112 Any attempt to explain gross income differences between these two groups in terms of either discrimination by “society” or by their respective “ability” runs into the hard fact that the gross difference is the opposite of the age-specific difference. Similarly, blacks have lower death rates than whites, but this in no way indicates better living conditions or medical care for blacks, much less any ability of blacks to discriminate against whites in these respects. Blacks are simply younger than whites, and younger people have lower death rates than older people; on an age-specific basis, whites have lower death rates than blacks.113 Age differences also overshadow racial differences in unemployment rates: Blacks in the twenty-four- to forty-four-year-old brackets have consistently had lower unemployment rates than whites under twenty — every year for decades,114 even though whites as a group have lower unemployment rates than blacks as a group. In short, the impact of age on statistical data is so great that to compare groups without taking age into account is like comparing apples and oranges. Yet “affirmative action” comparisons of group “representation” almost invariably ignore age differences.
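
The reversal described in this paragraph, lower aggregate rates alongside higher rates within every age bracket, is a standard aggregation effect. A minimal sketch in Python, with invented rates and age mixes, shows how it arises:

    # A minimal sketch of how aggregate rates can reverse age-specific rates
    # when groups differ in age mix. All rates and mixes are hypothetical.

    # Invented age-specific death rates per 1,000: within EVERY bracket,
    # group W has the lower rate. (The same story works for unemployment.)
    death_rate = {"young": {"W": 1.0, "B": 1.5}, "old": {"W": 30.0, "B": 35.0}}

    # Invented age mixes: group B is much younger than group W.
    age_mix = {"W": {"young": 0.5, "old": 0.5}, "B": {"young": 0.8, "old": 0.2}}

    for group in ("W", "B"):
        overall = sum(age_mix[group][age] * death_rate[age][group]
                      for age in ("young", "old"))
        print(group, "overall death rate per 1,000:", round(overall, 1))

    # W: 0.5*1.0 + 0.5*30.0 = 15.5;  B: 0.8*1.5 + 0.2*35.0 = 8.2.
    # Group B's overall rate is lower although its rate is higher in every
    # bracket, purely because of its younger age distribution.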

Ages are important in another way related to “affirmative action” data. When prospective equality of opportunity is measured by retrospective results during a period of increasing opportunity, the gross statistics lump together different age-cohorts subject to the increased opportunities for varying proportions of their work careers — ranging from zero to one hundred percent. Older people whose careers began when there was less opportunity — or even total exclusion from some occupations — will have correspondingly less “human capital” with which to compete with their age peers in the general population. Younger members of the same ethnic group will be less handicapped in this respect, if opportunities have been increasing. Even if the ideal of equal prospective opportunity were achieved, retrospective data would not show statistical parity until decades later, after all members of the older age-cohorts had passed from the scene. This is more than a theoretical point. Black income as a percentage of white income is progressively higher in younger age brackets,115 and while the rate of return on education is lower for blacks than whites, the rate of return is slightly higher for younger blacks than for their white counterparts.116
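
A minimal sketch in Python shows the cohort arithmetic behind this point. The career length, the timing of the reform, and the use of post-reform career share as a proxy for earning power are all simplifying assumptions:

    # A minimal sketch of why aggregate income ratios lag behind equalized
    # opportunity. All parameters are simplifying assumptions.

    career = 40   # assumed length of a full working career, in years
    t = 20        # assumed years elapsed since opportunity was equalized

    # Cohort c has been working c years (c = 1..career). Its relative earning
    # power is modeled, crudely, as the fraction of its working life so far
    # that fell after the reform, a stand-in for accumulated human capital.
    cohort_parity = [min(t, c) / c for c in range(1, career + 1)]

    # Aggregate ratio of group income to the comparison group, with
    # equal-sized cohorts:
    print(round(sum(cohort_parity) / career, 2))   # about 0.84

    # Every cohort hired after the reform earns full parity, yet the
    # aggregate ratio stays well below 1.0 until t = 40, when the last
    # pre-reform cohort retires.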

Locational differences are another nonmoral variable having little relationship to the intentions of “society” but having a substantial impact on statistical data. No American ethnic group has income as low as one-half the national average, but two-to-one differences in incomes from one location to another exist, even within the same ethnic group. The 1970 census showed the average family income of blacks in New York State to be more than double the average family income of blacks in Mississippi. The average income of American Indians in Chicago, Detroit, or New York City was more than double what it was on most reservations. Mexican-Americans in the metropolitan area of Detroit earn more than double the income of Mexican-Americans in the metropolitan areas of Laredo or Brownsville, Texas.117 Given the size and regional diversity of the United States, the geographic distribution of ethnic groups affects the statistical averages that are often blithely quoted, with as little regard for geographic as for demographic differences. Each ethnic group has its own geographic distribution pattern, reflecting a variety of historical and cultural influences,118 and having little to do with the intentions of “society.” Some indication of the combined influence of age and location is that young black working couples living outside the South had by 1971 achieved the same income as their white counterparts in the same region.119 The disbelief and even denunciation which greeted publication of this fact indicates something of the vested interests that have built up in a different vision of the social process — and in programs built on that vision. Subsequent studies have reinforced the finding of income parity among these black and white younger age-cohorts with similar cultural characteristics.120

The point here is not that all is well. Far from it. The point is that both causal determination and policy prescription require coherent analysis, rather than gut feelings garnished with numbers. Many of the hypotheses behind “affirmative action” are not unreasonable as hypotheses. What is unreasonable is turning hypotheses into axioms. The preference for intentional variables (“discrimination”) has virtually excluded systemic variables (age, location, culture) from even being considered. The practical consequences of this arbitrary theoretical exclusion extend far beyond the middlemen — employers — to much larger and more vulnerable groups, notably ethnic minorities themselves. Every false diagnosis of a condition is an obstacle to improvement. When recent studies show the still substantial black-white income differences to reflect conditions that existed before the younger age-cohorts ever reached the employer — reading (or nonreading) habits in the home, education, etc.121 — this has implications for the effectiveness of programs which (1) postulate that discrepancies discovered at the work place are due to decisions made at the work place, and (2) establish legal processes centering on the work place.

The effects of “affirmative action” programs have been taken as axiomatically as the premises behind them. In fact, however, studies have found little or no effect from affirmative action in advancing ethnic minorities, in either incomes or occupations.122 In some particular places — prominent firms, public utilities, and others especially in need of appeasing federal administrative agencies — there have been some changes. But overall, the economic position of minorities has changed little since “goals and timetables” (quotas) became mandatory in December 1971.

The ineffective record of “affirmative action” policies is in sharp contrast with the record of “equal opportunity” laws in the years immediately preceding. After passage of the Civil Rights Act of 1964 — and before quotas in 1971 — black income as a percentage of white income rose sharply, and the proportions of blacks in white collar, skilled, and professional occupations rose as well.123 One reason for the difference was the different set of incentives presented by the two policies. “Equal opportunity” laws provided penalties for specifically proven discrimination. “Affirmative action” laws penalized numbers that disappointed administrative agencies, and made defenses against “rebuttable presumptions” costly and uncertain.

It might appear at first that “affirmative action” penalties — costs — were “stronger” (higher), but not when costs are recognized as opportunity costs, the difference between following one course of action rather than another. The general unattainability of many quotas means that penalties fall equally on discriminatory employers and nondiscriminatory employers. A discriminatory employer therefore has little to gain by becoming a nondiscriminatory employer, when the characteristics of the target population (age, education, etc.) insure that he will be unable to fill quotas anyway. Moreover, the ease with which a discrimination case can be made makes minorities and women more dangerous employees to have, in terms of future prospects of lawsuits if their subsequent pay, promotions, or other benefits do not match those of other employees or the expectations of administrative agencies. As in the case of other groups with special rights, as noted in Chapter 5, these rights have costs to the recipients themselves. In short, “affirmative action” provides opposing incentives to hire and not hire minorities and women. It is not surprising that it has been less effective than “equal opportunity” laws which provide incentives in only one direction.
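
The opportunity-cost comparison in this paragraph reduces to simple arithmetic, sketched below in Python with a hypothetical penalty figure. What matters to a discriminatory employer is the difference in expected penalties between changing his behavior and not changing it:

    # A minimal sketch of the incentive comparison described above.
    # The penalty figure is hypothetical.

    def incentive_to_stop(cost_if_discriminating, cost_if_not):
        # What an employer gains by ceasing to discriminate.
        return cost_if_discriminating - cost_if_not

    PENALTY = 100_000  # hypothetical penalty

    # "Equal opportunity" enforcement: penalty only for proven discrimination.
    print("equal opportunity:", incentive_to_stop(PENALTY, 0))        # 100000

    # Quota enforcement where demographics make the quota unattainable
    # either way: the same shortfall penalty falls on both kinds of
    # employer, so ceasing to discriminate gains nothing.
    print("unattainable quota:", incentive_to_stop(PENALTY, PENALTY))  # 0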

Because “affirmative action” policies apply also to women, it should be noted that there has been a similar unwillingness to look beyond gross statistics for obviously systemic variables having little to do with intentional discrimination. With women the key variable is marriage. Even before “affirmative action” quotas, women in their thirties who worked continuously since high school earned slightly more than men in their thirties who worked continuously since high school.124 In the academic world, where many discrimination charges have been filed under affirmative action, female academics earned slightly more than male academics when neither was married125 — again, even before “affirmative action” — and unmarried female Ph.D.’s who received their degrees in the 1930s and 1940s became full professors in the 1950s to a slightly greater extent than did unmarried male Ph.D.’s of the same vintage.126 In short, the male-female differences in incomes and occupations are largely differences between married women and all other persons. Sometimes this is obscured in data for “single” women, many of whom are widowed, divorced, or separated — that is, have had domestic and maternal handicaps in pursuing their careers. The clear-cut income parity (or better) among women who never married suggests once again that systemic variables have more to do with the statistics than the intentional decisions at the work place at which the statistics were collected.

SCHOOL INTEGRATION

The 1954 Supreme Court decision in Brown v. Board of Education set in motion a chain of events that has resulted in a bitter controversy over what one side has characterized by its hoped-for results as “racial integration” in the public schools, and what the other side has characterized by its institutional mechanisms as “forced busing.” Racial integration, in turn, sometimes implied more than statistical mixtures, and suggested at least some improved sense of mutual regard. Forced busing referred to busing categorically imposed by higher — more remote — authorities (usually appointed judges) on locally elected officials, parents, and children, as distinguished from such busing as the latter might voluntarily choose for themselves as incrementally justified by the benefits.

The Brown decision was historic in many respects. It outlawed as unconstitutional a whole political and legal pattern of racial segregation in the South, extending far beyond public schools. It pitted the Supreme Court against the whole political structure of that region for many years, and indeed put the court’s general credibility and general effectiveness at stake on this particular issue. Had the Supreme Court been defied with impunity on this issue, its ability to enforce its other decisions in other areas could have been permanently jeopardized. Last but by no means least, it was the beginning of the era of Chief Justice Earl Warren and the increased judicial activism of the Supreme Court under his leadership. The high political and judicial stakes in the Brown decision are an integral part of the story of how school desegregation metamorphosed over the years into compulsory school busing to achieve prescribed racial proportions.127 Even before the case was decided, Justice Frankfurter pointed out the great danger in a decision that might affirm a principle but be mocked in practice, through local defiance and evasion.128 An immediate and categorical test of strength was avoided by announcing in the decision itself a delay for rehearings, followed by the conclusion after rehearing that the decision was to be implemented “with all deliberate speed” — i.e., incrementally, as political “realities” permitted. This highly unusual legal procedure129 permitted lower courts and the Supreme Court to test the waters before proceeding, to assess and to some extent accommodate local circumstances, especially in the South. It also permitted time for opinion leaders to mobilize public support for “the law of the land,” given that the high stakes included the basic legal framework of the nation and not simply the school system or even race relations alone.

Whatever the strategic merits of this approach, it also had momentous other consequences. It made the Supreme Court a party to an ongoing adversary relationship with institutions over which it was established to have jurisdiction and to make rulings impartially. Moreover, it was a virtual invitation to evasions and delay, in as many forms as human ingenuity could devise. This in turn meant that the courts had to monitor in detail the laws, plans, regulations, and organizational patterns of institutions ranging all the way down to local school boards. Courts had to go beyond defining legality to determining “good faith.” Among the evidences of good faith were the numbers of black children actually integrated into white schools — numbers that were often zero in some Southern states. For about a decade after the Brown decision, racial segregation by the state public schools remained entrenched in the Deep South.

As time went on, it became clear that courts could effectively enforce their orders on other institutions, that local, state or — if necessary — national government officials would use police or troops to prevent “the law of the land” from being openly defied. Time also permitted the most bitter opponents of racial desegregation to withdraw their children from public to private schools, or to move out to all-white suburban areas, weakening the effective opposition. As the balance of political power turned against their adversaries who had frustrated them for so long, the courts began to issue more and more sweeping orders, involving the courts more and more in the detailed operations of school systems.

Initially, the Brown decision required no more than that the state could no longer use race in assigning children to schools. This was reaffirmed in a later (1963) case where “racial classifications” were “held to be invalid.” This position also appeared in the 1964 Civil Rights Act, which defined “desegregation” as the assignment of public school pupils “without regard to their race, color, religion, or national origin,” and specified that it did not mean assignment “to overcome racial imbalance.”130 Indeed, such language appeared repeatedly in various provisions of the Civil Rights Act and in the congressional debates preceding its passage.131 The congressional intent was, however, turned around in decisions by administrative agencies. The U.S. Civil Rights Commission urged upon the U.S. Office of Education the use of guidelines for the receipt of federal money by school districts, which required that the districts not merely “disestablish” segregated schools but achieve “integrated systems.” These recommendations were acted on in administrative guidelines issued in 1966.132 That same year, the Fifth Circuit Court of Appeals explicitly declared that the “racial mixing of students is a high priority educational goal.”133 This interpretation was unique to the Fifth Circuit, but the Supreme Court reversed the contrary interpretations of other circuits, obliquely establishing the Fifth Circuit decision as a precedent.134 In short, a decision by administrative agencies in effect reversed congressional legislation,135 and an appellate court’s endorsement of that philosophy created a new “constitutional” requirement with neither congressional nor voter sanction and with no such requirement to be found in the Constitution. As a dissenting judge observed:

The English language simply could not be summoned to state any more clearly than does that very positive enactment of Congress, that these so-called ‘guidelines’ of this administrative agency… are actually promulgated and being used in opposition to and in violation of this positive statute.136

Such sweeping changes in policy by oblique means are difficult to explain as the actions of legal institutions impartially carrying out judicial functions, but are much more understandable as actions against long-time adversaries now being routed.

In the 1968 case of Green v. County School Board, the Supreme Court declared unconstitutional a “free choice” enrollment plan because there was now an “affirmative duty” to eliminate dual school systems “root and branch.”137 As in other areas, prospective equality of opportunity was tested by retrospective results. Because only about 15 percent of the black children had chosen to transfer to the formerly all-white school and no white children had chosen to transfer to the all-black school, there was not a desegregated or “unitary” school system, according to the Supreme Court.138 The Green decision was as different from the Brown decision as the two colors in their titles. Brown required pupil assignment without regard to race and Green required pupil assignment specifically with regard to race, so as to eliminate statistical imbalances in the results. Yet the Supreme Court treated the 1968 decision as logically derived from the 1954 decision, though no such derivation was explained — the 1954 decision being only mentioned but not quoted. The Green decision has been aptly characterized as “a masterwork of indirection” and “a rarely equalled feat of sophistry.”139 The court simply pushed on from one victory to a further objective, in the manner of other unconstrained institutions continuing in a given direction, in disregard of diminishing or negative returns.

Under the Supreme Court umbrella provided by the Green decision, lower courts began requiring massive busing,140 not only where there had once been legally segregated school systems,141 but where there had never been legally separated school systems,142 or even in places where racial segregation was forbidden by state law antedating the Brown decision.143 Ability-grouping within schools was sometimes struck down because its statistical effects were different for blacks than for whites, and the assignment of teachers by race was upheld, along with the firing of white teachers who enrolled their own children in private schools.144 Only with Milliken v. Bradley in 1974 did the Supreme Court put a limit on how widely a court could require busing. By a five to four decision, it overruled a lower court’s order to bus between Detroit and its suburban school districts — an area as large as the state of Delaware and larger than the state of Rhode Island.145 Still, the general principle of interdistrict busing was not repudiated,146 and there was no reversal of the trend toward massive and pervasive retrospective court monitoring of the behavior of school officials, including placing the burden of proof on them to establish their innocence once purely statistical disparities had been accepted as prima facie evidence.

The ability of the courts to supersede the authority of other institutions is not the same as the ability to achieve the social results aimed at. The spread of court-imposed busing has been followed by massive withdrawals of white children from the affected schools,147 increased racial polarization among the remaining “integrated” students,148 heightened violence,149 and opposition to busing by both the black and white populations at large.150 None of this constitutes effective feedback to the Supreme Court, whose members have lifetime appointments. Legislative attempts to prevent compulsory busing to achieve racial statistical balance have been turned aside by the Supreme Court by simply denying that the courts are seeking statistical balance151 (though statistical imbalance is their operational definition of “segregation”), thereby implying that the law does not apply to the cases at hand.

The supposed educational or psychological benefits of school desegregation for black children have proved elusive, though many studies have been made to try to find them,152 and some studies have triumphantly announced finding such benefits only to have the data evaporate when challenged.153 The original premise of the historic Brown decision — that separate schools are inherently inferior — was neither supported by fact nor able to stand up under scrutiny. Within walking distance of the Supreme Court was an all-black high school whose eighty-year history prior to Brown denied that principle. As far back as 1899, it had higher test scores than any of the white schools in Washington,154 and its average I.Q. was eleven points above the national average in 1939 — fifteen years before the Supreme Court declared such things impossible.155 There have been other such black schools elsewhere, and indeed NAACP attorney Thurgood Marshall in the Brown case was a graduate of such a school in Baltimore.156 The history of all-Oriental and all-Jewish schools would reduce this ponderous finding to a laughingstock, instead of the revered “law of the land.”

There was never a serious question whether black schools in general had lower average performances than white schools in general. What was at issue was the cause. A long history of highly unequal financial support for black and white schools led some to attribute the educational difference to this — but the Coleman Report157 data showed (1) how little difference there was between black and white schools in this regard by the mid-twentieth century, and (2) how little difference financial resources or other characteristics of schools made in educational performances. Obvious genetic differences between blacks and whites led others to attribute educational differences to this,158 but data on various European ethnic groups at a comparable stage of their social evolution in American schools showed I.Q.’s similar to — and in some cases, lower than — those of blacks, even though those European ethnic groups’ I.Q.’s have now reached or surpassed the national average.159 One of the problems in comparing any given group with the “national average” is that the national average is itself simply an amalgamation of highly varying individual and group averages. A national average of 100 may, for example, emerge from groups averaging 90 and 110, so that no group actually matches it. A group may therefore vary greatly from the national average without being in any way unique.

Again, as in the case of “affirmative action,” systemic explanations (residential concentration, cultural orientation, etc.) of such social phenomena were discounted in favor of intentional explanations (“segregation,” “discrimination,” etc.), even though black academic performance was not historically unique either in kind or degree. Huge statistical disparities existed among school performances of children from different cultural groups in the past, even when all the groups were white. As of 1911, for example, the proportion of Irish children in New York City who finished high school was less than one-hundredth the proportion among Jewish children,160 and the Italians did less well than the Irish.161 Schools that were 99 percent Jewish were not uncommon, and attempts to bus the Jewish children from such schools to less crowded schools in Irish neighborhoods across town were bitterly resisted by Jewish parents162 and the Jewish press.163 These earlier busing reforms from above were subject to feedback because they originated with elected officials, unlike later busing schemes initiated by courts and administrative agencies.

Institutional settings, incentives, and constraints are crucial to understanding the thrust and persistence of school “integration” or “busing” trends — especially as they have proceeded over the opposition of blacks as well as whites. In the 1960s, blacks were fairly evenly divided, with a slight majority opposed to busing.164 In later polls in cities like Detroit and Atlanta, where busing has actually been tried on a massive scale, blacks opposed it by two-to-one majorities.165 In the well-known Boston busing case, a coalition of dozens of black community groups urged Judge Garrity to minimize busing of their children,166 but neither he nor the NAACP Legal Defense Fund was deterred by such appeals. Indeed, the NAACP had gone against its own local chapters in Atlanta and San Francisco on school busing.167 The head of the NAACP Legal Defense Fund said that his organization cannot poll “each and every black person” before instituting legal proceedings,168 but this sidesteps the larger question of why the organization proceeded in a direction opposed by blacks in general. The answer may be instructive, not only as regards the NAACP Legal Defense Fund but so-called “public interest” law firms in general. The financial costs of the NAACP’s litigation are not borne by its official clients but by third parties, “middle class blacks or whites who believe fervently in integration.”169 In short, “the named plaintiffs are nominal only”170 and the black population in whose name this is all done has little or no effective feedback. The NAACP lawyers “answer to a miniscule constituency while serving a massive clientele.”171

To the outside white world, and especially the mass media, the image of the NAACP officials and lawyers is that of “spokesmen” for blacks as a whole — though there is no institutional mechanism to make that so, and much public opinion evidence on both busing and “affirmative action” to contradict that image. Institutionally, neither blacks as a whole nor even the particular plaintiffs have any control over, or effective input to, NAACP leaders or lawyers. Here, as elsewhere, firms defined by hoped-for results as “public interest” law firms are institutionally simply law firms financed by third party interests. In the case of the NAACP, these third party interests are well insulated from the costs of their activities by the fact that their own children are enrolled in private schools. This includes direct participants in the school “integration” drive, like Thurgood Marshall and Kenneth B. Clark, political supporters like Senator Kennedy and Senator McGovern,172 and media supporters like Carl Rowan.173

The point here is not to make a categorical assessment of the NAACP. Such an assessment would undoubtedly include many valuable and heroic contributions of the NAACP in areas of crying injustices. The question at this point is the incremental movement of the NAACP, and whether that is in the area of diminishing or negative returns. One of the NAACP Legal Defense Fund’s staunch supporters and former officials recalls that by the mid-1960s “the long golden days of the civil rights movement had begun to wane”174 and that legal “tools had been developed which now threatened to collect dust”175 unless some new crusade was launched — as it was. Earlier, there was “simply too much else to do.”176 The progression from the urgent to the optional to the counterproductive is one already seen in other organizations with mandated jurisdiction and costs paid by third parties. There is no reason to expect the NAACP to be exempt from patterns discovered elsewhere under such incentives and constraints.

Where third party costs and benefits determine the actions of so-called “public interest” law firms, and where the administrative and judicial resolutions of the issues they raise are insulated from the feedback from those directly affected, then a major shift in political and legal power has occurred away from the actual experiences and desires of the general public and toward the beliefs and dreams of small self-anointed groups — and all this in the name of “democracy” and “the public interest.”

THE SPECIALNESS OF RACE

Racial preferences and antipathies theoretically might be — and historically have been — dealt with by the whole range of social processes and institutions. This plain fact can be expressed, on the one hand, by saying that racism pervades American society, or, on the other hand, by saying that race-based attitudes and behavior, which have affected mankind in every place and time, are handled with varying degrees of effectiveness by this society’s decision-making processes and institutions as well. For “racism” to be an empirically meaningful category, there would have to be a nonracist alternative somewhere. Pending this discovery, we are left with the age-old problem of judging institutions by how well they resolve the dilemmas that derive precisely from man’s limitations in knowledge, power, and morality. Presumably, God and the angels do not need institutions.

Clearly, one reason for treating race as special is the historic and traumatic experience of blacks, subject to slavery, discrimination, and degradation in American society. But even if this might justify a special policy for blacks, that is quite different from justifying a general principle, applicable wherever racial differences exist, and readily extendable — logically or politically — to nonracially-defined subsets of the population who choose to call themselves “minorities” (in open defiance of statistical facts in the case of women). This “unreflective extension of policies deriving from America’s racial dilemma to other areas”177 is one of the costs of decision making through those processes which by their nature make their decisions in general and precedent-setting terms. Political, administrative, and especially judicial processes tend to operate in this way. Not only does this “trivialize the historic grievances”178 which served as initial rationale; it multiplies the cost of any resolution of race problems by creating principles applicable beyond the special case used to justify them.

Even within the area of race, it is by no means clear that all historic grievances have a remedy, or who specifically should pay the cost of such remedies as might be attempted. If the purpose is to compensate the pain and suffering of slavery, those most deserving of such compensation are long dead. If the purpose is to restore their descendants to the position the latter would now occupy “but for” the enslavement of their ancestors, is that position the average income, status, and general well-being of other Americans or the average income, status, and general well-being in their countries of origin? The former implicitly assumes what is highly unlikely — a voluntary immigration comparable to the forced shipment of blacks from Africa — and the latter raises the grotesque prospect of expecting blacks to compensate whites for the difference between American and African standards of living. If what is to be compensated is the unpaid economic contribution of slave ancestors to American development, this is an area in which controversies have raged for centuries over the effects of slavery on the American economy — not merely over its magnitude, but over whether slavery’s contribution was positive or negative.179 Without even attempting to resolve this continuing dispute among specialists, it can be pointed out that the case for a negative effect can hardly be dismissed a priori. The South was poorer than the North even before the Civil War, and those parts of the South in which slaves were most heavily concentrated have long been the poorest parts of the South, for whites as well as blacks. Compensation based on the economic contribution of slavery could turn out to be negative. Would anyone be sufficiently devoted to that principle to ask blacks to compensate whites? Or is this simply another “results-oriented” principle, taken seriously only when forwarding some other purpose?

If the basis for special or compensatory treatment of blacks is simply a desire of some segment of contemporary white society to rid itself of guilt for historic wrongs, the question arises as to why this must be done through institutions which extend the cost to other — perhaps much larger — segments of the society whose ancestors were not even in the United States when most of this happened, or were in no position to do anything about it. Even the argument that they or their ancestors were passive beneficiaries of racial oppression loses much of its force when it is unclear that there were any net social benefits beyond the immediate profits of a tiny group of slave owners. If there were ever any net social benefits, it is questionable whether they survived the Civil War, whose costs seemed to confirm Lincoln’s fear that God’s justice might require that the wealth from “unrequited toil shall be sunk” and “every drop of blood drawn with the lash shall be repaid by another drawn with the sword.”180

Individual compassion or a sense of social responsibility for less fortunate fellow men does not depend upon theories of guilt or unjustified benefits, but without such theories it is harder to justify compulsory exactions upon others. Nor do the others accept such exactions without resentment: some “find it just a bit ironic when they demand that we feel guilty for what their ancestors did to the blacks…”181 Moreover, specific compensatory activities may be opposed by the intended beneficiaries themselves — as in public opinion polls which have repeatedly shown a majority of blacks opposed to quotas.182 So it is not clear that guilt-reduction activity is a net social gain. The reduction of guilt, or the expression of social and humanitarian concern, can take place through any number of voluntary organizations, which have in fact made historic contributions to the advancement of black Americans.183

The question of who is to pay compensatory costs often has a perverse answer where such costs are imposed through administrative or judicial processes which permit little or no effective feedback. If compensation were awarded in the generalized form of money, it might at least be possible to make the costs bear some relationship to ability to pay. But much of the compensatory activity takes the form of specific transfers in kind — notably, exemption from standards applied to other applicants for jobs, college admissions, etc. In this form, costs are borne disproportionately by those members of the general population who meet those standards with the least margin, and who are therefore most likely to be the ones displaced to make room for minority applicants. Those who meet the standards by the widest margin are not directly affected — that is, pay no costs. They are hired, admitted, or promoted as if blacks did not exist. People from families with the most general ability to pay also have the most ability to pay for the kind of education and training that makes such performance possible. The costs of special standards are paid by those who do not have such advantages. Among the black population, those most likely to benefit from the lower standards are those closest to meeting the normal standards. It is essentially an implicit transfer of wealth among people least different in nonracial characteristics. For the white population, it is a regressively graduated tax in kind, imposed on those who are rising but not on those already on top.

Where racial specialness extends beyond the historic black-white dichotomy, the anomalies are compounded. Americans of Oriental ancestry are often included in special categories. Biology and history may provide some basis for this, but economics does not. Chinese-Americans and Japanese-Americans have long earned higher incomes than white Americans. One-fourth of all Chinese employed in the United States are in the highest occupational category of professional and technical workers.184 Orientals have in years past suffered some of the most extreme discrimination and violence seen in America.185 Past discrimination in schooling, for example, is still visible in the high levels of illiteracy among older Chinese, so that despite the above-average education of Chinese-Americans, they also have rates of illiteracy several times that of blacks.186 No amount of favoritism to the son of a Chinese doctor or mathematician is going to “compensate” some elderly illiterate Chinese whose life has been restricted to working in a laundry or washing dishes in a restaurant.

The racial and ethnic mixture of the American population poses still more dilemmas for any attempt to establish institutionalized “special” treatment for race or ethnicity as defined in categorical terms. About half the total American population cannot identify their ethnicity, presumably because of its mixture.187 About 70 percent of black Americans have some Caucasian ancestor(s),188 and a leading social historian estimates the number of whites with some black ancestors in the tens of millions.189 Trying to undo history in this population is like trying to unscramble an egg. Doing justice to individuals in our own time may be more than enough challenge.

CRIME

Criminal law is basically a process for transmitting and evaluating knowledge about the guilt or innocence of individuals suspected of crime. It is also a process for transmitting to actual and potential criminals effective knowledge of the costs of their crimes to others, and the willingness of those others to shift those costs back, in the form of punishments, to the criminals who created them.190 There are costs to the transmission of knowledge of individual guilt or innocence to the legal system, costs to individual defendants caught up in that system, costs to convicted criminals, and of course costs to the victims of crimes and to the general public whose anxieties and precautions against crime are very real costs, whether expressed in money or not. Ideally, the sum of these costs is to be minimized — though not necessarily any one cost in isolation.

In an ideal legal system, the costs of determining guilt or innocence would be held close to the minimum costs of gathering information and determining its veracity to some acceptable level of probability — “beyond a reasonable doubt” in the case of guilt, and to whatever level of probability would socially justify dismissing charges or discontinuing the investigation if the defendant or suspect appeared to be innocent. Since these costs are positive — indeed, substantial — even an ideally functioning legal system would not wholly eliminate crime, but there would be some optimal quantity of crime191 based on costs of knowledge, costs of precautionary measures, and the inconveniences imposed on innocent parties as a result of rules, arrangements, investigations, and suspicions incident to crime-prevention or crime-detection. While the concept of an “optimal” quantity of crime may be uncomfortable, it is also clear that no one is prepared to devote half the Gross National Product to stamping out every residual trace of gambling. Nor are we even prepared to reduce the murder rate at all cost — when that would mean such stringent administration of homicide laws and such low levels of proof required for conviction as to cause some physicians to avoid accepting some or all patients who might die while under their care. There would be no social gain from allowing thousands to perish needlessly for lack of timely medical care, in order to reduce murders by one hundred. Obviously, no one would advocate going to such extremes regarding gambling, murder, or any other crimes, but the point here is to indicate the reasons why — reasons that apply, to some degree, across a much wider range of situations.
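The logic of this trade-off can be sketched in rough notation; the symbols are purely illustrative, not drawn from any source cited here. Let $e$ stand for the level of crime-control effort, $K(e)$ for the costs of gathering and verifying knowledge at that level of effort, $P(e)$ for the precautionary costs and inconveniences imposed on innocent parties, and $D(e)$ for the damage done by whatever crime remains undeterred. Total social cost is then

$$C(e) = K(e) + P(e) + D(e),$$

where $K$ and $P$ rise with effort while $D$ falls. The cost-minimizing level of effort $e^*$ satisfies $K'(e^*) + P'(e^*) = -D'(e^*)$: effort is carried only to the point where the marginal cost of further knowledge and precaution equals the marginal damage thereby prevented. Because those marginal costs are positive, and typically rising, $e^*$ stops short of the effort needed to eliminate crime entirely, and the remaining crime $D(e^*)$ is positive, “optimal” only in this cost-minimizing sense.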

In crime control, as in other social processes, decisions and evaluations must be incremental rather than categorical. It is pointless to argue that this or that action will or will not stop this or that crime.192 Nothing short of capital punishment will stop even the individual criminals already caught and convicted, much less others, and no one is prepared to use capital punishment for all crimes. The balancing of social costs implied by incremental decision making on crime control includes costs to all parties, including criminals. Virtually no one is prepared to impose unlimited costs — penalties — for petty crimes or disproportionate penalties even for serious ones. Costs (penalties) are imposed on criminals to reduce the costs they impose on others. If a wrist slap would deter murder, then that would be the socially optimal punishment, in the sense of minimizing the total social costs associated with crime. The argument for some harsher punishment is that a wrist slap will not reduce murders as much, if at all. That is, minimizing the costs to criminals is not minimizing social costs but only externalizing more costs to victims.
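The same accounting applies to the severity of punishment, again in purely illustrative notation. If $s$ is severity, $B(s)$ the burden borne by punished criminals, and $V(s)$ the cost borne by victims of whatever crime that severity fails to deter, total social cost is $B(s) + V(s)$, with $V$ falling as $s$ rises. A wrist slap is optimal only in the special case where $V$ would be unaffected by anything harsher; otherwise, choosing $s$ to minimize $B$ alone does not reduce the total cost but merely shifts more of it onto victims, which is the externalization just described.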

Changes in the criminal law change the effectiveness with which knowledge can be transmitted to those deciding innocence or guilt, to criminals contemplating crime, and to the voting public assessing their experience and assessing the protection offered — or not offered — by the criminal justice system.

There are many sources of knowledge, and the behavior of legal authorities puts a higher or lower cost on its transmission or effectiveness. The simple knowledge that a crime has been committed can vary in its availability to the criminal justice system according to the costs imposed on victims, witnesses, or informants. The costs of reporting rape can obviously be increased or decreased substantially by the way police respond to rape victims, by the way opposing attorneys are permitted to cross-examine the victim in court, and by the likelihood that a convicted rapist will be either turned loose soon (perhaps to retaliate against the plaintiff or witnesses) or given a retrial on a technicality. In the landmark Mallory rape case,193 for example, the retrial ordered on appeal was the same as an acquittal, because the victim could not bear to go through the emotional trauma again. The abstract knowledge of guilt — from the defendant’s confession as well as the victim’s accusation — was not socially effective knowledge. Rape is a dramatic and readily understood example of a crime whose very existence becomes socially effective knowledge only at costs determined by the behavior of legal institutions. But the same principle applies more generally, and includes laws and practices regarding publication of the identity of informants or the addresses of plaintiffs and witnesses.

The interpretation and administration of rules of evidence also control or restrict the flow of knowledge necessary to determine innocence or guilt. American law is unique in the extent to which it excludes evidence.194 Evidence can be excluded either because it is considered qualitatively less certain than other evidence, or because of the procedures by which it was obtained. Information that is incrementally less certain is often treated as categorically nonexistent under “hearsay” exclusionary rules in Anglo-Saxon law, though the same quality of evidence could be heard in courts in other Western countries or in Japan.195 “Hearsay” does not mean simply gossip, but includes many official documents whose authenticity and veracity are unchallenged.196 In addition to directly reducing the flow of knowledge into the criminal justice system, Anglo-Saxon “hearsay” rules have been held “responsible for most of the procedural quibbling that takes up so much time in American and British courts.”197 By adding to court congestion and trial delay, these rules indirectly reduce the flow of knowledge in other cases as well.

One of the most important ways in which knowledge is screened out of the criminal justice system is by excluding it from trial, or by reversing the conviction in the appellate courts because it was not excluded. Evidence acquired without following minutely prescribed procedures can be excluded without regard to how accurate, verifiable, or relevant it may be. The great fear behind this initially was that police would beat confessions out of innocent people, a practice that would reduce the reliability of confessions as well as being a crime in itself. But even after coerced confessions were ruled inadmissible, the Supreme Court went further to exclude independent evidence of guilt, if that evidence was found as a result of information obtained from a coerced confession. The meaning of “coercion” was also expanded from physical beatings to psychological pressures to “unnecessary” detention to police failure to describe all the suspect’s legal options.198 There may be enough independent evidence to convict a murderer, if his confession leads police to the scene of the crime, where they find the corpse and the murder weapon with the defendant’s fingerprints all over it — but all of this evidence must be discarded by the criminal justice system if the original confession was procedurally incorrect.199 Even the British do not go nearly that far.

In short, the social costs of effective knowledge of guilt or innocence are multiplied by the restrictions placed on gathering the knowledge in the first place, and by the many ways of having the effectiveness of the knowledge cancelled by appellate courts. It is the same net result if costs of knowledge are directly tripled or if only one-third of the knowledge gathered survives the screening processes involved in restrictive rules of evidence, procedural technicalities, and the exhaustion of witnesses through delays and retrials.

In criminal law, as in other social processes, there are inherent constraints of circumstances and human beings, and these constraints entail trade-offs. The repugnance and pain which a conscientious person feels at the thought of imprisoning or executing an innocent man, or letting a guilty sadistic murderer go scot-free back into society on a technicality, in no way removes the constraints or eliminates the necessity of trade-offs. The ideal of “a government of laws and not of men” implies an established process rather than ad hoc judgments of what is right in each case. Inherent in this are deviations between the particular consequences of a systemic process and the individual results most in accord with the principles that the process was meant to embody. The more effective the legal processes, the smaller are these deviations, but in any process conceived and carried out by human beings there will be deviations — and in some cases, extreme deviations. Legal systems try to reduce these extreme deviations by allowing appellate courts to review cases. But to some extent this recreates the original dilemmas of trial court systems at the appellate court level.

If appellate courts are to be part of a coherent legal system, rather than arbiters armed with power to decide each case anew in whatever way they choose, then what is decided in one case must be part of a legal pattern applicable to other cases with similar objective factors involved. What is decided in extreme cases becomes a precedent for other cases. In this kind of social package deal, often “hard cases make bad law” for the future. For example, blatant racial bias in trials and sentencing in some cases in some states may cause the whole federal legal system to involve itself in the minute details of state courts in all states.200 As a result, a white, Anglo-Saxon criminal caught in the act in California may go free because of legal procedures created when an innocent black was railroaded to jail by an all-white jury in Mississippi. Appellate courts can adjust the application of their decisions to some extent, but there are limits to how far this can go and still retain the rule of law and the role of appellate courts as rule-making organizations, rather than roving commissions with sovereign powers to decide each case as they please. This is neither a criticism nor a defense of appellate courts, but simply an indication of the momentous legal trade-offs involved.

The Constitution of the United States limits how far these trade-offs can go in one direction — that is, how high the cost can go for a criminal defendant, or even for a convicted criminal. There are no comparable limits on the costs which the legal system can impose on a crime victim seeking to prosecute the criminal. In the case of rape victims these costs are obvious not only for the victim, but also for the larger society, which has its own interests in keeping rapists off the street. But there is no victim’s counterpart to the defendant’s constitutional protections against double jeopardy, self-incrimination, or cruel and unusual punishment. In particular, the right to a speedy trial applies only to the defendant, not to the victim or to witnesses who can become exhausted, disgusted, fearful, or forgetful in crucial details as repeated trial delays stretch out for months or even years. Indeed, victims or witnesses may die or move out of state as legal processes drag on, quite aside from the financial losses imposed in taking off from work repeatedly to go to court for a trial that is again and again postponed at the defendant’s request. Criminal lawyers are well aware of the advantages of sheer delay in wearing down plaintiffs and witnesses, or even a district attorney with a limited budget and limited time. In short, “due process” has a social cost, and that cost can — in particular cases — rise to levels which in effect negate the law in question. This may or may not be inherent in any form of constitutional law. What is important here is to be aware of such cost relationships — the central reality of trade-offs — as we turn from this brief static sketch of criminal law and appellate courts to a consideration of trends in criminal law in recent decades. These include trends in crime rates, in arrest procedures, in trials, and in appeals.

CRIME RATES

Crime rates per 100,000 persons more than doubled during the decade of the 1960s — whether measured by total crime, violent crime, or property crime.201 How much of this represents an actual rise in crime, and how much an increased reporting of crime, remains a matter of controversy. However, there is general agreement among people who agree on little else that murder has generally been accurately reported, since it is hard to ignore a corpse or someone’s sudden disappearance.202 This widely reported crime has also been rising dramatically. Murder rates in large cities doubled in less than a decade between 1963 and 1971. The probability that someone living his whole life in a large city today will be murdered is greater than the probability of an American soldier in World War II being killed in combat.203

Crime is no more random than any other social activity. Murder rates in the big cities are more than four times as high as in the suburbs.204 More than half of all serious crime in the United States is committed by youths from ten to seventeen years old.205 Moreover, juvenile crime rates are increasing faster than adult crime rates.206 The number of murders committed by sixteen-year-olds tripled in four years in New York City.207

These patterns have some bearing on popular explanations for crime. For example, crime has been blamed on “poverty, racism and discrimination”208 and on “the inhumanity of our prisons.”209 As already noted, poverty and racial discrimination (whether measured in incomes, education, or segregation laws) were greater in the past, and their continuing effects are more apparent among older blacks than the younger. Crime, however, is greatest among youthful blacks210 and hostility to police is greatest among upper income blacks.211 As for harsh punishment as a source of repeated crimes, (1) those persons arrested and released or acquitted are rearrested more often than those who are imprisoned212 and (2) the escalation of crime rates during the 1960s occurred while smaller and smaller proportions of people were going to prison — indeed, while the conviction rate was falling213 and the prison population was going down as the crime rate soared.214 Insofar as poverty, discrimination, and imprisonment are variables believed to be correlated with crime rates, the evidence refutes the hypothesis. Insofar as these constitute an axiom, it is of course immune to evidence.

The level and trend of American crime rates may be put in perspective by comparison with those of other nations. Murder rates in the United States have been several times those of comparable societies in Western Europe and Japan.215 Robbery rates are also higher.216 Crime rates in general are only moderately higher in the United States than in Europe,217 but it is in the violent crimes that the difference between the U.S. and other countries is greatest. For example, New York, London, and Tokyo have comparable numbers of inhabitants (Tokyo the most), but there are eight times as many murders in New York as in Tokyo,218 and fifteen times as many as in London.219 Intertemporal comparisons show a rise in crime rates around the world220 — with the notable exception of Japan. What is different about Japan may provide some factual basis for testing competing theories of crime control.

The rising murder rate in the United States is largely a phenomenon dating from the mid-1960s and continuing to escalate into the 1970s221 — a rise generally coinciding with the sharp dropoff in executions.222 It reversed a long-term decline in American murder rates. The absolute number of murders in American urban centers of 25,000 or more remained relatively constant from 1937 through 1957,223 even though the population in such centers was growing rapidly over that span.224 Urbanization, as such, apparently had not entailed rising murder rates. Demographic and socioeconomic changes in the population have been too gradual to account for the sudden reversal of a downward trend and its replacement by an escalating upward trend. The only apparent variable that has changed dramatically in the 1960s and 1970s has been the procedures and practices of the criminal law.

CRIMINAL LAW PROCEDURE

One of the basic questions about criminal law procedure is simply how much of it there is, in purely quantitative terms. In England, the longest criminal trial on record lasted forty-eight days.225 In the United States, there have been criminal trials in which the selection of a jury alone has taken months.226 In England the selection of a jury “usually takes no more than a few minutes.”227 A criminal trial length that would be “routine” in California228 would be record-breaking in England. The British example is particularly appropriate, not only because of general similarities between the two countries, but also because American law grew out of British law, the two countries have similar notions of fairness, and England is not regarded as either a police state or a place where innocent defendants are railroaded to jail.

Delays in American courts did not just happen. A procedural revolution in criminal law was created by the Supreme Court in the 1960s — the decade when crime rates more than doubled. Much attention has been focused on the specifics of these procedural changes — warnings to suspects, restrictions on evidence, etc. — but it is also worth noting the sheer multiplicity of new grounds for delay at every stage of criminal procedure, from jury selection all the way to appeals to the Supreme Court.

Contrary to a long legal tradition, the Warren Court interpreted the Fourteenth Amendment as applying many federal rights and practices to the states in general, and the state courts in particular.229 Quite aside from the question of whether this was justified constitutional interpretation, or even whether the specific federal practices were better or worse than existing state practices, this created dual channels of legal appeal, between which a defendant could go back and forth — repeatedly adjudicating each of numerous new rights in two whole systems of multiple courts. The lowest federal district judge could now overturn the decision of a state supreme court, and the federal courts in general now assumed jurisdiction over procedures used in state trial and appellate courts. Moreover, some of these newly discovered or newly created rights were made retroactive, so that a criminal could, for example, challenge a conviction on the ground that the state could not prove it had supplied him with a lawyer thirty years earlier — before the Supreme Court required states to supply lawyers, or to keep records of such things.230 Similarly, the Supreme Court’s 1968 ruling that it was unconstitutional to allow a jury to hear the unedited confession of a codefendant was made retroactive, and was then used in 1969 to overturn a 1938 felony conviction in which that had happened.231

The increased litigation made possible by the decisions of the Warren Court was litigation over procedures — not guilt or innocence. Premeditated murderers, witnessed in the act, were able to continue appeals for more than a decade without even claiming to be innocent, but merely challenging legal procedure.232 A murderer-rapist of an eight-year-old child, whose confession was corroborated by both evidence and other testimony, was set free by federal courts on procedural grounds — and the state courts forbidden to re-try him — even though his confession was found to be voluntary, the facts of the crime undisputed, and the evidence “overwhelming” in the judgment of the state supreme court.233 These procedural matters were nothing as serious as police beatings or even threats; they turned instead on fine legal points on which appellate judges themselves often divided four to three or five to four.

The social costs of the Warren Court’s procedural changes were not simply those particular instances of freeing dangerous criminals which outraged the public, but also included an exponential increase in litigation which backed up other criminal cases and necessitated plea bargaining. The number of state prisoners applying for writs of habeas corpus in the federal courts increased from less than 100 in 1940 to more than 12,000 in 1970.234 Nor were these cases newly discovered miscarriages of justice. A federal appellate judge observed:

For all our work on thousands of state prisoner cases I have yet to hear of one where an innocent man had been convicted. The net result of our fruitless search for a nonexistent needle in the ever-larger haystack has been a serious detriment to the administration of justice by the states.235

A California appellate judge likewise observed:

It is with almost melancholy nostalgia that we recall how only five years ago it was possible to sustain a judgment of conviction entered in such a clear case of unquestionable guilt and to accomplish it without undue strain. Today, however, the situation is vastly changed.236

While the extent to which procedural complexities and ambiguities impede criminal justice processes may be unique to the United States, elements of this trend have spread beyond the American borders. Even though the British courts do not exclude illegally seized evidence, and will not turn a felon loose merely because of police failure to follow procedural rules,237 there has been some movement in the direction of “the ‘Americanization’ of English criminal justice”:238 less chance of imprisonment,239 more lenient sentencing,240 more release into the community,241 and activities described by their hoped-for results as “rehabilitation” programs. How much things have changed in England may be indicated by the fact that in the 1930s a murder conviction meant a two-out-of-three chance of execution within two months,242 whereas in 1965 the death penalty for murder was abolished.243 Along with these American procedures have come American results — court congestion,244 delayed trials,245 and rising crime rates.246

British intellectuals, like their American counterparts, have been preoccupied with the presumed social causes of crime247 — the “root causes” in American intellectual terminology. The usually presumed social “causes” of crime — poverty, unemployment, and broken homes — are wholly uncorrelated with the rise in crime in Britain. There has been no increase in poverty or broken homes there, and there has been a reduction of income inequality and a “virtually nonexistent” unemployment rate in Britain during the period of rapidly increasing crime rates.248 The criminal justice system has simply become slower and more uncertain.

By contrast, the only major nation in which crime rates have been going down over the past generation is Japan, where more than 90 percent of all violent crimes lead to arrest and 98 percent of all defendants are found guilty. Plea bargaining is illegal in Japan,249 as it is in many other countries. The sentences are no greater in Japan,250 but the chance of getting away scot-free is less. Various supposed causes of crime — television violence, urbanization, crowding — are at least as prevalent in Japan as in the United States.251 There are, however, far more policemen per square mile in Japan than in the United States, though somewhat fewer relative to population.252 Yet there is no evidence that Japan has discovered the “root causes” of crime, much less eliminated them — or, indeed, is putting forth much effort in that direction.

Both international and intertemporal comparisons indicate that criminal law procedures affect crime in the way that common sense suggests: punishment which comes quicker and/or with higher probability deters more than punishment that can be delayed or evaded. The tendency of the Supreme Court in the Warren era has been to expand the number and scope of the grounds on which criminals can appeal — delaying (and thereby diluting) a given punishment, reducing the probability of conviction for the actual offense (more plea bargaining) and reducing the probability of being convicted at all. The fact that guilt becomes largely irrelevant when the police do not follow specified procedures allows corrupt policemen to convey legal immunity to criminals by deliberately violating such procedures.253 The cost of groundless appeals to the criminal is zero if he has a lawyer supplied by the state or by third-party-financed (“public interest”) law firms. Even if he has to act as his own attorney, the costs are negligible if he is in jail with nothing else to do. The repugnant task of rationing justice is no less inescapable for its repugnance. Unless unlimited resources are available for criminal justice procedures — and congested courts imply that they are not — then one man’s right to appeal means a sacrifice of someone else’s right to a speedy trial and/or the sacrifice of innocent third parties victimized by the backlog of other criminals free on bail while awaiting trial in a congested court system.
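The deterrence argument here can be summarized in an equally informal way, with symbols that are illustrative rather than taken from any cited study. If $p$ is the probability that a crime leads to conviction for the actual offense, $S$ the punishment then imposed, and $T$ the delay before it is imposed, the prospective cost of the crime to the criminal is roughly the discounted expectation

$$E = p \cdot S \cdot \delta^{T}, \qquad 0 < \delta < 1,$$

where $\delta$ measures how heavily a distant penalty is discounted. Expanded grounds for appeal lower $p$; plea bargaining lowers both $p$ (for the actual offense) and $S$; procedural delay raises $T$. Each change reduces $E$, and therefore deterrence, even if the penalties on the books remain unchanged.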

In recent years criminal law procedures have often been viewed, not as social institutions for transmitting knowledge about guilt or innocence, but as arenas for contests between combatants (prosecution and defendants) whose prospects must be to some degree equalized. In particular, the power of the state is depicted as so disproportionate to that of the defendant that some kind of equalization is in order. There is even great concern for intracriminal equity — equalizing the prospects of criminals with varying sophistication to escape prosecution or conviction. If experienced criminals, gang members, and Mafiosi know how to “stonewall” police questions, then “elemental fairness”254 requires that similar sophistication be supplied by the government to less sophisticated criminals as a precondition for a guilty verdict to stand up in the appellate courts.255 To do otherwise, according to this view, is to “take advantage of the poor, the ignorant, and the distracted.”256 Thus, intracriminal equity supersedes criminal-victim equity in this formula — or rather, the second kind of equity is ignored. This is a special case of the “fair contest” approach, which emphasizes the great power of the government vis-à-vis the individual criminal. But to judge “power” by physical artifacts — numbers of officials, sums of money, quantity of weapons, etc. — is to ignore the relationship of those things to their intended objects. A motor that is far too powerful for a lawn mower may be grossly inadequate for a truck. The individual criminal need only be concerned with saving himself from conviction, while the government must safeguard a whole population from his acts and the acts of other criminals, and from the fears and precautions due to those acts. Empirically, the evidence is that criminals as a group are more than able to hold their own against the government. Few crimes in the United States lead to anyone’s being imprisoned.257

Intracriminal equity, like any form of equity, is equity only along a given dimension and conflicts with equity along other dimensions. For example, if people are to be paid according to an equitable principle of how much effort they put into their work, that conflicts with sharing equitably in the employer’s earnings, or receiving an equitable portion of total national output — quite aside from the conflict of equity in general with various economic and other principles. Intracriminal equity likewise cannot be extended indefinitely without conflicting with equitable considerations regarding the victims of crime or the public in general. However, no institutional mechanism forces federal appellate courts to weigh these other considerations. And because federal courts supersede all state courts, the latter — though elective and therefore subject to feedback — are bound by the federal precedent. In short, the only constraints on how far intracriminal equity can be carried are constraints the federal judges choose to impose on themselves. When a U.S. Attorney General and a Chief Justice of the Supreme Court both argue for judicial equalization of legal prospects as between less sophisticated criminals and more sophisticated criminals, so that “hardened underworld types” will not have an unfair advantage over “unwary”258 or “distracted”259 criminals, clearly intracriminal equity is a principle enjoying a vogue in high places. The principle has been extended well beyond the idea that a court must not create categorical inequities of its own to the idea that it must redress certain preexisting inequities in criminal endowments of sophistication in eluding the law. Since courts cannot equalize downward by reducing the cleverness of the most accomplished criminals, all that is left is to equalize upward by increasing the ability of less clever criminals to evade punishment for their acts — regardless of what that means in terms of equity to victims and the public.

Intracriminal equity extends even to groundless appeals. If privately paid lawyers make frivolous appeals based on unsubstantiated claims of “insanity,” then a court-appointed attorney who fails to do so for his client has, in this view, denied the client his constitutionally guaranteed right to counsel260 — a right expanded during the Warren Court years to mean free provision of counsel, whose conduct of the defense can then be retrospectively evaluated by appellate courts to insure that he attempted enough technicalities to satisfy their conception of “competent” representation. It is not that the appellate court actually found the defendant insane — or even regarded that as a likely possibility — but that it second-guessed the defense strategy of the court-appointed attorney and decided that an insanity plea was a tactic he might have tried. Such extrapolations and improvisations from the simple constitutional right to use a lawyer illustrate again the law of diminishing returns, and the tendency of unconstrained institutions to extend themselves past the point of counterproductive inputs from the standpoint of their mandated purpose — in this case, determining guilt or innocence and meting out justice.

PUNISHMENT

Trends in the punishment of criminals can be readily summarized: over the past generation, punishments for convicted criminals have become less common, less severe, and less honestly reported to the public. In the American legal system, punishment is less common than in the British legal system from which it evolved. California alone has six times as many robbers as England, but more people are in prison for robbery in England than in California.261 On paper, the United States has “the most severe set of criminal penalties in its law books of any advanced Western nation,”262 but they are seldom put into practice. Less severe penalties that are actually enforced have produced a long-term reduction of serious crime (including hard drug usage) in Japan, over the same decades during which American crime rates have been soaring. Studies in various American cities show that most felons with prior convictions are placed on probation rather than going to jail.263

Harsh penalties on paper and probation in practice are part of a more general pattern of duplicity. “Life” sentences in many states mean “eligibility for parole in three to five years.”264 “First offenders” include long-term criminals whose prior convictions are not technically admissible in court because of the age at which these crimes were committed. Supposedly successful “rehabilitation” programs have repeatedly been found on closer scrutiny to have been ineffective, or even counterproductive.265 These are not random divergences between theory and practice. They are systematic biases, overstating to the public the punishment being applied, or understating either the crime (reduced charges under “plea bargaining”) or the nature of the criminal (“first offender”). Concurrent sentences mean that there are no sentences for additional contemporaneous crimes. Parole boards mean that even the few sentences handed out in court are grossly overstated. So-called “supervised” probation or parole consists of “a 10- or 15-minute interview once or twice a month”266 between a criminal who is otherwise on his own and an official who, in two-thirds of felony probations, is responsible for more than one hundred cases at a time.267

These systematic biases in the transmission of knowledge insulate decision makers, advisers, and others who influence the criminal justice system from feedback from the actual experience of the public with the fruits of their decisions. Central to the duplicity and the insulation are vast differences between the beliefs of criminal law “insiders” and the public — and the determination on the part of insiders that public influence is to be minimized. It is a point of honor to have ignored “public clamor.” In short, criminal law decision making is insulated from feedback, not only institutionally but ideologically. No insulation is ever perfect, so that public outrage in some egregious cases that happen to come to light has occasional effect on the law. Nevertheless, the history of trends in criminal law over the past generation is essentially the history of intellectual fashions among a small group of theorists in law and sociology. These fashions include several key premises: (1) punishment is morally questionable, (2) punishment does not deter, and (3) sentences should be individualized to the criminal rather than generalized from the crime.

The moral questionability of punishment derives from the premise that “vengeance” is a “brutalizing throwback to the full horror of man’s inhumanity in an earlier time…”268 This argument from location in time is buttressed by claims that a personified “society” itself causes crime. According to this theory, “healthy, rational people will not injure others,”269 so that crime is the result of a social failure to create such people or to rehabilitate the criminal into becoming someone who “will not have the capacity — cannot bring himself — to injure another to take or destroy property.”270 Neither blueprints nor examples are provided. Moreover, these quotations are not from a sophomore term paper, but from a book widely hailed by legal scholars, practicing lawyers, and leading newspapers and magazines.271 In a similar vein, Chief Justice Earl Warren found crime “in our disturbed society” to be due to “root causes” such as “slum life in the ghettos, ignorance, poverty” and even — tautologically — the illegal drug traffic and organized crime.272 “Root causes” are prominently featured in this literature,273 and confidently spoken of as if they were well-documented facts, rather than arbitrary assertions at variance with the empirical relationship between the rising crime rates and reduced poverty and discrimination. The idea that people are forced to commit crimes by bad conditions of one sort or another also ignores thousands of years of history during which kings and emperors, raised in the midst of luxury, committed the most brutal atrocities against their subjects.

The argument that punishment does not deter takes many forms. At the most primitive level, failure of punishment to deter is claimed on the ground that various crimes — or crimes in general — have not been categorically eliminated. From this standpoint, the very existence of crime is proof of the futility of deterrence, for “criminals are still with us.”274 By parallel reasoning, we could demonstrate the futility of food as a cure for hunger, since people get hungry again and again despite having eaten. An old joke has a small child decrying baths as futile because “you only get dirty again.” Similar reasoning by a grown man who was also the top law enforcement officer in the country seems somewhat less humorous, though no less ridiculous.275

The meaningful issue is not categorical deterrence but the incremental effect of punishment on crime rates. It is easy to become bogged down in the question as to how much the environment is responsible for crime as compared to individual volitional responsibility. But even if we accept, for the sake of argument, that environment is largely responsible — or even solely responsible — it does not follow that punishment is futile, either incrementally or categorically. Punishment is itself part of the environment. The argument that environmental forces influence or control the incidence of crime in no way precludes punishment from being effective, though that theory has often been put forth for that purpose. This is ultimately an empirical rather than a philosophical question, but commitment to the social reform or “root causes” approach has meant that few legal or sociological theorists “are even willing to entertain the possibility that penalties make a difference.”276 Only in relatively recent years have there been a few serious statistical analyses designed to test the empirical question — and they have indicated that punishment does deter.277

Arguments for “individualizing” the punishment to the criminal, rather than generalizing punishment from the crime, presuppose a result rather than specifying a process. Whether or not such a result is desirable, the question must first be faced whether courts can in fact do it. Merely varying sentences is easy, but to do so in a manner related to the actual personalities of each criminal is neither easy nor necessarily even feasible. As noted in Chapter 2, formal institutions have great difficulty acquiring accurate knowledge about individual personalities. Everything in the criminal justice setting provides incentives for concealment and deception on the part of the criminal, his family and friends — i.e., those actually possessed of the fullest and most accurate knowledge. Banks tend to leave the financing of new small businesses to their founders and the founders’ family and friends for similar reasons. Courts are not institutionally constrained from speculating about personality traits, the way banks are constrained by the prospect of financial losses, but the social costs of such speculation can be even greater when courts rely on either mechanical criteria or psychological guesswork to “individualize” sentences. Moreover, so-called “individualized” sentences in practice mean reduced sentences. No psychological findings or other evidence will legally justify life imprisonment or execution for a petty thief, no matter what deadly personality characteristics are uncovered. It is a wholly asymmetrical process, and should be judged for what it is — one more way of reducing or eliminating punishment.

What are the social costs of this asymmetrical process of sentence-reduction? Insofar as sentences are reduced (or eliminated) to match the presumed personality of each offender, they do not convey as clear or as definite a deterrent message to others. Moreover, even to the individual criminal, they present punishment not as his fellow man’s assessment of the seriousness of his crime, but as a happenstance deriving from the personality or mood of a particular judge, or from the criminal’s own performance in impressing or maneuvering with psychiatrists, psychologists, or probation or parole officials. The social costs also include still more delay introduced into the courts, while all sorts of information, “findings,” and “recommendations” are assembled — a process that can go on for months, even in relatively simple cases where all this activity does not change the end result.

As in other cases of attempted “fine tuning” of social decisions, the question in criminal justice is not what decision we should make if we were God, but what decisions we can make effectively, given that we are only human, with all that that implies in terms of limitations of individual knowledge. Juvenile criminal sentencing is particularly subject to “individualizing” tendencies — in some states, it is solely the well-being of the individual young criminal that can legally be taken into account in the disposition of his case — and it is perhaps revealing that it is here, where crime rates have escalated especially rapidly, that the failure of the system is most apparent.

In the emotion-laden area of capital punishment, a recent study indicates several murders deterred for every execution.278 This conflicts with an earlier and cruder study, and the reception of the two kinds of studies by legal and social theorists is revealing. The earlier capital punishment study, by Thorsten Sellin, compared states with and without death penalty laws on their books.279 The later study, by Isaac Ehrlich, compared actual executions rather than laws seldom put into effect. Clearly it is the executions rather than the words in law books which constitute capital punishment, and the question of deterrent effects is a question about executions. It is Ehrlich’s study of actual executions that shows a deterrent effect. Yet the earlier and cruder study continues to be cited as proof that capital punishment is ineffective as a deterrent, while the later study is either ignored or subjected to far more critical scrutiny than the earlier.280 It is clear which conclusion is preferred by legal and social theorists, but the policy preferences of “experts” do not become empirical facts by consensual approval or by sheer repetition.

Virtually all researchers on both sides of the capital punishment controversy are agreed that there are problems inherent in the data281 and problems inherent in the choice of statistical techniques to analyze the data.282 The very definition of “murder” creates problems. Data are usually available on “homicide,” which includes accidental vehicular homicide and negligent manslaughter as well as murder and nonnegligent manslaughter, and “records are not generally separated according to the type of homicide committed.”283 No one expects the death penalty for first-degree murder to deter automobile accident fatalities, which are also included in the data being analyzed. Moreover, the drastic decline and — in some years — total disappearance of executions over the past generation284 create statistical problems due to small (or nonexistent) samples of one of the variables. There has been no period of history with both good data on first-degree murder and also a substantial number of executions. Finally, the period in which the death penalty declined and virtually disappeared was also a period when the risk of any punishment was declining. In short, there is no factual proof either way, despite the consensual dogma that capital punishment does not deter.
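Why small or vanishing samples of executions make the statistics inconclusive can be sketched in elementary regression terms — a simplified illustration, not a description of any particular study’s method. In a simple regression of murder rates on executions, the sampling variance of the estimated deterrence coefficient $\hat{\beta}$ is

$$\operatorname{Var}(\hat{\beta}) = \frac{\sigma^2}{\sum_{i}(x_i - \bar{x})^2},$$

where $x_i$ is the number of executions in period $i$ and $\sigma^2$ is the residual variance. As executions dwindle toward zero, the variation in $x_i$ — the denominator — shrinks, and the uncertainty of the estimate grows without bound, which is why such data cannot settle the question either way.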

As in other policy areas, however, the question is not what should be decided, but who should decide what should be done. Courts have largely appropriated that legislative function under the guise of “interpreting” the Constitution. What is far more clear is that a declining incidence of punishment in general (and capital punishment in particular) over the past generation — but especially during the 1960s — has been accompanied by a rising rate of violent crime in general, and murder in particular. International comparisons buttress this conclusion and are also consistent with the conclusion that it is not the words on law books which constitute deterrence. American laws are among the most severe in the Western world in theory and the least applied in practice, and the United States has far higher rates of violent crime (especially murder) than countries with less severe laws that are applied more often. Various historic, cultural and other differences among nations make international comparisons more difficult, but it is significant that the spread of American legal theories and practices to other countries has been accompanied by American results in court congestion and rises in crime rates. The influence of legal and social theorists on criminal law practices has also spread beyond the United States, and these “experts” are apparently no more open to factual evidence counter to their consensual beliefs abroad than in the United States.285

Throughout the Western world, capital punishment has been either explicitly abolished or has dwindled to the vanishing point in practice. The United States was already part of the general pattern of a declining use of capital punishment when the Supreme Court in 1972 declared the death penalty unconstitutional as “cruel and unusual punishment” forbidden by the Eighth Amendment — in some instances.286 Since the Eighth Amendment consists of only one sentence287 and contains no exceptions, outlawing the death penalty only in some instances is even more obviously a judicial improvisation than an outright ban would have been. The fallacy of confusing decisiveness with exactness runs through much of the Supreme Court testimony and questioning as to what exactly was meant by “cruel and unusual.”288 What clearly was not meant was the death penalty. The Fifth Amendment, passed at the same time as the Eighth Amendment, recognized the death penalty and required only that “due process” precede it. The states which ratified both amendments had death penalty laws which they — and others — applied for almost two centuries before they were stopped by a five to four Supreme Court decision (with nine separate opinions) saying that it was unconstitutional in some circumstances. The particular circumstances that would make it unconstitutional have themselves varied, so that in practice death penalties are unconstitutional when a particular Supreme Court chooses to object to the procedures used to reach verdicts. It is, in effect, the laws and the verdicts which have been ruled unconstitutional, under the guise of ruling the punishment unconstitutional as “disproportionate to the offense” or as capriciously applied — neither of which is a characteristic of the punishment itself.

Several arguments have been emphasized by opponents of the death penalty: (1) it is immoral for the state to deliberately kill, (2) capital punishment does not deter, (3) errors are irrevocable, (4) the application of the death penalty has been arbitrary and capricious in practice, and (5) blacks have been disproportionately overrepresented among those executed, showing the racial bias of the system.

The immorality of execution is based on a parallel between the first-degree murderer’s premeditated killing of his victim and the law’s subsequent premeditated killing of the murderer. In this view, we must “put behind us the notion that the second wrong makes a right…”289 The two events are certainly parallel as physical actions, but if that principle determines morality, it would be equally immoral to take back by force from a robber what he had taken by force in the first place. It would be equally immoral to imprison someone who had imprisoned someone else. It is another case of the physical fallacy — regarding things which are physically the same as being the same in value; in this case, moral value. By this standard a woman who uses force to resist rape would be as immoral as the would-be rapist. Insofar as he is successfully beaten off, all that has happened physically is that two people have been fighting each other. No one would regard the physical equivalence as moral equivalence. When the physical parallel involves human life, the stakes are higher, but the principle does not change. The morality of execution does not depend upon physical parallels.

Sometimes the claim of immorality is based on a supposedly inadvertent revelation of shame by the unwillingness of most people — even advocates of capital punishment — to witness an execution.290 But most people would not want to witness an abdominal operation, and yet no one regards that as evidence of immorality in such operations. Nor would a philanthropist who donated money to a hospital to advance such operations be considered a hypocrite if he declined an invitation to watch the surgery. Such arguments are even more difficult to take seriously, when the very same proponents claim that it was immoral for people to watch executions when they did,291 and that it is immoral for us not to watch them now.292

The argument that capital punishment does not deter glosses over some important distinctions. Any punishment may deter either by incapacitating the criminal (temporarily or permanently) from repeating his crime, or by using him as an example to deter others. Clearly capital punishment incapacitates as nothing else. The obviousness of this in no way reduces its importance. It is especially important because so-called “life sentences” incapacitate for nothing like life: a first-degree murderer can legally be back on the street within five years, and of course sooner than that if he escapes. He can also kill in prison. Arguments about the supposedly low recidivism rates of murderers in general are beside the point. They would be relevant if the issue were whether all murderers must always be executed regardless of circumstances. But that is not the law at issue, nor have American judges and juries followed any practice approaching that. What is at issue is whether courts shall have that option to apply in those particular cases where that seems to be the only thing that makes sense.

The irrevocable error of executing the wrong person is a horror to anyone. The killing of innocent people by released or escaped murderers is no less a horror, and certainly no less common. The recidivism rate among murderers has never been zero, nor can the human error in capital cases ever be reduced to zero. Innocent people will die either way. If there were some alternative which would prevent the killing of innocent people, virtually anyone would take it. But such an alternative does not come into existence because we fervently wish it, or choose to assume it by closing our eyes to the inherent and bitter trade-off involved. Trying to escape these inherent constraints by arguments that “a society which is capable of putting a man on the moon” is “capable of keeping a murderer in jail and preventing him from killing while there”293 is using an argument that would make us capable — seriatim, at least — of accomplishing almost anything we wanted to in any aspect of life. It is the democratic fallacy run wild.

Because executions take place in only a fraction of the convictions for capital crimes, opponents of capital punishment have claimed that the condemned were chosen “capriciously,” “freakishly,” “arbitrarily,” or at random, or with no logic or justice.294 Justice, of course, has many dimensions, of which intracriminal equity is only one; nor is it obvious why intracriminal equity should be the sole or overriding consideration. If this argument were taken seriously and applied consistently, it would be impossible to punish any criminals for any crime, in a system with different juries — which is to say, in all possible legal systems, as long as human beings are mortal. Barring a single, immortal jury to hear all criminal cases, intracriminal equity can never be carried to perfection, but only into regions of negative returns, in any system of justice concerned also with other kinds of equity, including equity for victims and the public.

To argue that the degree of intracriminal equity can be directly deduced from numbers and percentages is to repeat the fallacy in “affirmative action” cases of presupposing that numbers collected at a given institution are caused by that institution. If people differ in the quantity or manner of their crimes, they will differ also in their conviction and sentencing statistics, even if all judges and juries are totally impartial and just. We know that such perfection is not to be found among judges and juries, any more than among other groups of human beings, and in particular cases — blacks facing all-white juries in the South being the classic example — the reality has sometimes been very remote from the ideal. However, that knowledge is based on history and observation, not on the statistics cited as evidence and used to give it all a “scientific” appearance. If statistics, as such, are to be taken seriously, then a much ignored statistic must also be included: more black people are murdered than whites — that is, there are more black murder victims in absolute numbers than white murder victims,295 even though blacks are only about 12 percent of the population. Moreover, murder is usually not across racial lines, involving as it often does family members and friends. Against that background, the statistic that blacks are overrepresented among those executed assumes a different dimension, since blacks are also grossly overrepresented among the victims. A recent study in the North found persons who commit murder about equally likely to be executed, whether they are black or white.296 It is one thing to lament historic injustices; it is another to use them to misrepresent current empirical data.
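The arithmetic implicit in these victimization statistics may be worth spelling out — a rough illustration using only the figures in the text, with symbols introduced here for convenience. If blacks are about 12 percent of the population yet account for more murder victims in absolute numbers than the remaining 88 percent, then the ratio of per capita victimization rates is

$$\frac{V_B/0.12}{V_W/0.88} > \frac{0.88}{0.12} \approx 7,$$

where $V_B$ and $V_W$ are the numbers of black and white victims and $V_B > V_W$. A per capita victimization rate at least roughly seven times as high means that overrepresentation among the executed cannot, by itself, demonstrate bias.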

Even in racially homogeneous societies there are undoubtedly differences in murder rates among very different social groups. Indeed, in the United States there are vast differences in murder rates between men and women.297 Even in the absence of such evidence, however, anyone with any humility or sense of common humanity must recognize that, if raised under sufficiently bad conditions — taught no difference between right and wrong, and growing up in an environment where violence was not only accepted but admired — he, too, could have grown up into the kind of person with whom no society can cope. In some ultimate ethical sense, “there but for the grace of God go I.” It would be inexcusable even to shoot a mad dog if we knew how to catch him readily and safely, and cure him instantly. We shoot mad dogs only because of our own inherent limitations as human beings. There is no need to apologize for this — and certainly no need to pretend to more knowledge than we have, whether to “rehabilitate” a murderer or to eliminate “root causes” of crime. We do not play God when we act — as we must — within our limitations. We play God when we pretend to an omniscience and a range of options we do not in fact possess.

The notion that the death penalty is applied with caprice — as distinguished from bias — is an argument from ignorance. Observers do not know why one jury decided one thing and another jury decided something else. Since there is no institutional provision for juries to articulate their reasons — much less coordinate the articulation of one jury’s findings with those of other juries — the absence of any apparent pattern is hardly surprising. To say that an observer does not see a pattern is not to say that there is no pattern. A motorist driving down a highway or through town may see no pattern in the location of hamburger stands, but an executive in the headquarters of McDonald’s or Burger King might be able to show him that these locations are by no means random or capricious. Indeed, the mark of a specialist in any field is the ability to discern patterns which escape common observation. For many areas of human experience, there are no specialists or experts because no one is prepared to invest the time and effort needed to discover patterns in those areas. In an area such as jury verdicts, where reasons would be difficult to articulate accurately, where they are not required to be articulated, and where there are indeed restrictions on such articulation in public, to consider the absence of an apparent pattern among juries a sign of “freakish” decisions and arbitrary choices is the arrogance of asserting that what one does not discern does not exist. And to make that the basis of a constitutional ruling is to impose the arrogance of an elite on the rest of the country as “the law of the land.”

CONSTITUTIONAL INTERPRETATION

Over and beyond questions of the wisdom, effectiveness, or efficiency of legal decisions regarding free speech, race, crime, and other vital concerns, is a larger question of the role of law, and particularly of “a government of laws and not of men.” Considering the centuries of human suffering, struggle, and bloodshed to escape arbitrary tyranny, it is hardly surprising that there should be profound anxiety about the erosion or circumvention of that ideal. At sporadic intervals in history, the Supreme Court of the United States has been the center of storms of controversy, involving not only the merits of particular decisions, but also the fear that its role of constitutional interpretation was being expanded to judicial policy making — representing a threat to the very rule of law which it is supposed to epitomize. Such apprehensions go back to Marbury v. Madison in 1803, which established the Supreme Court’s power to invalidate the laws of Congress as unconstitutional, and have surfaced again in such cases as the Dred Scott decision in 1857, the “court packing” controversy of the 1930s and Brown v. Board of Education in 1954. But while modern controversies surrounding the Supreme Court are not historically unique, what has been unique is the frequency, scope, and sustained bitterness of controversy engendered by a whole series of court decisions reaching into every area of American society. What has also been unique is that Warren Court partisans — notably in the law schools — have not only accepted but advocated judicial policy making as a Supreme Court function, urging it to more openly pass judgment on the wisdom and morality of congressional and presidential actions, under broadly conceived constitutional “values” rather than narrowly explicit constitutional rules.298

The issues involved in controversies over constitutional interpretation reach beyond the American legal system to questions about social processes and human freedom in general. The extent to which it is possible for central decision makers to wisely foresee and control the consequences of their decisions in a complex social process is seen very differently by those who want the court to act boldly from the way it is seen by those who want the court to construe the Constitution as a set of specific rules, interpreted as closely as possible to the sense in which they were written.299 The extent to which either of these modes is desirable depends also on the value assigned to the freedom of the many as against the presumed wisdom of the few — though the latter presumption has itself been seriously challenged,300 and the earlier discussion in this chapter may at least raise some questions in that regard. Finally, the substantive content of Supreme Court decisions has obviously influenced positions taken by observers or critics. Some Warren Court partisans have sweepingly dismissed its critics as “segregationists and security-mongers,”301 “military fanatics,”302 “reactionary interests,”303 “bigots,”304 or “crackers.”305 But historically, opponents of sweeping judicial interpretation have varied across the political spectrum, and in the wake of the Dred Scott decision, its opponents were among the strongest advocates of the cause of blacks, notably Thaddeus Stevens.306 Even in our own time, severe critics of the Warren Court have included men who opposed racial segregation years before Brown v. Board.307 Indeed, as the court pushed further and further into judicial activism, some of its own early partisans, such as Alexander Bickel, began to question its basic philosophy, and found themselves being heaped with the kind of scorn308 which they had once poured onto others.309 Even a dedicated civil rights lawyer who had braved the dangers of Mississippi violence310 was denigrated as a sellout when he later questioned busing.311 Legal insurgency has exhibited the same kind of pattern found in other forms of insurgency.

The constitutional provisions which provided the point of departure for the legal revolution of the Warren Court were the “due process” clauses of the Fifth and Fourteenth Amendments, and the “equal protection” clause of the latter. Those who favor “strict construction” of the Constitution find these technical legal phrases to have limited and highly specific meanings,312 while those who favor “judicial activism” find them to be phrases which “were designed to have the chameleon’s capacity to change their color with changing moods and circumstances.”313

JUDICIAL ACTIVISM

The case for judicial latitude or activism in interpreting the Constitution rests on several assertions: (1) the specific application of constitutional generalities inherently requires judgments, including value judgments,314 (2) the original meaning or intent of constitutional clauses is often lost in the mists of time, or was never intended to be very specific in the first place,315 (3) even when the original, historical meaning is discernible, it need not be blindly accepted as against later insights and experience, (4) courts are in a better position than are legislative or executive institutions to judge the morality or the consequences of broad social principles,316 (5) courts are “the least dangerous branch” of government because they lack the power of arms or money,317 and (6) courts are a last resort for achieving social goals not achievable in other institutions.318 These claims will be considered in order.

The limitations of language alone require some use of judgment in interpreting any set of rules, including the Constitution. At various times value judgments may also need to be made in finely balanced cases or when constitutional provisions conflict in a particular application. Virtually no one on either side of this controversy denies either of these points, though some proponents of judicial activism have set up as a straw man “literalists” who are “wedded” to “ever-irresistible simplicities.”319 But the fact that certain inputs (judgments, value judgments) into the decision-making process are incrementally productive in some cases does not mean that they are categorically necessary or desirable in all cases or in general. An appellate court may be compelled to resort to these inputs in particular cases, but that in no way means that the Supreme Court has a general mandate to “evolve and apply”320 such principles of its own as it finds “rational” or in the “spirit” of constitutional “values.” Although the view that it does takes on an air of modernity, it is in fact quite old. Such ideas were set forth — and rejected — in the nineteenth century. In 1873 the Supreme Court declared that “vague notions of the spirit of the Constitution” are no basis on which to declare void “laws which do not square with those views,” and the “spirit” of a constitution “is too abstract and intangible for application to courts of justice, and is, above all, dangerous as a ground on which to declare the legislation of Congress void by the decisions of a court.”321 The idea of applying the spirit or values instead of rules is not new. What is new is the extent to which the tendency to do so has been indulged. It rests ultimately on the non sequitur that what is necessary in some cases is authorized, justified, or beneficial as a general principle. It is as if an argument for the existence of justifiable homicide as a legal category proved that laws against first-degree murder were unnecessary.

The above argument that the Supreme Court should abandon the original meaning of the constitutional rules is often supplemented with the claim that it cannot follow the original meanings of those rules because they are too vague and imprecise, or because their original meaning has somehow been lost in history. However, there are voluminous, detailed, verbatim records of the debates preceding the adoption of the Constitution and of its various amendments, so sheer lack of historical materials is not a real problem. The difficulties of ascertaining the original meaning or intention of constitutional provisions often turn on what can be called “the precisional fallacy” — the practice of asserting the necessity of a degree of precision exceeding that required for deciding the issue at hand. Ultimately there is no degree of precision — in words or numbers — that cannot be considered inadequate by simply demanding a higher degree of precision. If someone measures the distance from the Washington Monument to the Eiffel Tower accurately to a tenth of a mile, this can be rejected as imprecise simply by requiring it in inches, and if in inches, requiring it in millimeters, and so on ad infinitum. On the other hand, even a vague request by an employer for an employment agency to send him a “tall” man may be enough for us to determine that the agency has disregarded his instructions when it sends him a man who is 4 feet 3 inches tall. The vagueness of “tall” might be enough to cause interminable discussions about men who are 5 foot 11 or 6 foot 1, but if in the actual case at hand the man is “short” by any common standard, then vagueness is a red herring for that particular case.

The precisional fallacy is often used polemically. For example, an apologist for slavery raised the question as to where precisely one draws the line between freedom and involuntary servitude, citing such examples as divorced husbands who must work to pay alimony.322 However fascinating these where-do-you-draw-the-line questions may be, they frequently have no bearing at all on the issue at hand. Wherever you draw the line in regard to freedom, to any rational person slavery is going to be on the other side of the line. On a spectrum where one color gradually blends into another, you cannot draw a line at all — but that in no way prevents us from telling red from blue (in the center of their respective regions). To argue that decisive distinctions necessarily require precision is to commit the precisional fallacy.

In the law, the question is not precisely what “due process” or other constitutional terms mean in all conceivable cases, but whether they preclude certain meanings in a given case. No one knows precisely the original meaning or boundaries of the constitutional ban on “cruel and unusual punishment” — but it is nevertheless clear from history that it was never intended to outlaw capital punishment. Therefore its “vagueness”323 is not carte blanche to substitute any standard that Supreme Court justices happen to like. In the same vein, Chief Justice Earl Warren’s remark in Brown v. Board of Education about the “inconclusive nature” of the Fourteenth Amendment’s history “with respect to segregated schools”324 obscured the crucial point that there was no evidence that the writers of the Amendment intended to outlaw any kind of segregation, and much evidence that social policy issues were outside the scope of the Amendment.325 That we do not know precisely what the boundaries of the Fourteenth Amendment are does not mean that we cannot know that certain things are outside those boundaries. A border dispute between Greece and Yugoslavia does not prevent us from knowing that Athens is in one country and Belgrade in another. Decisiveness is not precision.

The precisional fallacy — the confusion of decisiveness with exactness — runs through the literature advocating judicial activism: the Constitution lacks “precision” or is not “exact,”326 and is “muddy”327 or “clothed in mystery.”328 The self-serving nature of “convenient vagueness” was exposed by Felix Frankfurter long before he became a Supreme Court Justice. The question he asked was “‘convenient’ for whom and to what end?”329 While genuine agnosticism might be associated with caution, tolerance, or indecisiveness in the area of uncertainty, judicial avowals of agnosticism are frequently preludes to revolutionary changes in the interpretation of the Constitution. Even some supporters of judicial activism recognize the judicial tendency “to resort to bad legislative history” as an excuse to reinterpret the law.330 A fictitious legislative history may even be fabricated out of whole cloth, as when the Supreme Court majority in Bakke claimed that Congress had not considered “reverse discrimination” when writing the Civil Rights Act of 1964,331 even though it is a matter of record that reverse discrimination issues came up again and again during the debates.332 Much of what has been done under the claim of vagueness has been directly counter to intentions that were quite clear as regards those particular interpretations, regardless of how unclear they might have been on other things. It is the kind of judicial approach that has been called “statesmanlike deviousness”333 and “dissimulation” that is “unavoidable”334 by a partisan of judicial activism and “merely window dressing”335 by a critic who considers it “a Marxist-type perversion of the relation between truth and utility.”336

More fundamental than the question as to whether original constitutional meanings and intentions can be discerned is the question whether those meanings and intentions should be sought and followed as rules for present-day judicial decisions. Admirers of judicial activism emphasize the need for “the evolution of principles in novel circumstances,”337 that the Constitution is “a complex charter of government, looking to unforeseeable future exigencies”338 and virtually “an invitation to contemporary judgment.”339 The framers of the Constitution “did not believe in a static world”340 or in a constitution “forever and specifically binding,”341 and we must use “our own reasoned and revocable will, not some idealized ancestral compulsion.”342 Therefore we must “update the Constitution”343 to “keep the Constitution adjusted to the advancing needs of time.”344 In this context, the original interpretations of the framers of the Constitution are merely “artifacts of verbal archeology”345 and to take them seriously is a “filiopietistic notion”346 which would allow the founders of the republic “to rule us from their graves.”347

As in the case of precision, so in the case of change, a great amount of effort (and an air of “realism”) goes into arguing something that is both obvious and irrelevant to the conclusion actually reached in the situations in which it is applied. To argue about “change” in generalized terms is to argue with oneself, for no sane person denies change since the writing of the Constitution. The question is — what kind of change: technological, verbal, philosophic, geographical, demographic, etc., and in what specific way does the change affect a particular constitutional provision or its application? This the activists shy away from. Clearly there are technological changes, such as electronic listening devices, which raise questions about the constitutional right to privacy in a context unforeseen by the writers of the Constitution. But the great controversies raging around the Warren Court’s judicial activism have involved things that have existed for hundreds or thousands of years — the death penalty, the segregation of racial groups (the very word “ghetto” derives from the Jewish experience in centuries past), the arrest of criminals, the power of bureaucracy (both the Roman Empire and ancient China developed stifling bureaucracies), the gerrymandering of political districts, and the different weighting of votes. In this particular context, the constant reiteration of the word “change” is little more than a magic incantation. It is hard to imagine why the writers of the Constitution would have set up a congress or a president as decorative institutions if they thought there would be nothing for them to do in meeting the evolving needs of the nation. Incantations about “change” cannot drown out the central question in any social process — not what is to be done, but who is to decide what is to be done, and under what incentives and constraints? This question is at the heart of constitutional government, and no amount of insistence that something be done — or that something new be done — can be allowed to obscure it.

Words and “original intentions” become important as constraints — not as historical or archaeological artifacts, nor as pious ways of showing reverence for the Founding Fathers. Knowledge costs are crucial in conveying “the law of the land” across a vast and diverse nation, and through time across the centuries. What is crucially different about the original meaning of a given permutation of words in the Constitution (compared to alternative meanings that might accord just as well with a dictionary or a grammar book) is that that particular meaning has been documented, reiterated, analyzed, and diffused throughout a vast decision-making network, and major public and private commitments made within the framework of that meaning. Frameworks sometimes have to be changed despite enormous losses, but the issue is who is to decide when and how. Shall it be elected officials subject to feedback from those who actually pay the many costs of changes in the social framework, or shall it be an appointed judiciary influenced only by those particular viewpoints to which it is arbitrarily responsive (known as “moral conscience”) and arbitrarily oblivious to other views (known as “public clamor”)? Shall the change be made openly, weighing the costs and benefits in the light of all the knowledge and experience diffused among all the people, or shall it be accomplished by verbal sleight-of-hand in the Supreme Court chambers and in the light of the constricted experience of nine individuals? Important as these issues are in particular constitutional decisions, they are truly momentous when considering a general policy of judicial activism which throws doubt over the whole framework of laws, not merely those particular laws arbitrarily changed by judicial fiat. The “above the law” thinking implicit in judicial activism can also spread beyond the courts to other branches of government, as the Watergate episode illustrates. The very rhetoric of a “flexible” constitution which can be interpreted “in the light of modern needs” was used in the Nixon inner circle.348 The extralegal transfer of the constitutional war-making power from Congress to the president, so bitterly resented during the Vietnam War, was in the same tradition. The selective indignation of the press and the intellectual community at these very similar usurpations for very different purposes is part of the environment within which judicial activism flourishes.

When it is not deemed sufficient to simply glide from the need for “change” to an assumption that courts are the chosen vehicles of change, arguments are advanced that courts are either the best or the only governmental institutions capable of making a certain necessary social change. In this approach, evolving social morality replaces explicit constitutional rules, as the court “makes value choices under the broad provisions” of the Constitution,349 and this is deemed “a principled process”350 of judicial decision making because judges are not simply making subjective rulings or even deciding issues ad hoc,351 but are following some general rule, one sensed in society rather than found in the explicit language of a constitution. Even a justice so identified with “judicial restraint” as Felix Frankfurter reflected this view. Although Justice Frankfurter rejected any idea that he would “sit like a kadi under a tree dispensing justice according to considerations of individual expediency,” he could still say that he was enforcing “society’s opinion” rather than his “private view” and that society’s opinion was the relevant standard “enjoined by the Constitution.”352 To sense the evolving social morality, Frankfurter felt that a judge should have “antennae registering feeling and judgment beyond logical, let alone quantitative, proof.”353 In this vision of judicial restraint, as further expressed by Frankfurter’s former law clerk, Alexander Bickel, the court which is liberated from the explicit constraints of the written Constitution judicially restrains itself to be the mouthpiece of evolving social morality and makes “experiential judgment” on the state of society in making its rulings.354

It may seem strange that an institution deliberately insulated from the popular feedback which constrains the legislative and executive branches of government should choose to adopt that constraint for itself and to put it in place of the explicit constraints of the written Constitution. However, as in the case of the arguments from precision or “change,” this argument is not quite ingenuous. The judicially-restrained court is not binding itself to respond to the general public at large, by any means. Although there is some talk that the Supreme Court “represents the national will against local particularism,”355 the judiciary is more often spoken of by exponents of judicial activism as an “educational institution,”356 a “defender of the faith,”357 and “a leader of opinion, not a mere register of it.”358 In short, the court is to be in the vanguard of moral change, able to act when other institutions run by elected officials are constrained by an amorphous and somewhat tainted entity called political “reality,” which, among other things, makes amending the Constitution difficult. What all these lofty and vague phrases boil down to is that the court can impose things that the voters don’t want and the Constitution does not require, but which are in vogue in circles to which the court responds. Paradoxically, these are called “democratic” things in terms of what people would, should, or ultimately will want, though perhaps “counter-majoritarian” at a given time.359 The court is to cut itself off from both the words of the past and the public beliefs of the present and be general (principled) rather than ad hoc in its decisions. Thus, this approach can, with statesmanlike balance, reject both the notion of direct, arbitrary, ad hoc rule by courts360 and the limited role of interpreting constitutional rules.

Perhaps the most telling commentary on this vision is that its most eloquent exponent, Alexander Bickel, turned against it after he saw it in action for a few years.361 Instead of glorying in the courts’ freedom to shape events, the later Bickel found it “a moral duty” to “obey the manifest constitution, unless it is altered by the amendment process it itself provides for.”362 Judicial amendment by “interpretation” and “educating” society were no longer envisioned, and the “benevolent quota” to which he had been sympathetic earlier363 was now seen as “a divider of society, a creator of castes” and “all the worse for its racial base.”364 The events of the Watergate era were merely “the last straws” of a “results” oriented way of thinking that went back to the Warren Court.365

Ironically, the framers of the Constitution — whose “original intentions” are so much disdained — foresaw the problems which twentieth-century sophisticates had to discover from hard experience. Thomas Jefferson regarded judicially activist judges as a “subtle corps of sappers and miners” of the foundations of the American form of government,366 who would concentrate power in the federal government, because that would “lay all things at their feet…”367

DUE PROCESS

The Constitution of the United States twice declares that a person shall not be “deprived of life, liberty, or property, without due process of law” — either by the federal government (Fifth Amendment) or by state governments (Fourteenth Amendment). According to Alexander Hamilton, “the words ‘due process’ have a precise technical import, and are only applicable to the process and proceedings of the courts of justice; they can never be referred to an act of the legislature.”368 At the very least, the two fateful words already had a long history in Anglo-Saxon law as of the time they were first placed in the American Bill of Rights in 1791.369 An even longer history of arbitrary power — of lands and even lives confiscated by royal or imperial decrees, and of heads cut off by peremptory order — lends momentous importance to the requirement that only prearranged legal procedures may deal with the fundamental rights of individuals. Centuries of struggle and bloodshed lay behind those two words.

The first historic attempt to make “due process” mean something more than adherence to legal procedures occurred in the Dred Scott case in 1857. The Supreme Court declared that “an Act of Congress which deprives a citizen of the United States of his liberty or property merely because he came himself or brought his property into a particular Territory of the United States, and who had committed no offense against the laws, could hardly be dignified with the name of due process of law.”370 Here the issue was not whether regularized procedures had been followed in the passage or administration of the law, but whether the substance of the legislation was valid. In many other very different issues, the battle would be joined again and again over the next century as to whether “procedural due process” was enough to satisfy the constitutional requirement, or whether the Supreme Court should also consider “substantive due process” — i.e., pass judgment on the validity of the substance of duly passed laws and duly established judicial proceedings.

The first historic judicial activist interpretation of “due process” as calling for Supreme Court approval of the substance of duly enacted legislation declared that property — a slave named Dred Scott — would be taken without due process of law if the slave were freed simply because he had been transported into a territory where Congress had outlawed slavery under the Missouri Compromise. Therefore it was ruled that it would be unconstitutional to set him free. The easy assumption that judicial activism is on the side that twentieth-century liberals regard as moral or socially forward-looking does not square with the history of the due process clause.

There was an historically brief respite from the “substantive due process” interpretation after the Supreme Court in 1873 refused to consider the substantive merits of a state-created slaughterhouse monopoly in Louisiana, on grounds that to rule on the substantive merits “would constitute this court a perpetual censor upon all legislation of the states.”371 It continued to resist the efforts of those unsuccessful elsewhere to use the Supreme Court to review the substantive justice of lower court decisions or “the merits of the legislation on which such a decision may be founded.”372 However, less than two decades later, a new Supreme Court declared in 1887 that it would look beyond “mere pretenses” to “the substance of things.”373 By the turn of the century, the era of “substantive due process” was launched — in which the Supreme Court repeatedly invalidated as unconstitutional laws regulating businesses or working conditions. The “substantive due process” era lasted longer than the Warren Court era. It was, of course, lamented in retrospect by those who supported the Warren Court’s activism.

Courts in the “substantive due process” era — roughly 1905 to 1937 — regarded property not as simply the physical things themselves, but as the options pertaining to those things, and recognized that to destroy options was exactly the same as confiscating property — even though the physical objects as such might be left in the possession of the owners. The economic validity of their reasoning is demonstrated perhaps most dramatically in the case of New York City rent-controlled buildings, whose value is often reduced to negative levels (note abandonment despite the risk of legal penalties) by simply reducing the landlord’s options, while leaving him in sole possession of the physical structure itself. Conversely, working men possessing no physical property nevertheless had options of employment alternatives, and to reduce these alternatives was also considered by the Supreme Court to be a deprivation of property in violation of the Constitution.374 The economic reasoning is as valid here as in the case of business property, for it is essentially the same principle that property rights are basically options rather than physical things. A more fundamental constitutional question regarding the Supreme Court’s role in the “substantive due process” era was whether the protection of property under the Fifth and Fourteenth Amendments required the courts to monitor the economic substance of legislation. In short, the economic argument shows only that there has in fact been a confiscation of property, while the legal question is — was it under due process of law? Later decisions repudiating economic “substantive due process” either deny or sidestep the confiscation of property.
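Why reducing options can reduce the value of an untouched physical structure below zero can be put in present-value terms — a stylized sketch, with symbols introduced here for illustration rather than taken from any court record. The market value of a building is the discounted stream of net returns its owner is legally permitted to earn:

$$V = \sum_{t=1}^{T} \frac{R_t - C_t}{(1+r)^t},$$

where $R_t$ is the permitted rent in year $t$, $C_t$ the cost of taxes, maintenance, and operation, and $r$ the discount rate. A rent ceiling that holds $R_t$ below $C_t$ makes every term negative and hence $V < 0$ — which is why owners abandon buildings they still legally possess.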

Post-1937 Supreme Court decisions somewhat ostentatiously cited decisions of the economic “substantive due process” era as examples of what the Court was not going to do.375 Paradoxically, it was Justice William O. Douglas, a leading judicial activist, who wrote opinions sweepingly rejecting the use of “notions of public policy”376 and declared that “we do not sit as a super-legislature to weigh the wisdom of legislation.”377 The apparent paradox turns on the addition of clauses restricting this judicial restraint to areas of “economic and social programs,”378 “the business-labor field,”379 or “business, economic, and social affairs,”380 or “business and industrial conditions.”381 In short, a constitutional double standard was created by the court, relieving itself of the burden and the political responsibility for liberal social legislation, while pioneering in new judicial activism in criminal law, civil rights, and political power areas. Far from signaling a reduction in Supreme Court inquiry into the substance of “due process,” this shift marked the expansion of such substantive issues on an unprecedented scale. “Due process” became the phrase by which federal restrictions — both explicit constitutional provisions and judicial extrapolations — were imposed on state courts and state law enforcement agencies,382 in defiance of the Constitution and its judicial interpretations for nearly two hundred years. The exclusion of evidence,383 the requirement of government-paid defense lawyers,384 restrictions on questioning suspects,385 on search warrants,386 on confessions,387 and even the desegregation of the District of Columbia schools388 and the nullification of Connecticut’s anticontraception law,389 were all based on substantive rather than procedural “due process.” Only the phrase “substantive due process” had been stricken from judicial interpretation.

SUMMARY AND IMPLICATIONS

Trends in American law in the twentieth century — and especially in the Warren Court era — have included (1) a growing volume of law and litigation in general, and especially of laws and litigation growing out of decisions made by institutions insulated from feedback — especially administrative agencies and the federal judiciary, (2) a changing role of appellate courts from defining the boundaries of other institutions’ discretion to second-guessing the substance of the decisions made by those other institutions, and (3) an ever more apparent social partisanship, as distinguished from principled bias, in applying the law.

Insulation from feedback takes many forms, not the least of which is duplicity. Administrative agencies have turned the Civil Rights Act’s equal treatment provisions into preferential treatment practices. Laws prescribe severe criminal penalties vastly in excess of what is in fact carried out. A “results”-oriented Supreme Court creates constitutional “interpretations” that horrify even those who agree with the social policy announced. There is even duplicity imposed upon others, as when “affirmative action” requires employers to confess to being guilty of “under-utilization” of minorities and women, and to promise — in their “goals and timetables” — to achieve numbers or percentages which all parties may know to be impossible. Quite aside from the moral issues, doctrines which cannot be openly argued — quotas, judicial policy making, nonenforcement of criminal laws — cannot be subject to effective scrutiny.

Ironically, “results”-oriented legal policies have achieved largely intermediate institutional results, rather than their social goals. Appellate courts have successfully imposed their will on other institutions — school boards, trial courts, universities, employers — without achieving the social end results expected. For all the countless criminals freed on evidentiary technicalities, there is no evidence that the police practices the courts attacked have been eliminated or even reduced.390 For all the costly and controversial procedures imposed by “affirmative action” quotas, there is little or no evidence that such policies have advanced blacks beyond what was achieved under the previous “equal opportunity” policy.391 For all the bitterness surrounding the busing controversy, there is no overall evidence of any social, educational, or psychological gains from these policies,392 and even purely statistical “integration” has been offset to a great extent by “white flight” to the suburbs.393 In short, legal sacrifices of principles to get “results” have often been a one-way trade-off with no social gain, in terms of the avowed goals. That little or nothing has been achieved does not mean that there has been no cost. The purely financial costs of busing can run into the hundreds of millions of dollars for just one school system,394 not to mention the hundreds of millions of dollars nationally in school closings alone,395 and such social costs as increased racial antagonism,396 disruption of school children’s social life, and reduced parental input into local schools.397 An “affirmative action” report can cost an employer hundreds of thousands of dollars, not to mention its costs in morale to officials,398 white male employees, and even minority and female employees feeling the backlash.

None of this is evidence of special ignorance or culpability in the individuals in appellate courts and administrative agencies who impose these policies. Rather, it is evidence of the inherent limitations of such institutions, and ultimately of human knowledge, as it exists in any one place. The elaborate, overlapping, knowledge-transmitting networks which constitute the various institutions of a complex society demonstrate both the wide diffusion of relevant knowledge and the high cost and high value of its transmission and coordination. For political institutions, especially for those insulated from effective feedback, to persistently override the decisions of other institutions and millions of individuals is virtually to insure results that are unproductive or counterproductive, even in terms of the preferences of the overriding institutions.

The virtual impossibility, in many circumstances, of having any real knowledge beforehand has created a demand for surrogates for knowledge — the so-called “findings” of “experts.” In Brown v. Board of Education, for example, Chief Justice Earl Warren confidently referred to psychological findings “amply supported by modern authority,”399 and cited as his particular authority a study subsequently devastated as invalid, if not fraudulent.400 Even the attorneys who used the study regarded it skeptically among themselves, and one said, years later, “I may have used the word ‘crap’…”401 Courts, like other institutions, often fail to make the crucial distinction between (1) opinions in vogue among intellectuals, and (2) empirical evidence, based on recognized analytical procedures, such as controlling for variables other than the ones at issue. “Affirmative action,” for example, abounds with numbers and percentages which consistently ignore such gross demographic differences as age, and discussions of capital punishment repeat as dogma the findings of a superseded study which defined “capital punishment” as words in law books, rather than executions. To lump all these things together under the ponderous name of “expertise” is to add self-deception to insulation from the firsthand knowledge so readily dismissed as “public clamor.”

The purely institutional, factual, or methodological deficiencies of legal decision making might explain random variations but not systematic bias. Indeed, bias is not quite the right word, insofar as it implies a preference for a particular principle, such as a Marxist’s preference for socialism or a teetotaler’s preference for non-alcoholic drinks. A court with a biased approach might, for example, consistently insist on an extremely stringent standard of proof, or — if biased in the other direction — consistently accept rather low levels of evidence as proof. The courts have done neither of these things. They have applied extreme standards of proof before accepting the convictions of some categories of defendants, while requiring other categories of defendants virtually to prove their innocence. This is not a principled bias but social partisanship.

A court that believed in the principle of either “procedural” due process or “substantive” due process might consistently follow either principle or — if unable to make up its mind — vacillate randomly between them. The courts have done neither of these things. They have applied the principle of procedural due process to some social categories of litigants (property owners, for example) and substantive due process to others (criminals, for example). A court biased in principle for or against overriding the decisions of other institutions might consistently move in either of these directions, but the Supreme Court’s consistency is only in which kinds of institutional processes it would defer to (administrative agencies), and which kinds it would review and monitor in detail (state courts, businesses). Courts biased for or against the principle of extended accountability for the consequences of one’s actions might go in either of these directions, but only socially partisan courts would extend the principle to unprecedented lengths of “product liability” for businessmen402 while reducing it by unprecedented amounts in libel immunity for newspapers.403 When the post-1937 Supreme Court ostentatiously repudiated the “substantive due process” doctrine in economic matters, it simultaneously began an extensive and unprecedented expansion of its scrutiny of the substantive nature of “due process” in criminal, civil liberties, and racial cases. This might appear to be “compartmentalized thinking”404 from the standpoint of reconciling principles, but it is perfectly consistent as social partisanship. Indeed, there is remarkable consistency in social partisanship across the various areas of inconsistent principles.

Repudiation of the economic version of substantive due process meant allowing politically liberal legislation and administrative agencies a free hand to control businessmen with little judicial scrutiny of constitutional issues, such as confiscation of property. Relaxed standards of proof — including de facto burdens of proof on the accused — facilitated the same policy at the expense of the same social group, with judicial “deference” to the “expert” findings of administrative agencies in issues from antitrust to “affirmative action.” The findings of trial courts — where judges and jurors are selected for impartiality — were given none of the deference accorded the findings of administrative agencies staffed by personnel selected for their zeal on one side of an issue. Even proof of a criminal defendant’s guilt in court was not enough to sustain a conviction at the appellate level if any of a number of newly created and sometimes retroactive technicalities were not observed — even though the technicalities might be a matter of close dispute among expert appellate judges,405 and therefore far from obvious to policemen on the street.

The problem with social partisanship is not simply the particular selection of groups to be favored or disfavored, but (1) its general inappropriateness in a system of law, (2) the duplicity necessary to sustain it in the guise of legal principles which appear and disappear rapidly and unpredictably, and (3) the uncertainty and demoralization created where the legal system provides, not a framework within which to place and utilize knowledge best known to those involved, but instead a continual threat of second-guessing which may cause decision makers to act in ways most likely to appear plausible to outsiders, rather than in ways judged best by those who actually know. Even those groups supposedly favored by the social partisanship of the courts lose as members of the general society, so that what is involved is not simply a judicial transfer of benefits but a set of policies which can become so counterproductive that everyone loses. It is perhaps indicative when polls show blacks opposed to busing or to “preferential” treatment (quotas), and convinced that the law is too “lenient” with criminals.406

Despite the tendency of intellectuals, “experts,” and policy makers to view the functioning of society as a series of issues and problems to be directly “solved” from an implicitly unitary viewpoint, the real problem is to locate decision-making discretion in the respective social processes most able to resolve the particular considerations arising in different areas of human life. The same diversity of values which makes this desirable also makes it difficult to achieve. Those in the higher, more powerful, and more remote institutions face the constant temptation to prescribe results rather than define the boundaries of other institutions’ discretion. Nothing is easier than to confuse broader powers with deeper insight. But, almost by definition, those with the broadest powers are the most remote from the specific knowledge needed either for deciding or for knowing the actual consequences of their decisions.

Various feedback mechanisms serve to limit the impact of errors, moderate the presumptions of the powerful, and remedy the essential ignorance of social “expertise.” These feedback mechanisms may be formal or informal, and social, economic, or political. Their effectiveness varies with the extent to which they convey not only information, but also a degree of persuasion or coercion which cannot be ignored by those whose decisions must be reconsidered. In the intimacy of the family, or in other important informal relationships, the value of the relationship itself forces some mutual accommodation. In economic organizations, the life-and-death power diffused among customers makes ignoring their preferences a folly in which few can indulge, and which even fewer can survive. Political organizations are constrained by elections, but the courts — which is to say, ultimately, the Supreme Court — are constrained only by history and by “a decent respect for the opinions of mankind.”

Because history is by definition tardy, and the opinions that matter to judges may be far more restricted than those of mankind, courts are especially inappropriate for making “results”-oriented decisions, as distinguished from decisions of principle or decisions which demarcate the boundaries of other institutions’ discretion. The relative lack of flexibility of courts is an asset for decision making in those areas where we want very little flexibility — i.e., in areas dealing with the security of our persons, possessions, and freedom. In venturing beyond such areas, courts are venturing beyond their institutional advantages.

As the legislative and executive branches of government demarcate the boundaries of private decision making, so the courts have confined the scope of the government’s activities. Constitutional guarantees encumber the state precisely so that the state may not encumber the citizen. Imposing outsiders’ rules to supersede insiders’ understanding and flexibility is questionable even as social policy, aside from its constitutional problems. When something similar was suggested for the Supreme Court itself, in the modest form of a case prescreening panel to reduce its work load, the institutional needs of the court were expressed in terms which go to the heart of what the court’s own decisions have done to other institutions across the country. According to Justice Brennan, “flexibility would be lost”407 in an “inherently subjective” process408 with “intangible factors”409 that are “more a matter of ‘feel’ than of precisely ascertainable facts,”410 and which involve a “delicate interplay” of “discretionary forces.”411 The tragedy is that he apparently considered this to be an institutional peculiarity of the Supreme Court,412 rather than a pervasive fact of decision making in general.

Chapter 10 Trends in Politics

Among the prominent political currents of the twentieth century are (1) a worldwide growth in the size and scope of government, (2) the rise of ideological politics, and (3) the growing political role of intellectuals. In addition, it has been an “American century” in terms of the growing role of the United States on the world stage, particularly during two world wars and in the nuclear age. This does not imply that international events have followed an American blueprint or have even been favorable on the whole to American interests or desires. It does imply that the fate of the United States has become of world historic, rather than purely national, significance. These developments will be considered here in terms of their implications for the effective use of knowledge in social processes, and in terms of the even more important question of their implications for human freedom.

THE SIZE AND SCOPE OF GOVERNMENT

SIZE

By almost any index, government has grown in size and in the range of its activities and powers over the past century, throughout the Western world. This has been true of governments at all levels, but particularly of central or national government. In the United States, there were fewer than half a million civilian employees of the federal government as late as the onset of World War I, but there are now more than six times that number,1 and even this understates the growth of the federal payroll, because “most government activities are carried out by workers who are not included in the federal employment statistics”2 — employees of federal contractors or subcontractors, and state and local programs financed and controlled from Washington. In addition, “about one person in every four in the U.S. population receives workless pay from government sources”3 — relief, unemployment compensation, and innumerable benefits of various other social programs. The expenditures of the federal government in 1975 were more than double what they were in 1965, and these in turn were nearly twice what they were in 1955.4 To compare this with pre-New Deal expenditure patterns, 1975 federal spending was more than one hundred times federal spending in 1925.5 Moreover, the budget of HEW alone is roughly equal to that of all fifty state governments combined.6

One of the problems in trying to comprehend federal spending is that the units involved — billions of dollars — are so large as to be almost meaningless to many citizens. To visualize what a billion dollars means, imagine that some organization had been spending a thousand dollars a day every day since the birth of Christ. They would not yet have spent a billion dollars.7 In the year 2000 they would still be more than 250 million dollars short of one billion dollars. Government agencies of course spend not one but many billions of dollars annually. HEW alone spends about 182 billion dollars annually.8 To get a figure comparable to what the entire federal government spends annually, change the one thousand dollars per day to half a million dollars per day, every day since the birth of Christ. At the end of two thousand years the grand total would amount to less than three quarters of what the federal government spent in 1978 alone.
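The arithmetic behind this illustration is easily verified. The sketch below is purely illustrative — the 365.25-day year and the even two thousand years are assumptions for round figures, not taken from the text — but it reproduces the totals cited above:

```python
# A minimal check of the spending illustration above.
# Assumptions (not from the text): a 365.25-day year and an even
# 2,000 years from the birth of Christ to the year 2000.

DAYS_PER_YEAR = 365.25
YEARS = 2_000

def total_spent(dollars_per_day: float) -> float:
    """Cumulative spending at a constant daily rate over YEARS years."""
    return dollars_per_day * DAYS_PER_YEAR * YEARS

small = total_spent(1_000)                # a thousand dollars a day
print(f"${small:,.0f}")                   # $730,500,000 -- still short of a billion
print(f"${1_000_000_000 - small:,.0f}")   # $269,500,000 -- more than $250 million short

large = total_spent(500_000)              # half a million dollars a day
print(f"${large:,.0f}")                   # $365,250,000,000 -- about 365 billion dollars
```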

The size of government has grown, not simply by doing more of the same things but by expanding the scope of what it does. At the extreme of this development, a new political phenomenon has made its appearance in the twentieth century — the totalitarian state. Undemocratic, despotic, or tyrannical governments have existed down through the ages, but the totalitarian state is more than this.

TOTALITARIANISM

It is not simply the origin or basis of political power that defines totalitarianism, nor even the amount of power or its ruthless application. A tyrant is not automatically a totalitarian. It is the political blanketing of the vast range of human activities — from intimate personal relations to philosophical beliefs — that constitutes “totalitarianism.” The founder of fascism and originator of the term “totalitarianism,” Benito Mussolini, summed it up: “All through the state, all for the state, nothing against the state, and nothing outside the state.”9 Totalitarianism “recognizes the individual only insofar as his interests coincide with those of the State.” Nongovernmental entities, whether formal or informal, had no place. “No individuals or groups, political parties, associations, economic unions, social classes are to exist apart from the state.”10 It is the exclusion or suppression of autonomous sources of orientation that is the defining characteristic of totalitarianism.

A military dictator may hold power through force of arms and mercilessly kill every political rival, and yet care little how children are raised, or whether the people are religious or not. In the Roman Empire before Christianity became the state religion, religious toleration was widespread,11 as was a certain amount of general toleration, accommodation, and social mobility in a large multiracial, multicultural domain.12 At this juncture, the Judeo-Christian religions were dealt with harshly precisely because they refused to accommodate other religions, which they denounced as idolatry.13 Yet the Roman Empire was an autocracy, and at various times a military dictatorship in which the emperor exercised arbitrary powers of life and death over the masses and the aristocracy alike. It was not totalitarian, however.

Totalitarian governments reach into every nook and cranny of private life, among the masses as well as the elite. Children are indoctrinated with the official ideology, taught to betray even their parents to the state, and as adults live in an atmosphere in which even the most intimate relationships are subject to state scrutiny and carry the threat of mutual betrayal or official retaliation against lovers or family members for the actions of an individual who has displeased the political authorities. History, science, and the arts are all made subject to political direction. Hitler’s “pseudoauthoritative judgments about everything under the sun”14 were matched by Stalin’s pronouncements that extended to linguistics and his disastrous imposition of Lysenko’s genetic theories on Soviet agriculture, and by Mao’s “sayings” which seemed to cover every aspect of human existence. It is not the source or the ruthlessness of power alone which defines totalitarianism, but the unprecedented scope of the activities subjected to political control.

A concentration camp is the ultimate in totalitarianism, with political decisions determining such routine things as eating and sleeping, as well as personal relations (dehumanization) and death (extermination). Slave plantations in the antebellum South have been analogized to concentration camps,15 but their paramount nonpolitical objective of economic gain meant that slave owners had to make far more concessions to slaves than concentration camp commanders ever made to their inmates. Concentration camps in both Nazi Germany and the Soviet Union were far less economically efficient than the totalitarian societies of which they were a part,16 but they were maintained despite this, for political purposes. Slave plantations were profit-making enterprises,17 inherently limited by that fact in how far they could go in oppressing or destroying the sources of their wealth. Whatever moral equivalence may have existed between the two kinds of institutions, they were neither politically nor economically equivalent.

A unifying ideology is essential in a totalitarian state, if only so that its multitudes of organizations do not work at cross purposes to such an extent as to be self-destructive. In the intentional terms of totalitarian belief or propaganda, power is exercised in the service of the ideology. However, in view of the ease with which Nazi officials became Communist officials after World War II, it is also possible that the ideology is exercised in the service of power. Certainly it is hard to imagine totalitarian state power without a unifying ideological theme, and history presents no examples.

The particular ideology may be a creation of the totalitarian leader, as in Hitler’s National Socialism, or may have an historical tradition, as in Marxism. However, even in the latter case, the ideology may still be instrumental rather than controlling. Certainly people following Marxism — as distinguished from using Marxism — could never set up a totalitarian state. Marx and Engels opposed even autocracy, much less totalitarianism.18 The whole point of the proletarian revolution — i.e., a revolution from the bottom up — was that revolution from the top down implied a post-revolutionary dictatorship over the proletariat.19 Lenin’s revolution from the top down confirmed the Marxian fears, but Lenin was not bound by the “original meaning” of Marxism and in fact reinterpreted Marx to justify what he had done.20

Ideology is not only instrumental, or a producer’s good, for the government; it is also a consumer good for the populace, or segments thereof. Totalitarian ideology typically features (1) the localization of evil — in Jews, capitalists, or some other group — so that comprehensive political solutions to age-old human problems seem feasible within a reasonable time horizon by surgically removing the offending group, leaving a healthy body politic intact, (2) the localization of wisdom, to explain why this miraculous cure has escaped so many minds for so many centuries, as well as explaining the necessity for superseding democratic institutions and beliefs, (3) a single scale of values by which priorities may be arranged in every field of human endeavor, to be achieved “at all cost,” (4) the presupposition of sufficient knowledge to achieve whatever goal may be projected, (5) the urgency of the “problem” to be “solved” so that ruthlessness is the lesser of two evils, and (6) a psychic identification with millions, whose opinions may nevertheless be disregarded and whose lives may be sacrificed in the cause, without feelings of guilt. Finally, the totalitarian ideology must be a self-enclosed system, to exclude alternative views and visions which are — regardless of their substance — inherently antithetical to a single totalitarian ideology. It is therefore central to totalitarian ideology that it convert questions of fact into questions of motive.21 Facts are a threat because they are independent of the ideology, and questioning the motives of whoever reports discordant facts is a low-cost way of disposing of them.

An ideology may be viewed as a knowledge-economizing device, for it explains complex empirical data with a few simple and familiar variables. It is hardly surprising that ideological explanations should have a special appeal to those with higher costs of alternative knowledge — the inexperienced (“youth”) and the previously politically apathetic (“masses”). As a leading student of totalitarianism has observed:

It was characteristic of the rise of the Nazi movement in Germany and of the Communist movements in Europe after 1930 that they recruited their members from this mass of apparently indifferent people whom all other parties had given up as too apathetic or too stupid for their attention.22

It is also in keeping with the concept of ideology as a knowledge-economizing device that there should be defections with age as discordant knowledge forces itself on one’s attention, until a point is reached where the cost of reconciling it with the ideological vision exceeds the cost of discarding the vision itself. Explaining complex reality with simple and familiar variables is a low-cost process initially, but this cost tends to rise over time, as ever more complex relationships must be postulated between the simple variables and the accumulating complex reality — much like the Flat Earth Society explaining away phenomena which long ago convinced others that the earth is round. Indeed, when theories are viewed instrumentally, rather than as literal reconstructions of reality, the reason for preferring the round earth theory is basically an intellectual economizing process: the incremental investment in a slightly more complex initial assumption than a flat earth is later repaid by lesser intellectual effort in reconciling the results with empirical observation. It is a question of cost-effectiveness rather than of reaching ultimate, immutable truth. For the initiate in totalitarian ideology, however, cost-effectiveness may lie with the simple assumptions, because authentication is a sequential process in which the full costs will be revealed only in the course of time. He may also be more interested in the power than in the cognitive advantages to be derived from totalitarianism — or may become so oriented in the course of time.

This consumer good aspect of totalitarian ideology is an essential part of the phenomenon. The hypnotic fascination and exhilaration with which Hitler’s followers listened to his speeches was an integral part of Nazism. Among Communists, the vision of the ideology itself — the “wretched of the earth” creating “a new world” — substitutes for oratorical genius, and has in fact proven far more effective with intellectuals. The “intellectual delight” and “intellectual bliss” on reading the Marxian vision,23 the sense of revelation when “the whole universe falls into a pattern like the stray pieces of a jigsaw puzzle assembled by magic at one stroke,”24 the thrill when the “revolutionary words leaped from the printed page and struck me with tremendous force”25 — these are part of the psychic rewards for the total commitment that characterizes totalitarian movements.

Because Marx and Engels had already paid the high fixed costs of creating the vision, latter-day Marxists could achieve ideological results at lower incremental costs. They need not possess Hitler’s genius for oratory or for discerning exploitable human susceptibilities. It is only in the light of such ideological visions that it is possible to understand the “confessions” to nonexistent crimes which have been produced not only in Soviet courts but even in Communist movements in Western democracies — movements possessing no tangible power to punish their members. The ideological context dwarfs the particular characteristics of the particular individual, as in this description of an internal party “trial” among American Communists in the 1930s:

… there had to be established in the minds of all present a vivid picture of mankind under oppression… At last, the world, the national, and the local picture had been fused into one overwhelming drama of moral struggle in which everybody in the hall was participating. This presentation had lasted for more than three hours, but it had enthroned a new sense of reality in the hearts of those present, a sense of man on Earth… Toward evening the direct charges against Ross were made…

The moment came for Ross to defend himself. I had been told that he had arranged for friends to testify in his behalf, but he called upon no one. He stood, trembling; he tried to talk and his words would not come. The hall was as still as death. Guilt was written in every pore of his black skin. His hands shook, he held onto the edge of the table to keep on his feet. His personality, his sense of himself, had been obliterated. Yet he could not have been so humbled unless he had shared and accepted the vision that had crushed him, the common vision that bound us all together.

“Comrades,” he said in a low, charged voice, “I’m guilty of all the charges, all of them.”

His voice broke in a sob. No one prodded him. No one tortured him. No one threatened him. He was free to go out of the hall and never see another Communist. But he did not want to. He could not. The vision of a communal world had sunk into his soul and it would never leave him until life left him.26

Conversely, without the commitment to the ideological vision, even the horrors of slave labor camps could not silence Solzhenitsyn, Sakharov, or other Soviet opponents of totalitarianism.

Ironically, the first book that Marx and Engels wrote together, in 1844, contained a scathing indictment of the practice of first breaking down individual self-respect and personality, and then attempting to reconstruct a human being according to some preconceived plan. The hero of a contemporary novel had produced a religious conversion in just that way. Marx and Engels pointed out that with his “smooth, honeyed curse” he had first “to soil her in her own eyes” in order to make her receptive to the redemption he would offer.27 The lofty motives with which this was done were simply camouflage for the zealot’s “lust” for “the self-humiliation of man.”28 Even in a political context, Marx had no use for the idea of state indoctrination.29

“Confessions” to nonexistent crimes illustrate another characteristic of totalitarianism — the concept of “political truth.” Not only are people and organizations subject to total control; so too is the truth. Hitler’s use of the reiterated big lie, and numerous Soviet revisions of official history (complete with airbrush erasures in historic photographs), are part of a pattern of control that extends to the basic data itself. This is more than the usual political lying common to systems of various sorts. It is monopolistic lying, with the exclusion of alternative sources of information. Moreover, it is lying on principle — or rather, it is a philosophy that regards what is said as largely instrumental, so that the very distinction between lying and the truth becomes blurred or even regarded as trivial or naive.30 Political truth is whatever will advance the interests of the cause or movement. Quite aside from ethical questions, this approach makes the same assumption of omnicompetence that is central to totalitarianism as a whole.

The philosophic postulate that statements are instrumental may be necessary, but by no means sufficient, as the basis for lying as a principle. It is not that philosophical postulate but the empirical presupposition of virtually zero incremental knowledge costs (omniscience) for some subset of people (“leaders”) that is crucial for the conclusion. Even viewed from a wholly instrumental perspective, the ethical norm of truth is a cost-saving social institution for people for whom knowledge is not a free good. If the set of such people includes all of humanity, then instrumental lying has social costs which cannot be assumed to be less than whatever benefits are contemplated — either for society at large or even for the subset who engage in this wholesale disinvestment in credibility. The presumption is indeed the other way. The systemic evolution of ethical norms of truthfulness in the most diverse and separated cultures — around the world and down through history — suggests something of the instrumental value of truth. Similar ethical norms in this regard originating in the prehistory of the human race, when the species was even more separated and fragmented than today, hardly seem the product of coincidental philosophic intentions rather than of systemic universalistic experience. It is difficult even to conceive theoretically of a society that could survive if statements had no more probability of being true than if they were generated by a process that was random with respect to truth as a value in itself. Even totalitarian governments invest substantially in the production of truth — including through secret police and torture — from an instrumental point of view.

The substitution of instrumental consequences for empirical truth as the criterion for statements is by no means the substitution of a more manageable standard. “The usefulness of an opinion is itself matter of opinion: as disputable, as open to discussion and requiring discussion as much as the opinion itself.”31 The sweeping scope and arbitrariness of the assumption that one can trace the instrumental consequences of particular words and deeds may be indicated by asking whether anyone could have foreseen the consequences of a certain Italian explorer’s theory that he could reach India by sailing west — a set of words and deeds that led to the discovery of half the planet and changed the course of history in both halves. It is especially ironic for totalitarianism to assume such omniscience, since it was precisely totalitarian oppression which drove from Germany and Italy the men who gave America the decisive military weapon of World War II and ushered in the nuclear age — Albert Einstein and Enrico Fermi.

Conversely, imagine a being with zero incremental knowledge cost — someone able to discern the remotest ramifications of his every statement. Why should such a being be bound by ethical norms of truth, either from the standpoint of self-interest or even if making the interests of humanity the paramount determinant of his behavior? If he knows to a certainty that saying A would on net balance (in all its ramifications) be more beneficial to mankind than saying B, would it not be blind, fetishistic, traditionalism for him to say B? Would it not be self-indulgence to say B in order to salve his own conscience at the known expense of perhaps millions of his fellow creatures, now and in the future? This is only to say that if human beings were entirely different creatures, entirely different principles might well apply. More practically, a choice among principles involves an understanding of the inherent limitations of the species and its surrounding circumstances, rather than a comparison of what would be the best mode of operation in an unconstrained world.

The instrumental case for truth is the instrumental case for human institutions in general — ultimately knowledge costs, which is to say, the unattainability of omniscience. Courts are preferred to lynch mobs even when it is known to a certainty in the particular case that the accused is guilty, and even if the lynch mob inflicts exactly the same punishment that the court would have inflicted. The philosophic principle that we “should not take the law into our own hands” can be viewed instrumentally as the statement that, however great our certainty in the particular case, we cannot supplant legal institutions as cost-saving devices because we cannot assume equal certainty in future cases. If we could know with certainty (zero incremental knowledge cost) in all cases who was guilty, would it not be blind, fetishistic traditionalism to maintain legal institutions to determine such matters? If man were indeed able to take in all existence at a glance — including past, present, and future existence — would there be any reason for any institutions? Even if some of these omniscient beings preferred antisocial behavior, why would it be necessary to have rules existing beforehand (and that is what institutions are) to deal with them, when the necessary actions could be determined ad hoc? Indeed, the potentially antisocial would know this themselves and be deterred.

Totalitarian institutions would be a contradiction in terms, if the central assumption of omnicompetence were universalistic. But totalitarian movements and institutions are based on a belief in differential knowledge costs (their leader or doctrine supposedly giving them vast advantages over others) and therefore one-way lying. The instrumental value of truth in the other direction is recognized by totalitarian nations’ pervasive surveillance of the population, monitoring of the effectiveness of their indoctrination, and sorting and labeling of the populace according to their perceived instrumental value to the state. All these assessments are intended to be as true as possible, even by the most lying totalitarian state. Soviet economic statistics are generally assumed to be technically correct, even if selectively and misleadingly published,32 simply because it is instrumentally essential that Soviet decision makers have the truth as far as they can get it themselves, and a multitude of copies of two different sets of statistics (one true for internal use and one false for the outside world) would be unfeasible, just from the virtual certainty of leaks in such a massive undertaking in duplicity.

The instrumental case for truthfulness rests ultimately on the same assumption as the instrumental case for human institutions in general, and for free institutions in particular. That assumption is that, because we cannot know all the ramifications of whatever we say or do, we must put our faith in certain general or systemic processes (morality, constitutions, the family, etc.), whose authentication by social experience over the centuries is more substantial than any particular individual revelation or articulation. This is not to say that no social processes should be changed or even abandoned. On the contrary, their history has been largely a history of change — usually based on social experience, even when marked by individual revelation or articulation. What is at issue is: who should decide the nature of these changes, subject to what incentives and constraints? An enduring framework — morality or a constitution — does not preclude change but may well facilitate it, by reducing the fears that might otherwise be aroused by reforms if their full ramifications were literally unbounded and unimaginable. Countries may change faster because they have certain institutional limitations, just as cars travel faster because they have brakes.

The social and political differences between the United States today and two centuries ago are staggering, though all within the same general legal and moral framework. Totalitarian governments can make more rapid changes of personnel (“purges”) and policies (the Nazi-Soviet pact, changes in Sino-Soviet relations, etc.) as of a given time, but the fixed purposes of all such changes may mean less fundamental social and political change within the country than in a democratic or conventionally autocratic system. Certainly it would be difficult to argue that the Soviet Union today is as socially and politically different from the Soviet Union fifty years ago as is the contemporary United States from what it was half a century ago. The change in the status of the American black population alone has been dramatic, in addition to changes in the role of government in the economy and society, and countless shifts in the balance of social and political power among a variety of regional, economic, and philosophic groups.

Change is one of the great promises of totalitarian movements — whether Hitler’s “New Order,” Mussolini’s “new departure in history,”33 or a variety of Marxist-Leninist-Stalinist variations on the same theme. Initially profound changes in political power are indeed characteristic of totalitarianism. But whatever the intentional forces at work among the original insurgents, the systemic effects have been centered on retaining the totalitarian power, at whatever cost in terms of violating the original program or ideology. This has typically necessitated, at some point, a purge of those attracted by the original insurgent program that is now being discarded when in power. Hitler’s 1934 purge of his storm trooper leaders from insurgent days34 and Stalin’s purge of Trotsky (and many others) were part of a pattern that has been characteristic of totalitarian governments around the world. While national dangers have been used to justify such actions, they have in fact typically occurred after a consolidation of power, when there was considerable evidence (including statements within the regime) that the dangers to the government had lessened.35 Perhaps these events mark the transition from a totalitarian movement’s seeking of power for a purpose to a situation in which power has itself become the purpose. For at least some unfortunate segments of totalitarian movements, it is clear that they could not predict the ramifications of the forces they set in motion as insurgents.

CONSTITUTIONAL DEMOCRACY

As noted in Chapter 5, a government whose source of power is democratic may promote either freedom or tyranny. The rise of popularly elected government in the American South toward the end of the nineteenth century marked the spread of Jim Crow laws and an unprecedented terror against the black population, both inside and outside the law. By contrast, most of the personal rights which are loosely referred to as “democratic” rights were pioneered in England under governments that were democratically elected only within the past century — the popular franchise being a consequence rather than the cause of these developments, which go back to Magna Carta. In short, despite a general, historical association of freedom and democracy, they can be independent of each other in theory, and have at times been so in practice. Indeed, Hitler came to power through democratic and constitutional processes.

Freedom cannot be made definitionally a part of democracy. The democratic process is a mode of political decision making. Freedom may occur under this or other modes. The more autocratic the government, however, the more freedom depends on the benevolence, indifference, or inefficiency of the authorities. Such freedom can readily be suspended or revoked when it threatens the existing authorities or the existing form of government. By contrast, democratic freedom typically means recognition as a practical matter — and/or as an ethical principle — that freedom is difficult to maintain for most when it is not maintained for all. Thus democratic freedoms include the freedom to denounce freedom and to advocate and even carry out its destruction, as in the rise of Hitler in the Weimar Republic. In short, the movement from freedom to totalitarianism tends institutionally to be a one-way movement, since despotism recognizes no popular right to move back toward freedom. Historically, the movement from despotism to freedom has taken place after despotism’s self-destruction (Hitler being the clearest example) through either internal or external force, aroused by the excesses of despotism itself. The immediate incremental costs of moving in the totalitarian direction are, however, asymmetrical. It is easy to give up freedom and hard to get it back. Only a general horror of loss of freedom acts to convey these future costs into present-day decision-making processes.

In the perspective of world history, constitutional democracy is a very late arrival. Autocratic, aristocratic, and dynastic governments all go back for thousands of years, but the first time in history when a national government voluntarily relinquished power to an alternative set of political leaders as a result of a popular vote was 1800, when the Federalists turned power over to Jefferson’s Democratic Republicans. Constitutional democracy is a new — and indeed, fragile — form of government. Yet its appeal is so widespread that even some totalitarian governments create its outward appearances to win supporters (or at least, neutralize critics) at home and abroad.36

While freedom antedates constitutional democracy, both are rooted in a division of power. A constitution intentionally creates institutionally what has occurred fortuitously or systemically at various times in history — such a division of the decision-making power as to preclude one faction’s complete domination and to necessitate their courting of popularity. “Despotism itself is forced to truck and huckster,” under such circumstances, and even an absolute monarch “governs with a loose rein that he may govern at all…”37 Freedom as a result of division prevailed among the Arabs before Mohammed united them,38 and religious freedom existed among the diverse peoples of the Roman Empire before Christianity united them by conversion or through force. Much of the freedom of colonial America and the early United States was a fortuitous freedom, born of the sheer diversity of local despotisms, too numerous and widespread to unite or overcome one another. A leading American historian has observed: “In none of the colonies was there anything that would today be recognized as ‘freedom of the press.’”39 Religious freedom was equally scarce. In 1637 the Massachusetts Bay Colony “passed an ordinance prohibiting anyone from settling within the colony without first having his orthodoxy approved by the magistrates.”40 A Puritan leader declared that other religionists “shall have free Liberty to keep away from us.”41 The banishment of Roger Williams42 and the public whippings and brutal imprisonment of the Quakers who came to Massachusetts43 indicate that this was no idle statement. Nor was Massachusetts unique, or Quakerism the only proscribed religion. In late colonial America, “the only place where the public exercise of Catholic rites was permitted was Pennsylvania, and this was over the protest of the last governor.”44 It was from this “decentralized authoritarianism” that a “great diversity of opinion” came, not from toleration in principle but from “the existence of many communities within the society each with its own rigid canons of orthodoxy.”45

Systemically evolved freedom in colonial America later became intentionally preserved freedom, in the Constitution of the United States. The Constitution relied on institutionalized divisions of power to preserve the freedom created by fortuitous divisions of power. It was the social equivalent of a chance mutation being preserved because it proved valuable. In addition to the classic division of powers into legislative, executive, and judicial, the Constitution divided powers into federal and state — with the state power being the predominant power in most areas, superseded by federal power primarily in interstate or international matters. This created as many independent power centers as there were states. States’ rights, like some other rights, exist not so much to benefit the actual holders of those rights as to serve larger social purposes.

The dominant theme of the Constitution itself and of the writings of those who created it was the danger of power concentrated in a single decision-making unit or in a few decision-making units operating in concert. What Madison called a system of “opposite and rival interests”46 was built into the American government. Each branch of government was given “the necessary constitutional means and personal motives to resist encroachments of the others.”47 Freedom was not trusted to the morality of leaders but to their conflicting drives: “Ambition must be made to counteract ambition.”48 Government was not to create divisiveness but to utilize the inherent conflicts “sown in the nature of man” as a means of preserving freedom.49 Perhaps the point is most easily illustrated in reverse: the one area in which a united national majority was easily identified in colonial America was race, and it was here that the loss of freedom was carried to its extreme in slavery. Although it is known when Africans were first brought to America (in 1619), it is not known when slavery began, because the first captured Africans became indentured servants, like an even larger number of contemporary whites.50 But slavery evolved as systemically for blacks as freedom did for whites, and in both cases the legal system later ratified what was already an accomplished fact. In short, the connection between freedom and the presence of offsetting powers is shown both by the presence and the absence of freedom in colonial America.

Over the years, but especially in the twentieth century, the constitutional division of powers has been eroded or destroyed in a number of ways. The intentional combination of the constitutionally divided legislative, executive, and judicial powers in administrative agencies is only one of these ways, though perhaps the most blatant. The Civil War and its aftermath for generations set up federal-state confrontations in which “states’ rights” were almost invariably interwoven with racial oppressions increasingly rejected by the country at large. The preservation of the historic division of powers has been dependent upon the interpretation of the Constitution by a Supreme Court which itself stands to benefit from the concentration of power in the federal government, and from extending judicial power into executive and legislative areas. Moreover, the sheer growth in size of the federal government has given it new powers derived neither from the Constitution nor from any statutes, but inherent in the disposition of vast sums of money, many important jobs, and great discretionary powers in enforcing a massive and ever-growing body of laws and regulations. Finally, the ideologizing of politics has made the preservation of the constitutional framework a matter of reduced importance in the face of passionately felt urgencies. These various forces can be summed up as the moral and the institutional reasons for the erosion of the constitutional divisions of power.

How does the sheer size of government affect constitutional democracy or freedom? First of all, the size of the government affects the ability of the citizens to monitor what it does — or even the ability of their elected political surrogates to monitor the activities of a far-flung administrative empire, with officials who may dispose of sums of money greater than the gross national products of many nations. The congressional committee system attempts to cope with the problem by assigning a segment of each house to concentrate on particular policy issues — banking and currency, the military, labor, etc. — and make reports to the full Senate or House of Representatives, to guide the votes of individual members. However, as the government has expanded the scope of its activities, each Senator or Representative has to serve on so many committees and subcommittees (about ten subcommittees per Senator, for example51) dealing with matters of such complexity that no unaided individual could stay abreast of it all. This in turn means that political surrogates themselves are forced to resort to other surrogates — their staff aides, whose influence is so pervasive that they have been referred to as a second set of lawmakers.52 Committee staffs do not simply acquire factual information; they influence the substance and thrust of legislation, and often write its provisions. The high cost of knowledge also adds weight to lobbyists for special interests, who have incentives to become knowledgeable in a narrow but often complex area. Like the committee staffs and lobbyists, career bureaucrats owe much of their influence to the high cost of knowledge. The career bureaucrats both write and interpret federal regulations, which in 1975 occupied more than 60,000 pages of the Federal Register — three times the number of pages in 1970.53 In short, escalating knowledge costs reduce the representativeness of government. There are also huge financial costs of government programs, which tend to be argued over in terms of their individual merits or demerits, without regard to their effect on the size and responsiveness of government.

The growth of administrative agencies is not merely the growth of an arm of government performing assigned tasks. It is the growth of a sector with its own political initiatives and its own external constituencies developed as a result of its initial mandate, constantly pushing for an expansion of its activities and benefits. It is the creation of an external constituency that is politically crucial, and this means that one segment of the electorate receives — in addition to whatever current direct benefits are involved — the enduring advantage of mutual knowledge of who constitutes the beneficiaries at a lower cost than the average citizen’s cost of knowledge of who pays in money and in other ways. The net result is that programs whose costs exceed their benefits may not only continue but expand, due to different costs of knowledge between the created constituency and the general public. In the light of these different knowledge costs, it is understandable that between 1950 and 1970 government payments to farmers increased tenfold, even though the number of farms was reduced about 50 percent,54 that heavily criticized programs like Urban Renewal had their appropriations tripled in less than a decade,55 or that expenditures on elementary and secondary education have risen exponentially while both the numbers and performance of students have been declining.56 It is difficult to imagine any of these things happening in a world of zero knowledge cost or even of equal knowledge cost as between bureaucratic constituencies and the voting public.

The knowledge cost differential is exploited in various ways. One is the “entering wedge” approach to political innovation, in which the initial stakes are so low as to cause opposition fears to seem so exaggerated as to be discredited as outlandish. Later, the scope of the innovation can manifest itself in growing sums of money and/or burgeoning powers, after public interest has waned or turned to other things. For example, HEW began with less than a six billion dollar appropriation, which has since increased to more than thirty times that amount. The income tax began in 1913 with a maximum tax rate of 6 percent on incomes of a million dollars per year and over; now higher rates than that are paid on incomes of two thousand dollars per year.57 Temporary concealment pays big political dividends because of the high cost — and differential costs per unit of benefit — to the public of trying to continuously monitor all ongoing programs. Building subsidies in various government housing programs are routinely understated at the outset, even though it will obviously be impossible to conceal them indefinitely, because, as one federal official said (in justification), “if you put these huge capital contributions up front there’s no way any administration would propose it or any Congress would approve it.”58 In other words, the voters would never stand for it if they knew. That it will eventually become “public knowledge” in some sense means little in practical political decision-making terms, if “eventually” lies beyond the time horizon of political incumbents and/or if the “public” which eventually knows the facts is substantially less than the electorate.

Many economic devices and accounting tricks which do nothing more than postpone the transmission of financial knowledge to the public depend for their political effectiveness on knowledge cost differentials between the public and “insiders.” One such device is simply mislabeling as “loans” expenditures which no one expects to be repaid. These may be “loans” to individuals, businesses, municipalities, other nations, or international organizations. Even better for concealment purposes are “loan guarantees” in which both the federal government and the recipient can boldly state (without fear of immediate demonstrable contradiction) that there is “no handout” involved but only federal good offices used in obtaining private loans from banks. Everyone directly involved may know — as in the case of federal loan guarantees to New York City — that there is no rational hope that the private loans will ever be repaid, and that the banks will collect from the U.S. Treasury, eventually. In the meantime, it is not carried on the books as an expenditure or as a liability (economically or politically) of the incumbent administration. This is not a new phenomenon historically. It has long been commonplace in the deficit financing of Italian cities by the central government in Rome.59 Its political acceptance in America is relatively new because previously there was a strong but generalized and largely unarticulated suspicion of subsidies in any form. With the emergence of an onus of articulated rationality for all positions taken, such low-cost political protection was no longer available to the public.

The political advantages to “insiders” of postponed knowledge availability are more readily seen in economic terms, but the same principle applies in noneconomic policy areas as well. One can produce “peace in our time” as British Prime Minister Neville Chamberlain did in 1938, at costs that become manifest in later times — though not late enough for Chamberlain’s political career in this particular case. Japan’s militarists produced exhilarating triumphs at Pearl Harbor and Bataan, whose ultimate costs were paid at Hiroshima and Nagasaki. Hitler likewise produced a great national exhilaration with a series of triumphs for Germany at later costs that included German cities more devastated than Hiroshima or Nagasaki, though by pre-nuclear technology. It was not simply that Tojo or Hitler miscalculated. Rather, they took calculated risks whose magnitudes (costs) were insufficiently understood by their respective peoples during the decision-making period. More politically successful cost concealments abound, however. On a smaller scale, social experiments of various sorts have produced immediate political benefits for their partisans at costs only much later manifested in demonstrable consequences.

The classical criticism of the growth of government has been that it threatens both efficiency and freedom — that it is “the road to serfdom.”60 While many inefficiencies of government are too blatant to deny, the big-government threat to freedom has been denied and ridiculed. It is claimed that “nothing of the sort has happened.”61 “Nor need we fear” that “increased government intervention” will mean “serfdom.”62 It is pointed out that “in none of the welfare states has government control of the economy — regardless of the wisdom and feasibility of the regulatory measures — prevented the electorate from voting the governing political party out of power.”63 Such views are not confined to the liberal-left portion of the political spectrum. A leading economist of the “Chicago School” has stated: “hardly anyone believes that any basic liberties are seriously infringed today.”64

Part of the problem with the argument that freedom has not been impaired by big government is the arbitrarily restrictive definition of “freedom” as those particular freedoms central to the activities of intellectuals as a social class. But the rights to be free of government-imposed disabilities in seeking a job or an education are rights of great value, not only to racial or ethnic minorities — as shown by the civil rights movements of the 1960s — but also to the population at large, as shown in their outraged (but largely futile) reaction to “affirmative action” and “busing” in the 1970s. Even aside from the question of the substantive merits or demerits of these policies, clearly people perceive their freedom impaired when such vital concerns as their work and their children are controlled by governmental decisions repugnant to, but insulated from, the desires of themselves and the population at large. This loss of freedom is no less real when others make the case for the merits of the various social policies involved or denounce as immoral the opposition to them. Freedom is precisely the right to behave contrary to the values, desires, or beliefs of others. To say that this right can never be absolute is only to say that freedom itself can never be absolute. Much of the loss of freedom with the growth of big government has been concealed because the direct losses have been suffered by intermediary decision makers — notably businessmen — and it is only after the process has gone on for a long time that it becomes blatantly obvious to the public that an employer’s loss of freedom in choosing whom to hire is the worker’s loss of freedom in getting a job on his merits, and that a university’s loss of freedom in selecting faculty or students is their children’s loss of freedom in seeking admission or in seeking the best minds to be taught by. The passions aroused by these issues go well beyond what would be involved in a simple question of efficiency, as distinguished from freedom. Nor can the passionate opposition be waved aside as mere “racism.” Not only are minorities themselves opposed to quotas and busing: so are others who fought for racial equality long before it became popular. Nor are racial issues unique in arousing passions. Even such an apparently small issue as mandatory seat belt buzzers created a storm of protest against government encroachment on the freedom of the individual. The quiescence of intellectuals as long as their freedom to write and lecture remained safe may be less an indication of the preeminence of these particular freedoms than of the insularity of intellectuals.

The argument that the ability to vote political leaders out of office remains unimpaired by the growth of government is somewhat beside the point. Democracy is not simply the right to change political personnel, but the right to change policies. The reduced ability of the electorate to change policy is one of the consequences of growing government — and particularly of government whose power is growing most in its most insulated institutions, the federal courts and administrative agencies. The judicial and administrative nullifications of congressional attempts to stop quotas and busing65 are only the most striking contemporary examples. The undeclared war in Vietnam was another short-circuiting of public control over major national policy. Public opinion against leniency to criminals has had little effect, and the growing public support for capital punishment66 has paralleled a growing outlawing of its use by the Supreme Court. Even policies nominally under the control of elected officials have gone counter to the philosophy of those officials. “Affirmative action” quotas and massive school busing both developed under the Nixon-Ford administrations, which were opposed to them. So too did the rapid growth of federal welfare expenditures, which finally surpassed military expenditures under Nixon.67 The substantive merits of these developments are not at issue here. The point is that they illustrate the increasing difficulty of public control of governmental policies, even with changes of officials, even at the highest elected levels.

None of this is historically unique. In the late stages of the Roman Empire its civil servants “felt able to exhibit a serene defiance of the emperor.”68 Roman emperors had the power of life and death, but Roman bureaucrats knew how to run a vast empire that had grown beyond the effective control (or even knowledge) of any individual. The same was later true of Czarist Russia, for John Stuart Mill declared: “The Czar himself is powerless against the bureaucratic body; he can send any one of them to Siberia, but he cannot govern without them, or against their will.”69 The experience of imperial China was very much the same.70

Freedom to act in economic matters is neither a negligible kind of freedom in itself nor unrelated to other freedoms. The “McCarthy era” attacks on people associated with left-wing causes were primarily attacks on their jobs rather than attempts to impose direct government prohibitions or restrictions on what people could say or believe. Yet both sides recognized the high political stakes in this basically economic restriction. But even as regards issues where both the ends and the means are economic, freedom may yet be involved. When people living in homes and neighborhoods that pose no threat to themselves or others are forced to uproot themselves and scatter against their will, leaving their homes to be destroyed by bulldozers, they have lost freedom as well as houses and personal relationships. This loss of freedom would be no less real if it were justifiable by some national emergency (military action) or locally urgent conditions (epidemic). That it is more likely to be a result of some administrative agency’s preference for seeing a shopping mall where the neighborhood once stood only adds economic and sociological issues. It does not eliminate the issue of freedom. Indeed, serfdom itself was largely an economic relationship, but that did not prevent its disappearance from being a milestone in the development of freedom. The oft-noted political “cowardice” of big business corporations may in fact be prudence in light of the many costly processes through which government can run them. The constitutional protections against government punishment-by-processing (independent of ultimate verdicts) do not apply where economically punitive actions are not legally interpreted as punishments, or where administrative agencies can drain their time and money, subject neither to the restrictions of impartial judiciary concepts nor to governmental bearing of burdens of proof. What is “euphemistically called social responsibility” may in fact be simply the “threat of law”71 — or of extralegal powers derived from institutions set up for entirely different purposes. For example, the Internal Revenue Service can threaten (and has threatened) to revoke the tax-exempt status of organizations whose policies displease the government, even though such organizations violated no explicit statute. In addition, political hostility to philanthropic foundations found expression in the 1969 Tax Reform Act, which both drained and constrained the use of foundations’ financial resources.72

Though the Constitution was intended as a barrier against the concentration of power in the federal government, it has been construed by the Supreme Court in ways that facilitate such concentration. Despite the impartiality expected of the judiciary, the Supreme Court is itself an interested party in any case concerning the constitutional division of power, either between state and federal governments or among the executive, legislative, and judicial branches of the latter. Public opinion long stood as a barrier to judicial activism, and the “court-packing” threat of Franklin D. Roosevelt in the 1930s, which forced the Court to retreat from “substantive due process” doctrines, was evidence of the limits of political toleration and the Court’s reluctance to face a constitutional showdown. Less than twenty years later, however, the Supreme Court was launched on a course of judicial activism which made the earlier courts seem very tame — and there was no similar reaction of public opinion or political leaders. Attempts at restraining the Court or impeaching particular justices — Warren and Douglas being prime targets — were ridiculed for their futility. Partly this may have been because the courts were, initially at least, moving with the currents of the time, especially in desegregation. Partly, too, it reflected the growing influence of political and legal “realism” about the impossibility of objective “interpretation” of the Constitution as distinguished from judicial policy-making. As in other contexts, “realism” here meant the acceptance of incremental defects as categorical precedents. A continuum between objective “interpretation” and subjective policy-making was arbitrarily dichotomized in such a way that everything fell on the subjective side. Once the impossibility of perfect, universally objective and neutral interpretation had been proven, it was a short step to the acceptance of a growing subjective component in what was increasingly regarded, even by the Supreme Court’s friends and partisans, as judicial policy-making. It was another triumph of the precisional fallacy: because a line could not be precisely drawn, there were supposedly no decisive distinctions among any parts of the relevant continuum.

Whatever the mixture of reasons and their respective weights, the courts were no constitutional barrier to the concentration of power. In the jargon of the times, they were not part of the solution, but part of the problem.

Historic events also promoted the concentration of power. The Civil War and its racial aftermath, in the South especially, ranged many of the most conscientious people in the nation on the side of federal power against “states’ rights.” The principle of “states’ rights” was generally available only in a “package deal” with racial bigotry, cynical discrimination, and lynchings. In such a package, the principle had no chance of long-run survival on its own merit vis-à-vis the principle of unrestrained federal power. But every decision increasing federal power at the expense of state power applies to all the states — not just the South — and reduces the states from autonomous power centers toward the status of administrative units of the national government. This is most apparent in federal-state joint programs, ranging from “revenue sharing” to specific “matching grants” or other Washington-financed and Washington-controlled activities in which federal money sustains state activities — contingent on the states’ subordination of their decision-making discretion to federal “guidelines.” However, even in activities solely administered by the state or local government — public schools, for example — federal “guidelines” control not only the hiring of teachers and the placement of students but a host of other decisions, down to such minute considerations as the number of cheerleaders for girls’ and boys’ athletic teams.73 That the physical administration remains wholly in state and local hands in no way changes the fact that the decision making has moved to Washington. In this way the physical fallacy conceals an historic shift of power.

Even more of an historic landmark in political development was the Great Depression of the 1930s. Though liberal and conservative scholars alike have traced the origin of the Depression to catastrophic governmental monetary policies,74 the popular interpretation and the political consensus both treat the Great Depression as showing the failure of the economic market and the inherent flaws of capitalism, demonstrating an “objective” need for government economic intervention. However disputable this belief, what is not seriously disputable is that the belief itself marked a turning point in the political and economic thinking of an age. It would be hard to explain how post-World War II America, in an age of unprecedented prosperity, widening opportunities, and virtually nonexistent unemployment, became preoccupied with government-guaranteed security without realizing that only a decade earlier this same generation had gone through a traumatic economic and social experience. The 1930s left more than a psychic legacy, however. Enduring institutions were created to deal with an episodic crisis. The severity of that crisis should not be underestimated merely because it was episodic. Millions of American farmers and homeowners found themselves on the verge of losing what they had worked and sacrificed for a lifetime to have, when monetary contractions beyond their control or foresight increased the real burden of their mortgages at a time when their incomes were sharply cut or lost altogether. When mortgage foreclosures were resisted by armed and desperate people, the government’s options were bloodshed or relief measures. However prudent, wise, or humane it may have been to aid destitute farmers, for example, aiding them by establishing enduring institutions meant that, decades later, billions of dollars would still be spent under entirely different conditions — much of it going to agricultural corporations.

Agriculture was, of course, only one of many areas in which permanent institutions were established to cope with an episodic crisis. Labor, aviation, electric power generation, public housing, dairy products, and a host of “fair-traded” items all became subjects of newly created federal agencies. The fiscal policies of the federal government were also permanently altered. Whereas years of government budget surpluses outnumbered years of deficits in both the eighteenth and nineteenth centuries, and though the 1920s were a solid decade of surpluses, the 1930s were a solid decade of deficits — setting the stage for the general prevalence of deficits ever since.75 The inflationary effects of these deficits can be seen in the doubling of the wholesale price level between 1931 and 1948, whereas it declined between 1831 and 1848; in fact, prices were lower at the end of the nineteenth century than they were at the beginning.76 But aside from their economic effects, budget deficits have the political effect of insulating expenditures from immediate taxpayer knowledge.

The New Deal administration of the 1930s also introduced intellectuals into the government on a large scale — enlisting in the process not only those intellectuals actually in office but also, to a considerable extent, their fellow intellectuals in the academy and elsewhere as natural partisans. This too has remained an enduring and expanding feature of political decision-making. The beliefs and fashions of intellectuals entered political decision-making, not under the open and challengeable banner of interest or ideology, but in the insulated guise of “expertise.” In short, it was another force tending toward the insulation of governmental decision making from effective public feedback. The opening of political careers (usually nonelective) to intellectuals also provided intellectuals inside and outside of government with an incentive for favoring the concentration of power. As Tocqueville observed more than a century ago:

It may easily be foreseen that almost all the able and ambitious members of a democratic community will labor unceasingly to extend the powers of government, because they all hope at some time or other to wield those powers themselves. It would be a waste of time to attempt to prove to them that extreme centralization may be injurious to the state, since they are centralizing it for their own benefit. Among the public men of democracies, there are hardly any but men of great disinterestedness or extreme mediocrity who seek to oppose the centralization of government; the former are scarce, the latter powerless.77

RATIONALES FOR POWER

The discussion thus far has been primarily in terms of the manner in which government has expanded, rather than the underlying rationales for such expansion. Perhaps the simplest rationale for expansion of the areas and powers of governmental decision-making is that a crisis has thrust new responsibilities upon the government, and that it would be derelict in its duty if it did not expand its powers to meet them. Among the more prominent ideological rationales for expanded government is a “maldistribution” of status, rights, or benefits — any existing process or result constituting “maldistribution” to those who would prefer something else. For example, equality can be a maldistribution of status from the standpoint of racists, and the correction of this “maldistribution” was in fact a central feature of Hitler’s program. Power may also be sought on the rationale that it is needed to offset already existing power. Yet another rationale for expanded government is the creation of national “purpose” — consensus being viewed as a consumer good (implicitly, worth its cost).

CRISIS

Even the most democratic and constitutional governments tend to expand their powers during wartime, and in natural disaster areas it is common to station troops and declare martial law even in peacetime. Such buildups of governmental power tend to dissipate with the passing of the emergency, whose end is generally easy for the public at large to recognize.

An enduring concentration of governmental power requires either that the public perception of crisis be deliberately prolonged or that the crisis be used to establish institutions which will outlast the crisis itself.

A deliberately prolonged crisis atmosphere can be managed indefinitely only by a totalitarian state, able to depict itself to its people as threatened on all sides by enemies — and able to exclude contrary interpretations of events. This has in fact been the basic posture of totalitarian states in general. For example, the reiterated theme of “peace,” renunciations of expansionism in general and in particular, and outright ridicule of foreign fears to the contrary were common to Hitler78 and to Stalin in the 1930s — though the latter annexed even more territory than the former from the beginning of World War II to the Nazi invasion of the U.S.S.R.79 Even the most aggressive totalitarian state can claim to be threatened by others — and can even cite evidence, since its aggressive military preparations are sure to stimulate at least some military preparedness on the part of other countries. Hitler in the 1930s was perhaps the classic example of this propaganda inversion of cause and effect, though certainly not the last.

In a constitutional democracy, a crisis cannot be made to last indefinitely because alternative versions of events cannot be suppressed. Real crises must be utilized to establish enduring institutions. The Great Depression of the 1930s was a landmark in this respect. The monetary system — the gold standard — was permanently changed. Labor-management relations were permanently changed by the Wagner Act, adding legal sanctions against employers to other union powers. The permissible limits of price competition were permanently reduced by the Robinson-Patman Act, “fair trade” laws, and a host of special restrictions and subsidies applying to sugar, the maritime industry, and others. All these political developments enhanced governmental power, either directly, as with regulatory laws, or indirectly by freeing government from previously existing restraints, as with the abandonment of the gold standard and relaxed standards of constitutionality for the hybrid executive-legislative-judicial agencies created by the New Deal. There was not only an extraordinary growth of governmental power but an unprecedented political swing. Roosevelt’s electoral victory in 1936 was the greatest ever achieved up to that point: he carried all but two states. Moreover, it was part of a larger historical pattern, which ultimately included an unheard-of string of four consecutive presidential election victories, along with one political party’s control of both houses of Congress for more than a decade — also unprecedented in American history.

The demonstrable political value of crises was not lost upon subsequent governments or politicians. So many things have since been called a “crisis” that the word has virtually become a political synonym for “situation,” and indicates little more than something that someone wants to change.

In recent decades, there has been a trend toward supplanting individual decision making based on behavioral assessments with decision making based on ascribed status. There have been laws proposed and enacted, administrative rulings, judicial decisions, and other political directives prohibiting various kinds of private decision makers from sorting and labeling on the basis of innate biological characteristics (race, sex), transient conditions (childhood, old age), or even volitional behavior (homosexuality, drug use, criminal record). In addition, there have been costs of various sorts and magnitudes imposed by government on those attempting to sort people by various performance characteristics (test scores, work evaluations). For example, letters of reference have been forced to become nonconfidential, and together with the increasing ease of initiating lawsuits, this means that they have become so bland and noncommittal as to lose much of their value as transmissions of information on which to sort and label job applicants or seekers after various other kinds of benefits. The imposition of “due process” concepts on public school administrators has similarly reduced the ability of decision makers on the scene to sort out students who prevent other children from learning, either by direct disruption of classes or by creating an atmosphere of random terrorism and/or systematic extortion.80

Sometimes these governmental activities have been accompanied by admonitions to judge each person individually, rather than by sorting and labeling selected characteristics, but such advice is little more than gratuitous salt in the wound, given the cost differentials involved in these two methods. Sometimes the ascribed status is preferential, so that sorting and labeling that is biased in the prescribed direction is legal but any bias in a different direction is not.

Many decisions which involve status ascription might be regarded from some other points of view as ordinary social decisions involving efficiency or other such mundane considerations. However, what is striking about recent times is precisely the growth of an ideological passion which regards particular decisions and decision making processes as symbolic of status rather than simply as instruments of social expediency. One of the more extreme examples of this was the insistence of French-Canadian authorities in Canada’s Quebec Province that airline pilots landing at their airports converse with the control towers only in French. Even though hundreds of lives are at stake in conversations between pilots and control towers, this social expediency consideration was subordinated to the status ascription issues involved — the general controversy over the preeminence of French language and culture in Quebec. Only a concerted refusal of international airline pilots to fly into Quebec forced the government to reconsider this policy. In the United States, various groups have regarded various laws and policies (private and public) as involving the status of their members — their ultimate value as human beings — rather than simply questions about the best way to get a given job done or the social expediency of particular processes. Even where there are demonstrable behavioral differences between groups — e.g., a decade’s difference in longevity between men and women — the law has forbidden employee pension plans to treat men and women differently, as a violation of their equal status.81 The separation of boys and girls in athletic and social activities is also challengeable in courts, even where such separation is by nongovernmental, voluntary organizations like the Boys Club, and even though there are numerous demonstrable behavioral differences between boys and girls, including not only physical strength but also maturation rates, to take some of the more obvious examples. Yet the passion behind objections to differences in treatment turns on status questions rather than behavioral questions. Moreover, the issue is often posed as if it were inherently and solely a status issue — as if there were no conflict between behavior-based and status-based decisions — and therefore opponents of particular status-based decisions are depicted as advocates of inferior status for the group in question. Even groups defined by behavioral differences (homosexuals, alcoholics) claim denial of their equal status when treated differently by others. Carried to its logical conclusion, this trend would argue that social processes should make decisions solely on the basis of status rather than behavior: if there are homes for unwed mothers, there should be homes for unwed fathers. While few would go that far, the point is that the principle invoked — and the categorical way it is invoked and its opponents smeared — provides no logical stopping point short of that. The only practical limit is what status ascription advocates find intuitively plausible or politically feasible at a given time — and neither of these considerations provides any long-run constraint on carrying the principle into regions of diminishing or negative returns.

The link between status ascription and political power is apparent in the “redistribution” of income and other economic benefits. While growing governmental control over the output generated by private activity is often described by its hoped-for result as “income redistribution,” statistical data show that the actual “redistribution” of money and power from the public to the government vastly exceeds any “redistribution” from one income class to another. The percentage of the aggregate American income earned by the top fifth, bottom fifth, etc., has remained almost unchanged for decades82 while governmental powers and welfare state expenditures have expanded tremendously. There has been “less a redistribution of free income from the richer to the poorer, as we imagined, than a redistribution of power from the individual to the State.”83 International comparisons show the same result as intertemporal comparisons: “In all the Western nations — the United States, Sweden, the United Kingdom, France, Germany — despite the varieties of social and economic policies of their governments, the distribution of income is strikingly similar.”84 What the national differences in “welfare state” policies actually affect is the distribution of money and power between the public and the government.

So-called “income redistribution” schemes substitute status for behavior as the basis for receipt of income: because of one’s status as an equal citizen of the country, one has a “right” to at least a “decent income,” and perhaps an “equitable share” in the nation’s output, or even an “equal share” where this doctrine is carried to its logical conclusion. In short, personal income should be based not on behavioral assessments by the users of one’s services but on ascribed status as determined by a given set of political authorities. Implicit in this latter process is a concentration of power, for “distributive justice” as a hoped-for ideal means distributor’s justice as a social process.

In an uncontrolled economy it is possible for all individuals to become more prosperous, each acquiring more of his own preferred mixture of goods. But because “justice” is inherently interpersonal, it is not similarly possible for everyone to acquire more justice. More “social justice” necessarily means more of one conception of justice overriding all others. The economic inefficiencies involved in such a process are politically less important in their own right than for their effect on freedom. An imposed social pattern that leaves many unrealized economic gains to be made from mutually beneficial transactions must devote much political power to preventing those transactions from taking place, and must pay the cost not only economically and in loss of freedom, but in a demoralization of the social fabric as duplicity and/or corruption become ways of life. The demoralizing experience of attempting to prevent mutually preferred transactions in only one commodity — alcoholic beverages under Prohibition — suggests something of the magnitude of the problem involved.

Justice of any sort — criminal justice as well as so-called “social justice” — implies the imposition of a given standard on people with different standards. Ironically, many of those politically most in favor of “social justice” are most critical of the loss of personal freedom under the authority of criminal justice, and most prone to restrict the discretion and power of police and trial judges in order to safeguard or enhance personal freedom. The imposition of criminal justice standards, however, usually involves far more agreement on values — the undesirability of murder or robbery, for example — than is involved in standards of “social justice,” and should therefore require less loss of freedom in imposing one standard on all. Certainly it would be hard to argue the opposite, in view of the broad similarity of criminal justice standards across nations and ages, and their disparities as regards the distribution of income and power (“social justice”).

What is in fact being sought and achieved under the banner of “social justice” is a redistribution of decision-making authority. Decision makers acting as surrogates for others in exchange for money or votes are being either replaced or superseded by decision makers responsible largely or solely to the pervasive social vision of their clique. This redistribution is often advocated or justified on the basis of the supposed amorality of the first decision makers, who are depicted as solely interested in money or votes. But insofar as this depiction is correct, such decision makers are only transmitters of the preferences of the public, not originators of their own preferences, and so exercise no real “power,” however much their decisions affect social processes. It is the second — more moral or ideological — set of decision makers who originate and impose standards, i.e., who reduce freedom. Their passionate arguments for particular social results tend to obscure or distract attention from the question of the social processes by which these hoped-for results are to be pursued.

This is nowhere better illustrated than in John Rawls’ A Theory of Justice, which speaks of having a society somehow “arrange”85 social results according to a given conception of justice — the bland and innocuous word “arrange” covering the pervasive exercise of power necessary to supersede innumerable individual decisions throughout the society, by sufficient force or threat of force to make people stop doing what they want to do and do instead what some given principle imposes. Even Rawls’ principle of restricting “economic and social inequalities to those in everyone’s interests”86 requires forcible intervention in all transactions, quite aside from the difficulties of the principle as a principle. On a sinking ship with fewer life preservers than passengers, the only just solution is for everyone to drown. Yet virtually anyone would prefer to save lives, even if those saved had no more just claim to such preference than anyone else. This example is extreme only in the starkness of the alternatives. More generally, social decisions are not a zero-sum process, so the “distribution” of benefits (“justice”) cannot be categorically more important than the benefits themselves, as Rawls’ central thesis suggests. There must be some prior value to the things distributed in order for their distribution to mean anything. No one cares if we each leave the beach with different numbers of grains of sand in our hair.

THE POLITICAL ROLE OF INTELLECTUALS

One of the fundamental problems in any analysis of intellectuals is to define the group in such a way as to distinguish a class of people from a qualitative judgment about cognitive activity. Intellectuals will be defined here as the social class of persons whose economic output consists of generalized ideas, and whose economic rewards come from the transmission of those generalized ideas. This in no way implies any qualitative cognitive judgment concerning the originality, creativity, intelligence, or authenticity of the ideas transmitted. Intellectuals are simply defined in a sociological sense, and a transmitter of shallow, confused, or wholly unsubstantiated ideas is as much of an intellectual in this sense as Einstein. It is an occupational description. Just as an ineffective, corrupt, or otherwise counterproductive policeman is still regarded as having the same occupational duties and authority as the finest policeman on the force, so the inept or confused intellectual cannot be arbitrarily reclassified as a “pseudo-intellectual” in an occupational sense, however much he might deserve that classification in a qualitative cognitive sense. Qualitative questions about the intellectual process are another matter entirely, and will also be considered — but separately.

The distinction between the intellectual class and the intellectual process is crucial. One might, for example, be anti-intellectual in the sense of opposing the social views of that particular class of people, and yet be very intellectual in the sense of having exacting standards in the cognitive process. Conversely, a totalitarian dictator might be anti-intellectual in the sense of disdaining and discrediting cognitive processes that would otherwise undermine the ideological mind conditioning that is central to totalitarianism, and yet provide unprecedented political power and/or economic rewards to those intellectuals willing to serve the regime. Lysenko achieved a degree of prominence and dominance under Stalin that no contemporary geneticist could achieve in a free society.

The hoped-for results of the intellectual occupation — creativity, objectivity, authenticated knowledge, or penetrating intelligence — cannot be incorporated into the very definition of the occupation. Whether, or to what degree, they in fact exist in the occupation is an empirical question. One definition of intellectuals is that they are “professional second-hand dealers in ideas”87 — incorporating a negative assessment of their creativity in the very definition. Truly creative intellectuals may in fact be rare, but empirical results of whatever sort do not belong in the definition itself. Intellectuals may choose to believe that they are purveyors of knowledge, but there is no reason to assume that the bulk of what they say or write consists of ideas sufficiently authenticated in either empirical or analytic terms to qualify as “knowledge.” Such a general assumption would itself be cognitively unsubstantiated, and (as social policy) politically dangerous.

Many occupations deal with ideas, and even with ideas of a complex or profound order, without their practitioners being considered intellectuals. The output of an athletic coach or advertising executive consists of ideas, but these are not the kinds of people who come to mind when “intellectuals” are mentioned. Even the designers of television circuits, mining equipment, or parlor games like “Monopoly” are less likely to come to mind than professors, authors, or lecturers. Those occupations which involve the application of ideas, however complex, seem less likely to be regarded as intellectual than occupations which consist primarily of transmitting ideas. Moreover, even those transmitting ideas that are highly specific — a boxing manager telling his fighter how to counter a left jab, or a printer explaining the complexities of his craft — are not considered to be intellectuals in the same sense as those who deal with more sweepingly general ideas such as political theory, economics, or mathematics. The most narrowly specialized physicist bases his work on generalized systems of analytic procedures and symbolic manipulations common to economics, chemistry, and numerous other fields. He is an intellectual because his work deals in generalized ideas, however narrow the focus of his particular interest. By the same token, a drugstore clerk is not considered an intellectual: though he deals with a wide range of products and people, the work itself does not require mastery of a generalized scheme of abstractions. Nor is it complexity or intelligence that is central. Even if we believe (as the present writer does) that being a photographic technician requires more intelligence and authenticated knowledge than being a sociologist, nevertheless the sociologist is an intellectual and the photographic technician is not, because one transmits generalities and the other uses ideas that are far less general.

The point here is not to illustrate an arbitrary definition, but to show that the definition is far from arbitrary, and reflects what is a general pattern of usage, even if unarticulated. Moreover, as will be seen, these definitional distinctions correspond to empirical distinctions in the political and social viewpoints of the various groups as categorized. Even on university faculties, agronomists and engineers have very different political opinions from those of sociologists or the humanities faculty.88 In defining the intellectual occupation, the purpose is not so much to make hard-and-fast boundaries as to define a central conception and to recognize different degrees of approximation to it. Thus there is some sense in which an agronomist or engineer is less likely to be classified as an “intellectual” than is a sociologist or a literary critic, or is thought to fit in the category less fully or less well.

The incentives and constraints of intellectual processes are quite different from the incentives and constraints of intellectual activity as an occupation. For example, intellectual processes are highly restrictive as to the conclusions that may be reached, requiring painstaking care in the formulation of theories, rigorous discipline in the design and carrying out of experiments, and strict limitations of conclusions to what the evidence can logically support. By contrast, intellectuals as a social class are rewarded for presenting numerous, sweeping, plausible, popular and policy-relevant conclusions. Criminology may be at a stage of highly disparate speculation,89 but public policy pressures to “solve” the crime “problem” mean that large sums of government money are available to criminologists who will claim to know how to “rehabilitate” criminals or discover the “root causes” of crime. How many criminologists or intellectuals in general succumb to the incentives of their class, as distinguished from the incentives of their cognitive process, is not at issue here. The point is that they are very different incentives.

THE INTELLECTUAL PROCESS

Intelligence may take many forms, from the incrementally imperceptible and partially unconscious modifications of behavior over the years that we call “experience” to the elaborately articulated arguments and conclusions that are central to the intellectual process. Intelligence and the intellectual process are two different things. The hoped-for result is that the latter will incorporate the former, but whatever the facts may be about their overlap, they are not conceptually congruent.

Explicit articulation — in words or symbols — is central to the intellectual process. By contrast, the enormously complex information required to make life itself possible, which has systemically evolved and exists in unarticulated form in the genetic code, is not intellectual, though the effort to transform the genetic code into articulated form is a challenging, if uncompleted, intellectual process. Conversely, the forms of articulation may be elaborate and impressive and yet the substance of what is elaborated simple or even trivial. There is nothing either intrinsically difficult or profound about the proposition that LIX times XXXIII equals MCMXLVII (that is, that 59 times 33 equals 1,947). Children in the fourth grade perform this kind of arithmetic every day. The symbols alone make it formidable. Graphs, Latin phrases, and mathematical symbols likewise create an air of complexity or profundity in the process of elaborating ideas that may contain little of complexity or substance, much less validity.

However limited the scope of articulation, within those limits it serves a vital role in the intellectual process. A mere isolated idea, or arbitrary constellation of ideas — a vision — is metamorphosed into an empirically meaningful theory by the systematic articulation of its premises and the logical deduction of their implications. This does not in itself produce either truth or creativity. It aids in detecting error or meaningless rhetoric. The more rigorously formalized the reasoning, the more readily detectable are shifting premises or other internal inconsistencies, or a discord between the implications of the theory and observable events. In short, articulation is crucial to the intellectual process, however limited (and sometimes confusing) it may be in the social decision making process.

Articulation, indeed, readily loses information, as noted in Chapter 8 in discussions of price control and central planning. The definition or articulation of product characteristics by third parties seldom covers as many dimensions as are unconsciously coordinated in unarticulated market processes, so that (for example) an apartment typically has more auxiliary services when there is less articulation (in private housing markets) than when there are more elaborate articulations (in public housing regulations). The characteristics of even relatively simple things like an apartment or a can of peas cannot be exhaustively articulated, or even articulated enough in most cases to match the systemic control of characteristics through voluntary transactions. In more elaborate or subtle things, such as deeply felt emotions, articulation often seems so wholly inadequate as to be discarded for symbolic gestures, looks, and tones of voice, which may be less explicit and yet convey more meaning. Resort to poetry, music, and flowers on highly emotional occasions is evidence of the limited transmission capacity of articulation.

Because nothing can be literally exhaustively articulated, the process of articulation is necessarily to some extent also a process of abstraction. Some characteristics are defined, to the neglect of others which may be present but which are deemed less significant for the matter at issue. This purely judgmental decision may of course prove to be right or wrong. The point is that abstract intellectual models — “mimic and fabulous worlds”90 as Bacon characterized them — are inherent in intellectual activity, whether these models be explicit and highly formalized (as in systems of mathematical equations) or informal or even implicit. In the implicit models, however, it is possible to ignore the fact that one is abstracting and theorizing, to call the premises or conclusions “common sense,” and to shift one’s premises without being aware of it oneself and without alerting others to the shift. For example, one may use the public witnessing of executions as evidence for the immorality of capital punishment in one part of an informal and implicit argument, and pages later also use the public’s not witnessing executions as more evidence for its immorality. Were all the arguments reduced to equations, the inconsistent premises would at the very least be located nearer one another in a more condensed presentation, and would be more readily detectable and more conclusively demonstrable by universally recognized mathematical principles. In a celebrated episode in the development of modern economic theory, a set of instructions given to a draftsman preparing a graph proved impossible to execute, leading to the later discovery of a substantive economic principle inherent in that impossibility.91 Had the same theory been presented in a purely informal and verbal manner, nothing would have compelled the recognition of the inconsistency. Indeed, the particular inconsistency in question is still common among “practical” men, though analytically discredited decades ago.92

The enormous value of articulation, abstraction, and formalized rationality in the intellectual process is as part of the authentication process. They are neither part of the creative act nor of the empirical evidence which determines its ultimate applicability. The essentially negative role of articulated rationality in filtering, modifying, and eliminating ideas on their way to becoming knowledge is teachable in schools because it is formally demonstrable. But the creative performance — the “preanalytic cognitive act”93 as it has been called — is not. The most highly trained products of the leading universities are therefore better equipped to demolish ideas than to generate them. This is a systemic characteristic to be understood rather than an intentional choice to be criticized. It must be kept in mind, however, when considering such people as potential creators of “solutions” for social “problems.” Insofar as they are being creative, they are not doing what they were taught, but are instead professionals acting in an amateur capacity. The maxim that “experts” should be “on tap but not on top” expresses an appreciation of their valuable but largely negative role in filtering policy alternatives.

The very concept of “solving” social “problems” extends academic practices to a completely different process. The academic process is a process of pre-arrangement by persons already in possession of knowledge which they intend to articulate and convey unilaterally. Social processes are processes of systemic discovery of knowledge and of its multilateral communication in a variety of largely unarticulated forms. To “solve” an academic “problem” is to deal with pre-selected variables in a prescribed manner to reach a pre-arranged solution. To apply the academic paradigm to the real world is to arbitrarily preconceive social processes — the whole complex of economic, social, legal, etc., activities — as already comprehended or comprehensible to a given decision maker, when in fact these very processes themselves are often largely mechanisms for coping with pervasive uncertainty and economizing on scarce and fragmented knowledge. Resolutions of conflicting desires and beliefs may emerge from social processes, through the communication and coordination of scattered and fragmented knowledge, but that is wholly different from a solution being imposed from above as “best” by a given overriding standard in the light of a given fragment of knowledge.

What is a social “problem”? It is generally a situation which someone finds less preferable than another situation that is incrementally costlier to achieve. If the alternative situation were no costlier, it would already have been chosen, and there would be no tangible “problem” remaining. In both theory and practice, a social problem is likely to be one of the higher valued unfulfilled desires — one that is almost but not quite worth the cost of satisfying. Such situations are inherent in the incremental balancing of costs and benefits, which is itself inherent in the condition of scarcity and trade-off. A “solution” to such “problems” is a contradiction in terms. It is of course always possible to eliminate all unfulfilled desires of a given sort — that is, to extend the consumption of some benefit to the point where its incremental value is zero — but in a system of inherent scarcity (i.e., unlimited human desires) that means denying some other benefit(s) valued even more. Much political discussion of problem-solving consists of elaborately demonstrating the truism that extending a given benefit would be beneficial in that particular regard — more airports, day-care centers, rental housing, etc. — without any concern for the incremental value of the sacrificed alternatives. A variation on this theme is that some set of people “need” a particular benefit but cannot “afford” it — i.e., its incremental value to group A exceeds its incremental cost to group B. Whatever the plausibility or perhaps even merit of this argument with particular benefits and particular descriptions of people, it clearly loses validity as group A approaches a state of being identical with group B. Yet very similar political arguments for “solving” some “problems” are used when A and B are identical. For example, the American people cannot afford the medical care they need, and so should have national health insurance (paid for by the American people).

To “solve” some social “problem” is (1) to move the locus of social decision making from systemic processes of reciprocal interaction to intentional processes of unilateral or hierarchical directives; (2) to change the mode of communication and control from fungible and therefore incrementally variable media (emotional ties, money, etc.) to categorical priorities selected by a subset of a population for the whole population; and (3) to impose pervasive uncompensated changes through force, changes which, because of the diversity of human values (a diversity which makes any given set of tangible results highly disparate in value terms, financial or moral), are likely to elicit pervasive resistance and evasion, which can only be overcome by more force — which is to say, less freedom. Moreover, the very concept of a “solution” involves some given standard by which one situation will be regarded as a “solution” of another. These standards may be moral or material, or anywhere in between, but there must be a standard for there to be a “solution.” With diverse people making diverse trade-offs, however satisfying the results they reach may be for them respectively, the outcome can only be “chaos” or a “problem” requiring “solution” to anyone applying a single standard.

The undemocratic implications of applying the academic paradigm in politics are exacerbated by the tendency of many intellectuals to favor — or indirectly insist upon — decision making processes cast solely in the mold of explicit articulation. In this view, social decisions must be justified by explicit articulation before government commissions, administrative agencies, courts, parole boards, school committees, advisory groups to corporations, police departments, and all other social decision makers. Unarticulated decision making is equated with “irrationality.” “Why do we need four gas stations at a single intersection?” asks an intellectual painting a picture of “wasteful” decision making in America by “a thousand little kings” motivated by “greed.”94 The more fundamental question is why articulated justification to third parties should be the mode of determining business location, or any other decisions by any other segment of the population. To the extent that decision makers are motivated by “greed” rather than by an a priori preference pattern, their decisions are constrained by the decisions of competing bidders, who are in turn surrogates for alternative sets of particular resources, including locations.

That a set of decisions is not articulated is not evidence that those decisions are either irrational or undemocratic. On the contrary, the need to articulate to a tribunal of third parties applying their own standards is a reduction in both democracy and freedom, and often involves a loss of effective knowledge transmission in decision making. Moreover, it is socially biased in favor of those more skilled in articulation, even if their skills in other respects are lacking. Given the advantages of specialization, there is no reason to expect that those skilled in articulation will be more skilled in particular fields than those specialized in those fields. Systematic location patterns — gas stations and doctors’ offices being near each other, and liquor stores and stationery shops often being dispersed from one another — suggest that there is nothing as random as “irrationality” behind them, nor can anything as widespread as the desire for an improved economic condition account for any one particular pattern. That the same motive is called “greed” when it is found in some groups but “aspirations” or “need” in others is an incidental characteristic of fashions among intellectuals.

The virtues of the intellectual process are virtues within the intellectual process, and not necessarily virtues when universalized as paramount in other social processes. Articulation, formalized rationality, and fact-supported conclusions are central features of the intellectual process when determined by its own inner incentives and constraints. To what extent such considerations characterize the behavior of intellectuals as a social class in the political arena is another question. So too is the extent to which these intellectual virtues survive even in intellectual matters when the personal or political rewards available to intellectuals as a social class provide incentives to do otherwise.

INTELLECTUALS AS A SOCIAL CLASS

Intellectuals — persons who earn their living by transmitting generalized ideas — have incentives and constraints determined by the peculiarities of their social class, as well as incentives deriving from the nature of the intellectual process. Questions about resolving conflicts between the two — how to be honest while political, ethical while an advocate — only highlight the existence of two disparate sets of incentives and constraints. Such conflicts are defined out of existence when intellectuals are categorized as people who “live for rather than off ideas.”95 Such may be the hoped-for ideal, but the actual observable characteristic of the group is that they live off ideas. The extent to which they ignore that fact and regard purely cognitive incentives as overriding is an empirical question that can be examined after first determining the incentives created by their social class and those created by their cognitive activity.

It is in the self-interest of intellectuals as a social class to benefit themselves economically, politically, and psychically, and in the self-interest of each individual intellectual to benefit himself similarly. Among the ways in which this can be done is by increasing the demand for the services of intellectuals and increasing the supply of raw material used in their work. The output of intellectuals — ideas — is a product supplied in abundance by all other members of society, so that a prerequisite for increasing the demand for specifically intellectuals’ ideas is to differentiate their product. Certificates from authenticating institutions (universities, learned societies, research institutes, etc.) help, but the intellectual differentiates his product most distinctively by its manner of packaging — the choice of words, the organization of the material, and the observance of cognitive principles and scholarly form. The intellectual who does these things can even dispense with degrees entirely, as John Stuart Mill did, or the degree may be wholly incidental, as in the case of Karl Marx (a doctorate in philosophy) and Adam Smith (a degree in philosophy). It may well be that most contemporary intellectuals are degree-holders, but that is hardly their defining characteristic.

The conflict between cognitive and occupational incentives is particularly clear in the choice between existing knowledge and newly created ideas. An intellectual is rewarded not so much for reaching the truth as for demonstrating his own mental ability. Recourse to well-established and widely accepted ideas will never demonstrate the mental ability of the intellectual, however valid their application to a particular question or issue. The intellectual’s virtuosity is shown by recourse to the new, the esoteric, and if possible his own originality in concept or application — whether or not its conclusions are more valid than the received wisdom. Intellectuals have an incentive to “study more the reputation of their own wit than the success of another’s business,” as Hobbes observed more than three centuries ago.96 As part of this product differentiation, it is essential that alternative (competing) social inputs be discredited cognitively (“irrational”) or morally (“biased,” “corrupt”), that competing elites be discredited (“greedy,” “power hungry”), and that the issues at hand be depicted as too unprecedented for the application of existing knowledge inputs available to intellectuals and nonintellectuals alike, and too urgent (a “crisis”) to wait for systemic responses, which are also alternatives competing with intentional intellectual “expertise.” More generally, the meaning of knowledge must be narrowed to only those particular kinds of formalized generalities peculiar to intellectuals. Assertions of the gross inadequacy of existing institutions and ideas likewise increase the demand for intellectuals by discrediting alternatives. The rewards are both psychic and financial.

The demand for intellectuals’ services is also increased by developing preferences for such political and social processes as commonly use more of intellectuals’ inputs — e.g., political control and status ascription from the top down, “education” or “more research” as the answers to the world’s ills, and “participation” and institutional articulation as the way to better decisions.

The occupational self-interest of intellectuals is served not only by product differentiation, but by “relevance.” Many cognitively intellectual productions are of no immediate applicability, because (1) they have not yet been subjected to empirical validation, or cannot be in the real world; (2) their very nature and thrust are different from political discussions of the same subject matter; (3) the time horizon of the scholarly endeavor may far exceed that of politics, so that no cognitively authenticated conclusion may be available within the time in which a political decision has to be made; or (4) such articulated knowledge as may be available may go counter to what is politically desired. Making intellectual output “relevant” involves resolving such dilemmas. Cognitive incentives mean less relevance and lower occupational rewards in money, status, power, popularity, etc. Occupational incentives obviously mean more of such rewards and less cognitive authenticity.

The incentives sketched here are intended to depict the behavior of an intellectual motivated solely by occupational rewards, and prepared to trade off as expendable such competing considerations as cognitive principles, ethical standards, and democratic freedoms. The point is not to define a priori how many intellectuals will behave in which ways, but to provide a framework within which to judge the observable behavior of actual intellectuals in a variety of social, political, and historical settings.

“RELEVANCE”

Intellectuals have long sought to be politically “relevant.” More than three centuries ago, Hobbes expressed the hope that his Leviathan would someday “fall into the hands of a sovereign” who would “convert this truth of speculation into the utility of practice.”97 Karl Marx eloquently expressed the psychic importance of “relevance” to the intellectual:

… the time must come when philosophy not only internally by its content but externally by its appearance comes into contact and mutual reaction with the real contemporary world… Philosophy is introduced into the world by the clamour of its enemies who betray their internal infection by their desperate appeals for help against the blaze of ideas. These cries of its enemies mean as much for philosophy as the first cry of a child for the anxious ear of the mother, they are the cry of life of the ideas which have burst open the orderly hieroglyphic husk of the system and become citizens of the world.98

It is noteworthy that this was not an expression of the satisfaction of promoting a particular doctrine or cause. Marx at this point had not yet met Engels, who converted him to communism, and so there was not yet a Marxian theory to promote. It expressed simply the general joy of intellectuals at being taken seriously and talking about big things.

Nor is it solely in political subjects that political “relevance” is sought. Demography was heavily involved in politics literally from the first page of the first edition of Malthus’ Essay on Population in 1798.99 Biology was made the basis for political theory in the nineteenth- and early twentieth-century intellectual vogue called “social Darwinism.”100 Psychology was politicized in the decades-long controversies preceding the drastic revision of American immigration laws in the 1920s. In the political crisis of the Great Depression, virtually all of the so-called “social sciences” attempted to be politically “relevant” rather than simply cognitively valid, and the rise of the welfare state institutionalized this tendency of applied intellectual activity among “social scientists.” In totalitarian nations, virtually every intellectual field is politicized. Genetics and economics acquire ideological significance in the Soviet Union,101 and Nazi Germany proclaimed the existence of such intellectual entities as German physics, German chemistry, and German mathematics.102 The concern here, however, is not so much with what governments have done to the intellectual process as with what intellectuals themselves have done in the quest for “relevance.”

Malthus’ population theory was openly intended to counter contemporary revolutionary political theories, notably those of Godwin and Condorcet. After these theories faded with the years, later editions of Malthus’ Essay on Population turned its thrust toward other policy issues, the aim being not so much policy solutions as moral justification of the existing institutions:

… it is evident that every man in the lower classes of society who became acquainted with these truths, would be disposed to bear the distresses in which he might be involved with more patience; would feel less discontent and irritation at the government and the higher classes of society, on account of his poverty… The mere knowledge of these truths, even if they did not operate sufficiently to produce any marked changes in the prudential habits of the poor with regard to marriage, would still have a most beneficial effect on their conduct in a political light.103

While the mere intentions or applications of a doctrine, in themselves, have no necessary effect on its cognitive validity, the Malthusian theory’s many intellectual flaws related directly to its political goals. Like many other intellectual productions with political “relevance,” its most fundamental flaw was not a particular conclusion but an inadequate basis for any conclusion. On a theoretical level, the Malthusian doctrine inconsistently compared one variable defined as an abstract potentiality (population growth) with another variable defined as an historical generalization (food growth).104 On an empirical level, there was grossly inadequate evidence for the postulated behavior of either variable. The supposed doubling of the population in colonial America every 25 years was based on a guess by Benjamin Franklin, repeated by a British clergyman named Price and obtained third-hand by Malthus. The first American census was published after Franklin’s death and the first British census was taken three years after Malthus’ book was published. The theoretical argument depended on shifting usages of the word “tendency,” to sometimes mean (1) what was abstractly possible, (2) what was causally probable, or (3) what was historically observable — each according to the polemical convenience of the moment. Though contemporaries criticized this shifting ambiguity that was central to the Malthusian doctrine, Malthus refused to be pinned down to any given meaning.105 Empirically, the successive censuses after Malthus’ book was published revealed that in fact the food supply was growing faster than the population, and that most of the population growth was not due to reckless marriages and childbearing among the poor, as Malthus claimed, but to reduced death rates.106 The Malthusian theory boils down to the proposition that population growth increases with prosperity — an empirical relationship that is demonstrably false from both the history of given countries over time and from comparisons of countries at a given time. As countries become more prosperous, their birth rates and population growth rates generally decline. At a given time, prosperous countries typically do not have higher population growth rates than poorer countries. In purely cognitive terms, it may well be that the Malthusian theory has received one of the most thorough refutations of any theory in the social sciences,107 but in social and political terms, the Malthusian doctrine is still going strong almost two centuries after its first appearance. Like so many other political-intellectual productions, its triumph is largely a triumph of reiteration. Malthus’ crucial success was in identifying poverty with “overpopulation” in the public mind, so that to deny the latter is deemed tantamount to denying the former.
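The logical structure of Malthus’ comparison can be made explicit. The sketch below uses his well-known contrast of geometric population growth (the doubling every 25 years mentioned above) with arithmetic food growth; the notation itself is merely illustrative, not Malthus’ own:

```latex
% Illustrative notation only; the geometric-vs-arithmetic contrast is
% Malthus', the symbols are not. P_0 and F_0 are initial population and
% initial food supply; r is an assumed constant rate of increase in food.
\[
\underbrace{P(t) = P_0 \, 2^{\,t/25}}_{\text{population: an abstract potentiality}}
\qquad
\underbrace{F(t) = F_0 \, (1 + rt)}_{\text{food: a historical generalization}}
\]
\[
\frac{P(t)}{F(t)} \to \infty \ \text{as } t \to \infty,
\quad\text{but only if both premises hold, and hold in the same sense.}
\]
```

The alarming conclusion that population must outstrip subsistence follows only when an abstract possibility is treated as if it were an observed trend, which is precisely the shifting use of “tendency” criticized above.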

One element in the public success of the Malthusian doctrine, one which has proved equally serviceable in other politically “relevant” doctrines, has been the display of cognitively irrelevant statistics. The second edition of Malthus’ Essay on Population was several times larger than the first, due to the addition of masses of data. These data were never used to test the Malthusian theory but to illustrate or apply it. In Malthus’ own words, the data are intended to “elucidate the manner” in which his theory operates, to “examine the effects of one great cause” — the population principle — but not to test the principle itself. Any population size or growth rate would be consistent with the principle: “The natural tendency to increase is everywhere so great that it will generally be easy to account for the height at which the population is found in any country.”108 No matter what the data showed, he would be “right.”

This decorative display of numbers which in no way test the central premise continues in modern, more sophisticated statistical studies. A noted study of the economic effects of racial discrimination begins by simply defining “discrimination” as all intergroup differences in economic prospects.109 It then proceeds to elaborate mathematically and statistically in the light of that premise, but never tests the premise itself. All intergroup differences in cultural orientation toward education, work, risk, management, etc., are simply banished from consideration by definition. Discrimination in this context becomes simply a word denoting statistical results, though of course the very reason we are interested in discrimination, in its usual sense, is that it refers to intentional behavior whose moral, political, and social implications concern us. That social and political concern is implicitly appropriated for statistical results that depend on numerous other factors as well.

Such arbitrary attribution of causation by definition is a special case of a more general problem that plagues statistical analysis. Whenever outcome A is due to factors B and C, by holding B constant, one can determine the residual effect of C on A. The problem is that A may also be affected by factors D, E, or F, etc., and if they are not specified in the analysis, then all of their effect is wrongly attributed to C. Moreover, even the attempt to hold B constant may fail in practice. Theoretical variables may be continuously divisible, but actual statistics may be available only in discrete categories. In comparing two groups who differ on a particular variable (male and female differences in height, for example), attempts to hold that variable constant by comparing individuals with the same value of the variable (the same height) may mean in practice comparing individuals who fall in the same discrete intervals (between five and six feet, for example). But groups whose distributions differ across specified intervals can also differ within those respective intervals. The average height of males and females who fall in the interval from five feet to six feet is probably different (males in that interval being taller than females in the same interval), despite the attempt to hold height constant. Therefore some of the effect of the variable supposedly held constant will appear statistically as the effect of some residual variable(s). This residual method of analysis has great potential for misstating causation, through inadequate specification of the variables involved, either inadvertently or deliberately. Whether one’s preferred residual explanation is discrimination, genetics, schooling, etc., deficiencies in the specification of alternative variables are rewarded with more apparent effect from the preferred residual variable. The ultimate extreme of this is to implicitly hold all other variables constant by arbitrarily defining one variable as the variable and using this definition as if it were a fact about the real world, by using the same word normally used to describe that fact — “discrimination” in this case. The political benefits of this cognitive deficiency may be illustrated not only by the reliance of national political figures and institutions on the advice of the economist using this technique, but also by his academic success in promoting a conclusion consonant with academics’ social and political vision, however cognitively questionable. It is a technique — and a result — common in other fields, as will be noted again.
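The interval problem just described can be seen in a small simulation. The sketch below is merely illustrative, with invented height and outcome figures rather than data from any real study; it generates two groups whose distributions differ, “holds height constant” by restricting both groups to the same five-to-six-foot interval, and shows that an outcome driven entirely by height still differs between the groups within that interval:

```python
# A minimal sketch of the binning problem: "holding a variable constant" by
# comparing people within the same discrete interval does not actually
# equalize that variable, so part of its effect is misattributed to a
# residual factor. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

heights_a = rng.normal(69.0, 3.0, 100_000)  # group A: taller on average (inches)
heights_b = rng.normal(64.0, 3.0, 100_000)  # group B: shorter on average

def outcome(heights):
    """An outcome depending on height alone, plus noise -- no group effect."""
    return 4.0 * heights + rng.normal(0.0, 5.0, heights.shape)

y_a = outcome(heights_a)
y_b = outcome(heights_b)

# "Hold height constant" the way coarse categories force us to: keep only
# individuals between five feet and six feet, then compare the groups.
in_a = (heights_a >= 60) & (heights_a < 72)
in_b = (heights_b >= 60) & (heights_b < 72)

print(f"mean height within interval, A: {heights_a[in_a].mean():.2f} in")
print(f"mean height within interval, B: {heights_b[in_b].mean():.2f} in")
print(f"mean outcome within interval, A: {y_a[in_a].mean():.1f}")
print(f"mean outcome within interval, B: {y_b[in_b].mean():.1f}")
# The groups still differ in average height within the "same" interval, so
# the outcome differs too; a residual analysis treating height as having
# been held constant would attribute that gap to group membership.
```

With no group effect built in at all, the within-interval comparison still shows a gap, which a residual method would hand to whatever explanation the analyst prefers.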

A similar pattern of disregarding alternative variables is followed in discussions of “income distribution,” where statistical results about people in various phases of their economic life cycle are spoken of as if they referred to socioeconomic classes in the usual sense of people stratified in a certain way across their lifetimes. The “top 10 percent” of wealth holders may conjure up visions of Rockefellers or Kennedys, but they are more likely to be elderly individuals who have finally paid off their mortgages, and who may well have been among the statistical “poor” in data collected when they were younger. The point here is not whether income or wealth differences are greater or less than might be desired from some point of view or other. The more basic question is whether there is sufficient congruence between the statistical categories and the social realities to make any conclusion viable. To declare that “dry statistics translate into workers with poverty-level incomes”110 may be politically effective but it asserts what is very much open to question.
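The life-cycle point lends itself to an equally simple sketch. In the toy calculation below, the wealth path is an invented assumption: every individual follows an identical lifetime trajectory, yet a single-year statistical snapshot still yields a prosperous “top 10 percent” and a statistical “poor”:

```python
# A toy snapshot of identical life cycles: everyone follows the same wealth
# path (accumulating with age), but a cross-section at one moment still
# shows "rich" and "poor" deciles. The path itself is an invented assumption.
import numpy as np

ages = np.arange(25, 75)           # one person at each age, 25 through 74
wealth = 5_000.0 * (ages - 24)     # identical trajectory for every person

top_cut = np.quantile(wealth, 0.9)
bottom_cut = np.quantile(wealth, 0.1)

top_ages = ages[wealth >= top_cut]
bottom_ages = ages[wealth <= bottom_cut]

print(f"ages of the 'top 10 percent':    {top_ages.min()}-{top_ages.max()}")
print(f"ages of the 'bottom 10 percent': {bottom_ages.min()}-{bottom_ages.max()}")
# The "wealthy" are simply the oldest and the "poor" the youngest: the same
# people at different phases of one life cycle, not enduring social classes.
```

The statistical categories are real enough, but they need not correspond to enduring social classes; whether they do is an empirical question that the raw cross-section cannot answer.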

The negative cognitive effects of political “relevance” can be further illustrated with Darwin’s theory of evolution. The political application of Darwin’s biological concept of “survival of the fittest” involved not simply an extension but a distortion of the concept. What was in Darwin a causal principle of biological evolution pertaining to species became in its political application an evaluative principle pertaining to individuals. The systemic tendency of organisms to adapt to their respective environments became an intentional triumph of individuals evaluated as superior, not merely within a particular set of social and environmental circumstances, but in terms used to justify politically one set of circumstances rather than another.111 Lazy amorality might be the “fittest” quality to survive in a sufficiently extreme welfare state, for example, or ruthless ambition in a sufficiently extreme laissez-faire economy without adequate law enforcement. Darwin himself did not make the political applications and distortions known as “social Darwinism.” It was Herbert Spencer in England, William Graham Sumner in America, and countless disciples in both countries who turned the Darwinian principle of biological change into a political principle justifying the status quo.

Darwinism at least retained its integrity within biology. But the young field of psychology was not so fortunate in its rush to establish its claims to scientific stature and political “relevance.” Intelligence tests began in France in 1905 with a politically defined policy goal — the sorting out of students with low academic aptitudes to be placed in special schools. The test developed for that purpose by Alfred Binet in France was translated and adapted for American youths by Lewis Terman of Stanford University as the Stanford-Binet I.Q. Test. It was also politically adapted to American issues — the controversies then raging over American immigration policy.

Unlike earlier generations of immigrants, the immigrant groups arriving in the United States in the 1880s and afterwards were no longer of northern and western European stock, but largely eastern and southern Europeans who differed culturally, religiously (many being Catholic or Jewish) and genetically from the American population at large, as well as from earlier immigrants. The serious social stresses associated with the emergence of every new ethnic minority in the urban economy and society were seen as peculiarities of these new and “unassimilable” immigrants. Vast amounts of data showed that these “new” immigrant groups had higher incidences of social pathology — and lower I.Q.’s. To the new field of psychology, the immigrants’ low I.Q.’s were an opportunity to establish the political “relevance” of their profession along with its cognitive (“scientific”) claims.

The leading test “experts” of the era — including Terman, Goddard, and Yerkes — insisted that they were presenting “not theory or opinions but facts” and facts of relevance “above all to our law-makers.”112 They were “measuring native or inborn intelligence.”113 Their results indicated “the fixed character of mental levels.”114 Intelligence tests would “bring tens of thousands” of “defectives” under “the surveillance and protection of society.”115 All of this was said at a time when the I.Q. test had existed for less than a decade in the United States.

The leading I.Q. “experts” were also members of eugenics societies devoted to preventing the reproduction of “inferior” stocks.116 However, the political impossibility “at present” of convincing “society” that low I.Q. groups “should not be allowed to reproduce”117 made the “experts” predict a “decline in American intelligence” over time.118 After a later survey of data generated by the mass testing of soldiers in World War I, testing expert Carl Brigham — later creator of the College Board SAT — concluded that “public action” and “legal steps” were needed to prevent the “decline of American intelligence.” Such steps should be “dictated by science and not by political expediency,” and included immigration laws that would be not only “restrictive” but “highly selective,” and other policies for “prevention of the continued propagation of defective strains in the present population.”119 Virtually identical conclusions were reached at the same time by Rudolf Pintner, another leading authority and also the creator of a well-known mental test: “Mental ability is inherited… The country cannot afford to admit year after year large numbers of mentally inferior people, who will continue to multiply and lower the level of intelligence of the whole nation.”120

These were not the views of the village racist. They were the conclusions of the top contemporary authorities in the field, based on masses of statistical data, and virtually unchallenged intellectually, morally, or politically within the profession at the time. Controversies raged between the “experts” and others — notably Walter Lippmann121 — but such critics’ conclusions were contemptuously dismissed as “sentiment and opinion” as contrasted with the “quantitative methods” of the new science.122

In many ways this episode illustrates far more general characteristics of intellectual-political “relevance”: (1) the almost casual ease with which vast expansions of the amount and scope of government power were called for by intellectuals to be used against their fellow citizens and fellow human beings, for purposes of implementing the intellectuals’ vision, (2) the automatic presumption that differences between the current views of the relevant intellectuals (“experts”) and the views of others reflect only the misguided ignorance of the latter, who are to be either “educated,” dismissed, or discredited, rather than being argued with directly in terms of cognitive substance (that is, the intellectual process was involved primarily in giving one side sufficient reputation not to have to engage in it with non-“experts”), (3) the confidence with which predictions were made, without reference to any prior record of correct predictions or to any monitoring processes to confirm the future validity of current predictions, (4) the moral as well as intellectual superiority that accompanied the implicit faith that the current views of the “experts” represented the objective, inescapable conclusions of scientific evidence and logic, and their direct applicability for the public good, rather than either the vogues or the professional self-interest of these “experts,” and (5) a concentration on determining the most likely alternative conclusions rather than on whether any of the conclusions had sufficient basis to go beyond tentative cognitive results to sweeping policy prescription.

What was the compelling evidence that led the early test experts to conclude that southern and eastern Europeans — including Jews123 — were innately intellectually inferior to other European “races”? They scored lower on mental tests — averaging I.Q.’s of about 85, the same as blacks today nationally, and slightly lower than northern blacks.124 What was controlled or held constant in these statistical comparisons? Practically nothing. The new immigrants (Jews, Italians, Slovaks, etc.) almost by definition averaged fewer years in the United States than most of the older immigrant groups (Germans, Irish, Britons, etc.), spoke correspondingly less English, and lived in commensurately lower socioeconomic conditions. When years of residence in the United States were held constant, the mental test differences disappeared.125 In the massive World War I testing program, the results on many subsets of the tests showed the modal number of correct answers to be zero — indicating little understanding of the instructions.126 On those subsections where special efforts were made to elaborate instructions or to demonstrate what was expected, zero scores were less common, even when the questions themselves were more complex (the same was true of black soldiers).127 Some “intelligence” test questions dealt with such peculiarly American phenomena as the name of the Brooklyn National League baseball team, Lee’s surrender at Appomattox, and the author of Huckleberry Finn.128 As for controlled samples, the methods of selecting which soldiers would take which test “varied from camp to camp, and sometimes from week to week at the same camp.”129

These defects in testing were known to the “experts” who sweepingly labeled great portions of the human race as innately inferior. One rationale for accepting the results was offered by Carl Brigham:

The adjustment to test conditions is a part of the intelligence test… If the tests used included some mysterious type of situation that was “typically American,” we are indeed fortunate, for this is America, and the purpose of our inquiry is that of obtaining a measure of the character of our immigration. Inability to respond to a “typically American” situation is obviously an undesirable trait.130

Whatever merit this kind of reasoning might have as a justification of the purely empirical predictive validity of a test, that is wholly different from reaching conclusions about genetic mental capacity as it must unfold in subsequent generations of American-born offspring — especially in the context of draconian proposals to forcibly control the reproduction of these groups. As for the correlation between immigrants’ mental test scores and their years of residence in the United States, this was dismissed by showing that immigrants with five years of residence taking the nonverbal test still did not reach native-born American test score levels131 — five years being presumably sufficient to change life-long cultural patterns, and a nonverbal test being presumed to be culturally unbiased. The ominous prediction of a declining national I.Q. — a prediction common in the literature in the United States and in other countries — had no empirical evidence behind it, and as evidence accumulated over the years, it showed the national I.Q.’s in the United States and elsewhere either remaining constant or drifting upward, forcing later upward revisions of I.Q. standards.132

The point here is not that particular results in a particular field during a particular era were wrong. The point rather is that this episode displayed a far more general pattern of behavior, a pattern that reappeared when psychological fashions changed and equality of the races was deemed to be proven by “evidence” equally shaky. Moreover, it is a pattern apparent in many other areas having nothing to do with I.Q. or race.

The dogmatic conclusions about racial inferiority which reigned supreme among “experts” in the 1910s and 1920s were replaced with equally dogmatic conclusions about scientific proof of racial equality in the same field by the 1940s and 1950s. By the 1960s official government agencies could declare it “demonstrable” — without demonstration — that “the talent pool in any one ethnic group is substantially the same as that in any other ethnic group.”133 According to the new dogma, “Intellectual potential is distributed among Negro infants in the same proportion and pattern as among Icelanders or Chinese, or any other group.”134 These statements may someday be shown to be true, but that is wholly different from claiming that any such evidence or proof exists today. Both in the earlier and the later dogmatism, the cognitive question is simply not open for discussion, and the ideologically preferred position becomes a moral touchstone rather than a tentative cognitive conclusion. Unlike the earlier period, the present dogmatism has some challenge within the profession — notably by Arthur R. Jensen135 — but the efforts to discredit his conclusions (“racist”) rather than confront his analysis, and sometimes to physically prevent his speaking,136 indicate that the new dogma is no more willing to treat issues according to intellectual processes than was the old. It is as if beliefs in the psychological field of mental testing have gone through the phases of adolescent fads — fiercely obligatory while in vogue and wholly beyond consideration once the vogue has passed. At least one of the leaders of the older dogmatism — Carl Brigham — later soberly recanted, after the vogue had passed, repudiating the reasoning of the earlier studies and declaring that his own earlier conclusions were “without foundation.”137 Not mistaken, exaggerated, or inconsistent, but without foundation.

Both phases of the innate intelligence controversy illustrate a more general characteristic of socially and politically “relevant” intellectual activity — an unwillingness or inability to say, “we don’t know,” or even to admit that conclusions are tentative. Such admissions would be wholly consonant with intellectual processes but not with the interests of intellectuals as a social class. The distinction must be insisted upon, in part because even otherwise worldly thinkers often proceed as if intellectuals have no self-interests involved but act solely on cognitive bases or in the policy interest of society at large. Even Voltaire could naively say: “The philosophers having no particular interest to defend, can only speak up in favor of reason and the public interest.”138 That belief — in their own minds or in the minds of others — is itself one of their greatest assets in furthering their own self-interests under protective coloration.

POWER

Intellectuals have for centuries promoted the abrogation of ordinary people’s freedom, and romanticized despotism. The shocking record of Western intellectuals glorifying Stalinism in the 1930s was no isolated aberration.

Religious intellectuals in the later Roman Empire, after it became Christian, created a “systematic, active intolerance” that was “something hitherto unknown in the Mediterranean world.”139 There had been “transient persecutions”140 of early Christians, whose doctrinal abhorrence of “idolatry” had led them to disdain, insult, and even disrupt other religions.141 But it was only with the triumph of Christianity, and especially of theological intellectuals like Augustine, that intolerance and persecution became pervasive in the Roman Empire. Pagan sacred books were burned,142 pagan traditions persecuted,143 and a “forced Christianization”144 imposed on the Roman Empire, which had long had religious diversity and tolerance as a means of preserving political tranquility and unity. The attempt to impose a particular intellectual (religious) unity or orthodoxy created political disunity as “the bands of civil society were torn asunder by the fury of religious factions.”145 A theoretical controversy among Christian intellectuals over the nature of the Trinity “successively penetrated into every part of the Christian world.”146 In the wake of this and other theological disputes followed violence and atrocities by Christians on other Christians deemed heretical.147 In many provinces, “towns and villages were laid waste and utterly destroyed.”148 After a respite of tolerance under the Emperor Julian,149 persecution was resumed under his successors.150 The internecine violence among various denominations of Christians took far more lives than all the earlier persecutions of Christians in the Roman Empire.151 Like later totalitarian persecutions in the twentieth century, the persecutions by the Christians produced the emigration of some of the “most industrious subjects” of the Roman Empire, taking with them “the arts both of peace and war.”152 Centuries later, the Reformation brought forth freedom — not by intention but systemically, from the new diversity of power sources. The Protestant Reformation was as intolerant and bloody as any Catholic inquisition.153 Freedom “was the consequence rather than the design of the Reformation.”154

In the Roman Empire, as with later persecutions, the abstruse issues involved were matters of moment only to intellectuals. Yet the rival intellectuals’ attempts to impose their own vision by force produced mass devastation and a divisiveness that contributed to the decline and fall of the empire.155 Its immediate effect was to vastly expand the scope of government power into an area — religion — which had once been a realm of freedom.

Such patterns — intellectuals promoting government power and intolerant divisiveness — were not peculiar to the Roman Empire, nor even to Western civilization. In the later dynasties of the Chinese empire, intellectuals also rose to dominance, producing a similar pattern in a very different setting. Beginning with the Sung dynasty (960-1127 a.d.), “scholar-officials,” chosen by examinations, dominated the Chinese government and society.156 Rulers became more autocratic, and government powers more centralized and pervasive in their scope, including “smothering government control of large scale business”157 and a “secret police almost unfettered by legal restraints.”158 Later, the “recurrent factional controversies” among the intellectuals running the government became “a major factor in the decline of the Ming dynasty.”159 As in ancient Rome, so in the later Chinese empire, the military profession was downgraded160 and the army “declined in strength and fighting ability.”161 As in ancient Rome, this was the prelude to the Chinese empire’s being overwhelmed militarily by foreign peoples once disdained as barbarians.

Prior to its decline and fall, imperial China was the preeminent nation in the world in technology, organization, commerce, and literature,162 and as late as the sixteenth century had the highest standard of living in the world.163 As in the case of Rome in its decline, so in the last century of the Ming dynasty, many people emigrated from China.164 These “overseas Chinese” have flourished economically in numerous countries from southeast Asia to the Caribbean, while their native land languished in poverty and weakness, for lack of the practical skills and abilities of those driven out by the oppressions of governments dominated by intellectuals. These intellectuals, “applying the principles they learned from ancient Chinese writings to the realm of practical governance,”165 promoted “a strong sense of social-welfare activism” in which “central governments assumed responsibility for the total well-being of all Chinese and asserted regulatory authority over all aspects of Chinese life.”166 In short, Chinese intellectuals in power were impelled by Neo-Confucian ideals that would today be called “social justice.” But whatever the hoped-for results, the actual processes led to despotism, decline, and defeat.

Intellectuals’ promotion of despotism has not been confined to situations, like those in the Roman or Chinese empires, where they themselves were directly involved in wielding power or instigating violence. Even such admirers of freedom in principle as the eighteenth-century French philosophes were also admirers of contemporary Russian and Chinese despotism,167 much like their twentieth-century counterparts. The reasons were also quite similar. The despotisms in question were seen as vehicles for the imposition of intellectuals’ designs on society at large. In the eighteenth-century despotisms “the men of letters served in places of eminence, at the very center of things.”168 Class self-interest was, however, seen as the public interest. According to D’Alembert, “the greatest happiness of a nation is realized when those who govern agree with those who instruct it.”169 In the free nations of the nineteenth century as well, as John Stuart Mill observed, “impatient reformers, thinking it easier to get possession of the government than of the intellects and dispositions of the people,” proposed to expand “the power of government.”170

The French Revolution gave the eighteenth-century intellectuals a chance to rule directly, rather than by their influence on existing despots. Though disciples of the freedom-extolling philosophes and ostensibly concerned only with the public interest, their “all-powerful Committee of Public Safety ruled France absolutely as no monarch had ever been able to rule it.”171 The brief rule of Jacobin intellectuals was not only despotic and bloody, but totalitarian in its pervasiveness. The very names of months and years were changed to correspond with their ideology, as were the names of streets, people, and even playing cards.172 Their regulations extended to friendship and marriage: each adult male had to publicly declare who his friends were, and any married couple who did not either have children or adopt children within a specified time were to have their marriage dissolved and be separated by the government.173 To administer all this control of individuals, the intellectual-politicians created a vast bureaucracy — never dismantled, and the enduring legacy of the Revolution long after the ideologues were replaced by Napoleon and then by innumerable other French governments. It was one of the earliest demonstrations of what it meant in practice to “arrange” a society according to “justice.”

Although there were despotic governments in the nineteenth century, it was not until twentieth-century totalitarianism that anything like the Committee of Public Safety emerged again. Once more, it was intellectuals who created it — Lenin, Trotsky, and their successors and offshoots carrying out a vision descended from Marx, and Hitler carrying out his own vision from Mein Kampf. Whether or not any of these political leaders were intellectuals in the qualitatively cognitive sense, all owed their power precisely to their transmission of ideas, rather than to such other political routes to power as dynastic succession, economic achievements, hierarchical progression, or technical expertise. The characteristics of these modern totalitarian governments have already been noted. The support, apologetics, and glorification of foreign totalitarianism among intellectuals in the democratic nations must also be noted, however. The glorification of the Stalin regime by democratic Fabian socialists Sidney and Beatrice Webb is perhaps the classic example,174 but they are part of a long line of intellectuals including Jean-Paul Sartre,175 George Bernard Shaw,176 and G. D. H. Cole,177 who extolled the virtues of Stalinist Russia, joined by the Nation, The New Republic, and (in England) The New Statesman.178 The supporters of an American Communist for President of the United States in 1932 included John Dos Passos, Sherwood Anderson, Edmund Wilson, and Granville Hicks.179 Fascism also did not lack for apologists and romanticizers, including Irving Babbitt, Charles Beard, George Santayana, and Ezra Pound.180

Most American intellectuals of the 1930s were, however, content to support a vast expansion of governmental power in more conventional terms under the New Deal. Disillusionment with Stalin and the Soviet Union eventually led many intellectuals to return to the liberal-left. It has not prevented a similar cycle of romantic glorification of Mao, Castro, and other totalitarians.

THE INTELLECTUAL VISION

Virtually everyone has political opinions, but not everyone has a political vision — a central set of premises from which particular positions can be deduced as corollaries. These premises may be religious, tribal, or ideological. What makes them a coherent vision is the high degree of correlation among the particular conclusions reached on highly disparate subjects. To a racist, for example, the color of an individual’s skin may determine a whole host of intellectual, moral, aesthetic, political, and even etiquette questions pertaining to that individual.

An ideological vision is more than belief in a principle. It is a belief that that principle is crucial or overriding, so that other principles or even empirical facts must give way when in conflict with it. The Inquisition had to reject Galileo’s astronomical findings in the interests of a higher vision, as the Nazis had to reject Einstein in spite of any evidence about his theories or his individual abilities.

An ideology has been defined as a “systematic and self-contained set of ideas supposedly dealing with the nature of reality (usually social reality), or some segment of reality, and of man’s relation (attitude, conduct) toward it; and calling for a commitment independent of specific experience or events.”181 The intellectual process might seem to be a counterforce against generalized, ideological visions, since its canons imply following the particular consequences of its cognitive procedures wherever those consequences (truth) lead in specific instances. Insofar as intellectuals as a social class are motivated by the intellectual process, their positions might be expected to be as diverse as the different readings possible on the complexities of political issues. In short, intellectuals as a social class might be expected to show less of a “herd instinct” pattern as regards group conformity, and at the individual level to dissect issues on their respective specific merits, leading to less correlation among their various political positions than among people who “vote the straight ticket” in either a partisan or an ideological sense. Actual studies of opinions among academics, however, show “exceptionally high correlations among opinions across a broad array of issues,”182 even when the specifics involve such disparate matters as foreign policy, marijuana, and race. These cohesive beliefs among intellectuals have been politically to the left of the general public for as long as such surveys have been taken.183 This is true not only in the United States, but internationally.184 What is important at this point, however, is not so much where the intellectuals are politically, but how cohesively the various positions fit together as principles deduced from an underlying vision.

The coherence of a vision may derive from an accurate depiction of a coherent set of relationships empirically observed in the real world, or from the deduction of various conclusions from a given set of premises without much regard to observed facts. As noted in earlier chapters, many political policies are neither based on hard evidence as to causation nor monitored for hard evidence on subsequent effects, especially negative effects. Antitrust laws, school busing, rent control, and minimum wage laws are all based on their consonance with a general vision of the social process, rather than on empirical tests of their positive and negative effects. That crime is caused by poverty and/or discrimination is also part of the same vision, but the empirical evidence is hardly overwhelming, or even unambiguous, since violent crime declined in the 1930s185 during the greatest depression in history and skyrocketed during the affluent 1960s. In England, the crime rate rose as unemployment was reduced to the vanishing point. What Earl Warren called “our disturbed society”186 had a downwardly trending urban murder rate for about twenty years until the 1960s, when it suddenly doubled in less than a decade, as the Warren Court changed the rules of criminal justice. Sex education in the public schools was another part of the same social vision, and was promoted as a means of reducing teenage pregnancy and venereal disease — but no reconsideration of its wisdom or effectiveness has been made in the light of steep increases in both. The percentage of the public disavowing sex education in the public schools has increased,187 but among intellectuals there is no such reconsideration in the light of evidence. Public support of the death penalty, which was declining prior to the increase in the murder rate in the 1960s, rose again as the murder rate rose. Again, this suggests a public more responsive to empirical evidence than intellectuals — i.e., less ideological. A critic has said of liberal intellectuals that their responses to public issues “are as predictable as the salivation of Pavlovian dogs” and can be predicted “with the same comforting assurance with which you expect the sun to rise tomorrow.”188 The data show this to be an overstatement — but not otherwise an incorrect statement.

If the existence of the intellectual vision raises questions about whether it is a product of intellectual processes or of intellectuals’ occupational self-interest, the specific contents of the prevailing intellectual vision raise the same question even more sharply. These may be summarized, and to some extent simplified, as follows:


1. There is vast unhappiness (“social problems”) caused by other elites with whom intellectual elites are competing — notably businessmen, the military, and politicians.

2. Those who are empirically less fortunate are morally and causally “victims” of those competing elites, and their salvation lies in more utilization of the services of intellectuals as “educators” (literally or figuratively), as designers of programs (or societies), and as political leaders and decision-making surrogates.

3. Articulated rationality — the occupational characteristic of intellectuals — is the best mode of social decision making.

4. Existing knowledge — whether scattered in fragments through society or collected together in traditions, the Constitution, etc. — is inadequate for decision making, so that “solving” the society’s “problems” depends on the specific fragment of knowledge held by intellectuals.


Egocentric visions of the world do not imply deliberate attempts at deception and self-aggrandizement. The mechanisms of human rationalization are too complex for any attempt here to say how such views emerged. It is enough for present purposes that such views of social organization are concentrated among intellectuals, and the question is how these views compare with ascertainable facts.

As a necessarily limited sampling of what has been called a “litany of woe and crisis,” there have been recent assertions by intellectuals that “human society is in a stage of comprehensive breakdown,”189 that the United States “disintegrates,”190 that the nation is “essentially evil and the evil can be exorcised only by turning the system upside down,”191 that “the civil rights legislation is absolutely meaningless, and it was meant to be meaningless,”192 and that “life has broken down in this country.”193 Although intellectuals often pose as articulators of a general malaise, in fact neither the general public nor the designated “victims” share this vision of the intellectuals. Among the supposedly embittered and disenchanted youth, 90 percent describe their past life as happy and 93 percent expect their future life to be so.194 From 80 to 90 percent of the supposedly alienated workers with “dehumanizing” jobs describe themselves as satisfied with their work.195 Significantly, about half felt that others were dissatisfied with their work;196 the intellectuals’ outpourings were not ineffective in matters outside people’s direct experience. More blacks were satisfied than dissatisfied in such areas as work, housing, and education.197 In contrast to the intellectuals’ preoccupation with “distributive justice,” there were four times as many blacks who thought that people with more ability should earn more as there were who believed in even approximate equality of earnings.198 As for “women’s liberation,” fewer women than men were sympathetic to it.199 For Americans as a whole, only 12 percent would like to live in another country — less than in Sweden, Holland, Brazil, or Greece, and less than half as many as in West Germany or Great Britain.200 Among those in foreign countries who would like to live somewhere else, the United States was either the first or second choice in Sweden, West Germany, Greece, Brazil, Finland, and Uruguay.201

Where the public differs from intellectuals, it is often taken as axiomatic that this demonstrates the misguided ignorance of the public and their need to be “educated.” However, the supposed “alienation” of workers, “black rage,” and the opinion of women are subjects on which these respective groups are themselves the experts. Moreover, insofar as there are hard data on such matters, these data almost invariably support public opinion rather than the intellectual vision. The supposedly “meaningless” civil rights revolution saw black family income double in the 1960s while white family income rose by only 69 percent,202 black college enrollment almost doubled in less than a decade,203 and the number of black foremen and policemen more than doubled during the 1960s.204 While statisticians keep large-scale poverty alive with data limited to cash income, in-kind transfers (food stamps, housing subsidies, free medical care, etc.) have reduced it drastically in fact.205 The tripling of government welfare spending from 1965 to 1973 provided a total value of resources consumed by the poor in 1973 which was “enough to raise every officially poor family 30 percent above its poverty line.”206 Yet the official census data are based on samples in which people “are not even asked if they receive food stamps, live in public housing, or are eligible for medicaid.”207 Independent private researchers who count in-kind transfers find only 3 to 6 percent of the American population poor208 by the same standards as the government uses. One perhaps revealing statistic is that 30 percent of the families with official incomes under $3,000 have air conditioners and 29 percent have color televisions.209

Intellectuals almost automatically explain the misfortunes of groups in terms of victimization by elites who are rivals of intellectuals. By asserting or defining (seldom testing) misfortune as victimization, all other possible explanations are arbitrarily ruled out of order, and with them perhaps hopes of in fact remedying the misfortune. The victimhood approach also requires ignoring, suppressing, or deemphasizing successful initiatives already undertaken by the disadvantaged group or portions thereof — thereby sacrificing accumulated human capital in terms of know-how, morale, and a favorable public image of groups usually portrayed as a “problem.” In the victimization approach, intergroup statistical differences become “inequities,” though in particular cases they may be due to group differences in age, geographical distribution, or other variables with no moral implications.

Victimhood as an explanation of intergroup differences extends internationally to the Third World — typically countries that were poor before Western nations arrived, remained poor while they were there, and have continued poor after they left. The explanation of their poverty? Western exploitation! An economist who treats this as a testable hypothesis notes that “throughout the underdeveloped world the most prosperous areas are those with which the West has established closest contact” and contrasts this with “the extreme backwardness of societies and regions without external contacts.”210 But like other victimhood approaches, Third-Worldism is not really an hypothesis but an axiom, not so much argued explicitly as insinuated by the words chosen (“the web of capitalism,”211 “the imperialist network”212) and established by reiteration.

What is the function of victimhood for intellectuals? It hardly derives from rigorous application of intellectual processes. It does, however, greatly enhance the role of intellectuals as a social class — as consultants, advisors, planners, experimenters, authorities, etc. At a minimum, the victimhood approach presents intellectuals with psychic gratifications213 (including denouncing rival elites). Beyond that are influence, power, visibility, and money — ample incentives for most people in most times. The victimhood concept is at least a rational approach, and perhaps an optimal approach, to social questions from the standpoint of intellectuals as a social class, however little it does for anyone else and however counterproductive it may be for society at large. The victimhood approach is also consonant with a more general intellectual approach to human beings, one which abstracts from tangible natural or cultural differences — and is then left highly suspicious of intergroup differences in socioeconomic results, which are indeed inexplicable once major variables have been assumed away.

Behind this questionable cognitive procedure may lie a desire to establish the equality of man and perhaps a sense of “there but for the grace of God go I.” This may be a laudable objective as a counterpoise to the egoistic ideology of individual or group “merit.” But both approaches confuse causation with morality. If individual A has characteristic X, and individual B does not, then it is important for both to know whether X is an advantage or a disadvantage, even if neither “deserves” it and even if both are completely creatures of circumstances beyond their control as regards that characteristic. Nothing is gained by pretending that it doesn’t matter when it does, or by leaving it out of account in explaining differences between them. That only opens the way to concocting mythical reasons for their differences.

The victimhood axiom is based on little more than a minute scrutiny of rival elites and a reporting of their numerous sins and shortcomings — such as could be found in equally close scrutiny of any other group of human beings — elite or otherwise. That multinational corporations have cheated here and bribed there is neither startling as information nor a causal explanation of Third World poverty, however morally deplorable or legally actionable it may be. If prosperity could come only from the united efforts of upright and noble-minded people, all of mankind would still be sunk in poverty. It is always true, at least in the short run, that those poorly fed would be better fed if the well-fed shared some of their food. That is wholly different from saying that people are starving in India because overfed Americans somehow took their food.

The dissonance between the intellectual vision and the experience and opinions of the public has led to a new phenomenon in recent years, sometimes called “totalitarian democracy.” Whereas in earlier times — the New Deal era, for example — the “intelligentsia saw The People as its ally in the struggle for power,”214 and “a plebiscitary interpretation of democracy”215 was considered a hallmark of liberalism, they now see public opinion and democratic processes as obstacles to be overcome. While intellectuals still speak in the name of The People and espouse democratic ideals, “their ceaseless strategy is inconsistent with their professed thought.”216 Such strategy features “rules that minimize majority participation, thereby permitting a small faction to gain control.”217 Whether within political party caucuses, environmental agencies, or other social decision-making institutions, complex rules and tiresome procedures are sorting devices that ensure the differential survival of intellectuals in decision-making processes. These procedures are, in effect, “the poll tax that the New Elite has been imposing on everyone else.”218 Recourse to courts and administrative agencies as the preferred mechanisms of decision making also favors the chances of intellectuals in imposing their vision on the rest of society. As a leader in the fight for eliminating capital punishment observed, there was “an unmistakable preference for the courts,” because reform through democratic legislation requires either “public consensus or a powerful minority lobby,”219 as contrasted with the greater ease of attempts to “market new constitutional protections to judges.”220 A bow toward democracy is made with claims that the newly created “constitutional” rights are “a response to deeply rooted social conflicts that elected representatives have not addressed” because “the interests that the Court protected could not mobilize sufficient power,”221 but these vague references to “deeply rooted social conflicts” and “power” boil down to the simple fact that a majority of the public — indeed, “a twenty-year high” — supported the death penalty in the midst of the intellectuals’ crusade to abolish it.222 Appeals to a higher moral code — of which they are axiomatically the keepers — not only justify the superseding of the democratic will or the constitutional processes, but justify calling it “democracy,” for it is what the people would want, if only they knew better, if only they shared the intellectuals’ vision. Sometimes the moral superiority of intellectuals is put even more bluntly, as in the assertion that “a more equal society is a better society even if its citizens prefer inequality.”223

Political intellectuals attempt to supersede not only political processes but also cognitive processes. Although they may specialize in cognitive skills, the impersonal or “objective” nature of this skill makes it politically unreliable at any given juncture. What is far more reliable is to use the intellectuals’ general superiority in cognitive matters as a reason for dismissing — rather than arguing with — opposing views on a particular matter. Terman did not in any substantive sense argue with Walter Lippmann over the issue of racially innate intelligence. Rather he used his position as an “expert” in the field to dismiss Lippmann’s ideas as “sentiment and opinion,” contrasted with his own “quantitative methods” — which he referred to but in no way exhibited. Keynes, in a book devoted to comparing capitalism and communism, sweepingly dismissed Marxism as a doctrine “which I know to be not only scientifically erroneous but without interest or application for the modern world”224 — without ever telling us why it was wrong, or even offering a hint. James Baldwin similarly asserted that Americans are “the most dishonorable and violent people in the world,”225 without any reference to others whose claim to that title included the wholesale extermination of more people than were denied civil rights in the United States. More generally, intellectuals’ personal preferences and beliefs tend to become axioms rather than hypotheses. The notion that minority progress can only occur through governmental intervention is a typical such axiom — even though (1) low-income American Indians have long had much government involvement, while more financially successful groups such as Orientals and Jews have had little government involvement in their rise from poverty to affluence, (2) the very existence of northern urban black communities is due almost exclusively to private transfers of property through market mechanisms, and (3) the education of black youngsters was initially almost solely nongovernmental (or even antigovernmental, in defiance of laws against their education in the antebellum South), and it was 1916 before the number of black youngsters educated in public high schools equalled the number educated privately.226 The point is not that these particular facts are decisive as to the relative importance of contemporary political and nonpolitical alternatives. Rather, the point is that opposite facts have been arbitrarily postulated or implicitly assumed, as if they were determining.

Intellectuals’ attempts to depict the less fortunate as victims of some competing elite — especially businessmen — are likewise seldom subjected to any empirical test or even a specification of alternative hypotheses. If low-paid workers were exploited, for example, we might expect to find their employers unusually prosperous rather than finding, as we generally do, high rates of bankruptcy among low-wage firms. The point is not that this particular test has not been used, but that the whole discussion avoided any test, and relied instead on axioms. It is ideological rather than cognitive thinking: “When we discover that certain ideas about man, history and society seem, to those who believe in them, to be either self-evident or so manifestly correct that opposing them is a mark of stupidity or malice, then we may be fairly sure we are dealing with an ideology and ideological thinking.”227

The intellectual vision of victimhood makes the Third World the source of the wealth of the industrial countries, when in fact the bulk of American investments, for example, are in other industrial countries rather than the poorer nations. The rhetoric of victimhood extends even to those who prosper from so-called “underground” publications which are sold openly everywhere, including in government buildings. Often the nonempirical assertions assume the camouflage of empirical statements by the use of modifying words which reduce their meaningfulness (“immeasurably,” “invariably,” “profoundly,” etc.) — a practice which simply “indicates that the writer has no data, has done no research, and has merely transmuted perceptions into ‘facts.’”228

Sometimes this transmuting of notions into “facts” includes an exaggeration of the advancement of foreign totalitarians rather than a denigration of that of democratic nations. For example, the supposed economic triumphs of the Bolsheviks are often based on the belief that czarist Russia had advanced unusually slowly, when in fact it had become one of the fastest growing economies in Europe. The military might of the U.S.S.R. is not proportional to its economic development, but to the ability of its government to appropriate a higher share of its output for military purposes.

Articulated rationality as a process and the delegation of decision making to “experts” have become the central features of the intellectuals’ vision of political and social decision making. Where there is no compellingly articulated rationality, then, from this viewpoint, there is irrationality. The experiential, systemic, traditional, or other forms of authentication are not even considered. Thus “Americans have an irrational commitment to private ownership”229 to which they are “addicted,”230 and social goals are built into the very definition of “rational” policy,231 in the approach of two well-known scholars who unsurprisingly declare: “Delegation to experts has become an indispensable aid to rational calculation in modern life.”232 To them bureaucracy “is a method for bringing scientific judgments to bear on policy decisions,”233 and a “triumph for the deliberate, calculated, conscious attempt to adapt means to ends in the most rational manner.”234 Like Max Weber’s assertion of the “indubitable technical superiority” of bureaucracy235 and Thorstein Veblen’s assertions of the supposed efficiency of a technocratic economy,236 this argument ignores the fact that there is no such thing as efficiency independent of values. Processes are efficient or inefficient at reaching specified values — e.g., an engine in moving a car forward, rather than dissipating its power in random shaking. No amount of bureaucratic or technological expertise can produce “efficiency” by numerous and disparate individual standards, however much such expertise may facilitate the substitution of other standards by “experts” to whom power has been delegated.

Perhaps the most important policy question is not how or why intellectuals have sought power but how and why others have granted them as much power and influence as they have. It has seldom been because of any demonstrated success. Crime rates have soared as the theories of criminologists were put into practice; educational test scores have plummeted as new educational theories were tried. Indeed, no small part of the intellectuals’ achievement has been in keeping empirical verification processes off the agenda. Moreover, those who are more essentially intellectual in occupation — primarily producers of ideas — have been both more avid and more favored in power terms than those who produce tangible benefits in verifiable form. It is not the agronomists, physicians, or engineers who have risen to power, but the sociologists, psychologists, and legal theorists. It is the latter groups who have transformed the political and social landscape of the United States and much of the Western world. Not only is much of their cognitive output inherently unverifiable empirically; they have by various definitions and axiomatic procedures made their output even less susceptible of authentication than it would be otherwise. The jargon alone in these fields makes their substance largely inaccessible to outsiders. Transitionism explains away all disastrous consequences as the short-run price for a long-run triumph. They have conquered by faith rather than works. This is hardly surprising in the light of similar achievements by religious intellectuals who preceded them by centuries. Whatever has made human beings eager to hear those who claim to know the future has worked for modern as well as ancient intellectuals.

The modern equivalent of the ancient seer to whom men submitted their credulity is the “expert.” Deference to “experts” generally does not depend upon any consideration of (1) whether there is in fact any expertise on the particular issue (often there is not, especially in the social sciences), (2) whether the individuals selected have in fact any such expertise, as contrasted with an assortment of miscellaneous information, or (3) whether those who have expertise are in fact applying it, as distinguished from using it as a means of imposing personal preferences or group fashions. Politicians may also take issues to “experts” as a means of escaping political responsibility for unpredictable or controversial outcomes. Finally, there are “experts” whose expertise consists largely of detailed knowledge of some particular governmental program, whose institutional complexities and jargon make them incomprehensible to others. The enormous investment of time and effort required to acquire familiarity with intricate regulations and labyrinthine administrative procedures is unlikely to be made by someone unsympathetic to a program, both because the philosophic or cognitive interest would not be sufficient and because such an investment offers large payoffs only to those whom the particular bureaucracy would employ as consultants or officials — obviously not those unsympathetic to its programs. Even among “experts” in institutional detail who are unaffiliated with the program, their expertise has value only so long as the program itself exists. They would become experts in nothing if the programs were abolished, and a costly investment on their part would be destroyed. Under this set of incentives and constraints, it may be a truism that “all the experts” favor this or that program, but that may indicate very little about its value to the larger society. “Experts” of this sort can often devastate critics by exposing the latter’s misunderstandings of particular details, terminology, or legal technicalities — none of which may be crucial to the issue but all of which establish politically the superior knowledge of those favoring the program, and enable them to dismiss critics as “misinformed.”

It is not so much the bias of “expert” intellectuals that is crucial as the gap between their perceived “objective” expertise and the reality — a gap which makes the political process vulnerable to their influence. Publicly recognized special interest groups — landlords discussing rent control, oil companies discussing energy, etc. — may have similar incentives and constraints, but they are far less effective in getting their viewpoints accepted as objective truth or social concern. But when an academic intellectual appears as an “expert” witness before a congressional committee, no one ever asks whether he has been a recipient of large research grants or lucrative consulting fees from the very agency whose programs he is about to “objectively” assess in terms of the public interest. While special interest advertising carries not only that explicit designation but a heavy price tag as well, talk show hosts eagerly welcome “experts” extolling the virtues of this or that program, or raising alarms about the dire consequences of its possible curtailment or extinction. Such experts are then thanked warmly for “taking time out from your busy schedule” to come “inform” the public — i.e., to get free advertising for their special interest, with an audience in the millions. The print media are equally likely to bill such “experts’” statements as news rather than advertising.

As noted in Chapter 8, special interests can serve a useful social purpose in airing issues — especially when there are competing special interests and they are all recognized for what they are. The political advantages of intellectuals derive precisely from their not being recognized as interested parties. When businessmen, academic intellectuals, and others dispute, it is the difference in the public’s cost of knowing the personal stakes of each spokesman that gives the intellectuals their decisive advantage. In many issues, there are no competing organized interests to challenge the intellectuals, as when it is a question of taking tax money and using it to create or support programs that intellectuals favor on ideological grounds or for personal gain. Vast governmental research funds, controlled by the very agencies whose performances and impact are being evaluated, ensure that any politically sophisticated agency can field a battalion of precommitted “experts” from among its academic grant recipients and consultants. Not all of the latter are simply “hired guns.” As long as the agency involved can select among grant recipients, it can choose people sincerely committed to its viewpoint and pass over those sincerely committed to opposite views. The former will have massive research to back up their viewpoint; the latter may be reduced to speaking in generalities or raising methodological questions about others’ research, neither of which is very effective politically. The net result is that tax money is used to subsidize campaigns to get more tax money. More important, from the standpoint of freedom, central government power is used to promote more central government power, with intellectuals a major force in these efforts.

Despite their acceptance as independent “experts” giving objective judgments, intellectuals have enormous personal stakes. In addition to their immediate personal gains as individuals, intellectuals as a class are dependent upon the backing of political power to impose their visions on the underlying population. The history of intellectuals from the Roman and Chinese empires to the French Revolution to modern totalitarianism shows how compelling a goal that has been, and how readily the freedom of others is sacrificed to such visions — whether of religious salvation or “social justice.” Totalitarianism is only a carrying to its logical conclusion of the view that the vision — ideals, principles, religion, etc. — is paramount and flesh-and-blood human beings expendable.

Ironically, despite intellectuals’ power-concentrating role and their insulation of that power from public feedback, among their justifications is the claim that other decision-making elites possess concentrated power and are unaccountable in its use. Attempts to depict nonintellectual decision makers as both powerful and socially irresponsible are clearly in the class interest of intellectuals. Moreover, it is easy for intellectuals to conceive of rival elites as unaccountable powers because their accountability is often not in terms of articulated rationality, the central modality of intellectuals. Corporate executives’ decisions may reflect very little articulated input from the public and may be accompanied by very little discussion of their own reasons, or may even be obfuscated by public relations statements — and yet be responsive to public opinion to the point of paranoia about offending, boring, or otherwise losing their customers. The extreme sensitivity of television networks to program ratings is a classic case of corporate hyperresponsiveness in a situation where there is virtually no articulate consumer-producer interaction. The Edsel was not dropped, nor the W.T. Grant department store chain liquidated, because of articulation in either direction, but because customer choices forced such decisions.

In short, the absence of articulated accountability is not an absence of accountability as such. Conversely, the presence of articulation, and of phrases about “the public interest” or “the people,” does not imply accountability, whether such phrases are used by intellectuals, politicians, or corporate press agents copying their styles to convey a fashionable image of “corporate responsibility.” The decisive knowledge that is conveyed, and responded to, is transmitted financially. Accountability is apparent not only in the dramatic cases where famous products or companies disappear, but more pervasively in the constant changing of products, corporate policies, and/or managements to accommodate changing consumer preferences and changing technological and organizational possibilities.

That intellectuals tend to conceive of accountability solely in terms of their own processes of articulated rationality says more about the myopia or egocentricity of intellectuals than about the functioning of social processes. A businessman whose whole economic future is staked on the correctness of his assessments of consumer desires or technological possibilities is regarded by intellectuals as unaccountable, because he does not articulate to anyone. Conversely, psychiatrists, psychologists and social workers whose articulated assessments lead to dangerous criminals being turned loose are not accused of being unaccountable, even though they suffer no penalties for the robberies, assaults, or murders committed by those released — not even the embarrassment of having a personal box score kept on the criminals released on their recommendations.

Many of the same intellectuals who depict business as unaccountable to the public also deplore such things as television ratings and the proliferation of product models differing by nuances (automobiles, telephones, airline passenger sections) — all representing attempts to cater to public taste(s). Intellectuals’ conceptions of making business accountable almost invariably involve making more articulation necessary — at stockholders’ meetings, before government agencies, or in public disclosures about internal business processes. Unarticulated accountability by results — product characteristics and prices — is either ignored or arbitrarily subordinated to articulation about processes, despite the fact that (almost by definition) a lay public is more likely to be able to judge tangible end results than to monitor complex specialized processes. Often proposals for accountability in the name of the public mean in practice articulation to intellectuals placed on corporate boards by government (or under threat of government action) as “public” representatives. Here the self-interest of intellectuals is even more apparent, and the claim of responsiveness to the desires of the general public even more questionable.

Nowhere is the meaning of “public” representation better illustrated than in so-called “public” television, where the tastes actually served are not those of the public but of atypical elites, favoring sports (soccer, tennis) different from those preferred by the public (baseball, football), favoring British soap operas (“Poldark,” “Upstairs, Downstairs”) rather than American, and rescuing performers who lost out in public popularity (Dick Cavett) compared to their competitors (Johnny Carson), but who happen to be favored by intellectuals. The issue here is not about the artistic merits of these various entertainment productions, but about what “public” accountability means in practice, when conceived of as articulation rather than alternative processes for conveying public preferences.

Sometimes the supposed lack of “accountability” of corporate management is vis-à-vis stockholders, rather than the general public. The “separation of ownership and control” has long been regarded as a social “problem” to be “solved” — almost invariably by more articulation and/or political control. The possibility that such separation may be desired by stockholders themselves is ignored. Yet many stockholders have sufficient investments to form their own business and manage it — if they wanted to. Their preference for having someone else carry out the managerial functions is revealed by their purchase of stock. As stockholders, they monitor end results — dividends — rather than attempt to monitor managerial processes. To allow other stockholders or “public” representatives to monitor managerial processes would be to deprive stockholders in general of the option of choosing to whom to entrust their investments. Those stockholders who might prefer being involved in management can of course hold stock in such corporations as choose to attract them by offering such terms, if such arrangements are sufficiently viable to allow such corporations to compete and survive.

Sometimes the business “concentration” that is attacked is based on the percentage of the market served (“controlled”) by some small number of companies, or the proportion of wealth or land owned by some given number or percent of businesses, families, or individuals. As noted in earlier discussions of so-called “income distribution,” much of the individual and family data reflect different stages of a life cycle rather than enduring membership in one class or another — some of today’s upper bracket people being yesterday’s lower bracket people, and some of today’s lower bracket people being the children of today’s upper bracket people. Business concentration figures are even trickier. Statements that, for example, 568 companies control 11 percent of the land area237 convey insinuations but no economic conclusion or even allegation, since 568 companies are not a decision-making unit, nor even a basis for a viable conspiracy — even if 11 percent of the land were enough to conspire with. To claim, as Ralph Nader does, that twenty-five landowners own more than 61 percent of California’s private land238 is completely misleading. Not only do state and national governments own a substantial part of California — reducing the true percentage well below the 61 percent figure — but the so-called twenty-five “landowners” also include thousands or even millions of people, because of organizational ownership by corporations with vast numbers of stockholders. The full facts reveal not so much a concentration of land ownership among few people as a preference of many people to have their assets managed for them by professional managers.

Given the advantages of specialization, it is hard to imagine how various activities could fail to be “concentrated.” Business concentration is simply arbitrarily singled out for detailed scrutiny and exposé-style treatment, fraught with insinuations but devoid of empirically testable conclusions. The implicit premise is that there is something strange, unique, or sinister in such numerical relationships representing “concentration,” when in fact such numerical relationships are commonplace throughout human endeavors. Anyone who watches professional basketball knows that less than 12 percent of the population supplies over half the basketball stars. Only 3 percent of the population grows all of the food; less than 1 percent of the population runs all of the post offices or drives all of the taxicabs. Indeed, far less than 1 percent of the population writes all the stories about small percentages of people controlling large percentages of activities. All the authors, editors, and reporters in the country add up to much less than 1 percent of the population — and in fact less than one-twentieth as many people as proprietors, managers, and officials in business, who are supposed to represent “concentration” dangers.239 The simple underlying fact of advantages of specialization can be looked at in many ways, including the sinister insinuations chosen by intellectuals when discussing competing elites.

The discussion here of the political role of intellectuals has been almost exclusively a discussion of the role of politically liberal intellectuals because (1) the predominant political orientation of American intellectuals has been liberal and left, and (2) nonliberal intellectuals are a small, politically far less influential, and heterogeneous group, consisting of followers of specific economic or social principles — the “Chicago School” of economists (Milton Friedman, George Stigler, etc.), the sociologically oriented “Neo-conservatives” (Irving Kristol, James Q. Wilson, etc.), and conservatives in the more usual sense of people who follow traditional values (William F. Buckley, Russell Kirk, etc.). Unlike political liberalism, which can be reduced to a body of values, postulates, or inferences,240 “conservatism,” as the term is usually applied (to include all the varieties itemized above, for example), has little or no determinate content. If a conservative is someone who wants to conserve, then what specifically he wants to conserve depends upon what happens to exist, and this might be anything from the social-political system of eighteenth-century England to the contemporary Soviet Union. In short, the broad label “conservative” is itself virtually devoid of content, however much specific content there may be in each of the groupings and individuals to whom that label is loosely applied.

Because the great majority of intellectuals are liberal, it is essentially liberals who define what is meant by the term “conservative.” In the liberal vision, conservatives are people who want either to preserve the status quo or to go back to some earlier and “simpler” times. However politically effective such conceptions may be in putting alternatives out of court, there are great cognitive difficulties with such characterizations. For example, there is not a speck of evidence that earlier times were in fact “simpler,” though of course our knowledge of such times may be cruder. Moreover, the status quo in the United States and throughout much of Western Europe is a liberal-left status quo, entrenched for at least a generation. Alternatives to this are arbitrarily called “going back,” even when these alternatives refer to social arrangements that have never existed (the monetary proposals of Chicago economists, for example), while proposals to continue or accelerate existing political-economic trends are called “innovative” or even “radical.” Conservers of liberal or socialist institutions are never called by the pejorative term, “conservative.” Neither are those who espouse the ideals, or repeat the very phrases, of 1789 France. In the broad sweep of history, the systemic advantages of decentralized decision making are a far more recent conception than the idea that salvation lies in concentrating power in the hands of the right people with the right principles. Adam Smith came two thousand years after Plato, but contemporary versions of the philosopher-king approach are considered new and revolutionary, while contemporary versions of systemic decentralization are considered “outmoded.” Such expressions are themselves part of a vision in which ideas may be judged temporally rather than cognitively — what was adequate to older and simpler times being inadequate for the complexities of modern life.

The characteristics of the intellectual vision are strikingly similar to the characteristics of totalitarian ideology — especially the localization of evil and of wisdom, and psychic identification with the interests of great masses, whose actual preferences are ignored in favor of the overriding preferences of intellectuals. It is consistent with this that intellectuals have supported and indeed spearheaded the movement toward a centralization of political power in democratic nations and have apologized for foreign despotisms and totalitarianisms which featured like-minded people. Democratic traditions may create either internal ideological conflicts or an external pragmatic need to rhetorically paper over the totalitarian thrust of the intellectual vision. Here intellectual processes — definitional clarity, logical consistency, canons of evidence — are often sacrificed to the intellectual vision or the self-interest of the intellectual class. For example, antidemocratic processes may be described by democratic rhetoric as “participation” or “public” representation. Presumption may be substituted for evidence — past, present, or future — as in numerous arguments that the national I.Q. was declining, or existing evidence may be resolutely disregarded, as in claims that crime rates reflect social “root causes,” or that “innovative” educational methods are more effective, or that sex education reduces the incidence of teenage pregnancy and venereal disease. In short, there is little to suggest that intellectuals’ political positions reflect the intellectual process, and much to suggest that their positions reflect a vision and a set of interests peculiar to the intellectual class.

SUMMARY: EMBATTLED FREEDOM

Freedom has always been embattled, where it has not been wholly crushed. The desires for freedom and for its opposite, power, are as universal as any human attributes. The nuclear age has added a new dimension to the struggle between them. So too has the rise to prominence of intellectuals as a social class with growing political aspirations, influence, and/or dominance.

Almost by definition, the movement to totalitarianism is a one-way movement. No totalitarian government has ever chosen to become free or democratic, though a free and democratic nation may choose to move toward totalitarianism, as Germany did in 1933. If governmental choice were the only variable, the eventual worldwide triumph of totalitarianism would be inevitable, since choices in one direction are reversible and choices in the other direction are not. Nazi totalitarianism was smashed by external military power and its empire liberated by invading armies. But the invasion of Normandy that led to the liberation of Western Europe can hardly find a new counterpart to liberate Eastern Europe in a nuclear age. That the Western democracies had to stand by helplessly while Soviet tanks crushed Eastern European uprisings in the 1950s was grim proof of the new realities of nuclear annihilation. Perhaps in a very long run, political erosions might sap the vitality of totalitarianism, or claims of economic efficiency might modify it incrementally (as they already have in agriculture) to the point where it ultimately no longer resembles its present centralized model. But even these remote hopes are lessened if the surviving examples of free and democratic nations are lost before this can happen.

In the nuclear era, the international survival of the nontotalitarian world rests ultimately on an American nuclear deterrent. Otherwise the nuclear power of the Soviet Union would be irresistible as a threat in international power politics, whether or not it was ever actually used. Seldom has the survival of human freedom rested so decisively in the hands of one government, or the survival of the species in just two.

The spread of totalitarianism — communism since World War II — has been at the expense of all kinds of nontotalitarian governments: a democracy in Czechoslovakia, a kingdom in Laos, a Latin American autocracy in Cuba. These various forms of government, whatever their merits or demerits otherwise, tend to be changeable. A dictatorship like Spain could liberalize after Franco, and Portugal could swing to the left after Salazar. As of any given moment, some of these governments might seem not very different in their degrees of freedom from communist dictatorships. But a communist dictatorship has a permanence that these other forms of government cannot approach. Inasmuch as most of the governments on the planet are nondemocratic as well as noncommunist, stemming the spread of totalitarianism necessarily means American cooperation with nondemocratic nations. To some Americans, and especially to intellectuals, such cooperation appears as a violation of the democratic creed, one that should be made contingent on the nondemocratic nation’s adoption of democratic institutions. This is a special case of the general implicit assumption of a single scale of values applicable to all. The historical recency and rarity of constitutional democracy makes the universal application of such a model especially egocentric and arbitrary. As a precondition for cooperation to stem the tide of an irreversible totalitarianism, it suggests either a low estimate of the threat or an unwillingness to face the historic responsibility implied by it.

The central assumption of a single scale of values applicable to all is a force in domestic as well as international politics. It has facilitated the imposition of many specific laws and policies resented by the population, and — more important — it has altered the enduring political framework to make such impositions possible through courts, administrative agencies, and other institutions and processes insulated from public feedback and responsive to smaller, more zealous constituencies. Domestically as well as internationally, freedom as the general preservation of options gives way to the imposition of one group’s preferred option. The influence of such groups greatly exceeds their numbers, partly because they are perceived as objective “experts” and partly because of the moral nature of their arguments and the apparently moral high ground that they themselves occupy (as contrasted with the arguments of conventional special interest groups in these respects).

The moralistic approach to public policy is not merely a political advantage to those seeking greater concentration of power. Moralism in itself implies a concentration of power. More justice for all is a contradiction in terms, in a world of diverse values and disparate conceptions of justice itself. “More” justice in such a world means more forcible imposition of one particular brand of justice — i.e., less freedom. Perfect justice in this context means perfect tyranny. The point is not merely semantic or theoretical. The reach of national political power into every nook and cranny has proceeded in step with campaigns for greater “social justice.” A parent forced by law and limited income to send his child off to a public school where he is abused or terrorized by other children is painfully aware of a loss of freedom, however much distant theoreticians talk of justice as they forcibly unsort people, and however safe the occupational advantages of intellectuals remain from governmental power.

The myopic conception of freedom as those freedoms peculiar to intellectuals, or formal constitutional guarantees, ignores the many ways in which options can be forcibly removed by administrative or judicial fiat, or by the government’s ability to structure financial or other incentives in such a way as to impose high costs or grant high rewards according to whether individuals and organizations do what the government wants done — whether or not the government has any explicit statutory or constitutional authority for controlling such behavior. More than a century ago, John Stuart Mill saw the dangers in the growth of the extralegal powers of government:

Every function superadded to those already exercised by the government causes its influence over hopes and fears to be more widely diffused, and converts, more and more, the active and ambitious part of the public into hangers-on of the government, or of some party which aims at becoming the government. If the roads, the railways, the banks, the insurance offices, the great joint-stock companies, the universities, and the public charities were all of them branches of the government; if, in addition, the municipal corporations and local boards, with all that now devolves on them, became departments of the central administration; if the employees of all these different enterprises were appointed and paid by the government, and looked to the government for every rise in life; not all the freedom of the press and popular constitution of the legislature would make this or any other country free otherwise than in name.241

Freedom is endangered both internationally and domestically. The international danger turns ultimately on military power, and the domestic danger on ideology. It is not merely that an ideology may be wrong — everything human is imperfect — but that the zeal, the urgency, and the moral certitude behind it create special dangers to a free constitutional government of checks and balances, for maintaining that constitutional freedom often seems less important than scoring a victory for “justice” as envisioned by zealots. When a segment of these zealots is able to pose as disinterested “experts,” the dangers are compounded.

The United States of America is a central battleground for both kinds of dangers to freedom, domestic and international. Militarily, the whole Western world is dependent on American nuclear power. Politically, the power-centralizing forces have advanced much further toward their goals in other Western countries than in America, where a variety of autonomous forces are still able to oppose these trends. Intellectuals have never been as cohesive in the United States as in smaller, more socially homogeneous countries,242 and the public has never been as thoroughly awed by them. One symptom of this is the utter failure of socialist movements to take root in the United States, while they are strong in Western Europe. Socialist movements (and communist movements) have — in every period of history and around the world — been the creation of middle-class intellectuals, though the ceaseless reiteration of the “working class” theme in socialist rhetoric may verbally obscure this plain fact. Where socialist intellectuals have allied themselves politically with labor unions — as in the British Labour Party, for example — it is the intellectuals who lead the alliance to the left, with varying degrees of resistance or acquiescence by the working class segment of the alliance. The very same pattern has been attempted at various times in American history, but American workers have historically been far less deferential to their “betters” — whether employers or intellectuals — than European workers. The intellectuals have been more successfully rebuffed here.

Certainly if the trend toward centralization of power — and the corresponding erosion of freedom — can be stopped anywhere, it can be stopped in America. But in a nuclear age, even the momentous question of human freedom must be considered in the light of military realities.

THE MILITARY “BALANCE”

For a brief period at the end of World War II, the United States stood in a military power position perhaps unparalleled in human history. The Roman Empire at its height was not as unchallengeable. In addition to its monopoly of the greatest military weapon in history, the United States alone of the industrial nations had its entire productive capacity intact, unscathed by war, and producing more than all the rest of the world put together.243 Its people were united behind the government as seldom before or since. In sheer power terms, the United States could have imposed an American empire or at least a modern version of the Pax Britannica that kept Europe and most of the world free of major wars for generations. The point here is not to argue that either of these things should have been done. The point is to show the situation, the possibilities, and to compare these with what in fact happened.

What actually happened was that three-quarters of the total American military force demobilized in one year — 9 million men and women from 1945 to 1946 — and the remaining 3 million military personnel were reduced by half again by 1947.244 By 1948 the American military force was smaller than it had been at the time of Pearl Harbor. Nations from which the American army drove the Nazis were forthwith restored to their own sovereignty. The American occupation army that entered Japan in 1945 was ordered neither to take nor even to buy food from the Japanese, as that would reduce the food supply badly needed by the Japanese civilian population. For what may have been the first time in history, a conquering army was put on short rations until food arrived from its homeland, so that a conquered people would not be deprived. The humane treatment of conquered enemy nations made Germany and Japan two of the most pro-American nations in the world, both politically and culturally. These actions are noteworthy in themselves, remarkable against the historical background of other conquering nations, incongruous with the image of a “sick” society, and in particular contrast with the record of the Soviet Union.

Over the years since World War II, the military supremacy of the United States has disappeared, and what has been called the “nuclear stalemate” has emerged. Both the United States and the Soviet Union have enough nuclear weapons to annihilate the major population centers of the other nation several times over — “overkill,” as it is called. However, nuclear “overkill” may not be as unprecedented as it appears, nor as decisive an indication of negligible incremental returns to continued military development. It may well be that when France surrendered to Nazi Germany in 1940, it had enough bullets left to kill every German soldier twice over, but such theoretical calculations would have meant little to a conquered nation. Would anyone say that a lone policeman confronting three criminals had “overkill” because his revolver contained enough bullets to kill them all twice over? On the contrary, depending on how close they were, and with what weapons they were armed, he might be in a very precarious position.

In an era of sophisticated radar defenses and missile interceptor systems, the only way to actually deliver a nuclear weapon on target might be to saturate the enemy defense system with more incoming missiles than it can handle — that is, with a number of missiles representing extravagant “overkill” in terms of what would be theoretically necessary if the enemy were as defenseless as a sitting duck. Since both the United States and the Soviet Union have missile defense systems, theoretical examples of “overkill” — if taken literally — represent either naiveté or demagoguery, depending upon how they are used. As long as the technology of attack and defense systems keeps advancing, there is no point at which we can comfortably say, “enough,” because it is not the size of the arsenal but the ability to deliver it through enemy defense systems that matters. Military forces have always had overkill. It is doubtful whether most of the bullets fired in most wars ever hit anybody, and a substantial number of soldiers never fire at all. Yet no one would claim that it is futile to arm soldiers going into combat or that it is a waste to issue more bullets than there are enemy soldiers.

The history of the Soviet-American military balance has been essentially a history of the relative decline of the American position. Whereas the United States in 1965 had several hundred more nuclear missiles than the U.S.S.R., by 1975 the Soviets had more than a thousand more nuclear missiles than the United States.245 Whereas the United States in 1965 had more military personnel in both conventional and nuclear attack forces than the U.S.S.R., by 1975 that too had been reversed.246 Most other components of nuclear military power had also changed to the detriment of the United States in this decade.247 In Europe, the Soviet bloc Warsaw Pact outnumbers the Western NATO allies in troops (50 percent more), tanks (three times as many), airplanes (40 percent more), and artillery pieces (three times as many), with the lone Western military advantage being in tactical nuclear weapons (twice as many).248 Tactical nuclear weapons — the West's one advantage — have the serious disadvantage that a defending nation risks endangering its own people with radioactive fallout if it uses the weapons against an invader. The invading forces face no comparable risk, since their tactical nuclear weapons would be used near someone else’s civilian population.

Western attempts to redress this imbalance by developing a tactical nuclear weapon with reduced and more transient fallout — the so-called “neutron bomb” (actually an artillery shell) — were met by a massive worldwide propaganda campaign, centering on an incidental feature of the weapon, its lack of destruction of physical structures. That it would “kill people but not destroy property” became the theme of Soviet propaganda, echoed in the West, creating the impression that this demonstrated the capitalist mentality of concern for things rather than people. That the Soviets would argue this way is unsurprising, but that it should find such a responsive echo on the political left in Western countries — especially on a matter of national survival rather than political ideology — proved politically decisive. Antineutron “bomb” demonstrations swept across the Western world, and at the eleventh hour in the NATO negotiations, the American President withdrew plans for this tactical weapon, whose chief military characteristic was that it equalized defensive forces with offensive forces by not requiring defensive forces to destroy their own civilians to repel an invader. Existing tactical nuclear weapons, for example, would kill an estimated five million civilians in West Germany alone if used to repel an invader.249 The credibility of such a weapon as a deterrent could be discounted in advance by any invader, aware that it could literally hurt defenders worse than it would hurt an invading army. That emotional or ideological predispositions should influence decisions of this grim magnitude is an indication of the greater political as well as military vulnerability of the West. Such political reactions on the political left in Europe were far stronger than in the United States, the left itself being stronger in Western Europe. In America, the leading liberal spokesman, Senator Hubert Humphrey, threw his support behind the weapon.250 Western governments were apparently also in favor of the weapon, but often more so privately than publicly, given the political furor.251

How did the present military imbalance develop, given the initial Western predominance? Quite simply, by political decisions to trade off defense spending for domestic welfare programs. In 1952 military expenditures were 66 percent of the federal budget, but this declined to 24 percent by 1977, while social welfare expenditures rose from 17 percent to 50 percent over the same span.252 Inflationary dollar figures maintain the political illusion that defense spending is rising, but in constant purchasing power terms military expenditures in the United States declined not only relatively but absolutely. Moreover, much of today’s military spending represents simply higher pay for military personnel — a fourfold increase in cost per soldier since 1952253 — rather than for weapons. More than half of all current American military expenditures are for personnel costs. The Soviet government has maintained and increased its military expenditures as the United States has reduced its own. In short, the relative decline of American military power has been largely self-imposed, and “arms race” talk simply ignores the Soviet military buildup that has proceeded while American military resources were being diverted to social programs.

There is a striking parallel here with the decline and fall of the Roman Empire. In its early years the Romans “preserved the peace by a constant preparedness for war.”254 Their soldiers were rigorously trained255 and carried heavy armor and weaponry,256 and were commanded by the Roman aristocracy and led in battle by emperors.257 Their morale was supported by the pride of being Roman.258 Later, discipline relaxed,259 and the soldiers carried less armor and weaponry, as a result of their complaints about bearing burdens that earlier generations had carried.260 They were defeated by barbarian armies smaller than other barbarian armies that had been routed by Roman legions in earlier times.261 Behind the self-weakening of Rome lay forces similar to those at work today in the United States and in the Western world at large: internal divisiveness262 and demoralization,263 rising welfare expenditures,264 a growing and stifling bureaucracy265 — and a rising political influence of intellectuals.266 In Rome, as in later Western countries, both the zealotry and the power were concentrated precisely in those particular intellectuals who dealt in nonverifiable theories — religious theories in the case of Rome; “social justice” in the contemporary West.

The longer time horizon of a one-party totalitarian state is a military as well as political advantage. In the short run, elected officials in a democratic country have incentives to convert military expenditures into social welfare expenditures, since the former involve long-run national interests and the latter have short-run political payoff. This is especially so in an era when high levels of fixed governmental obligations and voter resistance to higher taxes leave little room for financial maneuvering, other than cutting the military share of the budget. In the United States that share has already been reduced by more than 40 percentage points in the past quarter century.267 A totalitarian government like the Soviet Union need make no such reductions, nor has it.

Not only are there political dividends in cutting defense spending — defense “waste” by either allegation or definition — to finance social programs; there are also more direct political dividends from advancing toward “peace” through military agreements with the Soviet Union, regardless of the long-run consequences of the specific terms of those agreements. The political advantages of such agreements fall within the time horizon of elected incumbents, while any later consequences are left for future administrations or generations to cope with. Again, this is not to claim that such explicitly cynical calculations are made. The point is that this is the tendency of the incentives, and human rationalization in the face of tempting incentives is a common phenomenon. As Congressman Les Aspin remarked, “you’ve got to cut the defense budget if you want sufficient money for your own programs.”268 The net result is an asymmetry in the bargaining power of the U.S. and the U.S.S.R. Politically, American elected officials need to make such agreements more so than do Soviet officials, who are in a position to hold out for terms which neutralize those weapons in which the U.S. has an advantage and enhance the prospects for those weapons in which the U.S.S.R. has an advantage. At any given time, the results need not be a blatant imbalance. The cumulative effect over time is what matters.

The history of the West in general and the United States in particular is not encouraging as regards military preparedness. In the 1930s, the American army was only the sixteenth largest in the world, behind Portugal and Greece. In 1934, despite the aggressions of Japan in the Orient and the rise of Hitler in Europe, the budget of the U.S. army was cut 51 percent, to help finance New Deal programs.269 Overall military expenditures were reduced 23 percent in one year,270 and total military personnel on active duty fell below a quarter of a million in the early 1930s, drifting downward each year from 1930 through 1934.271 The Civilian Conservation Corps of young men working in forests was larger than the army — and the CCC recruits were paid more.272 Attempts to train them militarily were defeated politically by a pacifist protest led by intellectuals — John Dewey and Reinhold Niebuhr.273 Later, attempts to build some semblance of military defense for the Philippines were criticized by the editor of the Nation, who asked why the islands’ people were not being taught to live rather than to kill.274 This lofty assumption of unconstrained choice — three years before Pearl Harbor — takes on a grim or even hideous aspect as an historical background to the devastation of the Philippines and massive, unspeakable atrocities against its people by invading Japanese armies. American soldiers in the Philippines vainly attempted to defend themselves with obsolete rifles, mortars a quarter of a century old, and mortar shells so old that they proved to be duds in 70 percent of the cases.275 On Bataan, four out of five American hand grenades failed to explode.276 Attempts to break through the Japanese blockade of the Philippines had to be made “with banana boats hired from the United Fruit Company, and with converted World War I destroyers.”277 These were among the long-run costs of the “savings” on military expenditures during the previous decade. Actually it was not a saving but a disinvestment — a current consumption of future resources.

The uncontrolled political climate of a free nation allows the development of ideological currents inimical to national defense — the so-called “neutron bomb” episode being but one example — or even the orchestration of propaganda campaigns by foreign powers with an obvious vested interest in reduced Western military defense. Moreover, the unverified nature of arguments about nuclear prospects — prospects that no sane person wants verified — gives a special political advantage to the verbally adept, that is, to intellectuals, who have tended to be antimilitary at least as far back as the Roman Empire.278 It was precisely at the leading British universities that young men took the “Oxford Pledge” in the 1930s never to defend their own country in warfare.279 Such pacifist reaction to the carnage of World War I may have been understandable, like the current American reaction to the bitterness of Vietnam. However, such attitudes were a crucial element in the Western powers’ appeasement of Hitler at a time when they had superior military force but were politically incapable of using it.280 By the time Hitler’s rearmament policy, annexations, and conquests had changed Britain’s attitude, it was he who had the superior military force. When the young men who took the “Oxford Pledge” saw Hitler’s armies marching and the bombs falling on their own homes, they vindicated themselves in the skies over Britain and later on the beaches at Normandy. But it was still a desperately close brush with subjugation by one of the greatest barbarians in human history. Hitler’s outrages put a pacifist intellectual like Einstein in the ironic position of initiating the development of the most destructive military weapon ever used. But now that the nuclear age is here, such changes of mind as a result of crisis experience may no longer be possible — or at least, not in time to change policy and change history. The timetable of a nuclear war — or nuclear blackmail — may not permit second thoughts about what should have been done when we had the chance.

For a richer and technologically more advanced nation to fall behind militarily, when national survival and the survival of democratic freedom internationally are among the stakes, requires a certain amount of demoralization. No one supplies this demoralization more constantly or effectively than intellectuals. Again, this is not, historically, a new role for intellectuals. The intellectuals’ vision has long taken precedence over any tangible reality. In the Roman Empire, the vision was religious salvation, and if divisiveness was engendered by persecutions of pagans, thereby weakening a whole civilization in the face of barbarian invaders, so be it. If the social visions behind the French Revolution required the execution of tens of thousands of human beings (including revolutionary philosophers like Condorcet), so be it. If the vision of proletarian communism or German racial purity required that millions be slain, so be it. Against this background, there is hardly any reason for surprise if current visions of “social justice” do not moderate to accommodate military necessity, or if campaigns to discredit rival elites like businessmen or the military are so all-out that the consequences are the demoralization of a whole civilization and a weakening of the will to defend it.

In this context, it is understandable how an American official can speak of the military arms race as something for which “all of us here in America are to blame,” how “the United States has led the way in arms escalation” and how “the lion’s share of the blame,” within the U.S. “belongs to the business sector of society” which is seeking “the profits of doom.”281 It is a remarkable statement from an official representative of the United States to the U.N. Disarmament Session, and particularly for the representative of a country that demobilized almost 90 percent of its armed forces in three years and has voluntarily relinquished military supremacy over the years by cutting back the resources devoted to it. But it is no more remarkable than statements by former U.N. Ambassador Andrew Young equating massive slave labor camps in the Soviet Union with individual miscarriages of justice in American courts, calling the victims of both “political prisoners.” Both officials are extreme examples of a more general tendency toward national demoralization, without which such people could not survive in their official positions. The public’s outrage is a sign that the battle is not over, but that American officials can continue in office after making anti-American propaganda on an international stage is also a sign of the political climate.

THE FUTURE OF FREEDOM

Hobbes defined freedom as the absence of opposition or impediments.282 Freedom may be constrained by political power or informal influences, but as long as diverse human beings constitute a society, their disparate values must somehow be reconciled, and therefore someone’s — or everyone’s — freedom must be curtailed. When these mutual reconciliations are effected through informal channels, reciprocal advantages may be traded off, so that the disparate values of individuals permit them to incrementally relinquish what they value least for what they value most, even though physically what one relinquishes is identical to what another receives. When reconciliations are made by the decisions of formal hierarchies, one scheme of values is offered — and, if the hierarchy is a monopoly such as government, imposed. A choice among hierarchies (churches, employers, associations) preserves freedom through the inevitable differences among human beings as individuals or groups.

Where the differences among people are least — in the desire to be safe from violence and secure in their possessions, for example — there is less sacrifice of freedom in assigning to a monopoly the power to punish individual violence or robbery. Were the same monopoly to determine the “best” size(s) or style(s) of shoes, the result would be mass discomfort, and were it to determine more and weightier matters the results would be even less satisfactory in terms of the differing values of individuals, however “better” it might be in terms of the particular values of the monopoly.

This brief summary of various “efficiency” arguments already elaborated in earlier chapters is relevant here to freedom as a separate value in its own right. It is the difference between the preferred and the imposed values that necessitates the use of force — the curtailment (or extinction) of freedom. In this context, an ideology of categorically transcendent values — whether religious salvation or “social justice” — is an ideology of crushing power. The logic of transcendent values drives even the humane toward the use of force, as those not imbued with the same values prove recalcitrant, evasive, or undermining — provoking indignant anger and confronting decision makers with a choice between accepting defeat for sacred causes or applying more power. This systemic logic rather than intentional design drove Robespierre — “a man of great sweetness of character”283 — to mass executions as flesh-and-blood human beings repeatedly acted at cross purposes with the ideals of the French Revolution. “Moralism is fatal to freedom,” wrote a former friend of Robespierre, while awaiting the guillotine.284 It was not a principle unique to the French Revolution. Much milder political changes have been driven by similar logic to exert far more power than originally contemplated in pursuit of a transcendent goal. No one expected Brown v. Board of Education to lead to federal judges taking over local school systems and ordering the massive busing of children, in disregard of both initial opposition and subsequent consequences. Indeed, no one expected the humane social programs initiated by the New Deal to lead to bureaucratic empires issuing their own laws — more laws than Congress — unilaterally, outside the constitutional framework, and almost immune to either electoral correction or judicial oversight. Where, whether, and how we can build a roof over our heads is determined by an anonymous zoning commission; whether we dare walk the streets near our home is determined by decisions of equally unknown parole board members; and how long we can live in our neighborhood depends on the grand designs of urban redevelopment administrators.

These are of course not attacks on intellectual freedom; merely on some of the most precious concerns of ordinary human beings down through the ages. Just how far the myopic view of freedom can go may be illustrated by the behavior of musicians under Nazi rule. As various ethnic, political, and cultural groups successively fled Nazi persecution, the musicians — including, notably, conductor Wilhelm Furtwängler and composer Richard Strauss — remained behind to collaborate with the Hitler regime, because there were no comparable restrictions on musicians’ freedom.285 Against this background, it may be less surprising that intellectuals living in affluent suburbs (or in “security buildings” in the cities) and/or with their children in private schools can see no reason for working class people’s resentment of “progressive” political developments other than benighted ignorance, blind reaction, or vicious racism. Evidence that these are not, in fact, the attitudes of most working people is ignored, for these are the only explanations consonant with the intellectual vision. That businessmen — large or small — are in effect conscripted to be part-time, unpaid administrators for the Internal Revenue Service, the Social Security Administration, and numerous other federal agencies will occasion even less concern.

Past erosions of freedom are less critical than current trends which have implications for the future of freedom. Some of these trends amount to little less than the quiet, piecemeal repeal of the American Revolution.

The American Revolution was very different from the French Revolution of the same era. The French Revolution was based on abstract speculation on the nature of man by intellectuals, and on the potentiality of government as a means of human improvement. The American Revolution was based on historical experience of man as he is and has been, and on the shortcomings and dangers of government as actually observed. Experience — personal and historical — was the last court of appeal of the founders of the United States and the writers of the Constitution. Their constantly reiterated references were to “experience, the least fallible guide of human opinions,”286 to “the accumulated experience of ages,”287 to “the uniform course of human events,”288 to the history of ancient Rome,289 to “the popular governments of antiquity,”290 and the history, economics, and geography of contemporary European nations.291 They explicitly rejected “Utopian speculations,”292 “the fallacy and extravagance” of “idle theories” with their “deceitful dream of a golden age.”293 In contrast to Robespierre, who said that revolutionary bloodshed would end “when all people will have become equally devoted to their country and its laws,”294 The Federalist regarded the idea of individual actions “unbiased by considerations not connected with the public good” to be an eventuality “more ardently to be wished than seriously to be expected.”295 They were establishing a government for such flesh-and-blood people as they knew about, not such creatures as they might hope to create by their activities.

The opposing policies of the two revolutions — and their very different historical fates — were related to their very different premises about the nature of knowledge and the nature of man. To the men who made the American Revolution and wrote the Constitution, knowledge derived from experience — personal and historical — and not from speculation or rhetorical virtuosity. Their own backgrounds before the Revolution were as men of affairs, personally responsible for economic outcomes, whether commercial or agricultural. By contrast, the French philosophes were denizens of literary salons where style, wit, and rhetoric were crucial296 — men whose whole lives were lived under circumstances in which the only authentication process consisted of impressing readers or listeners. In the modern vernacular, they “never met a payroll” — or a scoreboard, or a laboratory experiment, or a military campaign, or any other authentication process whose empirical results could not be talked away. They were masters of the world of unverified plausibilities.

Man, as he appeared in the writings of the American revolutionaries, was very different from man as he appeared in the writings of the French revolutionaries. In contrast with the “perfectibility of man” in contemporary French thinking, The Federalist speaks of “the constitution of man” as an inherent barrier to objective decision making or administration.297 While the French revolutionaries put their faith in selecting the most dedicated leaders — “the best and the brightest” in modern terms — and entrusting them with vast powers, the Americans argued that the very reason why government existed at all was because “the passions of men will not conform to the dictates of reason and justice” otherwise,298 and that governments, like individuals, have a pride which “naturally disposes them to justify all their actions, and opposes their acknowledging, correcting, or repairing their errors and offenses.”299 Though there were American leaders “tried and justly approved for patriotism and abilities,”300 the future of the country could not be left to depend on such leaders: “Enlightened statesmen will not always be at the helm.”301 Moreover, there are “endless diversities in the opinions of men,”302 so that “latent causes of faction are thus sown in the nature of man,” and mankind has a propensity “to fall into mutual animosities.”303 Men “are ambitious, vindictive, and rapacious.” They have a “love of power or the desire of pre-eminence and dominion.”304 The question facing the founders of the American government was not how to give expression to the ideas of those presumed to be morally or intellectually superior, but how to guard freedom from the inherent weaknesses and destructive characteristics of men in general. Their answer was a series of checks and balances in which ambition would counter ambition and power counter power, with all powers not explicitly granted retained by the people themselves or dispersed among state and local governments. Nor were they prepared to rely on pious hopes in the Constitution — “parchment barriers against the encroaching spirit of power,” as Madison called them305 — but relied instead on so structuring the institutions that they would “be the means of keeping each other in their proper places.”306 Such separation of powers was “essential to the preservation of liberty”307 and the coalescence of powers in any branch was “precisely the definition of despotic government.”308 They did not trust anyone. If freedom was to exist, it had to be systemic rather than intentional, “supplying by opposite and rival interests, the defect of better motives,” and arranging things so that “the private interest of every individual may be a sentinel over the public rights.”309 That all this implied a negative view of man did not stop the writers of the Constitution:

It may be a reflection on human nature that such devices should be necessary to control the abuses of government. But what is government itself but the greatest of all reflections on human nature? If men were angels, no government would be necessary. If angels were to govern men, neither external nor internal controls on government would be necessary. In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.310

Like a judo expert using an opponent’s strength against him, the writers of the Constitution hoped to use the strong, if negative, motivations of man for the purpose of preserving the political benefits of freedom. As a modern writer has observed: “A system built on sin is built on very solid foundations indeed.”311 This is true of both economic and political systems. Neither constitutional democracy nor a market economy relies on decision makers to have superior wisdom or morality. Both put in the hands of the mass of ordinary people the ultimate power to thwart or topple decision makers. Historically, it was — and is — a revolutionary concept, rejecting theories going back thousands of years which insist that what matters is which persons and which doctrines rule, rather than the systemic incentives and constraints that control whoever rules under whatever doctrine. The American Constitution left little room for philosopher-kings or messiahs.

The great vulnerability of the Constitution today is that it is an obstacle in the path of groups that are growing in size, influence, and impatience. The most striking, and perhaps most important, of these are the intellectuals, especially in the politicized “social sciences.” Politicians, once constrained by national (voter) reverence for constitutional guarantees, now operate more freely in an atmosphere where intellectuals make all reverence suspect and make “social justice” imperative. The decline in political party control (“machine politics”) has given the individual politician more scope to be charismatic and entrepreneurial about causes and issues. Politicians ambitious for themselves as individuals and intellectuals ambitious for recognition as a class must discredit existing social processes, alternative decision-making elites, and the accumulated human capital of national experience and tradition which competes with their product, newly minted social salvation. However much they may emphasize the special virtues of their particular schemes, it is unnecessary here to go into them, for the point is that whatever the current specifics, they are certain to be superseded by new specifics in a few years to perform the same political function for the careers of new politicians and intellectuals. The danger to the Constitution is not so much in particular laws as in the general climate of opinion in which law and government are no longer seen as a framework within which individuals make changes incrementally, but as themselves means of making categorical changes directly, according to the preferences of whoever happens to have control of these institutions. One symptom of how far this has gone is that the first peacetime imposition of federal wage and price controls in American history occurred in 1971 under an administration widely regarded as “conservative” — as indeed it was. But that even “liberal” administrations in the past had not dared to do the same thing was one indication of how much the political climate had changed.

The “crisis” orientation of politicians and intellectuals is accepted and amplified by the mass media. Today’s “problems” are news; neither the long-run implications nor the inherent constraints can be photographed by the television camera, or even discussed in the brief minutes between commercials. Moreover, with print and broadcast journalists themselves part of the intellectual class, grounded largely in the so-called “social sciences,” few questions are likely to be raised about the cognitive processes they employ.

The rise of goal-oriented imperatives has meant the undermining or superseding of process-oriented constitutionalism. The imperatives of economic recovery from the Great Depression of the 1930s spawned numerous hybrid agencies combining the very powers which the Constitution had so carefully separated. Military imperatives, beginning in World War II and continuing into the nuclear age, have sanctioned an expansion of presidential powers as commander-in-chief of the armed forces, to the point where they include the de facto power to declare war without Congress, as demonstrated in Vietnam. Finally, moral imperatives concerning the less fortunate segments of society (farmers and industrial workers in the 1930s, blacks in the 1960s, miscellaneous other groups in the 1970s) have expanded the scope of the judiciary beyond anything ever contemplated when the Constitution was written. Along with this has developed a philosophy that it is not merely expedient but legitimate to circumvent the democratic process in the interest of “higher” moral goals — ending the death penalty, integrating the schools, redistributing income, and other forms of “social justice.”

While the new trends in the political climate are easiest to notice, there is no need to extrapolate them as an inevitable “wave of the future.” There are ample signs that the public has had more than enough, and even signs that some of this disenchantment has begun to penetrate the insulation of courts, bureaucracies, and other institutions. The Burger Court is not the Warren Court, though it is hardly the pre-Warren Court either. Deregulation moves by the Civil Aeronautics Board, stronger criminal sentencing laws in various states, and the defeat of school bond issues that were once passed easily are all signs that nothing is inevitable. Whether this particular period is merely a pause in a long march or a time of reassessment for new directions is something that only the future can tell. The point here is not to prophesy but to consider what is at stake, in terms of human freedom.

Historically, freedom is a rare and fragile thing. It has emerged out of the stalemates of would-be oppressors. Freedom has cost the blood of millions in obscure places and in historic sites ranging from Gettysburg to the Gulag Archipelago. A frontal assault on freedom is still impossible in America and in most of Western civilization. Perhaps nowhere in the world is anyone frankly against it, though everywhere there are those prepared to scrap it for other things that shine more brightly for the moment. That something that cost so much in human lives should be surrendered piecemeal in exchange for visions or rhetoric seems grotesque. Freedom is not simply the right of intellectuals to circulate their merchandise. It is, above all, the right of ordinary people to find elbow room for themselves and a refuge from the rampaging presumptions of their “betters.”
