Part 2: WE JUST CAN’T PREDICT

When I ask people to name three recently implemented technologies that most impact our world today, they usually propose the computer, the Internet, and the laser. All three were unplanned, unpredicted, and unappreciated upon their discovery, and remained unappreciated well after their initial use. They were consequential. They were Black Swans. Of course, we have this retrospective illusion of their partaking in some master plan. You can create your own lists with similar results, whether you use political events, wars, or intellectual epidemics.

You would expect our record of prediction to be horrible: the world is far, far more complicated than we think, which is not a problem, except when most of us don’t know it. We tend to “tunnel” while looking into the future, making it business as usual, Black Swan-free, when in fact there is nothing usual about the future. It is not a Platonic category!

We have seen how good we are at narrating backward, at inventing stories that convince us that we understand the past. For many people, knowledge has the remarkable power of producing confidence instead of measurable aptitude. Another problem: the focus on the (inconsequential) regular, the Platonification that makes the forecasting “inside the box”.

I find it scandalous that in spite of the empirical record we continue to project into the future as if we were good at it, using tools and methods that exclude rare events. Prediction is firmly institutionalized in our world. We are suckers for those who help us navigate uncertainty, whether the fortune-teller or the “well-published” (dull) academics or civil servants using phony mathematics.

From Yogi Berra to Henri Poincaré

The great baseball coach Yogi Berra has a saying, “It is tough to make predictions, especially about the future”. While he did not produce the writings that would allow him to be considered a philosopher, in spite of his wisdom and intellectual abilities, Berra can claim to know something about randomness. He was a practitioner of uncertainty, and, as a baseball player and coach, regularly faced random outcomes and had to live with their results deep in his bones.

In fact, Yogi Berra is not the only thinker who thought about how much of the future lies beyond our abilities. Many less popular, less pithy, but not less competent thinkers than he have examined our inherent limitations in this regard, from the philosophers Jacques Hadamard and Henri Poincaré (commonly described as mathematicians), to the philosopher Friedrich von Hayek (commonly described, alas, as an economist), to the philosopher Karl Popper (commonly known as a philosopher). We can safely call this the Berra-Hadamard-Poincaré-Hayek-Popper conjecture, which puts structural, built-in limits to the enterprise of predicting.

“The future ain’t what it used to be”, Berra later said.[29] He seems to have been right: the gains in our ability to model (and predict) the world may be dwarfed by the increases in its complexity – implying a greater and greater role for the unpredicted. The larger the role of the Black Swan, the harder it will be for us to predict. Sorry.

Before going into the limits of prediction, we will discuss our track record in forecasting and the relation between gains in knowledge and the offsetting gains in confidence.

Chapter Ten: THE SCANDAL OF PREDICTION

Welcome to Sydney – How many lovers did she have? – How to be an economist, wear a nice suit, and make friends – Not right, just “almost” right – Shallow rivers can have deep spots

One March evening, a few men and women were standing on the esplanade overlooking the bay outside the Sydney Opera House. It was close to the end of the summer in Sydney, but the men were wearing jackets despite the warm weather. The women were more thermally comfortable than the men, but they had to suffer the impaired mobility of high heels.

They all had come to pay the price of sophistication. Soon they would listen for several hours to a collection of oversize men and women singing endlessly in Russian. Many of the opera-bound people looked like they worked for the local office of J. P. Morgan, or some other financial institution where employees experience differential wealth from the rest of the local population, with concomitant pressures on them to live by a sophisticated script (wine and opera). But I was not there to take a peek at the neosophisticates. I had come to look at the Sydney Opera House, a building that adorns every Australian tourist brochure. Indeed, it is striking, though it looks like the sort of building architects create in order to impress other architects.

That evening walk in the very pleasant part of Sydney called the Rocks was a pilgrimage. While Australians were under the illusion that they had built a monument to distinguish their skyline, what they had really done was to construct a monument to our failure to predict, to plan, and to come to grips with our unknowledge of the future – our systematic underestimation of what the future has in store.

The Australians had actually built a symbol of the epistemic arrogance of the human race. The story is as follows. The Sydney Opera House was supposed to open in early 1963 at a cost of AU$ 7 million. It finally opened its doors more than ten years later, and, although it was a less ambitious version than initially envisioned, it ended up costing around AU$ 104 million. While there are far worse cases of planning failures (namely the Soviet Union), or failures to forecast (all important historical events), the Sydney Opera House provides an aesthetic (at least in principle) illustration of the difficulties. This opera-house story is the mildest of all the distortions we will discuss in this section (it was only money, and it did not cause the spilling of innocent blood). But it is nevertheless emblematic.

This chapter has two topics. First, we are demonstrably arrogant about what we think we know. We certainly know a lot, but we have a built-in tendency to think that we know a little bit more than we actually do, enough of that little bit to occasionally get into serious trouble. We shall see how you can verify, even measure, such arrogance in your own living room.

Second, we will look at the implications of this arrogance for all the activities involving prediction.

Why on earth do we predict so much? Worse, even, and more interesting: Why don’t we talk about our record in predicting? Why don’t we see how we (almost) always miss the big events? I call this the scandal of prediction.

ON THE VAGUENESS OF CATHERINE’S LOVER COUNT

Let us examine what I call epistemic arrogance, literally, our hubris concerning the limits of our knowledge. Epistēmē is a Greek word that refers to knowledge; giving a Greek name to an abstract concept makes it sound important. True, our knowledge does grow, but it is threatened by greater increases in confidence, which make our increase in knowledge at the same time an increase in confusion, ignorance, and conceit.

Take a room full of people. Randomly pick a number. The number could correspond to anything: the proportion of psychopathic stockbrokers in western Ukraine, the sales of this book during the months with r in them, the average IQ of business-book editors (or business writers), the number of lovers of Catherine II of Russia, et cetera. Ask each person in the room to independently estimate a range of possible values for that number, set in such a way that they believe they have a 98 percent chance of being right and a 2 percent chance of being wrong. In other words, whatever they are guessing has about a 2 percent chance to fall outside their range. For example:

“I am 98 percent confident that the population of Rajasthan is between 15 and 23 million”.

“I am 98 percent confident that Catherine II of Russia had between 34 and 63 lovers”.

You can make inferences about human nature by counting how many people in your sample guessed wrong; it is not expected to be too much higher than two out of a hundred participants. Note that the subjects (your victims) are free to set their range as wide as they want: you are not trying to gauge their knowledge but rather their evaluation of their own knowledge.
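For readers who want to run the test themselves, here is a minimal sketch, in Python, of how the answers might be scored; the intervals and the stand-in “true” value below are invented for illustration, not taken from the original experiments.

```python
# A minimal scoring sketch (my illustration, not from the original experiments):
# each participant states a range they believe has a 98 percent chance of
# containing the true value; we count how often the truth falls outside it.

def miss_rate(intervals, true_value):
    """Fraction of stated ranges that missed the true value."""
    misses = sum(1 for low, high in intervals if not (low <= true_value <= high))
    return misses / len(intervals)

# Hypothetical answers to the Catherine II question; 12 is used purely as a
# stand-in "true" value for illustration, not as a historical claim.
answers = [(34, 63), (5, 20), (0, 100), (10, 15), (50, 80)]
print(miss_rate(answers, 12))  # anything far above 0.02 signals epistemic arrogance
```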

Now, the results. Like many things in life, the discovery was unplanned, serendipitous, surprising, and took a while to digest. Legend has it that Alpert and Raiffa, the researchers who noticed it, were actually looking for something quite different, and more boring: how humans figure out probabilities in their decision making when uncertainty is involved (what the learned call calibrating). The researchers came out befuddled. The 2 percent error rate turned out to be close to 45 percent in the population being tested! It is quite telling that the first sample consisted of Harvard Business School students, a breed not particularly renowned for their humility or introspective orientation. MBAs are particularly nasty in this regard, which might explain their business success. Later studies document more humility, or rather a smaller degree of arrogance, in other populations. Janitors and cabdrivers are rather humble. Politicians and corporate executives, alas … I’ll leave them for later.

Are we twenty-two times too comfortable with what we know? It seems so.

This experiment has been replicated dozens of times, across populations, professions, and cultures, and just about every empirical psychologist and decision theorist has tried it on his class to show his students the big problem of humankind: we are simply not wise enough to be trusted with knowledge. The intended 2 percent error rate usually turns out to be between 15 percent and 30 percent, depending on the population and the subject matter.

I have tested myself and, sure enough, failed, even while consciously trying to be humble by carefully setting a wide range – and yet such underestimation happens to be, as we will see, the core of my professional activities. This bias seems present in all cultures, even those that favor humility – there may be no consequential difference between downtown Kuala Lumpur and the ancient settlement of Amioun, (currently) Lebanon. Yesterday afternoon, I gave a workshop in London, and had been mentally writing on my way to the venue because the cabdriver had an above-average ability to “find traffic”. I decided to make a quick experiment during my talk.

I asked the participants to take a stab at a range for the number of books in Umberto Eco’s library, which, as we know from the introduction to Part One, contains 30,000 volumes. Of the sixty attendees, not a single one made the range wide enough to include the actual number (the 2 percent error rate became 100 percent). This case may be an aberration, but the distortion is exacerbated with quantities that are out of the ordinary. Interestingly, the crowd erred on the very high and the very low sides: some set their ranges at 2,000 to 4,000; others at 300,000 to 600,000.

True, someone warned about the nature of the test can play it safe and set the range between zero and infinity; but this would no longer be “calibrating” – that person would not be conveying any information, and could not produce an informed decision in such a manner. In this case it is more honorable to just say, “I don’t want to play the game; I have no clue”.

It is not uncommon to find counterexamples, people who overshoot in the opposite direction and actually overestimate their error rate: you may have a cousin particularly careful in what he says, or you may remember that college biology professor who exhibited pathological humility; the tendency that I am discussing here applies to the average of the population, not to every single individual. There are sufficient variations around the average to warrant occasional counterexamples. Such people are in the minority – and, sadly, since they do not easily achieve prominence, they do not seem to play too influential a role in society.

Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown).

The applications of this distortion extend beyond the mere pursuit of knowledge: just look into the lives of the people around you. Literally any decision pertaining to the future is likely to be infected by it. Our human race is affected by a chronic underestimation of the possibility of the future straying from the course initially envisioned (in addition to other biases that sometimes exert a compounding effect). To take an obvious example, think about how many people divorce. Almost all of them are acquainted with the statistic that between one-third and one-half of all marriages fail, something the parties involved did not forecast while tying the knot. Of course, “not us”, because “we get along so well” (as if others tying the knot got along poorly).

I remind the reader that I am not testing how much people know, but assessing the difference between what people actually know and how much they think they know. I am reminded of a measure my mother concocted, as a joke, when I decided to become a businessman. Being ironic about my (perceived) confidence, though not necessarily unconvinced of my abilities, she found a way for me to make a killing. How? Someone who could figure out how to buy me at the price I am truly worth and sell me at what I think I am worth would be able to pocket a huge difference. Though I keep trying to convince her of my internal humility and insecurity concealed under a confident exterior; though I keep telling her that I am an introspector – she remains skeptical. Introspector shmintrospector, she still jokes at the time of this writing that I am a little ahead of myself.

BLACK SWAN BLINDNESS REDUX

The simple test above suggests the presence of an ingrained tendency in humans to underestimate outliers – or Black Swans. Left to our own devices, we tend to think that what happens every decade in fact only happens once every century, and, furthermore, that we know what’s going on.

This miscalculation problem is a little more subtle. In truth, outliers are not as sensitive to underestimation since they are fragile to estimation errors, which can go in both directions. As we saw in Chapter 6, there are conditions under which people overestimate the unusual or some specific unusual event (say when sensational images come to their minds) – which, we have seen, is how insurance companies thrive. So my general point is that these events are very fragile to miscalculation, with a general severe underestimation mixed with an occasional severe overestimation.

The errors get worse with the degree of remoteness to the event. So far, we have only considered a 2 percent error rate in the game we saw earlier, but if you look at, say, situations where the odds are one in a hundred, one in a thousand, or one in a million, then the errors become monstrous. The longer the odds, the larger the epistemic arrogance.

Note here one particularity of our intuitive judgment: even if we lived in Mediocristan, in which large events are rare, we would still underestimate extremes – we would think that they are even rarer. We underestimate our error rate even with Gaussian variables. Our intuitions are sub-Mediocristani. But we do not live in Mediocristan. The numbers we are likely to estimate on a daily basis belong largely in Extremistan, i.e., they are run by concentration and subjected to Black Swans.

Guessing and Predicting

There is no effective difference between my guessing a variable that is not random, but for which my information is partial or deficient, such as the number of lovers who transited through the bed of Catherine II of Russia, and predicting a random one, like tomorrow’s unemployment rate or next year’s stock market. In this sense, guessing (what I don’t know, but what someone else may know) and predicting (what has not taken place yet) are the same thing.

To further appreciate the connection between guessing and predicting, assume that instead of trying to gauge the number of lovers of Catherine of Russia, you are estimating the less interesting but, for some, more important question of the population growth for the next century, the stock-market returns, the social-security deficit, the price of oil, the results of your great-uncle’s estate sale, or the environmental conditions of Brazil two decades from now. Or, if you are the publisher of Yevgenia Krasnova’s book, you may need to produce an estimate of the possible future sales. We are now getting into dangerous waters: just consider that most professionals who make forecasts are also afflicted with the mental impediment discussed above. Furthermore, people who make forecasts professionally are often more affected by such impediments than those who don’t.

INFORMATION IS BAD FOR KNOWLEDGE

You may wonder how learning, education, and experience affect epistemic arrogance – how educated people might score on the above test, as compared with the rest of the population (using Mikhail the cabdriver as a benchmark). You will be surprised by the answer: it depends on the profession. I will first look at the advantages of the “informed” over the rest of us in the humbling business of prediction.

I recall visiting a friend at a New York investment bank and seeing a frenetic hotshot “master of the universe” type walking around with a set of wireless headphones wrapped around his ears and a microphone jutting out of the right side that prevented me from focusing on his lips during my twenty-second conversation with him. I asked my friend the purpose of that contraption. “He likes to keep in touch with London”, I was told. When you are employed, hence dependent on other people’s judgment, looking busy can help you claim responsibility for the results in a random environment. The appearance of busyness reinforces the perception of causality, of the link between results and one’s role in them. This of course applies even more to the CEOs of large companies who need to trumpet a link between their “presence” and “leadership” and the results of the company. I am not aware of any studies that probe the usefulness of their time being invested in conversations and the absorption of small-time information – nor have too many writers had the guts to question how large the CEO’s role is in a corporation’s success.

Let us discuss one main effect of information: impediment to knowledge.

Aristotle Onassis, perhaps the first mediatized tycoon, was principally famous for being rich – and for exhibiting it. An ethnic Greek refugee from southern Turkey, he went to Argentina, made a lump of cash by importing Turkish tobacco, then became a shipping magnate. He was reviled when he married Jacqueline Kennedy, the widow of the American president John F. Kennedy, which drove the heartbroken opera singer Maria Callas to immure herself in a Paris apartment to await death.

If you study Onassis’s life, which I spent part of my early adulthood doing, you would notice an interesting regularity: “work”, in the conventional sense, was not his thing. He did not even bother to have a desk, let alone an office. He was not just a dealmaker, which does not necessitate having an office, but he also ran a shipping empire, which requires day-to-day monitoring. Yet his main tool was a notebook, which contained all the information he needed. Onassis spent his life trying to socialize with the rich and famous, and to pursue (and collect) women. He generally woke up at noon. If he needed legal advice, he would summon his lawyers to some nightclub in Paris at two A.M. He was said to have an irresistible charm, which helped him take advantage of people.

Let us go beyond the anecdote. There may be a “fooled by randomness” effect here, of making a causal link between Onassis’s success and his modus operandi. I may never know if Onassis was skilled or lucky, though I am convinced that his charm opened doors for him, but I can subject his modus to a rigorous examination by looking at empirical research on the link between information and understanding. So this statement, additional knowledge of the minutiae of daily business can be useless, even actually toxic, is indirectly but quite effectively testable.

Show two groups of people a blurry image of a fire hydrant, blurry enough for them not to recognize what it is. For one group, increase the resolution slowly, in ten steps. For the second, do it faster, in five steps. Stop at a point where both groups have been presented an identical image and ask each of them to identify what they see. The members of the group that saw fewer intermediate steps are likely to recognize the hydrant much faster. Moral? The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be. They see more random noise and mistake it for information.

The problem is that our ideas are sticky: once we produce a theory, we are not likely to change our minds – so those who delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate. Two mechanisms are at play here: the confirmation bias that we saw in Chapter 5, and belief perseverance, the tendency not to reverse opinions you already have. Remember that we treat ideas like possessions, and it will be hard for us to part with them.

The fire hydrant experiment was first done in the sixties, and replicated several times since. I have also studied this effect using the mathematics of information: the more detailed knowledge one gets of empirical reality, the more one will see the noise (i.e., the anecdote) and mistake it for actual information. Remember that we are swayed by the sensational. Listening to the news on the radio every hour is far worse for you than reading a weekly magazine, because the longer interval allows information to be filtered a bit.

In 1965, Stuart Oskamp supplied clinical psychologists with successive files, each containing an increasing amount of information about patients; the psychologists’ diagnostic abilities did not grow with the additional supply of information. They just got more confident in their original diagnosis. Granted, one may not expect too much of psychologists of the 1965 variety, but these findings seem to hold across disciplines.

Finally, in another telling experiment, the psychologist Paul Slovic asked bookmakers to select from eighty-eight variables in past horse races those that they found useful in computing the odds. These variables included all manner of statistical information about past performances. The bookmakers were given the ten most useful variables, then asked to predict the outcome of races. Then they were given ten more and asked to predict again. The increase in the information set did not lead to an increase in their accuracy; their confidence in their choices, on the other hand, went up markedly. Information proved to be toxic. I’ve struggled much of my life with the common middlebrow belief that “more is better” – more is sometimes, but not always, better. This toxicity of knowledge will show in our investigation of the so-called expert.

THE EXPERT PROBLEM, OR THE TRAGEDY OF THE EMPTY SUIT

So far we have not questioned the authority of the professionals involved but rather their ability to gauge the boundaries of their own knowledge. Epistemic arrogance does not preclude skills. A plumber will almost always know more about plumbing than a stubborn essayist and mathematical trader. A hernia surgeon will rarely know less about hernias than a belly dancer. But their probabilities, on the other hand, will be off – and, this is the disturbing point, you may know much more on that score than the expert. No matter what anyone tells you, it is a good idea to question the error rate of an expert’s procedure. Do not question his procedure, only his confidence. (As someone who was burned by the medical establishment, I learned to be cautious, and I urge everyone to be: if you walk into a doctor’s office with a symptom, do not listen to his odds of its not being cancer.)

I will separate the two cases as follows. The mild case: arrogance in the presence of (some) competence, and the severe case: arrogance mixed with incompetence (the empty suit). There are some professions in which you know more than the experts, who are, alas, people for whose opinions you are paying – instead of them paying you to listen to them. Which ones?

What Moves and What Does Not Move

There is a very rich literature on the so-called expert problem, running empirical testing on experts to verify their record. But it seems to be confusing at first. On one hand, we are shown by a class of expert-busting researchers such as Paul Meehl and Robyn Dawes that the “expert” is the closest thing to a fraud, performing no better than a computer using a single metric, their intuition getting in the way and blinding them. (As an example of a computer using a single metric, the ratio of liquid assets to debt fares better than the majority of credit analysts.) On the other hand, there is abundant literature showing that many people can beat computers thanks to their intuition. Which one is correct?

There must be some disciplines with true experts. Let us ask the following questions: Would you rather have your upcoming brain surgery performed by a newspaper’s science reporter or by a certified brain surgeon? On the other hand, would you prefer to listen to an economic forecast by someone with a PhD in finance from some “prominent” institution such as the Wharton School, or by a newspaper’s business writer? While the answer to the first question is empirically obvious, the answer to the second one isn’t at all. We can already see the difference between “know-how” and “know-what”. The Greeks made a distinction between technē and epistēmē. The empirical school of medicine of Menodotus of Nicomedia and Heraclides of Tarentum wanted its practitioners to stay closest to technē (i.e., “craft”), and away from epistēmē (i.e., “knowledge”, “science”).

The psychologist James Shanteau undertook the task of finding out which disciplines have experts and which have none. Note the confirmation problem here: if you want to prove that there are no experts, then you will be able to find a profession in which experts are useless. And you can prove the opposite just as well. But there is a regularity: there are professions where experts play a role, and others where there is no evidence of skills. Which are which?

Experts who tend to be experts: livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, mathematicians (when they deal with mathematical problems, not empirical ones), accountants, grain inspectors, photo interpreters, insurance analysts (dealing with bell curve-style statistics).

Experts who tend to be . . . not experts: stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, counselors, personnel selectors, intelligence analysts (the CIA’s record, in spite of its costs, is pitiful). I would add these results from my own examination of the literature: economists, financial forecasters, finance professors, political scientists, “risk experts”, Bank for International Settlements staff, august members of the International Association of Financial Engineers, and personal financial advisers.

Simply, things that move, and therefore require knowledge, do not usually have experts, while things that don’t move seem to have some experts. In other words, professions that deal with the future and base their studies on the nonrepeatable past have an expert problem (with the exception of the weather and businesses involving short-term physical processes, not socioeconomic ones). I am not saying that no one who deals with the future provides any valuable information (as I pointed out earlier, newspapers can predict theater opening hours rather well), but rather that those who provide no tangible added value are generally dealing with the future.

Another way to see it is that things that move are often Black Swan-prone. Experts are narrowly focused persons who need to “tunnel”. In situations where tunneling is safe, because Black Swans are not consequential, the expert will do well.

Robert Trivers, an evolutionary psychologist and a man of supernormal insights, has another answer (he became one of the most influential evolutionary thinkers since Darwin with ideas he developed while trying to go to law school). He links it to self-deception. In fields where we have ancestral traditions, such as pillaging, we are very good at predicting outcomes by gauging the balance of power. Humans and chimps can immediately sense which side has the upper hand, and make a cost-benefit analysis about whether to attack and take the goods and the mates. Once you start raiding, you put yourself into a delusional mind-set that makes you ignore additional information – it is best to avoid wavering during battle. On the other hand, unlike raids, large-scale wars are not something present in human heritage – we are new to them – so we tend to misestimate their duration and overestimate our relative power. Recall the underestimation of the duration of the Lebanese war. Those who fought in the Great War thought it would be a mere cakewalk. So it was with the Vietnam conflict, so it is with the Iraq war, and just about every modern conflict.

You cannot ignore self-delusion. The problem with experts is that they do not know what they do not know. Lack of knowledge and delusion about the quality of your knowledge come together – the same process that makes you know less also makes you satisfied with your knowledge.

Next, instead of the range of forecasts, we will concern ourselves with the accuracy of forecasts, i.e., the ability to predict the number itself.

How to Have the Last Laugh

We can also learn about prediction errors from trading activities. We quants have ample data about economic and financial forecasts – from general data about large economic variables to the forecasts and market calls of the television “experts” or “authorities”. The abundance of such data and the ability to process it on a computer make the subject invaluable for an empiricist. If I had been a journalist, or, God forbid, a historian, I would have had a far more difficult time testing the predictive effectiveness of these verbal discussions. You cannot process verbal commentaries with a computer – at least not so easily. Furthermore, many economists naïvely make the mistake of producing a lot of forecasts concerning many variables, giving us a database of economists and variables, which enables us to see whether some economists are better than others (there is no consequential difference) or if there are certain variables for which they are more competent (alas, none that are meaningful).

I was in a seat to observe from very close our ability to predict. In my full-time trader days, a couple of times a week, at 8:30 A.M., my screen would flash some economic number released by the Department of Commerce, or Treasury, or Trade, or some such honorable institution. I never had a clue about what these numbers meant and never saw any need to invest energy in finding out. So I would not have cared the least about them except that people got all excited and talked quite a bit about what these figures were going to mean, pouring verbal sauce around the forecasts. Among such numbers you have the Consumer Price Index (CPI), Nonfarm Payrolls (changes in the number of employed individuals), the Index of Leading Economic Indicators, Sales of Durable Goods (dubbed “doable girls” by traders), the Gross Domestic Product (the most important one), and many more that generate different levels of excitement depending on their presence in the discourse.

The data vendors allow you to take a peek at forecasts by “leading economists”, people (in suits) who work for the venerable institutions, such as J. P. Morgan Chase or Morgan Stanley. You can watch these economists talk, theorizing eloquently and convincingly. Most of them earn seven figures and they rank as stars, with teams of researchers crunching numbers and projections. But the stars are foolish enough to publish their projected numbers, right there, for posterity to observe and assess their degree of competence.

Worse yet, many financial institutions produce booklets every year-end called “Outlook for 200X”, reading into the following year. Of course they do not check how their previous forecasts fared after they were formulated. The public might have been even more foolish in buying the arguments without requiring the following simple tests – easy though they are, very few of them have been done. One elementary empirical test is to compare these star economists to a hypothetical cabdriver (the equivalent of Mikhail from Chapter 1): you create a synthetic agent, someone who takes the most recent number as the best predictor of the next, while assuming that he does not know anything. Then all you have to do is compare the error rates of the hotshot economists and your synthetic agent. The problem is that when you are swayed by stories you forget about the necessity of such testing.
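Here is a minimal sketch of that elementary test, with invented numbers standing in for the published forecasts and the released figures; the point is only the comparison of error rates, not the data.

```python
# A sketch of the "synthetic cabdriver" test (numbers invented for illustration):
# the naive agent simply repeats the most recent released figure as his forecast.

def mean_abs_error(forecasts, actuals):
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

actuals = [2.1, 2.4, 1.9, 2.8, 3.0, 2.2]    # hypothetical quarterly releases
expert  = [2.3, 2.2, 2.5, 2.4, 2.6, 2.9]    # hypothetical star-economist forecasts
naive   = [None] + actuals[:-1]             # last period's number, one step behind

print("expert:", mean_abs_error(expert[1:], actuals[1:]))
print("naive :", mean_abs_error(naive[1:], actuals[1:]))
# If the expert's error is not clearly smaller, the forecasts add no information.
```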

Events Are Outlandish

The problem with prediction is a little more subtle. It comes mainly from the fact that we are living in Extremistan, not Mediocristan. Our predictors may be good at predicting the ordinary, but not the irregular, and this is where they ultimately fail. All you need to do is miss one interest-rate move, from 6 percent to 1 percent in a longer-term projection (what happened between 2000 and 2001) to have all your subsequent forecasts rendered completely ineffectual in correcting your cumulative track record. What matters is not how often you are right, but how large your cumulative errors are.

And these cumulative errors depend largely on the big surprises, the big opportunities. Not only do economic, financial, and political predictors miss them, but they are quite ashamed to say anything outlandish to their clients – and yet events, it turns out, are almost always outlandish. Furthermore, as we will see in the next section, economic forecasters tend to fall closer to one another than to the resulting outcome. Nobody wants to be off the wall.

Since my testing has been informal, for commercial and entertainment purposes, for my own consumption and not formatted for publishing, I will use the more formal results of other researchers who did the dog work of dealing with the tedium of the publishing process. I am surprised that so little introspection has been done to check on the usefulness of these professions. There are a few – but not many – formal tests in three domains: security analysis, political science, and economics. We will no doubt have more in a few years. Or perhaps not – the authors of such papers might be stigmatized by their colleagues. Out of close to a million papers published in politics, finance, and economics, there have been only a small number of checks on the predictive quality of such knowledge.

Herding Like Cattle

A few researchers have examined the work and attitude of security analysts, with amazing results, particularly when one considers the epistemic arrogance of these operators. In a study comparing them with weather forecasters, Tadeusz Tyszka and Piotr Zielonka document that the analysts are worse at predicting, while having a greater faith in their own skills. Somehow, the analysts’ self-evaluation did not decrease their error margin after their failures to forecast.

Last June I bemoaned the dearth of such published studies to Jean-Philippe Bouchaud, whom I was visiting in Paris. He is a boyish man who looks half my age though he is only slightly younger than I, a matter that I half jokingly attribute to the beauty of physics. Actually he is not exactly a physicist but one of those quantitative scientists who apply methods of statistical physics to economic variables, a field that was started by Benoît Mandelbrot in the late 1950s. This community does not use Mediocristan mathematics, so they seem to care about the truth. They are completely outside the economics and business-school finance establishment, and survive in physics and mathematics departments or, very often, in trading houses (traders rarely hire economists for their own consumption, but rather to provide stories for their less sophisticated clients). Some of them also operate in sociology with the same hostility on the part of the “natives”. Unlike economists who wear suits and spin theories, they use empirical methods to observe the data and do not use the bell curve.

He surprised me with a research paper that a summer intern had just finished under his supervision and that had just been accepted for publication; it scrutinized two thousand predictions by security analysts. What it showed was that these brokerage-house analysts predicted nothing – a naïve forecast made by someone who takes the figures from one period as predictors of the next would not do markedly worse. Yet analysts are informed about companies’ orders, forthcoming contracts, and planned expenditures, so this advanced knowledge should help them do considerably better than a naïve forecaster looking at the past data without further information. Worse yet, the forecasters’ errors were significantly larger than the average difference between individual forecasts, which indicates herding. Normally, forecasts should be as far from one another as they are from the predicted number. But to understand how they manage to stay in business, and why they don’t develop severe nervous breakdowns (with weight loss, erratic behavior, or acute alcoholism), we must look at the work of the psychologist Philip Tetlock.
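To make the herding diagnostic concrete, here is a small sketch with invented figures: it compares how far the analysts sit from their own consensus with how far they sit from the number that actually came out.

```python
# Herding check (figures invented for illustration): forecasts that cluster
# tightly around a consensus yet land far from the realized number suggest
# that analysts are copying one another rather than predicting.

def mean(values):
    return sum(values) / len(values)

def herding_check(forecasts, actual):
    consensus = mean(forecasts)
    dispersion = mean([abs(f - consensus) for f in forecasts])  # spread among analysts
    error = mean([abs(f - actual) for f in forecasts])          # distance from reality
    return dispersion, error

dispersion, error = herding_check([1.02, 1.05, 1.04, 1.07, 1.06], actual=0.60)
print(dispersion, error)  # error far larger than dispersion: the herd missed together
```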

I Was “Almost” Right

Tetlock studied the business of political and economic “experts”. He asked various specialists to judge the likelihood of a number of political, economic, and military events occurring within a specified time frame (about five years ahead). The outcomes represented a total number of around twenty-seven thousand predictions, involving close to three hundred specialists. Economists represented about a quarter of his sample. The study revealed that experts’ error rates were clearly many times what they had estimated. His study exposed an expert problem: there was no difference in results whether one had a PhD or an undergraduate degree. Well-published professors had no advantage over journalists. The only regularity Tetlock found was the negative effect of reputation on prediction: those who had a big reputation were worse predictors than those who had none.

But Tetlock’s focus was not so much to show the real competence of experts (although the study was quite convincing with respect to that) as to investigate why the experts did not realize that they were not so good at their own business, in other words, how they spun their stories. There seemed to be a logic to such incompetence, mostly in the form of belief defense, or the protection of self-esteem. He therefore dug further into the mechanisms by which his subjects generated ex post explanations.

I will leave aside how one’s ideological commitments influence one’s perception and address the more general aspects of this blind spot toward one’s own predictions.

You tell yourself that you were playing a different game. Let’s say you failed to predict the weakening and precipitous fall of the Soviet Union (which no social scientist saw coming). It is easy to claim that you were excellent at understanding the political workings of the Soviet Union, but that these Russians, being exceedingly Russian, were skilled at hiding from you crucial economic elements. Had you been in possession of such economic intelligence, you would certainly have been able to predict the demise of the Soviet regime. It is not your skills that are to blame. The same might apply to you if you had forecast the landslide victory for Al Gore over George W. Bush. You were not aware that the economy was in such dire straits; indeed, this fact seemed to be concealed from everyone. Hey, you are not an economist, and the game turned out to be about economics.

You invoke the outlier. Something happened that was outside the system, outside the scope of your science. Given that it was not predictable, you are not to blame. It was a Black Swan and you are not supposed to predict Black Swans. Black Swans, NNT tells us, are fundamentally unpredictable (but then I think that NNT would ask you, Why rely on predictions?). Such events are “exogenous”, coming from outside your science. Or maybe it was an event of very, very low probability, a thousand-year flood, and we were unlucky to be exposed to it. But next time, it will not happen. This focus on the narrow game and linking one’s performance to a given script is how the nerds explain the failures of mathematical methods in society. The model was right, it worked well, but the game turned out to be a different one than anticipated.

The “almost right” defense. Retrospectively, with the benefit of a revision of values and an informational framework, it is easy to feel that it was a close call. Tetlock writes, “Observers of the former Soviet Union who, in 1988, thought the Communist Party could not be driven from power by 1993 or 1998 were especially likely to believe that Kremlin hardliners almost overthrew Gorbachev in the 1991 coup attempt, and they would have if the conspirators had been more resolute and less inebriated, or if key military officers had obeyed orders to kill civilians challenging martial law or if Yeltsin had not acted so bravely”.

I will go now into more general defects uncovered by this example. These “experts” were lopsided: on the occasions when they were right, they attributed it to their own depth of understanding and expertise; when wrong, it was either the situation that was to blame, since it was unusual, or, worse, they did not recognize that they were wrong and spun stories around it. They found it difficult to accept that their grasp was a little short. But this attribute is universal to all our activities: there is something in us designed to protect our self-esteem.

We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living. Ninety-four percent of Swedes believe that their driving skills put them in the top 50 percent of Swedish drivers; 84 percent of Frenchmen feel that their lovemaking abilities put them in the top half of French lovers.

The other effect of this asymmetry is that we feel a little unique, unlike others, for whom we do not perceive such an asymmetry. I have mentioned the unrealistic expectations about the future on the part of people in the process of tying the knot. Also consider the number of families who tunnel on their future, locking themselves into hard-to-flip real estate thinking they are going to live there permanently, not realizing that the general track record for sedentary living is dire. Don’t they see those well-dressed real-estate agents driving around in fancy two-door German cars? We are very nomadic, far more than we plan to be, and forcibly so. Consider how many people who have abruptly lost their job deemed it likely to occur, even a few days before. Or consider how many drug addicts entered the game willing to stay in it so long.

There is another lesson from Tetlock’s experiment. He found what I mentioned earlier, that many university stars, or “contributors to top journals”, are no better than the average New York Times reader or journalist in detecting changes in the world around them. These sometimes overspecialized experts failed tests in their own specialties.

The hedgehog and the fox. Tetlock distinguishes between two types of predictors, the hedgehog and the fox, according to a distinction promoted by the essayist Isaiah Berlin. As in the old Greek line Berlin borrowed from the poet Archilochus, the hedgehog knows one thing, the fox knows many things – these are the adaptable types you need in daily life. Many of the prediction failures come from hedgehogs who are mentally married to a single big Black Swan event, a big bet that is not likely to play out. The hedgehog is someone focusing on a single, improbable, and consequential event, falling for the narrative fallacy that makes us so blinded by one single outcome that we cannot imagine others.

Hedgehogs, because of the narrative fallacy, are easier for us to understand – their ideas work in sound bites. Their category is overrepresented among famous people; ergo famous people are on average worse at forecasting than the rest of the predictors.

I have avoided the press for a long time because whenever journalists hear my Black Swan story, they ask me to give them a list of future impacting events. They want me to be predictive of these Black Swans. Strangely, my book Fooled by Randomness, published a week before September 11, 2001, had a discussion of the possibility of a plane crashing into my office building. So I was naturally asked to show “how I predicted the event”. I didn’t predict it – it was a chance occurrence. I am not playing oracle! I even recently got an e-mail asking me to list the next ten Black Swans. Most fail to get my point about the error of specificity, the narrative fallacy, and the idea of prediction. Contrary to what people might expect, I am not recommending that anyone become a hedgehog – rather, be a fox with an open mind. I know that history is going to be dominated by an improbable event, I just don’t know what that event will be.

Reality? What For?

I found no formal, Tetlock-like comprehensive study in economics journals. But, suspiciously, I found no paper trumpeting economists’ ability to produce reliable projections. So I reviewed what articles and working papers in economics I could find. They collectively show no convincing evidence that economists as a community have an ability to predict, and, if they have some ability, their predictions are at best just slightly better than random ones – not good enough to help with serious decisions.

The most interesting test of how academic methods fare in the real world was run by Spyros Makridakis, who spent part of his career managing competitions between forecasters who practice a “scientific method” called econometrics – an approach that combines economic theory with statistical measurements. Simply put, he made people forecast in real life and then he judged their accuracy. This led to the series of “M-Competitions” he ran, with assistance from Michele Hibon, of which M3 was the third and most recent one, completed in 1999. Makridakis and Hibon reached the sad conclusion that “statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones”.

I had an identical experience in my quant days – the foreign scientist with the throaty accent spending his nights on a computer doing complicated mathematics rarely fares better than a cabdriver using the simplest methods within his reach. The problem is that we focus on the rare occasion when these methods work and almost never on their far more numerous failures. I kept begging anyone who would listen to me: “Hey, I am an uncomplicated, no-nonsense fellow from Amioun, Lebanon, and have trouble understanding why something is considered valuable if it requires running computers overnight but does not enable me to predict better than any other guy from Amioun”. The only reactions I got from these colleagues were related to the geography and history of Amioun rather than a no-nonsense explanation of their business. Here again, you see the narrative fallacy at work, except that in place of journalistic stories you have the more dire situation of the “scientist” with a Russian accent looking in the rearview mirror, narrating with equations, and refusing to look ahead because he may get too dizzy. The econometrician Robert Engle, an otherwise charming gentleman, invented a very complicated statistical method called ARCH (later generalized as GARCH) and got a Nobel for it. No one tested it to see if it has any validity in real life. Simpler, less sexy methods fare considerably better, but they do not take you to Stockholm. You have an expert problem in Stockholm, and I will discuss it in Chapter 17.

This unfitness of complicated methods seems to apply to all methods. Another study effectively tested practitioners of something called game theory, in which the most notorious player is John Nash, the schizophrenic mathematician made famous by the film A Beautiful Mind. Sadly, for all the intellectual appeal of these methods and all the media attention, their practitioners are no better at predicting than university students.

There is another problem, and it is a little more worrisome. Makridakis and Hibon were to find out that the strong empirical evidence of their studies has been ignored by theoretical statisticians. Furthermore, they encountered shocking hostility toward their empirical verifications. “Instead [statisticians] have concentrated their efforts in building more sophisticated models without regard to the ability of such models to more accurately predict real-life data”, Makridakis and Hibon write.

Someone may counter with the following argument: Perhaps economists’ forecasts create feedback that cancels their effect (this is called the Lucas critique, after the economist Robert Lucas). Let’s say economists predict inflation; in response to these expectations the Federal Reserve acts and lowers inflation. So you cannot judge the forecast accuracy in economics as you would with other events. I agree with this point, but I do not believe that it is the cause of the economists’ failure to predict. The world is far too complicated for their discipline.

When an economist fails to predict outliers he often invokes the issue of earthquakes or revolutions, claiming that he is not into geodesy, atmospheric sciences, or political science, instead of incorporating these fields into his studies and accepting that his field does not exist in isolation. Economics is the most insular of fields; it is the one that quotes least from outside itself! Economics is perhaps the subject that currently has the highest number of philistine scholars – scholarship without erudition and natural curiosity can close your mind and lead to the fragmentation of disciplines.

“OTHER THAN THAT”, IT WAS OKAY

We have used the story of the Sydney Opera House as a springboard for our discussion of prediction. We will now address another constant in human nature: a systematic error made by project planners, coming from a mixture of human nature, the complexity of the world, or the structure of organizations. In order to survive, institutions may need to give themselves and others the appearance of having a “vision”.

Plans fail because of what we have called tunneling, the neglect of sources of uncertainty outside the plan itself.

The typical scenario is as follows. Joe, a nonfiction writer, gets a book contract with a set final date for delivery two years from now. The topic is relatively easy: the authorized biography of the writer Salman Rushdie, for which Joe has compiled ample data. He has even tracked down Rushdie’s former girlfriends and is thrilled at the prospect of pleasant interviews. Two years later, minus, say, three months, he calls to explain to the publisher that he will be a little delayed. The publisher has seen this coming; he is used to authors being late. The publishing house now has cold feet: the firm had projected that interest in Rushdie would remain high, but public attention has unexpectedly faded, seemingly because the Iranians, for some reason, lost interest in killing him.

Let’s look at the source of the biographer’s underestimation of the time for completion. He projected his own schedule, but he tunneled, as he did not forecast that some “external” events would emerge to slow him down. Among these external events were the disasters on September 11, 2001, which set him back several months; trips to Minnesota to assist his ailing mother (who eventually recovered); and many more, like a broken engagement (though not with Rushdie’s ex-girlfriend). “Other than that”, it was all within his plan; his own work did not stray the least from schedule. He does not feel responsible for his failure.[30]

The unexpected has a one-sided effect with projects. Consider the track records of builders, paper writers, and contractors. The unexpected almost always pushes in a single direction: higher costs and a longer time to completion. On very rare occasions, as with the Empire State Building, you get the opposite: shorter completion and lower costs – these occasions are truly exceptional.

We can run experiments and test for repeatability to verify if such errors in projection are part of human nature. Researchers have tested how students estimate the time needed to complete their projects. In one representative test, they broke a group into two varieties, optimistic and pessimistic. Optimistic students promised twenty-six days; the pessimistic ones forty-seven days. The average actual time to completion turned out to be fifty-six days.

The example of Joe the writer is not acute. I selected it because it concerns a repeatable, routine task – for such tasks our planning errors are milder. With projects of great novelty, such as a military invasion, an all-out war, or something entirely new, errors explode upward. In fact, the more routine the task, the better you learn to forecast. But there is always something nonroutine in our modern environment.

There may be incentives for people to promise shorter completion dates – in order to win the book contract or in order for the builder to get your down payment and use it for his upcoming trip to Antigua. But the planning problem exists even where there is no incentive to underestimate the duration (or the costs) of the task. As I said earlier, we are too narrow-minded a species to consider the possibility of events straying from our mental projections, but furthermore, we are too focused on matters internal to the project to take into account external uncertainty, the “unknown unknown”, so to speak, the contents of the unread books.

There is also the nerd effect, which stems from the mental elimination of off-model risks, or focusing on what you know. You view the world from within a model. Consider that most delays and cost overruns arise from unexpected elements that did not enter into the plan – that is, they lay outside the model at hand – such as strikes, electricity shortages, accidents, bad weather, or rumors of Martian invasions. These small Black Swans that threaten to hamper our projects do not seem to be taken into account. They are too abstract – we don’t know how they look and cannot talk about them intelligently.

We cannot truly plan, because we do not understand the future – but this is not necessarily bad news. We could plan while bearing in mind such limitations. It just takes guts.

The Beauty of Technology: Excel Spreadsheets

In the not too distant past, say the precomputer days, projections remained vague and qualitative, one had to make a mental effort to keep track of them, and it was a strain to push scenarios into the future. It took pencils, erasers, reams of paper, and huge wastebaskets to engage in the activity. Add to that an accountant’s love for tedious, slow work. The activity of projecting, in short, was effortful, undesirable, and marred with self-doubt.

But things changed with the intrusion of the spreadsheet. When you put an Excel spreadsheet into computer-literate hands you get a “sales projection” effortlessly extending ad infinitum! Once on a page or on a computer screen, or, worse, in a PowerPoint presentation, the projection takes on a life of its own, losing its vagueness and abstraction and becoming what philosophers call reified, invested with concreteness; it takes on a new life as a tangible object.

My friend Brian Hinchcliffe suggested the following idea when we were both sweating at the local gym. Perhaps the ease with which one can project into the future by dragging cells in these spreadsheet programs is responsible for the armies of forecasters confidently producing longer-term forecasts (all the while tunneling on their assumptions). We have become worse planners than the Soviet Russians thanks to these potent computer programs given to those who are incapable of handling their knowledge. Like most commodity traders, Brian is a man of incisive and sometimes brutally painful realism.

A classical mental mechanism, called anchoring, seems to be at work here. You lower your anxiety about uncertainty by producing a number, then you “anchor” on it, like an object to hold on to in the middle of a vacuum. This anchoring mechanism was discovered by the fathers of the psychology of uncertainty, Danny Kahneman and Amos Tversky, early in their heuristics and biases project. It operates as follows. Kahneman and Tversky had their subjects spin a wheel of fortune. The subjects first looked at the number on the wheel, which they knew was random, then they were asked to estimate the number of African countries in the United Nations. Those who had a low number on the wheel estimated a low number of African nations; those with a high number produced a higher estimate.

Similarly, ask someone to provide you with the last four digits of his social security number. Then ask him to estimate the number of dentists in Manhattan. You will find that by making him aware of the four-digit number, you elicit an estimate that is correlated with it.

We use reference points in our heads, say sales projections, and start building beliefs around them because less mental effort is needed to compare an idea to a reference point than to evaluate it in the absolute (System 1 at work!). We cannot work without a point of reference.

So the introduction of a reference point in the forecaster’s mind will work wonders. This is no different from a starting point in a bargaining episode: you open with a high number (“I want a million for this house”); the bidder will answer “only eight-fifty” – the discussion will be determined by that initial level.

The Character of Prediction Errors

Like many biological variables, life expectancy is from Mediocristan, that is, it is subject to mild randomness. It is not scalable, since the older we get, the less likely we are to live. In a developed country a newborn female is expected to die at around 79, according to insurance tables. When she reaches her 79th birthday, her life expectancy, assuming that she is in typical health, is another 10 years. At the age of 90, she should have another 4.7 years to go. At the age of 100, 2.5 years. At the age of 119, if she miraculously lives that long, she should have about nine months left. As she lives beyond the expected date of death, the number of additional years to go decreases. This illustrates the major property of random variables related to the bell curve: the conditional expectation of additional life drops as a person gets older.

With human projects and ventures we have another story. These are often scalable, as I said in Chapter 3. With scalable variables, the ones from Extremistan, you will witness the exact opposite effect. Let’s say a project is expected to terminate in 79 days, the same expectation in days as the newborn female has in years. On the 79th day, if the project is not finished, it will be expected to take another 25 days to complete. But on the 90th day, if the project is still not completed, it should have about 58 days to go. On the 100th, it should have 89 days to go. On the 119th, it should have an extra 149 days. On day 600, if the project is not done, you will be expected to need an extra 1,590 days. As you see, the longer you wait, the longer you will be expected to wait.
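
The contrast can be made concrete with a minimal simulation – my own illustration, not the author’s calculation. It draws durations from a thin-tailed (Mediocristan-like) distribution and from a fat-tailed Pareto (Extremistan-like) distribution, both with made-up parameters, and computes the expected remaining time conditional on having already waited past t:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Mediocristan-like durations: thin-tailed, centered near 79 (illustrative numbers)
medio = rng.normal(loc=79, scale=10, size=n)
medio = medio[medio > 0]

# Extremistan-like durations: fat-tailed Pareto with minimum 79 (illustrative numbers)
extremo = 79 * (1 + rng.pareto(1.5, size=n))

def expected_remaining(samples, t):
    """E[X - t | X > t]: expected additional wait, given survival past t."""
    survivors = samples[samples > t]
    return survivors.mean() - t

for t in (79, 90, 100):
    print(f"waited {t:3d}:  thin-tailed remaining ~ {expected_remaining(medio, t):5.1f}"
          f"   fat-tailed remaining ~ {expected_remaining(extremo, t):7.1f}")
```

Under these assumed parameters the thin-tailed remaining time shrinks as the wait lengthens, while the Pareto remaining time keeps growing – the qualitative pattern described above; the exact figures quoted in the text depend on the actual distributions, which this sketch does not try to match.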

Let’s say you are a refugee waiting for the return to your homeland. Each day that passes you are getting farther from, not closer to, the day of triumphal return. The same applies to the completion date of your next opera house. If it was expected to take two years, and three years later you are asking questions, do not expect the project to be completed any time soon. If wars last on average six months, and your conflict has been going on for two years, expect another few years of problems. The Arab-Israeli conflict is sixty years old, and counting – yet it was considered “a simple problem” sixty years ago. (Always remember that, in a modern environment, wars last longer and kill more people than is typically planned.) Another example: Say that you send your favorite author a letter, knowing that he is busy and has a two-week turnaround. If three weeks later your mailbox is still empty, do not expect the letter to come tomorrow – it will take on average another three weeks. If three months later you still have nothing, you will have to expect to wait another year. Each day will bring you closer to your death but further from the receipt of the letter.

This subtle but extremely consequential property of scalable randomness is unusually counterintuitive. We misunderstand the logic of large deviations from the norm.

I will get deeper into these properties of scalable randomness in Part Three. But let us say for now that they are central to our misunderstanding of the business of prediction.

DON’T CROSS A RIVER IF IT IS (ON AVERAGE) FOUR FEET DEEP

Corporate and government projections have an additional easy-to-spot flaw: they do not attach a possible error rate to their scenarios. Even in the absence of Black Swans this omission would be a mistake.

I once gave a talk to policy wonks at the Woodrow Wilson Center in Washington, D.C., challenging them to be aware of our weaknesses in seeing ahead.

The attendees were tame and silent. What I was telling them was against everything they believed and stood for; I had gotten carried away with my aggressive message, but they looked thoughtful, compared to the testosterone-charged characters one encounters in business. I felt guilty for my aggressive stance. Few asked questions. The person who organized the talk and invited me must have been pulling a joke on his colleagues. I was like an aggressive atheist making his case in front of a synod of cardinals, while dispensing with the usual formulaic euphemisms.

Yet some members of the audience were sympathetic to the message. One anonymous person (he is employed by a governmental agency) explained to me privately after the talk that in January 2004 his department was forecasting the price of oil for twenty-five years later at $27 a barrel, slightly higher than what it was at the time. Six months later, around June 2004, after oil doubled in price, they had to revise their estimate to $54 (the price of oil is currently, as I am writing these lines, close to $79 a barrel). It did not dawn on them that it was ludicrous to forecast a second time, given that their first forecast was off so early and so markedly, or that this business of forecasting had to be somehow questioned. And they were looking twenty-five years ahead! Nor did it hit them that there was something called an error rate to take into account.[31]

Forecasting without incorporating an error rate uncovers three fallacies, all arising from the same misconception about the nature of uncertainty.

The first fallacy: variability matters. The first error lies in taking a projection too seriously, without heeding its accuracy. Yet, for planning purposes, the accuracy of your forecast matters far more than the forecast itself. I will explain it as follows.

Don’t cross a river if it is four feet deep on average. You would take a different set of clothes on your trip to some remote destination if I told you that the temperature was expected to be seventy degrees Fahrenheit, with an expected error rate of forty degrees, than if I told you that my margin of error was only five degrees. The policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number. I have seen, while working for a bank, how people project cash flows for companies without wrapping them in the thinnest layer of uncertainty. Go to the stockbroker and check on what method they use to forecast sales ten years ahead to “calibrate” their valuation models. Go find out how analysts forecast government deficits. Go to a bank or security-analysis training program and see how they teach trainees to make assumptions; they do not teach you to build an error rate around those assumptions – but their error rate is so large that it is far more significant than the projection itself!
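
A tiny sketch can show why the error rate dominates the decision. Purely for illustration, it treats the two stated error rates as standard deviations of a normally distributed forecast error (an assumption of mine, not something stated above) and asks how likely freezing weather is under each forecast:

```python
from statistics import NormalDist

forecast_mean = 70   # expected temperature, degrees Fahrenheit
freezing = 32

for error_rate in (40, 5):   # treated here as standard deviations (my assumption)
    p_freeze = NormalDist(mu=forecast_mean, sigma=error_rate).cdf(freezing)
    print(f"error rate {error_rate:2d} degrees -> chance of freezing weather: {p_freeze:.1%}")
```

Same expected temperature, very different suitcase: with the wide error rate there is roughly a one-in-six chance of freezing weather; with the narrow one, essentially none.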

The second fallacy lies in failing to take into account forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between near and far futures. Yet the degradation in such forecasting through time becomes evident through simple introspective examination – without even recourse to scientific papers, which on this topic are suspiciously rare. Consider forecasts, whether economic or technological, made in 1905 for the following quarter of a century. How close to the projections did 1925 turn out to be? For a convincing experience, go read George Orwell’s 1984. Or look at more recent forecasts made in 1975 about the prospects for the new millennium. Many events have taken place and new technologies have appeared that lay outside the forecasters’ imaginations; many more that were expected to take place or appear did not do so. Our forecast errors have traditionally been enormous, and there may be no reason for us to believe that we are suddenly in a more privileged position to see into the future compared to our blind predecessors. Forecasting by bureaucrats tends to be used for anxiety relief rather than for adequate policy making.

The third fallacy, and perhaps the gravest, concerns a misunderstanding of the random character of the variables being forecast. Owing to the Black Swan, these variables can accommodate far more optimistic – or far more pessimistic – scenarios than are currently expected. Recall from my experiment with Dan Goldstein testing the domain-specificity of our intuitions, how we tend to make no mistakes in Mediocristan, but make large ones in Extremistan as we do not realize the consequences of the rare event.

What is the implication here? Even if you agree with a given forecast, you have to worry about the real possibility of significant divergence from it. These divergences may be welcomed by a speculator who does not depend on steady income; a retiree, however, with set risk attributes cannot afford such gyrations. I would go even further and, using the argument about the depth of the river, state that it is the lower bound of estimates (i.e., the worst case) that matters when engaging in a policy – the worst case is far more consequential than the forecast itself. This is particularly true if the bad scenario is not acceptable. Yet the current phraseology makes no allowance for that. None.

It is often said that “he is wise who can see things coming”. Perhaps the wise one is the one who knows that he cannot see things far away.

Get Another Job

The two typical replies I face when I question forecasters’ business are: “What should he do? Do you have a better way for us to predict?” and “If you’re so smart, show me your own prediction”. In fact, the latter question, usually boastfully presented, aims to show the superiority of the practitioner and “doer” over the philosopher, and mostly comes from people who do not know that I was a trader. If there is one advantage of having been in the daily practice of uncertainty, it is that one does not have to take any crap from bureaucrats.

One of my clients asked for my predictions. When I told him I had none, he was offended and decided to dispense with my services. There is in fact a routine, unintrospective habit of making businesses answer questionnaires and fill out paragraphs showing their “outlooks”. I have never had an outlook and have never made professional predictions – but at least I know that I cannot forecast and a small number of people (those I care about) take that as an asset.

There are those people who produce forecasts uncritically. When asked why they forecast, they answer, “Well, that’s what we’re paid to do here”.

My suggestion: get another job.

This suggestion is not too demanding: unless you are a slave, I assume you have some amount of control over your job selection. Otherwise this becomes a problem of ethics, and a grave one at that. People trapped in their jobs who forecast simply because “that’s my job”, knowing pretty well that their forecast is ineffectual, are not what I would call ethical. What they do is no different from repeating lies simply because “it’s my job”.

Anyone who causes harm by forecasting should be treated as either a fool or a liar. Some forecasters cause more damage to society than criminals. Please, don’t drive a school bus blindfolded.


Caravaggio’s The Fortune-Teller. We have always been suckers for those who tell us about the future. In this picture the fortune-teller is stealing the victim’s ring.

At JFK

At New York’s JFK airport you can find gigantic newsstands with walls full of magazines. They are usually manned by a very polite family from the Indian subcontinent (just the parents; the children are in medical school). These walls present you with the entire corpus of what an “informed” person needs in order “to know what’s going on”. I wonder how long it would take to read every single one of these magazines, excluding the fishing and motorcycle periodicals (but including the gossip magazines – you might as well have some fun). Half a lifetime? An entire lifetime?

Sadly, all this knowledge would not help the reader to forecast what is to happen tomorrow. Actually, it might decrease his ability to forecast.

There is another aspect to the problem of prediction: its inherent limitations, those that have little to do with human nature, but instead arise from the very nature of information itself. I have said that the Black Swan has three attributes: unpredictability, consequences, and retrospective explainability. Let us examine this unpredictability business.[32]

Chapter Eleven: HOW TO LOOK FOR BIRD POOP

Popper’s prediction about the predictors – Poincaré plays with billiard balls – Von Hayek is allowed to be irreverent – Anticipation machines – Paul Samuelson wants you to be rational – Beware the philosopher – Demand some certainties.

We’ve seen that a) we tend to both tunnel and think “narrowly” (epistemic arrogance), and b) our prediction record is highly overestimated – many people who think they can predict actually can’t.

We will now go deeper into the unadvertised structural limitations on our ability to predict. These limitations may arise not from us but from the nature of the activity itself – too complicated, not just for us, but for any tools we have or can conceivably obtain. Some Black Swans will remain elusive, enough to kill our forecasts.

HOW TO LOOK FOR BIRD POOP

In the summer of 1998 I worked at a European-owned financial institution. It wanted to distinguish itself by being rigorous and farsighted. The unit involved in trading had five managers, all serious-looking (always in dark blue suits, even on dress-down Fridays), who had to meet throughout the summer in order “to formulate the five-year plan”. This was supposed to be a meaty document, a sort of user’s manual for the firm. A five-year plan? To a fellow deeply skeptical of the central planner, the notion was ludicrous; growth within the firm had been organic and unpredictable, bottom-up not top-down. It was well known that the firm’s most lucrative department was the product of a chance call from a customer asking for a specific but strange financial transaction. The firm accidentally realized that they could build a unit just to handle these transactions, since they were profitable, and it rapidly grew to dominate their activities.

The managers flew across the world in order to meet: Barcelona, Hong Kong, et cetera. A lot of miles for a lot of verbiage. Needless to say they were usually sleep-deprived. Being an executive does not require very developed frontal lobes, but rather a combination of charisma, a capacity to sustain boredom, and the ability to shallowly perform on harrying schedules. Add to these tasks the “duty” of attending opera performances.

The managers sat down to brainstorm during these meetings, about, of course, the medium-term future – they wanted to have “vision”. But then an event occurred that was not in the previous five-year plan: the Black Swan of the Russian financial default of 1998 and the accompanying meltdown of the values of Latin American debt markets. It had such an effect on the firm that, although the institution had a sticky employment policy of retaining managers, none of the five was still employed there a month after the sketch of the 1998 five-year plan.

Yet I am confident that today their replacements are still meeting to work on the next “five-year plan”. We never learn.

Inadvertent Discoveries

The discovery of human epistemic arrogance, as we saw in the previous chapter, was allegedly inadvertent. But so were many other discoveries as well. Many more than we think.

The classical model of discovery is as follows: you search for what you know (say, a new way to reach India) and find something you didn’t know was there (America).

If you think that the inventions we see around us came from someone sitting in a cubicle and concocting them according to a timetable, think again: almost everything of the moment is the product of serendipity. The term serendipity was coined in a letter by the writer Horace Walpole, who derived it from a fairy tale, “The Three Princes of Serendip”. These princes “were always making discoveries by accident or sagacity, of things which they were not in quest of”.

In other words, you find something you are not looking for and it changes the world, while wondering after its discovery why it “took so long” to arrive at something so obvious. No journalist was present when the wheel was invented, but I am ready to bet that people did not just embark on the project of inventing the wheel (that main engine of growth) and then complete it according to a timetable. Likewise with most inventions.

Sir Francis Bacon commented that the most important advances are the least predictable ones, those “lying out of the path of the imagination”. Bacon was not the last intellectual to point this out. The idea keeps popping up, yet then rapidly dying out. Almost half a century ago, the bestselling novelist Arthur Koestler wrote an entire book about it, aptly called The Sleepwalkers. It describes discoverers as sleepwalkers stumbling upon results and not realizing what they have in their hands. We think that the import of Copernicus’s discoveries concerning planetary motions was obvious to him and to others in his day; he had been dead seventy-five years before the authorities started getting offended. Likewise we think that Galileo was a victim in the name of science; in fact, the church didn’t take him too seriously. It seems, rather, that Galileo caused the uproar himself by ruffling a few feathers. At the end of the year in which Darwin and Wallace presented their papers on evolution by natural selection that changed the way we view the world, the president of the Linnean society, where the papers were presented, announced that the society saw “no striking discovery”, nothing in particular that could revolutionize science.

We forget about unpredictability when it is our turn to predict. This is why people can read this chapter and similar accounts, agree entirely with them, yet fail to heed their arguments when thinking about the future.

Take this dramatic example of a serendipitous discovery. Alexander Fleming was cleaning up his laboratory when he found that penicillium mold had contaminated one of his old experiments. He thus happened upon the antibacterial properties of penicillin, the reason many of us are alive today (including, as I said in Chapter 8, myself, for typhoid fever is often fatal when untreated). True, Fleming was looking for “something”, but the actual discovery was simply serendipitous. Furthermore, while in hindsight the discovery appears momentous, it took a very long time for health officials to realize the importance of what they had on their hands. Even Fleming lost faith in the idea before it was subsequently revived.

In 1965 two radio astronomers at Bell Labs in New Jersey who were mounting a large antenna were bothered by a background noise, a hiss, like the static that you hear when you have bad reception. The noise could not be eradicated – even after they cleaned the bird excrement out of the dish, since they were convinced that bird poop was behind the noise. It took a while for them to figure out that what they were hearing was the trace of the birth of the universe, the cosmic microwave background radiation. This discovery revived the big bang theory, a languishing idea that had been posited by earlier researchers. I found the following comment on Bell Labs’ website describing how this “discovery” was one of the century’s greatest advances:

Dan Stanzione, then Bell Labs president and Lucent’s chief operating officer when Penzias [one of the radio astronomers involved in the discovery] retired, said Penzias “embodies the creativity and technical excellence that are the hallmarks of Bell Labs”. He called him a Renaissance figure who “extended our fragile understanding of creation, and advanced the frontiers of science in many important areas”.

Renaissance shmenaissance. The two fellows were looking for bird poop! Not only were they not looking for anything remotely like the evidence of the big bang but, as usual in these cases, they did not immediately see the importance of their find. Sadly, the physicist Ralph Alpher, the person who initially conceived of the idea, in a paper coauthored with heavyweights George Gamow and Hans Bethe, was surprised to read about the discovery in The New York Times. In fact, in the languishing papers positing the birth of the universe, scientists were doubtful whether such radiation could ever be measured. As happens so often in discovery, those looking for evidence did not find it; those not looking for it found it and were hailed as discoverers.

We have a paradox. Not only have forecasters generally failed dismally to foresee the drastic changes brought about by unpredictable discoveries, but incremental change has turned out to be generally slower than forecasters expected. When a new technology emerges, we either grossly underestimate or severely overestimate its importance. Thomas Watson, the founder of IBM, once predicted that there would be no need for more than just a handful of computers.

That the reader of this book is probably reading these lines not on a screen but in the pages of that anachronistic device, the book, would seem quite an aberration to certain pundits of the “digital revolution”. That you are reading them in archaic, messy, and inconsistent English, French, or Swahili, instead of in Esperanto, defies the predictions of half a century ago that the world would soon be communicating in a logical, unambiguous, and Platonically designed lingua franca. Likewise, we are not spending long weekends in space stations as was universally predicted three decades ago. In an example of corporate arrogance, after the first moon landing the now-defunct airline Pan Am took advance bookings for round-trips between earth and the moon. Nice prediction, except that the company failed to foresee that it would be out of business not long after.

A Solution Waiting for a Problem

Engineers tend to develop tools for the pleasure of developing tools, not to induce nature to yield its secrets. It so happens that some of these tools bring us more knowledge; because of the silent evidence effect, we forget to consider tools that accomplished nothing but keeping engineers off the streets. Tools lead to unexpected discoveries, which themselves lead to other unexpected discoveries. But rarely do our tools seem to work as intended; it is only the engineer’s gusto and love for the building of toys and machines that contribute to the augmentation of our knowledge. Knowledge does not progress from tools designed to verify or help theories, but rather the opposite. The computer was not built to allow us to develop new, visual, geometric mathematics, but for some other purpose. It happened to allow us to discover mathematical objects that few cared to look for. Nor was the computer invented to let you chat with your friends in Siberia, but it has caused some long-distance relationships to bloom. As an essayist, I can attest that the Internet has helped me to spread my ideas by bypassing journalists. But this was not the stated purpose of its military designer.

The laser is a prime illustration of a tool made for a given purpose (actually no real purpose) that then found applications that were not even dreamed of at the time. It was a typical “solution looking for a problem”. Among the early applications was the surgical stitching of detached retinas. Half a century later, The Economist asked Charles Townes, the alleged inventor of the laser, if he had had retinas on his mind. He had not. He was satisfying his desire to split light beams, and that was that. In fact, Townes’s colleagues teased him quite a bit about the irrelevance of his discovery. Yet just consider the effects of the laser in the world around you: compact disks, eyesight corrections, microsurgery, data storage and retrieval – all unforeseen applications of the technology.[33]

We build toys. Some of those toys change the world.

Keep Searching

In the summer of 2005 I was the guest of a biotech company in California that had found inordinate success. I was greeted with T-shirts and pins showing a bell-curve buster and the announcement of the formation of the Fat Tails Club (“fat tails” is a technical term for Black Swans). This was my first encounter with a firm that lived off Black Swans of the positive kind. I was told that a scientist managed the company and that he had the instinct, as a scientist, to just let scientists look wherever their instinct took them. Commercialization came later. My hosts, scientists at heart, understood that research involves a large element of serendipity, which can pay off big as long as one knows how serendipitous the business can be and structures it around that fact. Viagra, which changed the mental outlook and social mores of retired men, was meant to be a hypertension drug. Another hypertension drug led to a hair-growth medication. My friend Bruce Goldberg, who understands randomness, calls these unintended side applications “corners”. While many worry about unintended consequences, technology adventurers thrive on them.

The biotech company seemed to follow implicitly, though not explicitly, Louis Pasteur’s adage about creating luck by sheer exposure. “Luck favors the prepared”, Pasteur said, and, like all great discoverers, he knew something about accidental discoveries. The best way to get maximal exposure is to keep researching. Collect opportunities – on that, later.

To predict the spread of a technology implies predicting a large element of fads and social contagion, which lie outside the objective utility of the technology itself (assuming there is such an animal as objective utility). How many wonderfully useful ideas have ended up in the cemetery, such as the Segway, an electric scooter that, it was prophesied, would change the morphology of cities, and many others. As I was mentally writing these lines I saw a Time magazine cover at an airport stand announcing the “meaningful inventions” of the year. These inventions seemed to be meaningful as of the issue date, or perhaps for a couple of weeks after. Journalists can teach us how to not learn.

HOW TO PREDICT YOUR PREDICTIONS!

This brings us to Sir Doktor Professor Karl Raimund Popper’s attack on historicism. As I said in Chapter 5, this was his most significant insight, but it remains his least known. People who do not really know his work tend to focus on Popperian falsification, which addresses the verification or nonverification of claims. This focus obscures his central idea: he made skepticism a method, he made of a skeptic someone constructive.

Just as Karl Marx wrote, in great irritation, a diatribe called The Misery of Philosophy in response to Proudhon’s The Philosophy of Misery, Popper, irritated by some of the philosophers of his time who believed in the scientific understanding of history, wrote, as a pun, The Misery of Historicism (which has been translated as The Poverty of Historicism).[34]

Popper’s insight concerns the limitations in forecasting historical events and the need to downgrade “soft” areas such as history and social science to a level slightly above aesthetics and entertainment, like butterfly or coin collecting. (Popper, having received a classical Viennese education, didn’t go quite that far; I do. I am from Amioun.) What we call here soft historical sciences are narrative dependent studies.

Popper’s central argument is that in order to predict historical events you need to predict technological innovation, itself fundamentally unpredictable.

“Fundamentally” unpredictable? I will explain what he means using a modern framework. Consider the following property of knowledge: If you expect that you will know tomorrow with certainty that your boyfriend has been cheating on you all this time, then you know today with certainty that your boyfriend is cheating on you and will take action today, say, by grabbing a pair of scissors and angrily cutting all his Ferragamo ties in half. You won’t tell yourself, This is what I will figure out tomorrow, but today is different so I will ignore the information and have a pleasant dinner. This point can be generalized to all forms of knowledge. There is actually a law in statistics called the law of iterated expectations, which I outline here in its strong form: if I expect to expect something at some date in the future, then I already expect that something at present.
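
In standard notation – mine, not the author’s – this strong form is the familiar tower property of conditional expectation: your expectation today of what you will expect at a later date is already your expectation today.

```latex
% Law of iterated expectations (tower property), where E_t denotes expectation
% conditional on the information available at time t, and s > 0:
\mathbb{E}_t\big[\mathbb{E}_{t+s}[X]\big] = \mathbb{E}_t[X]
```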

Consider the wheel again. If you are a Stone Age historical thinker called on to predict the future in a comprehensive report for your chief tribal planner, you must project the invention of the wheel or you will miss pretty much all of the action. Now, if you can prophesy the invention of the wheel, you already know what a wheel looks like, and thus you already know how to build a wheel, so you are already on your way. The Black Swan needs to be predicted!

But there is a weaker form of this law of iterated knowledge. It can be phrased as follows: to understand the future to the point of being able to predict it, you need to incorporate elements from this future itself. If you know about the discovery you are about to make in the future, then you have almost made it. Assume that you are a special scholar in Medieval University’s Forecasting Department specializing in the projection of future history (for our purposes, the remote twentieth century). You would need to hit upon the inventions of the steam machine, electricity, the atomic bomb, and the Internet, as well as the institution of the airplane onboard massage and that strange activity called the business meeting, in which well-fed, but sedentary, men voluntarily restrict their blood circulation with an expensive device called a necktie.

This incapacity is not trivial. The mere knowledge that something has been invented often leads to a series of inventions of a similar nature, even though not a single detail of this invention has been disseminated – there is no need to find the spies and hang them publicly. In mathematics, once a proof of an arcane theorem has been announced, we frequently witness the proliferation of similar proofs coming out of nowhere, with occasional accusations of leakage and plagiarism. There may be no plagiarism: the information that the solution exists is itself a big piece of the solution.

By the same logic, we are not easily able to conceive of future inventions (if we were, they would have already been invented). On the day when we are able to foresee inventions we will be living in a state where everything conceivable has been invented. Our own condition brings to mind the apocryphal story from 1899 when the head of the U.S. Patent Office resigned because he deemed that there was nothing left to discover – except that on that day the resignation would be justified.[35]

Popper was not the first to go after the limits to our knowledge. In Germany, in the late nineteenth century, Emil du Bois-Reymond claimed that ignoramus et ignorabimus – we are ignorant and will remain so. Somehow his ideas went into oblivion. But not before causing a reaction: the mathematician David Hilbert set out to defy him by drawing up a list of problems that mathematicians would need to solve over the next century.

Even du Bois-Reymond was wrong. We are not even good at understanding the unknowable. Consider the statements we make about things that we will never come to know – we confidently underestimate what knowledge we may acquire in the future. Auguste Comte, the founder of the school of positivism, which is (unfairly) accused of aiming at the scientization of everything in sight, declared that mankind would forever remain ignorant of the chemical composition of the fixed stars. But, as Charles Sanders Peirce reported, “The ink was scarcely dry upon the printed page before the spectroscope was discovered and that which he had deemed absolutely unknowable was well on the way of getting ascertained”. Ironically, Comte’s other projections, concerning what we would come to learn about the workings of society, were grossly – and dangerously – overstated. He assumed that society was like a clock that would yield its secrets to us.

I’ll summarize my argument here: Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.

Some might say that the argument, as phrased, seems obvious, that we always think that we have reached definitive knowledge but don’t notice that those past societies we laugh at also thought the same way. My argument is trivial, so why don’t we take it into account? The answer lies in a pathology of human nature. Remember the psychological discussions on asymmetries in the perception of skills in the previous chapter? We see flaws in others and not in ourselves. Once again, we seem to be wonderful self-deceit machines.

THE NTH BILLIARD BALL

Henri Poincaré, in spite of his fame, is regularly considered to be an undervalued scientific thinker, given that it took close to a century for some of his ideas to be appreciated. He was perhaps the last great thinking mathematician (or possibly the reverse, a mathematical thinker). Every time I see a T-shirt bearing the picture of the modern icon Albert Einstein, I cannot help thinking of Poincaré – Einstein is worthy of our reverence, but he has displaced many others. There is so little room in our consciousness; it is winner-take-all up there.


Third Republic-Style Decorum

Again, Poincaré is in a class by himself. I recall my father recommending Poincaré’s essays, not just for their scientific content, but for the quality of his French prose. The grand master wrote these wonders as serialized articles and composed them like extemporaneous speeches. As in every masterpiece, you see a mixture of repetitions, digressions, everything a “me too” editor with a prepackaged mind would condemn – but these make his text even more readable owing to an iron consistency of thought.

Poincaré became a prolific essayist in his thirties. He seemed in a hurry and died prematurely, at fifty-eight; he was in such a rush that he did not bother correcting typos and grammatical errors in his text, even after spotting them, since he found doing so a gross misuse of his time. They no longer make geniuses like that – or they no longer let them write in their own way.

Poincaré’s reputation as a thinker waned rapidly after his death. His idea that concerns us took almost a century to resurface, but in another form. It was indeed a great mistake that I did not carefully read his essays as a child, for in his magisterial La Science et l’Hypothèse, I discovered later, he angrily disparages the use of the bell curve.

I will repeat that Poincaré was the true kind of philosopher of science: his philosophizing came from his witnessing the limits of the subject itself, which is what true philosophy is all about. I love to tick off French literary intellectuals by naming Poincaré as my favorite French philosopher. “Him a philosophe? What do you mean, monsieur?” It is always frustrating to explain to people that the thinkers they put on the pedestals, such as Henri Bergson or Jean-Paul Sartre, are largely the result of fashion production and can’t come close to Poincaré in terms of sheer influence that will continue for centuries to come. In fact, there is a scandal of prediction going on here, since it is the French Ministry of National Education that decides who is a philosopher and which philosophers need to be studied.

I am looking at Poincaré’s picture. He was a bearded, portly and imposing, well-educated patrician gentleman of the French Third Republic, a man who lived and breathed general science, looked deep into his subject, and had an astonishing breadth of knowledge. He was part of the class of mandarins that gained respectability in the late nineteenth century: upper middle class, powerful, but not exceedingly rich. His father was a doctor and professor of medicine, his uncle was a prominent scientist and administrator, and his cousin Raymond became a president of the Republic of France. These were the days when the grandchildren of businessmen and wealthy landowners headed for the intellectual professions.

However, I can hardly imagine him on a T-shirt, or sticking out his tongue like in that famous picture of Einstein. There is something non-playful about him, a Third Republic style of dignity.

In his day, Poincaré was thought to be the king of mathematics and science, except of course by a few narrow-minded mathematicians like Charles Hermite who considered him too intuitive, too intellectual, or too “hand-waving”. When mathematicians say “hand-waving”, disparagingly, about someone’s work, it means that the person has: a) insight, b) realism, c) something to say, and it means that d) he is right because that’s what critics say when they can’t find anything more negative. A nod from Poincaré made or broke a career. Many claim that Poincaré figured out relativity before Einstein – and that Einstein got the idea from him – but that he did not make a big deal out of it. These claims are naturally made by the French, but there seems to be some validation from Einstein’s friend and biographer Abraham Pais. Poincaré was too aristocratic in both background and demeanor to complain about the ownership of a result.

Poincaré is central to this chapter because he lived in an age when we had made extremely rapid intellectual progress in the fields of prediction – think of celestial mechanics. The scientific revolution made us feel that we were in possession of tools that would allow us to grasp the future. Uncertainty was gone. The universe was like a clock and, by studying the movements of the pieces, we could project into the future. It was only a matter of writing down the right models and having the engineers do the calculations. The future was a mere extension of our technological certainties.

The Three Body Problem

Poincaré was the first known big-gun mathematician to understand and explain that there are fundamental limits to our equations. He introduced nonlinearities, small effects that can lead to severe consequences, an idea that later became popular, perhaps a bit too popular, as chaos theory. What’s so poisonous about this popularity? Because Poincaré’s entire point is about the limits that nonlinearities put on forecasting; they are not an invitation to use mathematical techniques to make extended forecasts. Mathematics can show us its own limits rather clearly.

There is (as usual) an element of the unexpected in this story. Poincaré initially responded to a competition organized by the mathematician Gösta Mittag-Leffler to celebrate the sixtieth birthday of King Oscar of Sweden. Poincaré’s memoir, which was about the stability of the solar system, won the prize that was then the highest scientific honor (as these were the happy days before the Nobel Prize). A problem arose, however, when a mathematical editor checking the memoir before publication realized that there was a calculation error, and that, after consideration, it led to the opposite conclusion – unpredictability, or, more technically, nonintegrability. The memoir was discreetly pulled and reissued about a year later.

Poincaré’s reasoning was simple: as you project into the future you may need an increasing amount of precision about the dynamics of the process that you are modeling, since your error rate grows very rapidly. The problem is that near precision is not possible since the degradation of your forecast compounds abruptly – you would eventually need to figure out the past with infinite precision. Poincaré showed this in a very simple case, famously known as the “three body problem”. If you have only two planets in a solar-style system, with nothing else affecting their course, then you may be able to indefinitely predict the behavior of these planets, no sweat. But add a third body, say a comet, ever so small, between the planets. Initially the third body will cause no drift, no impact; later, with time, its effects on the two other bodies may become explosive. Small differences in where this tiny body is located will eventually dictate the future of the behemoth planets.

Explosive forecasting difficulty comes from complicating the mechanics, ever so slightly. Our world, unfortunately, is far more complicated than the three body problem; it contains far more than three objects. We are dealing with what is now called a dynamical system – and the world, we will see, is a little too much of a dynamical system.

Think of the difficulty in forecasting in terms of branches growing out of a tree; at every fork we have a multiplication of new branches. To see how our intuitions about these nonlinear multiplicative effects are rather weak, consider this story about the chessboard. The inventor of the chessboard requested the following compensation: one grain of rice for the first square, two for the second, four for the third, eight, then sixteen, and so on, doubling every time, sixty-four times. The king granted this request, thinking that the inventor was asking for a pittance – but he soon realized that he was outsmarted. The amount of rice exceeded all possible grain reserves!
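
The arithmetic behind the king’s surprise is a geometric series: doubling sixty-four times yields

```latex
% Total grains over the 64 squares of the chessboard:
1 + 2 + 4 + \dots + 2^{63} = \sum_{k=0}^{63} 2^{k} = 2^{64} - 1 \approx 1.8 \times 10^{19}
```

roughly eighteen quintillion grains, far beyond any conceivable grain reserve.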

This multiplicative difficulty leading to the need for greater and greater precision in assumptions can be illustrated with the following simple exercise concerning the prediction of the movements of billiard balls on a table. I use the example as computed by the mathematician Michael Berry. If you know a set of basic parameters concerning the ball at rest, can compute the resistance of the table (quite elementary), and can gauge the strength of the impact, then it is rather easy to predict what would happen at the first hit. The second impact becomes more complicated, but possible; you need to be more careful about your knowledge of the initial states, and more precision is called for. The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table (modestly, Berry’s computations use a weight of less than 150 pounds). And to compute the fifty-sixth impact, every single elementary particle of the universe needs to be present in your assumptions! An electron at the edge of the universe, separated from us by 10 billion light-years, must figure in the calculations, since it exerts a meaningful effect on the outcome. Now, consider the additional burden of having to incorporate predictions about where these variables will be in the future. Forecasting the motion of a billiard ball on a pool table requires knowledge of the dynamics of the entire universe, down to every single atom! We can easily predict the movements of large objects like planets (though not too far into the future), but the smaller entities can be difficult to figure out – and there are so many more of them.

FIGURE 2: PRECISION AND FORECASTING

One of the readers of a draft of this book, David Cowan, gracefully drew this picture of scattering, which shows how, at the second bounce, variations in the initial conditions can lead to extremely divergent results. As the initial imprecision in the angle is multiplied, every additional bounce will be further magnified. This causes a severe multiplicative effect where the error grows disproportionately.
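
A crude sketch of the same multiplicative effect – the growth factor and distances below are my own assumptions, not Berry’s numbers – supposes that each impact multiplies the angular error by a constant and tracks when the resulting position error exceeds the table itself:

```python
initial_angle_error = 1e-9     # radians; microscopically small initial imprecision (assumed)
growth_per_bounce = 10.0       # assumed error-multiplication factor per impact
travel_between_hits = 1.0      # metres of travel between impacts (assumed)
table_length = 2.7             # metres; rough length of the table

angle_error = initial_angle_error
for bounce in range(1, 13):
    angle_error *= growth_per_bounce
    position_error = angle_error * travel_between_hits   # small-angle approximation
    status = "hopeless" if position_error > table_length else "still predictable"
    print(f"bounce {bounce:2d}: position error ~ {position_error:.1e} m  ({status})")
```

With these made-up numbers the forecast is useless after about ten impacts; Berry’s point is that the precision required grows so fast that by the fifty-sixth impact even the farthest particles in the universe matter.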

Note that this billiard-ball story assumes a plain and simple world; it does not even take into account these crazy social matters possibly endowed with free will. Billiard balls do not have a mind of their own. Nor does our example take into account relativity and quantum effects. Nor did we use the notion (often invoked by phonies) called the “uncertainty principle”. We are not concerned with the limitations of the precision in measurements done at the subatomic level. We are just dealing with billiard balls!

In a dynamical system, where you are considering more than a ball on its own, where trajectories in a way depend on one another, the ability to project into the future is not just reduced, but is subjected to a fundamental limitation. Poincaré proposed that we can only work with qualitative matters – some property of systems can be discussed, but not computed. You can think rigorously, but you cannot use numbers. Poincaré even invented a field for this, analysis in situ, now part of topology. Prediction and forecasting are a more complicated business than is commonly accepted, but it takes someone who knows mathematics to understand that. To accept it takes both understanding and courage.

In the 1960s the MIT meteorologist Edward Lorenz rediscovered Poincaré’s results on his own – once again, by accident. He was producing a computer model of weather dynamics, and he ran a simulation that projected a weather system a few days ahead. Later he tried to repeat the same simulation with the exact same model and what he thought were the same input parameters, but he got wildly different results. He initially attributed these differences to a computer bug or a calculation error. Computers then were heavier and slower machines that bore no resemblance to what we have today, so users were severely constrained by time. Lorenz subsequently realized that the consequential divergence in his results arose not from error, but from a small rounding in the input parameters. This became known as the butterfly effect, since a butterfly moving its wings in India could cause a hurricane in New York two years later. Lorenz’s findings generated interest in the field of chaos theory.
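
Lorenz’s accident is easy to reproduce. The sketch below integrates his now-standard three-equation system twice, once from an initial point and once from that point nudged by one part in a million (standing in for the rounding), and prints how far apart the trajectories drift; the crude Euler step and the parameter values are my choices for illustration:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One (crude) Euler step of the Lorenz system; enough to show the divergence."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # the "rounding": a one-in-a-million nudge

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
```

The two runs start a millionth apart and end bearing no relation to each other – the butterfly effect in miniature.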

Naturally researchers found predecessors to Lorenz’s discovery, not only in the work of Poincaré, but also in that of the insightful and intuitive Jacques Hadamard, who thought of the same point around 1898, and then went on to live for almost seven more decades – he died at the age of ninety-eight.[36]

They Still Ignore Hayek

Popper and Poincaré’s findings limit our ability to see into the future, making it a very complicated reflection of the past – if it is a reflection of the past at all. A potent application in the social world comes from a friend of Sir Karl, the intuitive economist Friedrich Hayek. Hayek is one of the rare celebrated members of his “profession” (along with J.M. Keynes and G.L.S. Shackle) to focus on true uncertainty, on the limitations of knowledge, on the unread books in Eco’s library.

In 1974 he received the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel, but if you read his acceptance speech you will be in for a bit of a surprise. It was eloquently called “The Pretense of Knowledge”, and he mostly railed against other economists and against the idea of the planner. He argued against the use of the tools of hard science in the social sciences, and, depressingly, he did so right before the big boom in these methods in economics. Subsequently, the prevalent use of complicated equations made the environment for true empirical thinkers worse than it was before Hayek wrote his speech. Every year a paper or a book appears, bemoaning the fate of economics and complaining about its attempts to ape physics. The latest I’ve seen is about how economists should shoot for the role of lowly philosophers rather than that of high priests. Yet, in one ear and out the other.

For Hayek, a true forecast is done organically by a system, not by fiat. One single institution, say, the central planner, cannot aggregate knowledge; many important pieces of information will be missing. But society as a whole will be able to integrate into its functioning these multiple pieces of information. Society as a whole thinks outside the box. Hayek attacked socialism and managed economies as a product of what I have called nerd knowledge, or Platonicity – owing to the growth of scientific knowledge, we overestimate our ability to understand the subtle changes that constitute the world, and what weight needs to be imparted to each such change. He aptly called this “scientism”.

This disease is severely ingrained in our institutions. It is why I fear governments and large corporations – it is hard to distinguish between them. Governments make forecasts; companies produce projections; every year various forecasters project the level of mortgage rates and the stock market at the end of the following year. Corporations survive not because they have made good forecasts, but because, like the CEOs visiting Wharton I mentioned earlier, they may have been the lucky ones. And, like a restaurant owner, they may be hurting themselves, not us – perhaps helping us and subsidizing our consumption by giving us goods in the process, like cheap telephone calls to the rest of the world funded by the overinvestment during the dotcom era. We consumers can let them forecast all they want if that’s what is necessary for them to get into business. Let them go hang themselves if they wish.

As a matter of fact, as I mentioned in Chapter 8, we New Yorkers are all benefiting from the quixotic overconfidence of corporations and restaurant entrepreneurs. This is the benefit of capitalism that people discuss the least.

But corporations can go bust as often as they like, thus subsidizing us consumers by transferring their wealth into our pockets – the more bankruptcies, the better it is for us. Government is a more serious business and we need to make sure we do not pay the price for its folly. As individuals we should love free markets because operators in them can be as incompetent as they wish.

The only criticism one might have of Hayek is that he makes a hard and qualitative distinction between social sciences and physics. He shows that the methods of physics do not translate to its social science siblings, and he blames the engineering-oriented mentality for this. But he was writing at a time when physics, the queen of science, seemed to zoom in our world. It turns out that even the natural sciences are far more complicated than that. He was right about the social sciences, he is certainly right in trusting hard scientists more than social theorizers, but what he said about the weaknesses of social knowledge applies to all knowledge. All knowledge.

Why? Because of the confirmation problem, one can argue that we know very little about our natural world; we advertise the read books and forget about the unread ones. Physics has been successful, but it is a narrow field of hard science in which we have been successful, and people tend to generalize that success to all science. It would be preferable if we were better at understanding cancer or the (highly nonlinear) weather than the origin of the universe.

How Not to Be a Nerd

Let us dig deeper into the problem of knowledge and continue the comparison of Fat Tony and Dr. John in Chapter 9. Do nerds tunnel, meaning, do they focus on crisp categories and miss sources of uncertainty? Remember from the Prologue my presentation of Platonification as a top-down focus on a world composed of these crisp categories.[37]

Think of a bookworm picking up a new language. He will learn, say, Serbo-Croatian or !Kung by reading a grammar book cover to cover, and memorizing the rules. He will have the impression that some higher grammatical authority set the linguistic regulations so that nonlearned ordinary people could subsequently speak the language. In reality, languages grow organically; grammar is something people without anything more exciting to do in their lives codify into a book. While the scholastic-minded will memorize declensions, the a-Platonic nonnerd will acquire, say, Serbo-Croatian by picking up potential girlfriends in bars on the outskirts of Sarajevo, or talking to cabdrivers, then fitting (if needed) grammatical rules to the knowledge he already possesses.

Consider again the central planner. As with language, there is no grammatical authority codifying social and economic events; but try to convince a bureaucrat or social scientist that the world might not want to follow his “scientific” equations. In fact, thinkers of the Austrian school, to which Hayek belonged, used the designations tacit or implicit precisely for that part of knowledge that cannot be written down, but that we should avoid repressing. They made the distinction we saw earlier between “know-how” and “know-what” – the latter being more elusive and more prone to nerdification.

To clarify, Platonic is top-down, formulaic, closed-minded, self-serving, and commoditized; a-Platonic is bottom-up, open-minded, skeptical, and empirical.

The reason for my singling out the great Plato becomes apparent with the following example of the master’s thinking: Plato believed that we should use both hands with equal dexterity. It would not “make sense” otherwise. He considered favoring one limb over the other a deformation caused by the “folly of mothers and nurses”. Asymmetry bothered him, and he projected his ideas of elegance onto reality. We had to wait until Louis Pasteur to figure out that chemical molecules were either left- or right-handed and that this mattered considerably.

One can find similar ideas among several disconnected branches of thinking. The earliest were (as usual) the empirics, whose bottom-up, theory-free, “evidence-based” medical approach was mostly associated with Philinus of Cos, Serapion of Alexandria, and Glaucias of Tarentum, later made skeptical by Menodotus of Nicomedia, and currently well known through its vocal practitioner, our friend the great skeptical philosopher Sextus Empiricus – who, as we saw earlier, was perhaps the first to discuss the Black Swan. The empirics practiced the “medical art” without relying on reasoning; they wanted to benefit from chance observations by making guesses, and experimented and tinkered until they found something that worked. They did minimal theorizing.

Their methods are being revived today as evidence-based medicine, after two millennia of persuasion. Consider that before we knew of bacteria, and their role in diseases, doctors rejected the practice of hand washing because it made no sense to them, despite the evidence of a meaningful decrease in hospital deaths. Ignaz Semmelweis, the mid-nineteenth-century doctor who promoted the idea of hand washing, wasn’t vindicated until decades after his death. Similarly it may not “make sense” that acupuncture works, but if pushing a needle in someone’s toe systematically produces relief from pain (in properly conducted empirical tests), then it could be that there are functions too complicated for us to understand, so let’s go with it for now while keeping our minds open.

Academic Libertarianism

To borrow from Warren Buffett, don’t ask the barber if you need a haircut – and don’t ask an academic if what he does is relevant. So I’ll end this discussion of Hayek’s libertarianism with the following observation. As I’ve said, the problem with organized knowledge is that there is an occasional divergence of interests between academic guilds and knowledge itself. So I cannot for the life of me understand why today’s libertarians do not go after tenured faculty (except perhaps because many libertarians are academics). We saw that companies can go bust, while governments remain. But while governments remain, civil servants can be demoted and congressmen and senators can be eventually voted out of office. In academia a tenured faculty is permanent – the business of knowledge has permanent “owners”. Simply, the charlatan is more the product of control than the result of freedom and lack of structure.

Prediction and Free Will

If you know all possible conditions of a physical system you can, in theory (though not, as we saw, in practice), project its behavior into the future. But this only concerns inanimate objects. We hit a stumbling block when social matters are involved. It is another matter to project a future when humans are involved, if you consider them living beings endowed with free will.

If I can predict all of your actions, under given circumstances, then you may not be as free as you think you are. You are an automaton responding to environmental stimuli. You are a slave of destiny. And the illusion of free will could be reduced to an equation that describes the result of interactions among molecules. It would be like studying the mechanics of a clock: a genius with extensive knowledge of the initial conditions and the causal chains would be able to extend his knowledge to the future of your actions. Wouldn’t that be stifling?

However, if you believe in free will you can’t truly believe in social science and economic projection. You cannot predict how people will act. Except, of course, if there is a trick, and that trick is the cord on which neoclassical economics is suspended. You simply assume that individuals will be rational in the future and thus act predictably. There is a strong link between rationality, predictability, and mathematical tractability. A rational individual will perform a unique set of actions in specified circumstances. There is one and only one answer to the question of how “rational” people satisfying their best interests would act. Rational actors must be coherent: they cannot prefer apples to oranges, oranges to pears, then pears to apples. If they did, then it would be difficult to generalize their behavior. It would also be difficult to project their behavior in time.

In orthodox economics, rationality became a straitjacket. Platonified economists ignored the fact that people might prefer to do something other than maximize their economic interests. This led to mathematical techniques such as “maximization”, or “optimization”, on which Paul Samuelson built much of his work. Optimization consists in finding the mathematically optimal policy that an economic agent could pursue. For instance, what is the “optimal” quantity you should allocate to stocks? It involves complicated mathematics and thus raises a barrier to entry by non-mathematically trained scholars. I would not be the first to say that this optimization set back social science by reducing it from the intellectual and reflective discipline that it was becoming to an attempt at an “exact science”. By “exact science”, I mean a second-rate engineering problem for those who want to pretend that they are in the physics department – so-called physics envy. In other words, an intellectual fraud.

Optimization is a case of sterile modeling that we will discuss further in Chapter 17. It had no practical (or even theoretical) use, and so it became principally a competition for academic positions, a way to make people compete with mathematical muscle. It kept Platonified economists out of the bars, solving equations at night. The tragedy is that Paul Samuelson, a quick mind, is said to be one of the most intelligent scholars of his generation. This was clearly a case of very badly invested intelligence. Characteristically, Samuelson intimidated those who questioned his techniques with the statement “Those who can, do science, others do methodology”. If you knew math, you could “do science”. This is reminiscent of psychoanalysts who silence their critics by accusing them of having trouble with their fathers. Alas, it turns out that it was Samuelson and most of his followers who did not know much math, or did not know how to use what math they knew, how to apply it to reality. They only knew enough math to be blinded by it.

Tragically, before the proliferation of empirically blind idiot savants, interesting work had been begun by true thinkers, the likes of J. M. Keynes, Friedrich Hayek, and the great Benoît Mandelbrot, all of whom were displaced because they moved economics away from the precision of second-rate physics. Very sad. One great underestimated thinker is G.L.S. Shackle, now almost completely obscure, who introduced the notion of “unknowledge”, that is, the unread books in Umberto Eco’s library. It is unusual to see Shackle’s work mentioned at all, and I had to buy his books from secondhand dealers in London.

Legions of empirical psychologists of the heuristics and biases school have shown that the model of rational behavior under uncertainty is not just grossly inaccurate but plain wrong as a description of reality. Their results also bother Platonified economists because they reveal that there are several ways to be irrational. Tolstoy said that happy families were all alike, while each unhappy one is unhappy in its own way. People have been shown to make errors equivalent to preferring apples to oranges, oranges to pears, and pears to apples, depending on how the relevant questions are presented to them. The sequence matters! Also, as we have seen with the anchoring example, subjects’ estimates of the number of dentists in Manhattan are influenced by which random number they have just been presented with – the anchor. Given the randomness of the anchor, we will have randomness in the estimates. So if people make inconsistent choices and decisions, the central core of economic optimization fails. You can no longer produce a “general theory”, and without one you cannot predict.

You have to learn to live without a general theory, for Pluto’s sake!

THE GRUENESS OF EMERALD

Recall the turkey problem. You look at the past and derive some rule about the future. Well, the problems in projecting from the past can be even worse than what we have already learned, because the same past data can confirm a theory and also its exact opposite! If you survive until tomorrow, it could mean that either a) you are more likely to be immortal or b) that you are closer to death. Both conclusions rely on the exact same data. If you are a turkey being fed for a long period of time, you can either naïvely assume that feeding confirms your safety or be shrewd and consider that it confirms the danger of being turned into supper. An acquaintance’s unctuous past behavior may indicate his genuine affection for me and his concern for my welfare; it may also confirm his mercenary and calculating desire to get my business one day.


FIGURE 3

A series of observations of a seemingly growing bacterial population (or of sales records, or of any variable observed through time – such as the total feeding of the turkey in Chapter 4).


FIGURE 4

Easy to fit the trend – there is one and only one linear model that fits the data. You can project a continuation into the future.


FIGURE 5

We look at a broader scale. Hey, other models also fit it rather well.


FIGURE 6

And the real “generating process” is extremely simple but has nothing to do with a linear model! Some parts of it appear to be linear and we are fooled by extrapolating in a straight line.[38]

So not only can the past be misleading, but there are also many degrees of freedom in our interpretation of past events.

For the technical version of this idea, consider a series of dots on a page representing a number through time – the graph would resemble Figure 1 showing the first thousand days in Chapter 4. Let’s say your high school teacher asks you to extend the series of dots. With a linear model, that is, using a ruler, you can run only a straight line, a single straight line from the past to the future. The linear model is unique. There is one and only one straight line that can project from a series of points. But it can get trickier. If you do not limit yourself to a straight line, you find that there is a huge family of curves that can do the job of connecting the dots. If you project from the past in a linear way, you continue a trend. But possible future deviations from the course of the past are infinite.
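To make the point concrete, here is a minimal sketch in Python (the series, the noise, and the degree of the second model are invented for illustration; they are not the data behind the figures above). Two models that describe the same past almost identically can disagree wildly about day 100.

```python
# Fit a straight line and a high-order polynomial to the same noisy series,
# then compare their forecasts beyond the observed window. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(50)
observed = 2.0 * days + rng.normal(0, 5, size=days.size)  # looks roughly linear

linear = np.polynomial.Polynomial.fit(days, observed, deg=1)
wiggly = np.polynomial.Polynomial.fit(days, observed, deg=9)

# In sample, the two models barely disagree...
in_sample_gap = np.max(np.abs(linear(days) - wiggly(days)))

# ...but their projections for day 100 can differ enormously.
future_day = 100
print(f"max in-sample disagreement: {in_sample_gap:.1f}")
print(f"linear forecast for day {future_day}: {linear(future_day):.1f}")
print(f"degree-9 forecast for day {future_day}: {wiggly(future_day):.1f}")
```

Both curves pass the “fits the past” test; only the act of choosing one of them smuggles in a model of the future.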

This is what the philosopher Nelson Goodman called the riddle of induction: We project a straight line only because we have a linear model in our head – the fact that a number has risen for 1,000 days straight should make you more confident that it will rise in the future. But if you have a nonlinear model in your head, it might confirm that the number should decline on day 1,001.

Let’s say that you observe an emerald. It was green yesterday and the day before yesterday. It is green again today. Normally this would confirm the “green” property: we can assume that the emerald will be green tomorrow. But to Goodman, the emerald’s color history could equally confirm the “grue” property. What is this grue property? The emerald’s grue property is to be green until some specified date, say, December 31, 2006, and then blue thereafter.

The riddle of induction is another version of the narrative fallacy – you face an infinity of “stories” that explain what you have seen. The severity of Goodman’s riddle of induction is as follows: if there is no longer even a single unique way to “generalize” from what you see, to make an inference about the unknown, then how should you operate? The answer, clearly, will be that you should employ “common sense”, but your common sense may not be so well developed with respect to some Extremistan variables.

THAT GREAT ANTICIPATION MACHINE

The reader is entitled to wonder, So, NNT, why on earth do we plan? Some people do it for monetary gain, others because it’s “their job”. But we also do it without such intentions – spontaneously.

Why? The answer has to do with human nature. Planning may come with the package of what makes us human, namely, our consciousness.

There is supposed to be an evolutionary dimension to our need to project matters into the future, which I will rapidly summarize here, since it can be an excellent candidate explanation, an excellent conjecture, though, since it is linked to evolution, I would be cautious.

The idea, as promoted by the philosopher Daniel Dennett, is as follows: What is the most potent use of our brain? It is precisely the ability to project conjectures into the future and play the counterfactual game – “If I punch him in the nose, then he will punch me back right away, or, worse, call his lawyer in New York”. One of the advantages of doing so is that we can let our conjectures die in our stead. Used correctly and in place of more visceral reactions, the ability to project effectively frees us from immediate, first-order natural selection – as opposed to more primitive organisms that were vulnerable to death and only grew by the improvement in the gene pool through the selection of the best. In a way, projecting allows us to cheat evolution: it now takes place in our head, as a series of projections and counterfactual scenarios.

This ability to mentally play with conjectures, even if it frees us from the laws of evolution, is itself supposed to be the product of evolution – it is as if evolution has put us on a long leash whereas other animals live on the very short leash of immediate dependence on their environment. For Dennett, our brains are “anticipation machines”; for him the human mind and consciousness are emerging properties, those properties necessary for our accelerated development.

Why do we listen to experts and their forecasts? A candidate explanation is that society reposes on specialization, effectively the division of knowledge. You do not go to medical school the minute you encounter a big health problem; it is less taxing (and certainly safer) for you to consult someone who has already done so. Doctors listen to car mechanics (not for health matters, just when it comes to problems with their cars); car mechanics listen to doctors. We have a natural tendency to listen to the expert, even in fields where there may be no experts.

Chapter Twelve: EPISTEMOCRACY, A DREAM

This is only an essay – Children and philosophers vs. adults and nonphilosophers – Science as an autistic enterprise – The past too has a past – Mispredict and live a long, happy life (if you survive)

Someone with a low degree of epistemic arrogance is not too visible, like a shy person at a cocktail party. We are not predisposed to respect humble people, those who try to suspend judgment. Now contemplate epistemic humility. Think of someone heavily introspective, tortured by the awareness of his own ignorance. He lacks the courage of the idiot, yet has the rare guts to say “I don’t know”. He does not mind looking like a fool or, worse, an ignoramus. He hesitates, he will not commit, and he agonizes over the consequences of being wrong. He introspects, introspects, and introspects until he reaches physical and nervous exhaustion.

This does not necessarily mean that he lacks confidence, only that he holds his own knowledge to be suspect. I will call such a person an epistemocrat; the province where the laws are structured with this kind of human fallibility in mind I will call an epistemocracy.

The major modern epistemocrat is Montaigne.

Monsieur de Montaigne, Epistemocrat

At the age of thirty-eight, Michel Eyquem de Montaigne retired to his estate, in the countryside of southwestern France. Montaigne, which means mountain in Old French, was the name of the estate. The area is known today for the Bordeaux wines, but in Montaigne’s time not many people invested their mental energy and sophistication in wine. Montaigne had stoic tendencies and would not have been strongly drawn to such pursuits anyway. His idea was to write a modest collection of “attempts”, that is, essays. The very word essay conveys the tentative, the speculative, and the nondefinitive. Montaigne was well grounded in the classics and wanted to meditate on life, death, education, knowledge, and some not uninteresting biological aspects of human nature (he wondered, for example, whether cripples had more vigorous libidos owing to the richer circulation of blood in their sexual organs).

The tower that became his study was inscribed with Greek and Latin sayings, almost all referring to the vulnerability of human knowledge. Its windows offered a wide vista of the surrounding hills.

Montaigne’s subject, officially, was himself, but this was mostly as a means to facilitate the discussion; he was not like those corporate executives who write biographies to make a boastful display of their honors and accomplishments. He was mainly interested in discovering things about himself, making us discover things about himself, and presenting matters that could be generalized – generalized to the entire human race. Among the inscriptions in his study was a remark by the Latin poet Terence: Homo sum, humani a me nil alienum puto – I am a man, and nothing human is foreign to me.

Montaigne is quite refreshing to read after the strains of a modern education since he fully accepted human weaknesses and understood that no philosophy could be effective unless it took into account our deeply ingrained imperfections, the limitations of our rationality, the flaws that make us human. It is not that he was ahead of his time; it would be better said that later scholars (advocating rationality) were backward.

He was a thinking, ruminating fellow, and his ideas did not spring up in his tranquil study, but while on horseback. He went on long rides and came back with ideas. Montaigne was neither one of the academics of the Sorbonne nor a professional man of letters, and he was not these things on two planes. First, he was a doer; he had been a magistrate, a businessman, and the mayor of Bordeaux before he retired to mull over his life and, mostly, his own knowledge. Second, he was an antidogmatist: he was a skeptic with charm, a fallible, noncommittal, personal, introspective writer, and, primarily, someone who, in the great classical tradition, wanted to be a man. Had he been in a different period, he would have been an empirical skeptic – he had skeptical tendencies of the Pyrrhonian variety, the antidogmatic kind like Sextus Empiricus, particularly in his awareness of the need to suspend judgment.

Epistemocracy

Everyone has an idea of Utopia. For many it means equality, universal justice, freedom from oppression, freedom from work (for some it may be the more modest, though no more attainable, society with commuter trains free of lawyers on cell phones). To me Utopia is an epistemocracy, a society in which anyone of rank is an epistemocrat, and where epistemocrats manage to be elected. It would be a society governed from the basis of the awareness of ignorance, not knowledge.

Alas, one cannot assert authority by accepting one’s own fallibility. Simply, people need to be blinded by knowledge – we are made to follow leaders who can gather people together because the advantages of being in groups trump the disadvantages of being alone. It has been more profitable for us to bind together in the wrong direction than to be alone in the right one. Those who have followed the assertive idiot rather than the introspective wise person have passed us some of their genes. This is apparent from a social pathology: psychopaths rally followers.

Once in a while you encounter members of the human species with so much intellectual superiority that they can change their minds effortlessly.

Note here the following Black Swan asymmetry. I believe that you can be dead certain about some things, and ought to be so. You can be more confident about disconfirmation than confirmation. Karl Popper was accused of promoting self-doubt while writing in an aggressive and confident tone (an accusation that is occasionally addressed to this author by people who don’t follow my logic of skeptical empiricism). Fortunately, we have learned a lot since Montaigne about how to carry on the skeptical-empirical enterprise. The Black Swan asymmetry allows you to be confident about what is wrong, not about what you believe is right. Karl Popper was once asked whether one “could falsify falsification” (in other words, if one could be skeptical about skepticism). His answer was that he threw students out of his lectures for asking far more intelligent questions than that one. Quite tough, Sir Karl was.

THE PAST’S PAST, AND THE PAST’S FUTURE

Some truths only hit children – adults and nonphilosophers get sucked into the minutiae of practical life and need to worry about “serious matters”, so they abandon these insights for seemingly more relevant questions. One of these truths concerns the larger difference in texture and quality between the past and the future. Thanks to my studying this distinction all my life, I understand it better than I did during my childhood, but I no longer envision it as vividly.

The only way you can imagine a future “similar” to the past is by assuming that it will be an exact projection of it, hence predictable. Just as you know with some precision when you were born, you would then know with equal precision when you will die. The notion of future mixed with chance, not a deterministic extension of your perception of the past, is a mental operation that our mind cannot perform. Chance is too fuzzy for us to be a category by itself. There is an asymmetry between past and future, and it is too subtle for us to understand naturally.

The first consequence of this asymmetry is that, in people’s minds, the relationship between the past and the future does not learn from the relationship between the past and the past previous to it. There is a blind spot: when we think of tomorrow we do not frame it in terms of what we thought about yesterday or the day before yesterday. Because of this introspective defect we fail to learn about the difference between our past predictions and the subsequent outcomes. When we think of tomorrow, we just project it as another yesterday.

This small blind spot has other manifestations. Go to the primate section of the Bronx Zoo where you can see our close relatives in the happy primate family leading their own busy social lives. You can also see masses of tourists laughing at the caricature of humans that the lower primates represent. Now imagine being a member of a higher-level species (say a “real” philosopher, a truly wise person), far more sophisticated than the human primates. You would certainly laugh at the people laughing at the nonhuman primates. Clearly, to those people amused by the apes, the idea of a being who would look down on them the way they look down on the apes cannot immediately come to their minds – if it did, it would elicit self-pity. They would stop laughing.

Accordingly, an element in the mechanics of how the human mind learns from the past makes us believe in definitive solutions – yet not consider that those who preceded us thought that they too had definitive solutions. We laugh at others and we don’t realize that someone will be just as justified in laughing at us on some not too remote day. Such a realization would entail the recursive, or second-order, thinking that I mentioned in the Prologue; we are not good at it.

This mental block about the future has not yet been investigated and labeled by psychologists, but it appears to resemble autism. Some autistic subjects can possess high levels of mathematical or technical intelligence. Their social skills are defective, but that is not the root of their problem. Autistic people cannot put themselves in the shoes of others, cannot view the world from their standpoint. They see others as inanimate objects, like machines, moved by explicit rules. They cannot perform such simple mental operations as “he knows that I don’t know that I know”, and it is this inability that impedes their social skills. (Interestingly, autistic subjects, regardless of their “intelligence”, also exhibit an inability to comprehend uncertainty.)

Just as autism is called “mind blindness”, this inability to think dynamically, to position oneself with respect to a future observer, we should call “future blindness”.

Prediction, Misprediction, and Happiness

I searched the literature of cognitive science for any research on “future blindness” and found nothing. But in the literature on happiness I did find an examination of our chronic errors in predicting what will make us happy.

This prediction error works as follows. You are about to buy a new car. It is going to change your life, elevate your status, and make your commute a vacation. It is so quiet that you can hardly tell if the engine is on, so you can listen to Rachmaninoff’s nocturnes on the highway. This new car will bring you to a permanently elevated plateau of contentment. People will think, Hey, he has a great car, every time they see you. Yet you forget that the last time you bought a car, you also had the same expectations. You do not anticipate that the effect of the new car will eventually wane and that you will revert to the initial condition, as you did last time. A few weeks after you drive your new car out of the showroom, it will become dull. If you had expected this, you probably would not have bought it.

You are about to commit a prediction error that you have already made. Yet it would cost so little to introspect!

Psychologists have studied this kind of misprediction with respect to both pleasant and unpleasant events. We overestimate the effects of both kinds of future events on our lives. We seem to be in a psychological predicament that makes us do so. This predicament is called “anticipated utility” by Danny Kahneman and “affective forecasting” by Dan Gilbert. The point is not so much that we tend to mispredict our future happiness, but rather that we do not learn recursively from past experiences. We have evidence of a mental block and distortions in the way we fail to learn from our past errors in projecting the future of our affective states.

We grossly overestimate the length of the effect of misfortune on our lives. You think that the loss of your fortune or current position will be devastating, but you are probably wrong. More likely, you will adapt to anything, as you probably did after past misfortunes. You may feel a sting, but it will not be as bad as you expect. This kind of misprediction may have a purpose: to motivate us to perform important acts (like buying new cars or getting rich) and to prevent us from taking certain unnecessary risks. And it is part of a more general problem: we humans are supposed to fool ourselves a little bit here and there. According to Trivers’s theory of self-deception, this is supposed to orient us favorably toward the future. But self-deception is not a desirable feature outside of its natural domain. It prevents us from taking some unnecessary risks – but we saw in Chapter 6 how it does not as readily cover a spate of modern risks that we do not fear because they are not vivid, such as investment risks, environmental dangers, or long-term security.

Helenus and the Reverse Prophecies

If you are in the business of being a seer, describing the future to other less-privileged mortals, you are judged on the merits of your predictions.

Helenus, in The Iliad, was a different kind of seer. The son of Priam and Hecuba, he was the cleverest man in the Trojan army. It was he who, under torture, told the Achaeans how they would capture Troy (apparently he didn’t predict that he himself would be captured). But this is not what distinguished him. Helenus, unlike other seers, was able to predict the past with great precision – without having been given any details of it. He predicted backward.

Our problem is not just that we do not know the future, we do not know much of the past either. We badly need someone like Helenus if we are to know history. Let us see how.

The Melting Ice Cube

Consider the following thought experiment borrowed from my friends Aaron Brown and Paul Wilmott:

Operation 1 (the melting ice cube): Imagine an ice cube and consider how it may melt over the next two hours while you play a few rounds of poker with your friends. Try to envision the shape of the resulting puddle.

Operation 2 (where did the water come from?): Consider a puddle of water on the floor. Now try to reconstruct in your mind’s eye the shape of the ice cube it may once have been. Note that the puddle may not have necessarily originated from an ice cube.

The second operation is harder. Helenus indeed had to have skills.

The difference between these two processes resides in the following. If you have the right models (and some time on your hands, and nothing better to do) you can predict with great precision how the ice cube will melt – this is a specific engineering problem devoid of complexity, easier than the one involving billiard balls. However, from the pool of water you can build infinite possible ice cubes, if there was in fact an ice cube there at all. The first direction, from the ice cube to the puddle, is called the forward process. The second direction, the backward process, is much, much more complicated. The forward process is generally used in physics and engineering; the backward process in nonrepeatable, nonexperimental historical approaches.
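Here is a toy version of that asymmetry, under assumptions that are entirely mine: a crude one-dimensional “heat flow” stands in for the melting, and the two initial shapes are invented. Run two very different “ice cubes” forward and they end in practically the same “puddle”; the puddle alone cannot tell you which cube you started from.

```python
# Forward is easy: apply the same deterministic smoothing rule to two different
# initial states. Backward is ill-posed: both end up in the same flat state.
import numpy as np

def diffuse(state, steps=20000, alpha=0.25):
    """Repeatedly average each cell with its neighbours (a crude heat flow)."""
    s = state.astype(float).copy()
    for _ in range(steps):
        s = s + alpha * (np.roll(s, 1) - 2 * s + np.roll(s, -1))
    return s

cube_a = np.zeros(100)
cube_a[10:20] = 1.0          # a small, tall block of "ice"
cube_b = np.zeros(100)
cube_b[60:90] = 1.0 / 3.0    # a wide, thin block with the same total mass

end_a, end_b = diffuse(cube_a), diffuse(cube_b)
print("gap between the two final states:", float(np.max(np.abs(end_a - end_b))))
```

The forward run is a routine calculation; recovering the original shape from the final state is not a calculation at all, because the information has been smoothed away.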

In a way, the limitations that prevent us from unfrying an egg also prevent us from reverse engineering history.

Now, let me increase the complexity of the forward-backward problem just a bit by assuming nonlinearity. Take what is generally called the “butterfly in India” paradigm from the discussion of Lorenz’s discovery in the previous chapter. As we have seen, a small input in a complex system can lead to nonrandom large results, depending on very special conditions. A single butterfly flapping its wings in New Delhi may be the certain cause of a hurricane in North Carolina, though the hurricane may take place a couple of years later. However, given the observation of a hurricane in North Carolina, it is dubious that you could figure out the causes with any precision: there are billions of billions of such small things as wing-flapping butterflies in Timbuktu or sneezing wild dogs in Australia that could have caused it. The process from the butterfly to the hurricane is greatly simpler than the reverse process from the hurricane to the potential butterfly.

Confusion between the two is disastrously widespread in common culture. This “butterfly in India” metaphor has fooled at least one filmmaker. For instance, Happenstance (a.k.a. The Beating of a Butterfly’s Wings), a French-language film by one Laurent Firode, was meant to encourage people to focus on small things that can change the course of their lives. Hey, since a small event (a petal falling on the ground and getting your attention) can lead to your choosing one person over another as a mate for life, you should focus on these very small details. Neither the filmmaker nor the critics realized that they were dealing with the backward process; there are trillions of such small things in the course of a simple day, and examining all of them lies outside of our reach.

Once Again, Incomplete Information

Take a personal computer. You can use a spreadsheet program to generate a random sequence, a succession of points we can call a history. How? The program responds to an equation of a nonlinear nature that produces numbers that seem random. The equation itself is very simple: if you know it, you can predict the sequence. It is almost impossible, however, for a human being to reverse engineer the equation from its output and predict further values. I am talking about a simple one-line computer program (called the “tent map”) generating a handful of data points, not about the billions of simultaneous events that constitute the real history of the world. In other words, even if history were a nonrandom series generated by some “equation of the world”, as long as reverse engineering such an equation does not seem within human possibility, it should be deemed random and not bear the name “deterministic chaos”. Historians should stay away from chaos theory and the difficulties of reverse engineering except to discuss general properties of the world and learn the limits of what they can’t know.
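For the curious, here is what such a one-line rule can look like; this is a sketch of a tent map with an arbitrary parameter and starting value of my choosing, not a reconstruction of any particular generator.

```python
# A deterministic one-line rule whose output looks random to anyone
# who does not know the rule.
def tent_map(x, mu=1.9999):
    """Stretch and fold the unit interval."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

x = 0.123456            # arbitrary starting value
series = []
for _ in range(20):
    x = tent_map(x)
    series.append(round(x, 4))
print(series)
# Knowing the rule, continuing the sequence is trivial; given only the numbers,
# recovering the rule (and hence the next value) is another matter entirely.
```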

This brings me to a greater problem with the historian’s craft. I will state the fundamental problem of practice as follows: while in theory randomness is an intrinsic property, in practice, randomness is incomplete information, what I called opacity in Chapter 1.

Nonpractitioners of randomness do not understand the subtlety. Often, in conferences when they hear me talk about uncertainty and randomness, philosophers, and sometimes mathematicians, bug me about the least relevant point, namely whether the randomness I address is “true randomness” or “deterministic chaos” that masquerades as randomness. A true random system is in fact random and does not have predictable properties. A chaotic system has entirely predictable properties, but they are hard to know. So my answer to them is twofold.

a) There is no functional difference in practice between the two since we will never get to make the distinction – the difference is mathematical, not practical. If I see a pregnant woman, the sex of her child is a purely random matter to me (a 50 percent chance for either sex) – but not to her doctor, who might have done an ultrasound. In practice, randomness is fundamentally incomplete information.

b) The mere fact that a person is talking about the difference implies that he has never made a meaningful decision under uncertainty – which is why he does not realize that they are indistinguishable in practice.

Randomness, in the end, is just unknowledge. The world is opaque and appearances fool us.

What They Call Knowledge

One final word on history.

History is like a museum where one can go to see the repository of the past, and taste the charm of olden days. It is a wonderful mirror in which we can see our own narratives. You can even track the past using DNA analyses. I am fond of literary history. Ancient history satisfies my desire to build my own self-narrative, my identity, to connect with my (complicated) Eastern Mediterranean roots. I even prefer the accounts of older, patently less accurate books to modern ones. Among the authors I’ve reread (the ultimate test of whether you like an author is if you’ve reread him) the following come to mind: Plutarch, Livy, Suetonius, Diodorus Siculus, Gibbon, Carlyle, Renan, and Michelet. These accounts are patently substandard, compared to today’s works; they are largely anecdotal, and full of myths. But I know this.

History is useful for the thrill of knowing the past, and for the narrative (indeed), provided it remains a harmless narrative. One should learn from it only with severe caution. History is certainly not a place to theorize or derive general knowledge, nor is it meant to help in the future, without some caution. We can get negative confirmation from history, which is invaluable, but we get plenty of illusions of knowledge along with it.

This brings me back once again to Menodotus and the treatment of the turkey problem and how not to be a sucker for the past. The empirical doctor’s approach to the problem of induction was to know history without theorizing from it. Learn to read history, get all the knowledge you can, do not frown on the anecdote, but do not draw any causal links, do not try to reverse engineer too much – but if you do, do not make big scientific claims. Remember that the empirical skeptics had respect for custom: they used it as a default, a basis for action, but not for more than that. This clean approach to the past they called epilogism.[39]

But most historians have another opinion. Consider the representative introspection What Is History? by Edward Hallett Carr. You will catch him explicitly pursuing causation as a central aspect of his job. You can even go higher up: Herodotus, deemed to be the father of the subject, defined his purpose in the opening of his work:

To preserve a memory of the deeds of the Greeks and barbarians, “and in particular, beyond everything else, to give a cause [emphasis mine] to their fighting one another”.

You see the same with all theoreticians of history, whether Ibn Khaldoun, Marx, or Hegel. The more we try to turn history into anything other than an enumeration of accounts to be enjoyed with minimal theorizing, the more we get into trouble. Are we so plagued with the narrative fallacy?[40]

We may have to wait for a generation of skeptical-empiricist historians capable of understanding the difference between a forward process and a reverse one.

Just as Popper attacked the historicists for making claims about the future, I have just presented the weakness of the historical approach in knowing the past itself.


After this discussion about future (and past) blindness, let us see what to do about it. Remarkably, there are extremely practical measures we can take. We will explore this next.

Chapter Thirteen: APELLES THE PAINTER, OR WHAT DO YOU DO IF YOU CANNOT PREDICT?[41]

You should charge people for advice – My two cents here – Nobody knows anything, but, at least, he knows it – Go to parties

ADVICE IS CHEAP, VERY CHEAP

It is not a good habit to stuff one’s text with quotations from prominent thinkers, except to make fun of them or provide a historical reference. They “make sense”, but well-sounding maxims force themselves on our gullibility and do not always stand up to empirical tests. So I chose the following statement by the überphilosopher Bertrand Russell precisely because I disagree with it.

The demand for certainty is one which is natural to man, but is nevertheless an intellectual vice. If you take your children for a picnic on a doubtful day, they will demand a dogmatic answer as to whether it will be fine or wet, and be disappointed in you when you cannot be sure. …

But so long as men are not trained [emphasis mine] to withhold judgment in the absence of evidence, they will be led astray by cocksure prophets … For the learning of every virtue there is an appropriate discipline, and for the learning of suspended judgment the best discipline is philosophy.

The reader may be surprised that I disagree. It is hard to disagree that the demand for certainty is an intellectual vice. It is hard to disagree that we can be led astray by some cocksure prophet. Where I beg to differ with the great man is that I do not believe in the track record of advice-giving “philosophy” in helping us deal with the problem; nor do I believe that virtues can be easily taught; nor do I urge people to strain in order to avoid making a judgment. Why? Because we have to deal with humans as humans. We cannot teach people to withhold judgment; judgments are embedded in the way we view objects. I do not see a “tree”; I see a pleasant or an ugly tree. It is not possible without great, paralyzing effort to strip these small values we attach to matters. Likewise, it is not possible to hold a situation in one’s head without some element of bias. Something in our dear human nature makes us want to believe; so what?

Philosophers since Aristotle have taught us that we are deep-thinking animals, and that we can learn by reasoning. It took a while to discover that we do effectively think, but that we more readily narrate backward in order to give ourselves the illusion of understanding, and give a cover to our past actions. The minute we forgot about this point, the “Enlightenment” came to drill it into our heads for a second time.

I’d rather degrade us humans to a level certainly above other known animals but not quite on a par with the ideal Olympian man who can absorb philosophical statements and act accordingly. Indeed, if philosophy were that effective, the self-help section of the local bookstore would be of some use in consoling souls experiencing pain – but it isn’t. We forget to philosophize when under strain.

I’ll end this section on prediction with the following two lessons, one very brief (for the small matters), one rather lengthy (for the large, important decisions).

Being a Fool in the Right Places

The lesson for the small is: be human! Accept that being human involves some amount of epistemic arrogance in running your affairs. Do not be ashamed of that. Do not try to always withhold judgment – opinions are the stuff of life. Do not try to avoid predicting – yes, after this diatribe about prediction I am not urging you to stop being a fool. Just be a fool in the right places.[42]

What you should avoid is unnecessary dependence on large-scale harmful predictions – those and only those. Avoid the big subjects that may hurt your future: be fooled in small matters, not in the large. Do not listen to economic forecasters or to predictors in social science (they are mere entertainers), but do make your own forecast for the picnic. By all means, demand certainty for the next picnic; but avoid government social-security forecasts for the year 2040.

Know how to rank beliefs not according to their plausibility but by the harm they may cause.

Be Prepared

The reader might feel queasy reading about these general failures to see the future and wonder what to do. But if you shed the idea of full predictability, there are plenty of things to do provided you remain conscious of their limits. Knowing that you cannot predict does not mean that you cannot benefit from unpredictability.

The bottom line: be prepared! Narrow-minded prediction has an analgesic or therapeutic effect. Be aware of the numbing effect of magic numbers. Be prepared for all relevant eventualities.

THE IDEA OF POSITIVE ACCIDENT

Recall the empirics, those members of the Greek school of empirical medicine. They considered that you should be open-minded in your medical diagnoses to let luck play a role. By luck, a patient might be cured, say, by eating some food that accidentally turns out to be the cure for his disease, so that the treatment can then be used on subsequent patients. The positive accident (like hypertension medicine producing side benefits that led to Viagra) was the empirics’ central method of medical discovery.

This same point can be generalized to life: maximize the serendipity around you.

Sextus Empiricus retold the story of Apelles the Painter, who, while doing a portrait of a horse, was attempting to depict the foam from the horse’s mouth. After trying very hard and making a mess, he gave up and, in irritation, took the sponge he used for cleaning his brush and threw it at the picture. Where the sponge hit, it left a perfect representation of the foam.

Trial and error means trying a lot. In The Blind Watchmaker, Richard Dawkins brilliantly illustrates this notion of the world without grand design, moving by small incremental random changes. Note a slight disagreement on my part that does not change the story by much: the world, rather, moves by large incremental random changes.

Indeed, we have psychological and intellectual difficulties with trial and error, and with accepting that series of small failures are necessary in life. My colleague Mark Spitznagel understood that we humans have a mental hang-up about failures: “You need to love to lose” was his motto. In fact, the reason I felt immediately at home in America is precisely because American culture encourages the process of failure, unlike the cultures of Europe and Asia where failure is met with stigma and embarrassment. America’s specialty is to take these small risks for the rest of the world, which explains this country’s disproportionate share in innovations. Once established, an idea or a product is later “perfected” over there.

Volatility and Risk of Black Swan

People are often ashamed of losses, so they engage in strategies that produce very little volatility but contain the risk of a large loss – like collecting nickels in front of steamrollers. In Japanese culture, which is ill-adapted to randomness and badly equipped to understand that bad performance can come from bad luck, losses can severely tarnish someone’s reputation. People hate volatility, thus engage in strategies exposed to blowups, leading to occasional suicides after a big loss.

Furthermore, this trade-off between volatility and risk can show up in careers that give the appearance of being stable, like jobs at IBM until the 1990s. When laid off, the employee faces a total void: he is no longer fit for anything else. The same holds for those in protected industries. On the other hand, consultants can have volatile earnings as their clients’ earnings go up and down, but face a lower risk of starvation, since their skills match demand – fluctuat nec mergitur (fluctuates but doesn’t sink). Likewise, dictatorships that do not appear volatile, like, say, Syria or Saudi Arabia, face a larger risk of chaos than, say, Italy, as the latter has been in a state of continual political turmoil since the Second World War. I learned about this problem from the finance industry, in which we see “conservative” bankers sitting on a pile of dynamite but fooling themselves because their operations seem dull and lacking in volatility.

Barbell Strategy

I am trying here to generalize to real life the notion of the “barbell” strategy I used as a trader, which is as follows. If you know that you are vulnerable to prediction errors, and if you accept that most “risk measures” are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills – as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios.[43] That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor”, the nest egg that you have in maximally safe investments. Or, equivalently, you can have a speculative portfolio and insure it (if possible) against losses of more than, say, 15 percent. You are “clipping” your incomputable risk, the one that is harmful to you. Instead of having medium risk, you have high risk on one side and no risk on the other. The average will be medium risk but constitutes a positive exposure to the Black Swan. More technically, this can be called a “convex” combination. Let us see how this can be implemented in all aspects of life.
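A back-of-the-envelope sketch of the financial version just described; every figure in it (the 90 percent allocation, the 4 percent return on the safe sleeve, the 20× payoff) is hypothetical and for illustration only, not a recommendation.

```python
# Split capital between a very safe sleeve and a small, maximally speculative
# sleeve; the worst case is known in advance, the best case is open-ended.
def barbell_outcome(capital, safe_fraction, safe_return, speculative_multiple):
    safe = capital * safe_fraction
    speculative = capital * (1 - safe_fraction)
    worst = safe * (1 + safe_return)                    # every bet goes to zero
    lucky = worst + speculative * speculative_multiple  # one bet pays off hugely
    return worst, lucky

worst, lucky = barbell_outcome(100_000, safe_fraction=0.90,
                               safe_return=0.04, speculative_multiple=20)
print(f"floor if every speculative bet fails: {worst:,.0f}")
print(f"outcome if one bet pays 20x: {lucky:,.0f}")
```

The point of the exercise is the shape of the outcomes, not the numbers: the downside is a known floor, the upside is left open to the positive Black Swan.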

“Nobody Knows Anything”

The legendary screenwriter William Goldman was said to have shouted “Nobody knows anything!” in relation to the prediction of movie sales. Now, the reader may wonder how someone as successful as Goldman can figure out what to do without making predictions. The answer stands perceived business logic on its head. He knew that he could not predict individual events, but he was well aware that the unpredictable, namely a movie turning into a blockbuster, would benefit him immensely.

So the second lesson is more aggressive: you can actually take advantage of the problem of prediction and epistemic arrogance! As a matter of fact, I suspect that the most successful businesses are precisely those that know how to work around inherent unpredictability and even exploit it.

Recall my discussion of the biotech company whose managers understood that the essence of research is in the unknown unknowns. Also, notice how they seized on the “corners”, those free lottery tickets in the world.

Here are the (modest) tricks. But note that the more modest they are, the more effective they will be.


a. First, make a distinction between positive contingencies and negative ones. Learn to distinguish between those human undertakings in which the lack of predictability can be (or has been) extremely beneficial and those where the failure to understand the future caused harm. There are both positive and negative Black Swans. William Goldman was involved in the movies, a positive-Black Swan business. Uncertainty did occasionally pay off there.

A negative-Black Swan business is one where the unexpected can hit hard and hurt severely. If you are in the military, in catastrophe insurance, or in homeland security, you face only downside. Likewise, as we saw in Chapter 7, if you are in banking and lending, surprise outcomes are likely to be negative for you. You lend, and in the best of circumstances you get your loan back – but you may lose all of your money if the borrower defaults. In the event that the borrower enjoys great financial success, he is not likely to offer you an additional dividend.

Aside from the movies, examples of positive-Black Swan businesses are: some segments of publishing, scientific research, and venture capital. In these businesses, you lose small to make big. You have little to lose per book and, for completely unexpected reasons, any given book might take off. The downside is small and easily controlled. The problem with publishers, of course, is that they regularly pay up for books, thus making their upside rather limited and their downside monstrous. (If you pay $10 million for a book, your Black Swan is it not being a bestseller.) Likewise, while technology can carry a great payoff, paying for the hyped-up story, as people did with the dot-com bubble, can make any upside limited and any downside huge. It is the venture capitalist who invested in a speculative company and sold his stake to unimaginative investors who is the beneficiary of the Black Swan, not the “me, too” investors.

In these businesses you are lucky if you don’t know anything – particularly if others don’t know anything either, but aren’t aware of it. And you fare best if you know where your ignorance lies, if you are the only one looking at the unread books, so to speak. This dovetails into the “barbell” strategy of taking maximum exposure to the positive Black Swans while remaining paranoid about the negative ones. For your exposure to the positive Black Swan, you do not need to have any precise understanding of the structure of uncertainty. I find it hard to explain that when you have a very limited loss you need to get as aggressive, as speculative, and sometimes as “unreasonable” as you can be.

Middlebrow thinkers sometimes make the analogy of such a strategy with that of collecting “lottery tickets”. It is plain wrong. First, lottery tickets do not have a scalable payoff; there is a known upper limit to what they can deliver. The ludic fallacy applies here – the scalability of real-life payoffs compared to lottery ones makes the payoff unlimited or of unknown limit. Second, the lottery tickets have known rules and laboratory-style well-presented possibilities; here we do not know the rules and can benefit from this additional uncertainty, since it cannot hurt you and can only benefit you.[44]


b. Don’t look for the precise and the local. Simply, do not be narrow-minded. The great discoverer Pasteur, who came up with the notion that chance favors the prepared, understood that you do not look for something particular every morning but work hard to let contingency enter your working life. As Yogi Berra, another great thinker, said, “You got to be very careful if you don’t know where you’re going, because you might not get there”.

Likewise, do not try to predict precise Black Swans – it tends to make you more vulnerable to the ones you did not predict. My friends Andy Marshall and Andrew Mays at the Department of Defense face the same problem. The impulse on the part of the military is to devote resources to predicting the next problems. These thinkers advocate the opposite: invest in preparedness, not in prediction.

Remember that infinite vigilance is just not possible.


c. Seize any opportunity, or anything that looks like opportunity. They are rare, much rarer than you think. Remember that positive Black Swans have a necessary first step: you need to be exposed to them. Many people do not realize that they are getting a lucky break in life when they get it. If a big publisher (or a big art dealer or a movie executive or a hotshot banker or a big thinker) suggests an appointment, cancel anything you have planned: you may never see such a window open up again. I am sometimes shocked at how little people realize that these opportunities do not grow on trees. Collect as many free nonlottery tickets (those with open-ended payoffs) as you can, and, once they start paying off, do not discard them. Work hard, not in grunt work, but in chasing such opportunities and maximizing exposure to them. This makes living in big cities invaluable because you increase the odds of serendipitous encounters – you gain exposure to the envelope of serendipity. The idea of settling in a rural area on grounds that one has good communications “in the age of the Internet” tunnels out of such sources of positive uncertainty. Diplomats understand that very well: casual chance discussions at cocktail parties usually lead to big breakthroughs – not dry correspondence or telephone conversations. Go to parties! If you’re a scientist, you will chance upon a remark that might spark new research. And if you are autistic, send your associates to these events.


d. Beware of precise plans by governments. As discussed in Chapter 10, let governments predict (it makes officials feel better about themselves and justifies their existence) but do not set much store by what they say. Remember that the interest of these civil servants is to survive and self-perpetuate – not to get to the truth. It does not mean that governments are useless, only that you need to keep a vigilant eye on their side effects. For instance, regulators in the banking business are prone to a severe expert problem and they tend to condone reckless but (hidden) risk taking. Andy Marshall and Andy Mays asked me if the private sector could do better in predicting. Alas, no. Once again, recall the story of banks hiding explosive risks in their portfolios. It is not a good idea to trust corporations with matters such as rare events because the performance of these executives is not observable on a short-term basis, and they will game the system by showing good performance so they can get their yearly bonus. The Achilles’ heel of capitalism is that if you make corporations compete, it is sometimes the one that is most exposed to the negative Black Swan that will appear to be the most fit for survival. Also recall from the footnote on Ferguson’s discovery in Chapter 1 that markets are not good predictors of wars. No one in particular is a good predictor of anything. Sorry.


e. “There are some people who, if they don’t already know, you can’t tell ’em”, as the great philosopher of uncertainty Yogi Berra once said. Do not waste your time trying to fight forecasters, stock analysts, economists, and social scientists, except to play pranks on them. They are remarkably easy to make fun of, and many get angry quite readily. It is ineffective to moan about unpredictability: people will continue to predict foolishly, especially if they are paid for it, and you cannot put an end to institutionalized frauds. If you ever do have to heed a forecast, keep in mind that its accuracy degrades rapidly as you extend it through time.

If you hear a “prominent” economist using the word equilibrium, or normal distribution, do not argue with him; just ignore him, or try to put a rat down his shirt.

The Great Asymmetry

All these recommendations have one point in common: asymmetry. Put yourself in situations where favorable consequences are much larger than unfavorable ones.

Indeed, the notion of asymmetric outcomes is the central idea of this book: I will never get to know the unknown since, by definition, it is unknown. However, I can always guess how it might affect me, and I should base my decisions around that.

This idea is often erroneously called Pascal’s wager, after the philosopher and (thinking) mathematician Blaise Pascal. He presented it something like this: I do not know whether God exists, but I know that I have nothing to gain from being an atheist if he does not exist, whereas I have plenty to lose if he does. Hence, this justifies my belief in God.

Pascal’s argument is severely flawed theologically: one has to be naïve enough to believe that God would not penalize us for false belief. Unless, of course, one is taking the quite restrictive view of a naïve God. (Bertrand Russell was reported to have claimed that God would need to have created fools for Pascal’s argument to work.)

But the idea behind Pascal’s wager has fundamental applications outside of theology. It stands the entire notion of knowledge on its head. It eliminates the need for us to understand the probabilities of a rare event (there are fundamental limits to our knowledge of these); rather, we can focus on the payoff and benefits of an event if it takes place. The probabilities of very rare events are not computable; the effect of an event on us is considerably easier to ascertain (the rarer the event, the fuzzier the odds). We can have a clear idea of the consequences of an event, even if we do not know how likely it is to occur. I don’t know the odds of an earthquake, but I can imagine how San Francisco might be affected by one. This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty. Much of my life is based on it.

You can build an overall theory of decision making on this idea. All you have to do is mitigate the consequences. As I said, if my portfolio is exposed to a market crash, the odds of which I can’t compute, all I have to do is buy insurance, or get out and invest the amounts I am not willing to ever lose in less risky securities.
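As a rough sketch of what mitigating the consequences can mean, here is a toy comparison (the floor, the premium, and the crash sizes are invented numbers, not advice). Notice that nowhere does the calculation require the probability of the crash, only its effect on the portfolio.

```python
# Compare an unprotected portfolio with one carrying an insurance-like floor.
def crash_exposure(portfolio, crash_drop, floor=None, premium=0.0):
    """Portfolio value after a crash, optionally protected by a floor."""
    value = portfolio * (1 - crash_drop)
    if floor is not None:
        value = max(value, portfolio * floor)   # the protection kicks in here
    return value - portfolio * premium          # pay for the protection up front

for drop in (0.2, 0.5, 0.8):                    # crash probabilities never appear
    naked = crash_exposure(1_000_000, drop)
    insured = crash_exposure(1_000_000, drop, floor=0.85, premium=0.02)
    print(f"drop {drop:.0%}: unprotected {naked:,.0f}, protected {insured:,.0f}")
```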

Effectively, if free markets have been successful, it is precisely because they allow the trial-and-error process I call “stochastic tinkering” on the part of competing individual operators who fall for the narrative fallacy – but are effectively collectively partaking of a grand project. We are increasingly learning to practice stochastic tinkering without knowing it – thanks to overconfident entrepreneurs, naïve investors, greedy investment bankers, and aggressive venture capitalists brought together by the free-market system. The next chapter shows why I am optimistic that the academy is losing its power and ability to put knowledge in straitjackets and that more out-of-the-box knowledge will be generated Wiki-style.


In the end we are being driven by history, all the while thinking that we are doing the driving.

I’ll sum up this long section on prediction by stating that we can easily narrow down the reasons we can’t figure out what’s going on. They are: a) epistemic arrogance and our corresponding future blindness; b) the Platonic notion of categories, or how people are fooled by reductions, particularly if they have an academic degree in an expert-free discipline; and, finally, c) flawed tools of inference, particularly the Black Swan-free tools from Mediocristan.

In the next section we will go deeper, much deeper, into these tools from Mediocristan, into the “plumbing”, so to speak. Some readers may see it as an appendix; others may consider it the heart of the book.
