CHAPTER 10 THE LAW OF ACCELERATING RETURNS APPLIED TO THE BRAIN

And though man should remain, in some respects, the higher creature, is not this in accordance with the practice of nature, which allows superiority in some things to animals which have, on the whole, been long surpassed? Has she not allowed the ant and the bee to retain superiority over man in the organization of their communities and social arrangements, the bird in traversing the air, the fish in swimming, the horse in strength and fleetness, and the dog in self-sacrifice?

Samuel Butler, 1871

There was a time, when the earth was to all appearance utterly destitute both of animal and vegetable life, and when according to the opinion of our best philosophers it was simply a hot round ball with a crust gradually cooling. Now if a human being had existed while the earth was in this state and had been allowed to see it as though it were some other world with which he had no concern, and if at the same time he were entirely ignorant of all physical science, would he not have pronounced it impossible that creatures possessed of anything like consciousness should be evolved from the seeming cinder which he was beholding? Would he not have denied that it contained any potentiality of consciousness? Yet in the course of time consciousness came. Is it not possible then that there may be even yet new channels dug out for consciousness, though we can detect no signs of them at present?

Samuel Butler, 1871

When we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be rash to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so.

Samuel Butler, 1871

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusk has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become?

Samuel Butler, 1871

My core thesis, which I call the law of accelerating returns (LOAR), is that fundamental measures of information technology follow predictable and exponential trajectories, belying the conventional wisdom that “you can’t predict the future.” There are still many things—which project, company, or technical standard will prevail in the marketplace, when peace will come to the Middle East—that remain unknowable, but the underlying price/performance and capacity of information technology have nonetheless proven to be remarkably predictable. Surprisingly, these trends are unperturbed by conditions such as war or peace and prosperity or recession.

A primary reason that evolution created brains was to predict the future. As one of our ancestors walked through the savannas thousands of years ago, she might have noticed that an animal was progressing toward a route she was taking. She would predict that if she stayed on course, their paths would intersect. Based on this, she would decide to head in another direction, and her foresight would prove valuable to her survival.

But such built-in predictors of the future are linear, not exponential, a quality that stems from the linear organization of the neocortex. Recall that the neocortex is constantly making predictions—what letter and word we will see next, whom we expect to see as we round the corner, and so on. The neocortex is organized as linear sequences of steps within each pattern, which means that exponential thinking does not come naturally to us. The cerebellum also makes linear predictions: when it helps us catch a fly ball, it is predicting where the ball will be in our visual field and where our gloved hand needs to be to catch it.

As I have pointed out, there is a dramatic difference between linear and exponential progressions (forty steps linearly gets you to forty, but forty steps exponentially gets you to a trillion), which accounts for why my predictions stemming from the law of accelerating returns seem surprising to many observers at first. We have to train ourselves to think exponentially. When it comes to information technologies, it is the right way to think.
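To make the parenthetical arithmetic concrete, here is a minimal Python sketch; the numbers are purely illustrative and are not drawn from any dataset in this chapter.

```python
# Illustrative arithmetic only: forty linear steps versus forty doublings.
linear_total = sum(1 for _ in range(40))   # 1 + 1 + ... + 1 = 40
exponential_total = 2 ** 40                # doubling at every step

print(f"forty linear steps: {linear_total}")
print(f"forty doublings:    {exponential_total:,}")   # about 1.1 trillion
```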

The quintessential example of the law of accelerating returns is the perfectly smooth, doubly exponential growth of the price/performance of computation, which has held steady for 110 years through two world wars, the Great Depression, the Cold War, the collapse of the Soviet Union, the reemergence of China, the recent financial crisis, and all of the other notable events of the late nineteenth, twentieth, and early twenty-first centuries. Some people refer to this phenomenon as “Moore’s law,” but that is a misconception. Moore’s law—which states that you can place twice as many components on an integrated circuit every two years, and they run faster because they are smaller—is just one paradigm among many. It was in fact the fifth, not the first, paradigm to bring exponential growth to the price/performance of computing.
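“Doubly exponential” means that the rate of exponential growth is itself slowly accelerating: the number of years needed for price/performance to double keeps shrinking. The toy model below illustrates that idea only; its parameters are invented for the sketch, not fitted to the data behind the graphs in this chapter.

```python
import math

# Toy "doubly exponential" model: the logarithm of price/performance grows
# as a quadratic in time, so the doubling time itself keeps shrinking.
# Parameters A, B, C are invented for illustration, not fitted to real data.
A, B, C = 1.0, 0.05, 0.0005

def log10_price_performance(year, base_year=1900):
    t = year - base_year
    return A + B * t + C * t * t

def doubling_time(year, base_year=1900):
    # ln(2) divided by the instantaneous growth rate of ln(price/performance)
    t = year - base_year
    return math.log(2) / (math.log(10) * (B + 2 * C * t))

for year in (1950, 1980, 2010):
    print(year, f"doubling time ~ {doubling_time(year):.1f} years")
```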

The exponential rise of computation started with the 1890 U.S. census (the first to be automated) using the first paradigm of electromechanical calculation, decades before Gordon Moore was even born. In The Singularity Is Near I provide this graph through 2002, and here I update it through 2009 (see the graph on page 257 titled “Exponential Growth of Computing for 110 Years”). The smoothly predictable trajectory has continued, even through the recent economic downturn.

Computation is the most important example of the law of accelerating returns, because of the amount of data we have for it, the ubiquity of computation, and its key role in ultimately revolutionizing everything we care about. But it is far from the only example. Once a technology becomes an information technology, it becomes subject to the LOAR.

Biomedicine is becoming the most significant recent area of technology and industry to be transformed in this way. Progress in medicine has historically been based on accidental discoveries, so progress during the earlier era was linear, not exponential. This has nevertheless been beneficial: Life expectancy has grown from twenty-three years as of a thousand years ago, to thirty-seven years as of two hundred years ago, to close to eighty years today. With the gathering of the software of life—the genome—medicine and human biology have become an information technology. The human genome project itself was perfectly exponential, with the amount of genetic data doubling and the cost per base pair coming down by half each year since the project was initiated in 1990.3 (All the graphs in this chapter have been updated since The Singularity Is Near was published.)
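As a rough illustration of what “coming down by half each year” implies, the snippet below works out the multiplier implied by a constant halving rate over a span of years. It assumes an exactly constant one-year halving time, which is an idealization of the trend shown in the graphs.

```python
def implied_cost_factor(years, halving_time=1.0):
    """Factor by which cost per base pair falls if it halves every
    `halving_time` years (an idealization of the sequencing trend)."""
    return 2 ** (years / halving_time)

# From the project's start in 1990 to 2012 is 22 years:
print(f"{implied_cost_factor(22):,.0f}x cheaper")   # roughly 4 million-fold
```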


The cost of sequencing a human-sized genome.1


The amount of genetic data sequenced in the world each year.2


We now have the ability to design biomedical interventions on computers and to test them on biological simulators, the scale and precision of which are also doubling every year. We can also update our own obsolete software: RNA interference can turn genes off, and new forms of gene therapy can add new genes, not just to a newborn but to a mature individual. The advance of genetic technologies also affects the brain reverse-engineering project, in that one important aspect of it is understanding how genes control brain functions such as creating new connections to reflect recently added cortical knowledge. There are many other manifestations of this integration of biology and information technology, as we move beyond genome sequencing to genome synthesizing.

Another information technology that has seen smooth exponential growth is our ability to communicate with one another and transmit vast repositories of human knowledge. There are many ways to measure this phenomenon. Cooper’s law, which states that the total bit capacity of wireless communications in a given amount of radio spectrum doubles every thirty months, has held true from the time Guglielmo Marconi used the wireless telegraph for Morse code transmissions in 1897 to today’s 4G communications technologies.4 In other words, the amount of information that can be transmitted over a given amount of radio spectrum has been doubling every two and a half years for more than a century. Another example is the number of bits per second transmitted on the Internet, which is doubling every one and a quarter years.5
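A quick back-of-the-envelope calculation shows how much a doubling every thirty months compounds over that span. The endpoints below are approximate (1897 to roughly 2012), so the result is only an order-of-magnitude sketch.

```python
# Rough consequence of Cooper's law: capacity doubles every 30 months.
# Endpoints are approximate (1897 Marconi transmissions to ~2012).
months = (2012 - 1897) * 12
doublings = months / 30
print(f"~{doublings:.0f} doublings, a factor of about {2 ** doublings:.1e}")
```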

The reason I became interested in trying to predict certain aspects of technology is that I realized about thirty years ago that the key to becoming successful as an inventor (a profession I adopted when I was five years old) was timing. Most inventions and inventors fail not because the gadgets themselves don’t work, but because their timing is wrong, appearing either before all of the enabling factors are in place or too late, having missed the window of opportunity.


The international (country-to-country) bandwidth dedicated to the Internet for the world.6


The highest bandwidth (speed) of the Internet backbone.7


Being an engineer, about three decades ago I started to gather data on measures of technology in different areas. When I began this effort, I did not expect that it would present a clear picture, but I did hope that it would provide some guidance and enable me to make educated guesses. My goal was—and still is—to time my own technology efforts so that they will be appropriate for the world that exists when I complete a project—which I realized would be very different from the world that existed when I started.

Consider how much and how quickly the world has changed only recently. Just a few years ago, people did not use social networks (Facebook, for example, was founded in 2004 and had 901 million monthly active users at the end of March 2012),8 wikis, blogs, or tweets. In the 1990s most people did not use search engines or cell phones. Imagine the world without them. That seems like ancient history but was not so long ago. The world will change even more dramatically in the near future.

In the course of my investigation, I made a startling discovery: If a technology is an information technology, the basic measures of price/performance and capacity (per unit of time or cost, or other resource) follow amazingly precise exponential trajectories.

These trajectories outrun the specific paradigms they are based on (such as Moore’s law). But when one paradigm runs out of steam (for example, when engineers could no longer reduce the size and cost of vacuum tubes in the 1950s), it creates research pressure to develop the next paradigm, and so another S-curve of progress begins.

The exponential portion of that next S-curve for the new paradigm then continues the ongoing exponential of the information technology measure. Thus vacuum tube–based computing in the 1950s gave way to transistors in the 1960s, and then to integrated circuits and Moore’s law in the late 1960s, and beyond. Moore’s law, in turn, will give way to three-dimensional computing, the early examples of which are already in place. The reason why information technologies are able to consistently transcend the limitations of any particular paradigm is that the resources required to compute or remember or transmit a bit of information are vanishingly small.
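One way to see how a chain of saturating S-curves can add up to a continuing exponential is with a toy model: each “paradigm” below is a logistic curve, each new paradigm has a much higher ceiling than the last, and the paradigms are staggered in time. The specific parameters are invented for illustration; the point is only that the combined capability climbs roughly linearly on a logarithmic scale.

```python
import math

def logistic(t, ceiling, midpoint, steepness=0.5):
    """A single paradigm: an S-curve that saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Five staggered paradigms; each ceiling is 100x the previous one.
paradigms = [(100 ** k, 20 * k) for k in range(1, 6)]

for t in range(0, 101, 10):
    total = sum(logistic(t, ceiling, midpoint) for ceiling, midpoint in paradigms)
    print(f"t={t:3d}  capability ~ {total:14.1f}  (log10 ~ {math.log10(total):.1f})")
```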

We might wonder, are there fundamental limits to our ability to compute and transmit information, regardless of paradigm? The answer is yes, based on our current understanding of the physics of computation. Those limits, however, are not very limiting. Ultimately we can expand our intelligence trillions-fold based on molecular computing. By my calculations, we will reach these limits late in this century.

It is important to point out that not every exponential phenomenon is an example of the law of accelerating returns. Some observers misconstrue the LOAR by citing exponential trends that are not information-based: For example, they point out, men’s shavers have gone from one blade to two to four, and then ask, where are the eight-blade shavers? Shavers are not (yet) an information technology.

In The Singularity Is Near, I provide a theoretical examination, including (in the appendix to that book) a mathematical treatment of why the LOAR is so remarkably predictable. Essentially, we always use the latest technology to create the next. Technologies build on themselves in an exponential manner, and this phenomenon is readily measurable if it involves an information technology. In 1990 we used the computers and other tools of that era to create the computers of 1991; in 2012 we are using current information tools to create the machines of 2013 and 2014. More broadly speaking, this acceleration and exponential growth applies to any process in which patterns of information evolve. So we see acceleration in the pace of biological evolution, and similar (but much faster) acceleration in technological evolution, which is itself an outgrowth of biological evolution.
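The feedback loop described above, using this year’s tools to build next year’s, can be captured in a one-line recurrence: if each generation of technology improves by an amount proportional to its own current capability, capability grows exponentially. This is a bare-bones sketch of that argument, not the formal treatment in the appendix to The Singularity Is Near; the starting value and feedback rate are arbitrary.

```python
# Minimal sketch: capability W grows by a fixed fraction of itself each
# generation because this generation's tools are used to build the next.
W, rate = 1.0, 0.5            # arbitrary starting capability and feedback rate
for generation in range(1, 21):
    W += rate * W             # W(t+1) = (1 + r) * W(t)  =>  W(t) = (1 + r)^t
print(f"after 20 generations: {W:,.0f}x the starting capability")
```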

I now have a public track record of more than a quarter of a century of predictions based on the law of accelerating returns, starting with those presented in The Age of Intelligent Machines, which I wrote in the mid-1980s. Examples of accurate predictions from that book include: the emergence in the mid- to late 1990s of a vast worldwide web of communications tying together people around the world to one another and to all human knowledge; a great wave of democratization emerging from this decentralized communication network, sweeping away the Soviet Union; the defeat of the world chess champion by 1998; and many others.

I described the law of accelerating returns, as it is applied to computation, extensively in The Age of Spiritual Machines, where I provided a century of data showing the doubly exponential progression of the price/performance of computation through 1998. It is updated through 2009 below.

I recently wrote a 146-page review of the predictions I made in The Age of Intelligent Machines, The Age of Spiritual Machines, and The Singularity Is Near. (You can read the essay by following the link in this endnote.)9 The Age of Spiritual Machines included hundreds of predictions for specific decades (2009, 2019, 2029, and 2099). For example, I made 147 predictions for 2009 in The Age of Spiritual Machines, which I wrote in the 1990s. Of these, 115 (78 percent) are entirely correct as of the end of 2009; the predictions concerned with basic measures of the capacity and price/performance of information technologies were particularly accurate. Another 12 (8 percent) are “essentially correct.” A total of 127 predictions (86 percent) are correct or essentially correct. (Since the predictions were made for specific decades, a prediction for 2009 was considered “essentially correct” if it came true in 2010 or 2011.) Another 17 (12 percent) are partially correct, and 3 (2 percent) are wrong.
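The percentages above are simple ratios against the 147 predictions made for 2009; for the record, the arithmetic is reproduced below.

```python
# Tallies are taken from the text above; percentages are rounded ratios of 147.
total = 147
tallies = {"entirely correct": 115, "essentially correct": 12,
           "partially correct": 17, "wrong": 3}
for label, count in tallies.items():
    print(f"{label:>19}: {count:3d}  ({count / total:.0%})")
print(f"correct or essentially correct: {115 + 12} ({(115 + 12) / total:.0%})")
```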


Calculations per second per (constant) thousand dollars of different computing devices.10


Floating-point operations per second of different supercomputers.11


Transistors per chip for different Intel processors.12


Bits per dollar for dynamic random access memory chips.13


Bits per dollar for random access memory chips.14


The average price per transistor in dollars.15


The total number of bits of random access memory shipped each year.16


Bits per dollar (in constant 2000 dollars) for magnetic data storage.17


Even the predictions that were “wrong” were not all wrong. For example, I judged my prediction that we would have self-driving cars to be wrong, even though Google has demonstrated self-driving cars, and even though in October 2010 four driverless electric vans successfully concluded a 13,000-kilometer test drive from Italy to China.18 Experts in the field currently predict that these technologies will be routinely available to consumers by the end of this decade.

Exponentially expanding computational and communication technologies all contribute to the project to understand and re-create the methods of the human brain. This effort is not a single organized project but rather the result of a great many diverse projects, including detailed modeling of constituents of the brain ranging from individual neurons to the entire neocortex, the mapping of the “connectome” (the neural connections in the brain), simulations of brain regions, and many others. All of these have been scaling up exponentially. Much of the evidence presented in this book has only become available recently—for example, the 2012 Wedeen study discussed in chapter 4 that showed the very orderly and “simple” (to quote the researchers) gridlike pattern of the connections in the neocortex. The researchers in that study acknowledge that their insight (and images) only became feasible as the result of new high-resolution imaging technology.

Brain scanning technologies are improving in both spatial and temporal resolution at an exponential rate. The brain scanning methods being pursued range from completely noninvasive techniques that can be used on humans to more invasive or destructive techniques used on animals.

MRI (magnetic resonance imaging), a noninvasive imaging technique with relatively high temporal resolution, has steadily improved at an exponential pace, to the point that spatial resolutions are now close to 100 microns (millionths of a meter).


A Venn diagram of brain imaging methods.19


Tools for imaging the brain.20


MRI spatial resolution in microns.21


Spatial resolution of destructive imaging techniques.22


Spatial resolution of nondestructive imaging techniques in animals.23


Destructive imaging, which is performed to collect the connectome (map of all interneuronal connections) in animal brains, has also improved at an exponential pace. Current maximum resolution is around four nanometers, which is sufficient to see individual connections.

Artificial intelligence technologies such as natural-language-understanding systems are not necessarily designed to emulate theorized principles of brain function, but rather for maximum effectiveness. Given this, it is notable that the techniques that have won out are consistent with the principles I have outlined in this book: self-organizing, hierarchical recognizers of invariant self-associative patterns with redundancy and up-and-down predictions. These systems are also scaling up exponentially, as Watson has demonstrated.

A primary purpose of understanding the brain is to expand our toolkit of techniques to create intelligent systems. Although many AI researchers may not fully appreciate this, they have already been deeply influenced by our knowledge of the principles of the operation of the brain. Understanding the brain also helps us to reverse brain dysfunctions of various kinds. There is, of course, another key goal of the project to reverse-engineer the brain: understanding who we are.
