For more than 500 years, the bulk of human knowledge and information has been stored as paper documents. You’ve got one in your hands right now (unless you’re reading this from the CD-ROM or a future on-line edition). Paper will be with us indefinitely, but its importance as a means of finding, preserving, and distributing information is already diminishing.
When you think of a “document” you probably visualize pieces of paper with something printed on them, but that is a narrow definition. A document can be any body of information. A newspaper article is a document, but the broadest definition also includes a television show, a song, or an interactive video game. Because all information can be stored in a digital form, documents will be easy to find, store, and send on the highway. Paper is harder to transmit and very limiting if the contents are more than text with drawings and images. Future digitally stored documents will include pictures, audio, programming instructions for interactivity, animation, or a combination of these and other elements.
On the information highway, rich electronic documents will be able to do things no piece of paper can. The highway’s powerful database technology will allow them to be indexed and retrieved using interactive exploration. It will be extremely cheap and easy to distribute them. In short, these new digital documents will replace many printed paper ones because they will be able to help us in new ways.
But not for quite some time. The paper-based book, magazine, or newspaper still has a lot of advantages over its digital counterpart. To read a digital document you need an information appliance such as a personal computer. A book is small, lightweight, high-resolution, and inexpensive compared to the cost of a computer. For at least a decade it won’t be as convenient to read a long, sequential document on a computer screen as on paper. The first digital documents to achieve widespread use will do so by offering new functionality rather than simply duplicating the older medium. A television set is also larger, more expensive, more cumbersome, and lower resolution than a book or magazine, but that hasn’t limited its popularity. Television brought video entertainment into our homes, and it was so compelling that television sets found their place alongside books and magazines.
Ultimately, incremental improvements in computer and screen technology will give us a lightweight, universal electronic book, or “e-book,” which will approximate today’s paper book. Inside a case roughly the same size and weight as today’s hardcover or paperback book, you’ll have a display that can show high-resolution text, pictures, and video. You’ll be able to flip pages with your finger or use voice commands to search for the passages you want. Any document on the network will be accessible from such a device.
The real point of electronic documents is not simply that we will read them on hardware devices. Going from paper book to e-book is just the final stage of a process already well under way. The exciting aspect of digital documentation is the redefinition of the document itself.
This will cause dramatic repercussions. We will have to rethink not only what is meant by the term “document,” but also by “author,” “publisher,” “office,” “classroom,” and “textbook.”
Today, if two companies are negotiating a contract, the first draft is probably typed into a computer, then printed on paper. Chances are it is then faxed to the other party, who edits, amends, and alters it by writing on the paper or by reentering the changed document into another computer, from which it is printed. He then faxes it back; the changes are incorporated; a new paper document is printed and faxed back again; and the editing process is repeated. During this transaction it is hard to tell who made which changes. Coordinating all the alterations and transmittals introduces a lot of overhead. Electronic documents can simplify this process by allowing a version of the contract to be passed back and forth, with corrections and annotations, and with indications of who made each change and when, displayed alongside the original text.
Within a few years the digital document, complete with authenticatable digital signatures, will be the original, and paper printouts will be secondary. Already many businesses are advancing beyond paper and fax machines and exchanging editable documents, computer to computer, through electronic mail. This book would have been much harder to write without e-mail. Readers whose opinions I was soliciting were sent drafts electronically, and it was helpful to be able to look at the suggested revisions and see who had made them and when.
By the end of the decade a significant percentage of documents, even in offices, won’t be fully printable on paper. They will be like a movie or a song today. You will still be able to print a two-dimensional view of their content, but that will be like reading a musical score instead of experiencing an audio recording.
Some documents are so superior in digital form that the paper version is rarely used. Boeing decided to design its new 777 jetliner using a gigantic electronic document to hold all the engineering information. To coordinate collaboration among the design teams, manufacturing groups and outside contractors during development of previous airplanes, Boeing had used blueprints and constructed an expensive full-scale mock-up of the airplane. The mock-up had been necessary to make sure that parts of the airplane, designed by different engineers, actually fit together properly. During development of the 777, Boeing did away with blueprints and the mock-up and from the start used an electronic document that contained digital 3-D models of all the parts and how they fit together. Engineers at computer terminals were able to look at the design and see different views of the content. They could track the progress in any area, search for interesting test results, annotate with cost information, and change any part of the design in ways that would be impossible on paper. Each person, working with the same data, was able to look for what specifically concerned him. Every change could be shared, and everyone could see who made any change, when it was made, and why. Boeing was able to save hundreds of thousands of pieces of paper and many person-years of drafting and copying by using digital documents.
Digital documents can also be faster to work with than paper. You can transmit information instantly and retrieve it almost as quickly. Those using digital documents are already discovering how much simpler it is to search and navigate through them quickly, because their content can be restructured so easily.
The organizational structure of a reservation book at a restaurant is by date and time. A 9:00 P.M. reservation is written farther down the page than an 8:00 P.M. reservation. Saturday-night dinner reservations follow those for Saturday lunch. A maître d’ or anyone else can rapidly find out who has a reservation on any date for any time because the book’s information is ordered that way. But if, for whatever reason, someone wants to extract information in another way, the simple chronology is useless.
Imagine the plight of a restaurant captain if I called to say, “My name is Gates. My wife made us a reservation for some time next month. Would you mind checking to see when it is?”
“I’m sorry, sir, do you know the date of the reservation?” the captain would be likely to ask.
“No, that’s what I’m trying to find out.”
“Would that have been on a weekend?” the captain asks.
He knows he’s going to be paging through the book by hand, and he’s hoping to reduce the task by narrowing down the dates in any way he can.
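The captain’s predicament is an indexing problem: the paper book is ordered only one way. A minimal sketch, with invented names and dates, of how an electronic reservation book can keep a second index by party name at almost no extra cost:

```python
# A hypothetical electronic reservation book. All names, dates, and
# party sizes below are invented for illustration.

reservations = [
    {"name": "Gates",   "date": "1995-06-17", "time": "20:00", "party": 2},
    {"name": "Ballmer", "date": "1995-06-17", "time": "21:00", "party": 4},
    {"name": "Gates",   "date": "1995-07-03", "time": "19:30", "party": 2},
]

# Index 1: by date, the ordering the paper book already has.
by_date = {}
for r in reservations:
    by_date.setdefault(r["date"], []).append(r)

# Index 2: by name, the query the paper book cannot answer quickly.
by_name = {}
for r in reservations:
    by_name.setdefault(r["name"], []).append(r)

# "My wife made us a reservation some time next month" becomes one lookup.
print([(r["date"], r["time"]) for r in by_name["Gates"]])
```

The same records answer either question instantly; no paging through pages by hand.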
A restaurant can use a paper-based reservation book because the total number of reservations isn’t large. An airline reservation system is not a book but a database containing an enormous quantity of information—flights, air fares, bookings, seat assignments, and billing information—for hundreds of flights a day worldwide. American Airlines’ SABRE reservation system stores the information—4.4 trillion bytes of it, which is more than 4 million million characters—on computer hard disks. If the information in the SABRE system were copied into a hypothetical paper reservation book, it would require more than 2 billion pages.
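The page-count comparison above is easy to check, assuming a printed reservation page holds roughly 2,000 characters (a figure I am assuming, not one from the text):

```python
# Rough check of the SABRE comparison. The characters-per-page
# figure is an assumption for illustration.
sabre_bytes = 4.4e12        # 4.4 trillion bytes of reservation data
chars_per_page = 2_000      # assumed capacity of one printed page
pages = sabre_bytes / chars_per_page
print(f"{pages:.1e} pages")  # on the order of 2 billion
```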
For as long as we’ve had paper documents or collections of documents, we have been ordering information linearly, with indexes, tables of contents, and cross-references of various kinds to provide alternate means of navigation. In most offices filing cabinets are organized by customer, vendor, or project in alphabetical order, but to speed access, often a duplicate set of correspondence is filed chronologically. Professional indexers add value to a book by building an alternative way to find information. And before library catalogs were computerized, new books were entered into the paper catalogs on several different cards so a reader could find a book by its title or any one of its authors or topics. This redundancy was to make information easier to find.
When I was young I loved my family’s 1960 World Book Encyclopedia. Its heavy bound volumes contained just text and pictures. They showed what Edison’s phonograph looked like, but didn’t let me listen to its scratchy sound. The encyclopedia had photographs of a fuzzy caterpillar changing into a butterfly, but there was no video to bring the transformation to life. It also would have been nice if it had quizzed me on what I had read, or if the information had always been up-to-date. Naturally I wasn’t aware of those drawbacks then. When I was eight, I began to read the first volume. I was determined to read straight through every volume. I could have absorbed more if it had been easy to read all the articles about the sixteenth century in sequence or all the articles pertaining to medicine. Instead I read about “Garter Snakes” then “Gary, Indiana,” then “Gas.” But I had a great time reading the encyclopedia anyway and kept at it for five years until I reached the Ps. Then I discovered the Encyclopaedia Britannica, with its greater sophistication and detail. I knew I would never have the patience to read all of it. Also, by then, satisfying my enthusiasm for computers was taking up most of my spare time.
Current print encyclopedias consist of nearly two dozen volumes, with millions of words of text and thousands of illustrations, and cost hundreds or thousands of dollars. That’s quite an investment, especially considering how rapidly the information gets out of date. Microsoft Encarta, which is outselling print and other multi-media encyclopedias, comes on a single 1-ounce CD-ROM (which stands for Compact Disc Read Only Memory). Encarta includes 26,000 topics with 9 million words of text, 8 hours of sounds, 7,000 photographs and illustrations, 800 maps, 250 interactive charts and tables, and 100 animations and video clips. It costs less than $100. If you want to know how the Egyptian “ud” (a musical instrument) sounds, hear the 1936 abdication speech of Great Britain’s King Edward VIII, or see an animation explaining how a machine works, the information’s all there—and no paper-based encyclopedia will ever have it.
Articles in a print encyclopedia often are followed by a list of articles on related subjects. To read them, you have to find the referenced article, which may be in another volume. With a CD-ROM encyclopedia all you have to do is click on the reference and the article will appear. On the information highway, encyclopedia articles will include links to related subjects—not just those covered in the encyclopedia, but those in other sources. There will be no limit to how much detail you will be able to explore on a subject that interests you. In fact, an encyclopedia on the highway will be more than just a specific reference work—it will be, like the library card catalog, a doorway to all knowledge.
Today, printed information is hard to locate. It’s almost impossible to find all the best information—including books, news articles, and film clips—on a specific topic. It is extremely time-consuming to assemble the information you can find. For example, if you wanted to read biographies of all the recent Nobel Prize laureates, compiling them could take an entire day. Electronic documents, however, will be interactive. Request a kind of information, and the document responds. Indicate that you’ve changed your mind, and the document responds again. Once you get used to this sort of system, you find that being able to look at information in different ways makes that information more valuable. The flexibility invites exploration, and the exploration is rewarded with discovery.
You’ll be able to get your daily news in a similar way. You’ll be able to specify how long you want your newscast to last. This will be possible because you’ll be able to have each of the news stories selected individually. The newscast assembled for and delivered only to you might include world news from NBC, the BBC, CNN, or the Los Angeles Times, with a weather report from a favorite local TV meteorologist—or from any private meteorologist who wanted to offer his or her own service. You will be able to request longer stories on the subjects that particularly interest you and just highlights on others. If, while you are watching the newscast, you want more than has been put together, you will easily be able to request more background or detail, either from another news broadcast or from file information.
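One way to picture how such a personalized newscast might be assembled: rank the available stories by how well they match the viewer’s stated interests, then add stories until the requested running time is filled. Everything below (story names, interest scores, running times) is invented for illustration; a real service would draw on many competing news sources.

```python
# A toy sketch of personalized-newscast assembly.

stories = [
    ("world news",     9, 6),   # (topic, interest score, minutes)
    ("local weather",  8, 2),
    ("sports scores",  3, 4),
    ("market report",  7, 3),
    ("celebrity item", 1, 5),
]

def assemble_newscast(stories, minutes_available):
    """Greedily fill the requested running time with the
    highest-interest stories first."""
    chosen, used = [], 0
    for topic, score, length in sorted(stories, key=lambda s: -s[1]):
        if used + length <= minutes_available:
            chosen.append(topic)
            used += length
    return chosen, used

lineup, total = assemble_newscast(stories, minutes_available=12)
print(lineup, f"{total} min")
```

A viewer who asked for a longer newscast, or gave different interest scores, would get a different lineup from the same pool of stories.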
Among all the types of paper documents, narrative fiction is one of the few that will not benefit from electronic organization. Almost every reference book has an index, but novels don’t because there is no need to be able to look something up in a novel. Novels are linear. Likewise, we’ll continue to watch most movies from start to finish. This isn’t a technological judgment—it is an artistic one: Their linearity is intrinsic to the storytelling process. New forms of interactive fiction are being invented that take advantage of the electronic world, but linear novels and movies will still be popular.
The highway will make it easy to distribute digital documents cheaply, whatever their form. Millions of people and companies will be creating documents and publishing them on the network. Some documents will be aimed at paying audiences and some will be free to anyone who wants to pay attention. Digital storage is fantastically inexpensive. Hard-disk drives in personal computers will soon cost about $0.15 for a megabyte (million bytes) of information. To put this in perspective, 1 megabyte will hold about 700 pages of text, so the cost is something like $0.00021 per page—about one two-hundredth what the local copy center would charge at $0.05 a page. And because there is the option of reusing the storage space for something else, the cost is actually the cost of storage per unit time—in other words, of renting the space. If we assume just a three-year average lifetime for the hard-disk drive, the amortized price per page per year is $0.00007. And storage is getting cheaper all the time. Hard-disk prices have been dropping by about 50 percent per year for the last several years.
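The storage arithmetic in the paragraph above can be reproduced directly, using the same figures the text quotes:

```python
# Per-page storage cost, using the figures quoted in the text.
cost_per_megabyte = 0.15       # dollars, projected hard-disk price
pages_per_megabyte = 700       # plain text

cost_per_page = cost_per_megabyte / pages_per_megabyte
print(f"${cost_per_page:.5f} per page")            # about $0.00021

copy_center = 0.05                                 # dollars per photocopied page
print(f"{copy_center / cost_per_page:.0f}x")       # consistent with "one two-hundredth"

drive_lifetime_years = 3                           # assumed average drive life
per_page_per_year = cost_per_page / drive_lifetime_years
print(f"${per_page_per_year:.5f} per page per year")  # about $0.00007
```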
Text is particularly easy to store because it is very compact in digital form. The old saying that a picture is worth a thousand words is more than true in the digital world. High-quality photographic images take more space than text, and video (which you can think of as a sequence of up to thirty new images appearing every second) takes even more. Nevertheless, the cost of distribution for these kinds of data is still quite low. A feature film takes up about 4 gigabytes (4,000 megabytes) in compressed digital format, which at $0.15 a megabyte is about $600 worth of hard-disk space.
Six hundred dollars to store a single film doesn’t sound low-cost. However, consider that the typical local video-rental store usually buys at least eight copies of a hot new movie for about $80 a copy. With these eight copies the store can supply only eight customers per day.
Once the disk and the computer that manages it are connected to the highway, only one copy of the information will be necessary for everyone to have access. The most popular documents will have copies made on different servers to avoid delays when an unusual number of users want access. With one investment, roughly what a single shop today spends on copies of a popular videotape title, a disk-based server will be able to serve thousands of customers simultaneously. The extra cost for each user is simply the cost of using the disk storage for a short period of time plus the communications charge, and both are becoming extremely cheap. So the extra per-user cost will be nearly zero.
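The two distribution models can be compared directly with the figures quoted above, treating them all as rough 1990s numbers:

```python
# Rough comparison of digital vs. physical film distribution,
# using the figures quoted in the text (1990s dollars).

# Digital: one copy of a compressed film on a server's hard disk.
film_megabytes = 4_000
disk_cost_per_mb = 0.15
server_copy = film_megabytes * disk_cost_per_mb
print(f"server copy: ${server_copy:.0f}")    # $600 of disk space

# Physical: a video store stocks multiple tapes of a hot title.
tapes, cost_per_tape = 8, 80
store_stock = tapes * cost_per_tape
print(f"store stock: ${store_stock:.0f}")    # $640 for eight tapes

# The store's stock serves at most eight customers a day; the server's
# single copy can serve thousands simultaneously, so the marginal cost
# per viewer approaches zero.
```

The up-front investments are comparable; what differs by orders of magnitude is how many customers each investment can serve.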
This doesn’t mean that information will be free, but the cost of distributing it will be very small. When you buy a paper book, a good portion of your money pays for the cost of producing and distributing it, rather than for the author’s work. Trees have to be cut down, ground into pulp, and turned into paper. The book must be printed and bound. Most publishers invest capital in a first printing that reflects the largest number of copies they think will sell right away, because the printing technology is efficient only if lots of books are made at once. The capital tied up in this inventory is a financial risk for the publishers: They may never sell all the copies, and even if they do, it will take a while to sell them all. Meanwhile, the publisher has to store the books and ship them to wholesalers and ultimately to retail bookstores. Those folks also invest capital in their inventory and expect a financial return from it.
By the time the consumer selects the book and the cash register rings, the profit for the author can be a pretty small piece of the pie compared to the money that goes to the physical aspect of delivering information on processed wood pulp. I like to call this the “friction” of distribution, because it holds back variety and diverts money away from the author to other people.
The information highway will be largely friction free, a theme I will explore further in chapter 8. This lack of friction in information distribution is incredibly important. It will empower more authors, because very little of the customer’s dollar will be used to pay for distribution.
Gutenberg’s invention of the printing press brought about the first real shift in distribution friction—it allowed information on any subject to be distributed quickly and relatively cheaply. The printing press created a mass medium because it offered low-friction duplication. The proliferation of books motivated the general public to read and write, but once people had the skills there were many other things that could be done with the written word. Businesses could keep track of inventory and write contracts. Lovers could exchange letters. Individuals could keep notes and diaries. By themselves these applications were not sufficiently compelling to get large numbers of people to make the effort to learn to read and write. Until there was a real reason to create an “installed base” of literate people, the written word wasn’t really useful as a means for storing information. Books gave literacy critical mass, so you can almost say that the printing press taught us to read.
The printing press made it easy to make lots of copies of a document, but what about something written for a few people? New technology was required for small-scale publishing. Carbon paper was fine if you wanted just one or two more copies. Mimeographs and other messy machines could make dozens, but to use any of these processes you had to have planned for them when you prepared your original document.
In the 1930s, Chester Carlson, frustrated by how difficult it was to prepare patent applications (which involved copying drawings and text by hand), set out to invent a better way to duplicate information in small quantities. What he came up with was a process he called “xerography” when he patented it in 1940. In 1959, the company he had hooked up with—later known as Xerox—released its first successful production-line copier. The 914 copier, by making it possible to reproduce modest numbers of documents easily and inexpensively, set off an explosion in the kinds and amount of information distributed to small groups. Market research had projected that Xerox would sell at most 3,000 of its first copier model. It actually placed about 200,000. A year after the copier was introduced, 50 million copies a month were being made. By 1986, more than 200 billion copies were being made each month, and the number has been rising ever since. Most of these copies would never be made if the technology weren’t so cheap and easy.
The photocopier and its later cousin, the desktop laser printer—along with PC desktop publishing software—facilitated newsletters, memos, maps to parties, flyers, and other documents intended for modest-sized audiences. Carlson was another who reduced the distribution friction of information. The wild success of his copier demonstrates that amazing things happen once you reduce distribution friction.
Of course, it’s easier to make copies of a document than it is to make it worth reading. There is no intrinsic limit to the number of books that can be published in a given year. A typical bookstore has 10,000 different titles, and some of the new superstores might carry 100,000. Only a small fraction, under 10 percent, of all trade books published make money for their publishers, but some succeed beyond anybody’s wildest expectations.
My favorite recent example is A Brief History of Time, by Stephen W. Hawking, a brilliant scientist who has amyotrophic lateral sclerosis (Lou Gehrig’s disease), which confines him to a wheelchair and allows him to communicate only with great difficulty. What are the odds that his treatise on the origins of the universe would have been published if there were only a handful of publishers and each of them could produce only a few books a year? Suppose an editor had one spot left on his list and had to choose between publishing Hawking’s book and Madonna’s Sex. The obvious bet would be Madonna’s book, because it would likely sell a million copies. It did. But Hawking’s book sold 5.5 million copies and is still selling.
Every now and then this sort of sleeper best-seller surprises everyone (but the author). A book I enjoyed greatly, The Bridges of Madison County, was the first published novel by a business-school teacher of communications. It wasn’t positioned by the publisher to be a bestseller, but nobody really knows what will appeal to the public’s taste. Trying to outguess a market decision, as central planning always must, is fundamentally a losing proposition. There are almost always a couple of books on The New York Times best-seller list that have bubbled up from nowhere, because books cost so relatively little to publish—compared to other media—that publishers can afford to give them a chance.
Costs are much higher in broadcast television or movies, so it’s tougher to try something risky. In the early days of TV there were only a few stations in each geographic area and most programming was targeted for the broadest possible audience.
Cable television increased the number of programming choices, although it wasn’t started with that intention. It began in the late 1940s as a way of providing better television reception to outlying areas. Viewers whose broadcast reception was blocked by hills erected community antennas to feed local cable systems. No one then imagined that communities with perfectly good broadcast television reception would pay to have cable so they could watch a steady stream of music videos or channels that offered nothing but news or weather twenty-four hours a day.
When the number of stations carried went from three or five to twenty-four or thirty-six, the programming dynamic changed. If you were in charge of programming for the thirtieth channel, you wouldn’t attract much of an audience if you just tried to imitate channels 1 through 29. Instead, cable channel programmers were forced to specialize. Like special-interest magazines and newsletters, these new channels attract viewers by appealing to strong interests held by a relatively smaller number of enthusiasts. This is in contrast to general programming, which tries to provide something for everyone. But the costs of production and the small number of channels still limit the number of television programs produced.
Although it costs far less to publish a book than to broadcast a TV show, it’s still a lot compared to the cost involved in electronic publishing. To get a book into print a publisher has to agree to pay the up-front expense of manufacturing, distribution, and marketing. The information highway will create a medium with entry barriers lower than any we have ever seen. The Internet is the greatest self-publishing vehicle ever. Its bulletin boards have demonstrated some of the changes that will occur when everyone has access to low-friction distribution and individuals can post messages, images, or software of their own creation.
Bulletin boards have contributed a lot to the popularity of the Internet. To be published there all you have to do is type your thoughts and post them someplace. This means that there is a lot of garbage on the Internet, but also a few gems. A typical message is only a page or two long. A single message posted on a popular bulletin board or sent to a mailing list might reach and engage millions of people. Or it might sit there and languish with no impact whatsoever. The reason anyone is willing to risk the latter eventuality is the low distribution friction. The network bandwidth is so great and the other factors that contribute to the cost are so low that nobody thinks about the cost of sending messages. At worst you might be a bit embarrassed if your message just sits there and nobody responds to it. On the other hand, if your message is popular, a lot of people will see it, forward it as e-mail to their friends, and post their own comments on it.
It is amazingly fast and inexpensive to communicate with bulletin boards. Mail or telephone communications are fine for a one-on-one discussion, but they are also pretty expensive if you are trying to communicate with a group. It costs nearly a dollar to print and mail a letter and on average about that much for a long-distance phone call. And to make such a call you have to know the number and have coordinated a time to talk. So it takes considerable time and effort to contact even a modest-size group. On a bulletin board all you have to do is type your message in once and it’s available to everyone.
Bulletin boards on the Internet cover a wide range of topics. Some postings are not serious. Somebody will send a message with something humorous in it to a mailing list or post it somewhere. If it seems funny enough, it starts being forwarded as e-mail. In late 1994 this happened with a phony press release about Microsoft buying the Catholic Church. Thousands of copies were distributed inside Microsoft on our e-mail system. I was sent more than twenty copies as various friends and colleagues inside and outside the company chose to forward them.
There are many more serious examples of the networks’ being used to mobilize those who share a common concern or interest. During the recent political conflict in Russia, both sides were able to contact people throughout the world through postings on electronic bulletin boards. The networks let you contact people you have never met or heard from who happen to share an interest.
Information published by electronic posting is grouped by topic. Each bulletin board or newsgroup has a name, and anyone interested can “hang out” there. There are lists of interesting newsgroups or you can browse names that sound interesting. If you wanted to communicate about paranormal phenomena, you would go to the newsgroup alt.paranormal. If you wanted to discuss that sort of thing with others who don’t believe in it, you would go to sci.skeptic. Or you could connect to copernicus.bbn.com and look in National School Network Testbed for a set of lesson plans used by kindergarten through twelfth-grade teachers. Almost any topic you can name has a group communicating about it on the network.
We have seen that Gutenberg’s invention started mass publishing, but the literacy it engendered ultimately led to a great deal more person-to-person correspondence. Electronic communication developed the other way around. It started out as electronic mail, a way to communicate to small groups. Now millions of people are taking advantage of the networks’ low-friction distribution to communicate on a wide scale via various forms of posting.
The Internet has enormous potential, but it’s important for its continuing credibility that expectations aren’t cranked too high. The total number of users of the Internet, and of commercial on-line services such as Prodigy, CompuServe, and America Online, is still a very small portion of the population. Surveys indicate that nearly 50 percent of all PC users in the United States have a modem, but fewer than about 10 percent of those users subscribe to an on-line service. And the attrition rate is very high—many subscribers drop off after less than a year.
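Those survey figures imply that only a small percentage of all PC users are actually on-line:

```python
# Combining the two survey figures quoted in the text.
modem_share = 0.50       # share of U.S. PC users with a modem
subscriber_share = 0.10  # of modem owners, share subscribing to a service
online_share = modem_share * subscriber_share
print(f"{online_share:.0%} of PC users")  # about 5%
```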
Significant investments will be required to develop great on-line content that will delight and excite PC users and raise the number on-line from 10 percent up to 50 percent, or even the 90 percent I believe it will become. Part of the reason this sort of investment isn’t happening today is that simple mechanisms for authors and publishers to charge their users or to be paid by advertisers are just being developed.
Commercial on-line services collect revenue, but they have been paying information providers royalties of only 10 percent to 30 percent of what customers pay. Although the provider probably knows the customers and market better, pricing—the way the customer is charged—and marketing are both controlled by the service. The resulting revenue stream is simply not large enough to encourage the information providers to create exciting new on-line information.
Over the next several years the evolution of on-line services will solve these problems and create an incentive for suppliers to furnish great material. There will be new billing options: monthly subscriptions, hourly rates, charges per item accessed, and advertising payments, so that more revenue flows to the information providers. Once that happens a successful new mass medium will come into existence. This might take several years and a new generation of network technology, such as ISDN and cable modems, but one way or another it will happen. When it does, it will open tremendous opportunities for authors, editors, directors—every creator of intellectual property.
Whenever a new medium is created, the first content offered is brought over from other media. But to take best advantage of the capabilities of the electronic medium, content needs to be specially authored with it in mind. So far the vast majority of content on-line has been “dumped” from another source. Magazine or newspaper publishers are taking text already created for paper editions and simply shoving it on-line, often minus the pictures, charts, and graphics. Plain-text bulletin boards and e-mail are interesting but cannot really compete with the richer forms of information in our lives. On-line content should include lots of graphics, photos, and links to related information. As communications get faster and the commercial opportunity becomes clear, more audio and video elements will be included.
The development of CD-ROMs—multi-media versions of audio compact discs—provides some lessons that can be applied to the creation of on-line content. CD-ROM-based multi-media titles can integrate different types of information—text, graphics, photographic images, animation, music, and video—into a single document. Much of these titles’ value today is in the “multi,” not in the “media.” They are the best approximations of what the rich documents of the future will be like.
The music and audio on CD-ROMs are clear, but rarely as good as on a music CD. You could store CD-quality sound on a CD-ROM, but the format it uses is very bulky, so if you stored too much CD-quality sound, you wouldn’t have room for data, graphics, and other material.
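The trade-off described above is simple arithmetic. A rough back-of-envelope sketch, using the standard CD-audio parameters of the era (44.1 kHz, 16-bit stereo) and a typical 650-megabyte disc; the specific figures are assumptions drawn from those standards, not from the text:

```python
# Back-of-envelope: how quickly uncompressed CD-quality audio fills a CD-ROM.
SAMPLE_RATE = 44_100        # samples per second (CD audio standard)
BITS_PER_SAMPLE = 16
CHANNELS = 2                # stereo

bytes_per_second = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS // 8
disc_capacity = 650 * 1024 * 1024   # a typical mid-1990s CD-ROM, in bytes

minutes_of_audio = disc_capacity / bytes_per_second / 60
# Roughly an hour of CD-quality sound consumes the entire disc,
# leaving no room for data, graphics, or video.
print(f"{bytes_per_second:,} bytes/sec -> {minutes_of_audio:.0f} minutes fills the disc")
```

The point of the calculation is that at well over 170 kilobytes per second, uncompressed audio crowds out everything else a multi-media title needs.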
Motion video on CD-ROMs still needs improving. If you compare the quality of video a PC can display today with the postage-stamp-size displays of just a few years ago, the progress is amazing. Longtime computer users got very excited when they first encountered video on their computers. On the other hand, the grainy, jerky image is certainly no better than a 1950s television picture. The size and quality of images will improve with faster processors and better compression, and eventually will become far better than today’s television picture.
CD-ROM technology has enabled a new category of applications. Shopping catalogs, museum tours, and textbooks are being republished in this new, appealing form. Every subject is being covered. Competition and technology will bring rapid improvements in the quality of the titles. CD-ROMs will be replaced by a new high-capacity disc that will look like today’s CD but will hold ten times as much data. The additional capacity of these extended CDs will allow for more than two hours of digital video on a single disc, which means they’ll be capable of holding a whole movie. The picture and sound quality will be much higher than those of the best TV signal you can receive on a home set, and new generations of graphics chips will allow multi-media titles to include Hollywood-quality special effects under the interactive control of the user.
Multi-media CD-ROMs are popular today because they offer users interactivity rather than because they have imitated TV. The commercial appeal of interactivity has already been demonstrated by the popularity of CD-ROM games such as Brøderbund’s Myst and Virgin Interactive Entertainment’s Seventh Guest, which are whodunits, a blending of narrative fiction and a series of puzzles that allow a player to investigate a mystery, collecting clues in any order.
The success of these games has encouraged authors to begin to create interactive novels and movies in which they introduce the characters and the general outline of the plot, then the reader/player makes decisions that change the outcome of the story. No one suggests that every book or movie should allow the reader or viewer to influence its outcome. A good story that makes you just want to sit there for a few hours and enjoy it is wonderful entertainment. I don’t want to choose an ending for The Great Gatsby or La Dolce Vita. F. Scott Fitzgerald and Federico Fellini have done that for me. The suspension of disbelief essential to the enjoyment of great fiction is fragile and may not hold up under the heavy-handed use of interactivity. You can’t simultaneously control the plot and surrender your imagination to it. Interactive fiction is as similar to and different from the older forms as poetry is similar to and different from drama.
There will be interactive stories and games available on the network too. Such applications can share content with CD-ROMs, but at least for a while the software will have to be carefully prepared so that titles designed for CD-ROM won't be slow when used on a network. This is because, as discussed earlier, the bandwidth of a CD-ROM—the speed at which bits are transferred from the disc to the computer—is far greater than the bandwidth of the existing telephone network. Over time, the networks will meet—then exceed—the speed of the CD-ROM. And when that happens, the content being created for the two forms will be the same. But this will take a number of years, because improvements are also being made in CD-ROM technology. In the meantime the difference in bit rates will keep the two forms distinct enough that they will remain separate technologies.
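The size of that bandwidth gap can be illustrated with period figures. The numbers below are assumptions typical of the mid-1990s (a single-speed CD-ROM drive and a 28.8 kbps modem), not figures from the text:

```python
# Rough comparison of CD-ROM transfer rate vs. the telephone network,
# using illustrative mid-1990s figures.
cdrom_rate = 150 * 1024     # single-speed CD-ROM drive: 150 KB/s, in bytes/sec
modem_rate = 28_800 // 8    # 28.8 kbps modem, in bytes/sec

ratio = cdrom_rate / modem_rate
# A CD-ROM delivers bits dozens of times faster than a phone-line modem,
# which is why titles built for disc feel sluggish over a network.
print(f"CD-ROM delivers roughly {ratio:.0f}x the modem's bits per second")
```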
The technologies underlying the CD-ROM and on-line services have improved dramatically, but very few computer users are creating multi-media documents yet. Too much effort is still required. Millions of people have camcorders and make videos of their kids or their vacations. However, to edit video you have to be a professional with expensive equipment. This will change. Advances in PC word processors and desktop-publishing software have already made professional-quality tools for creating simple paper documents available relatively inexpensively to millions. Desktop-publishing software has progressed to the point that many magazines and newspapers are produced with the same sort of PC and software package you can buy at any local computer store and use to design an invitation to your daughter’s birthday party. PC software for editing film and creating special effects will become as commonplace as desktop-publishing software. Then the difference between professionals and amateurs will be one of talent rather than access to tools.
Georges Méliès created one of the first special effects in movies when, in 1899, he turned a woman into feathers on the screen in The Conjurer, and moviemakers have been playing cinematic tricks ever since. Recently, special-effects technology has improved dramatically through the use of the digital manipulation of images. First a photograph is converted into binary information, which, as we have seen, software applications are able to manipulate easily. Then the digital information is altered and finally returned to photographic form, as a frame in a movie. The alterations are nearly undetectable if well done, and the results can be spectacular. Computer software gave life to the dinosaurs in Jurassic Park, the thundering wildebeest herd in The Lion King, and the crazy cartoon effects in The Mask. As Moore’s Law increases hardware speed, and software becomes increasingly sophisticated, there is virtually no limit to what can be achieved. Hollywood will continue to push the state of the art and create amazing new effects.
It will be possible for a software program to fabricate scenes that will look as real as anything created with a camera. Audiences watching Forrest Gump could recognize that the scenes with Presidents Kennedy, Johnson, and Nixon were fabricated. Everyone knew Tom Hanks hadn’t really been there. It was a lot harder to spot the digital processing that removed Gary Sinise’s two good legs for his role as an amputee. Synthesized figures and digital editing are being used to make movie stunts safer. You’ll soon be able to create such effects yourself with a standard PC and off-the-shelf software. The ease with which PCs and photo-editing software already manipulate complex images will make it easy to counterfeit photographic documents or alter photographs undetectably. And as synthesis gets cheaper it will be used more and more; if we can bring Tyrannosaurus rex back to life, can Elvis be far behind?
Even those who don’t aspire to becoming the next C. B. DeMille or Lina Wertmuller will routinely include multi-media in the documents they construct every day. Someone might start by typing, handwriting, or speaking an electronic mail message: “Lunch in the park may not be such a great idea. Look at the forecast.” To make the message more informative, he could then point his cursor at an icon representing a local television weather forecast and drag it across his screen to move the icon inside his document. When his friends get the message, they will be able to look at the forecast right on their screens—a professional-looking communication.
Kids in school will be able to produce their own albums or movies and make them available to friends and family on the information highway. When I have time, I enjoy making special greeting cards and invitations. If I’m making a birthday card for my sister, for instance, to personalize it I sometimes add pictures reminding her of fun events of the past year. In the future I’ll be able to include movie clips that I’ve customized with only a few minutes’ work. It will be simple to create an interactive “album” of photographs, videos, or conversations. Businesses of all types and sizes will communicate using multi-media. Lovers will use special effects to blend some text, a video clip from an old movie, and a favorite song to create a personal valentine.
As the fidelity of visual and audio elements improves, reality in all its aspects will be more closely simulated. This “virtual reality,” or VR, will allow us to “go” places and “do” things we never would be able to otherwise.
Vehicle simulators for airplanes, race cars, and spacecraft already provide a taste of virtual reality. Some of the most popular rides at Disneyland are simulated voyages. Software vehicle simulators, such as Microsoft Flight Simulator, are among the most popular games ever created for PCs, but they force you to use your imagination. Multimillion-dollar flight simulators at companies such as Boeing give you a much better ride. Viewed from the outside, they’re boxy, stilt-legged mechanical creatures that would look at home in a Star Wars movie. Inside, the cockpit video displays offer sophisticated data. Flight and maintenance instruments are linked to a computer that simulates flight characteristics—including emergencies—with an accuracy pilots say is remarkable.
A couple of friends and I “flew” a 747 simulator a couple of years ago. You sit down at a control panel in a cockpit identical to one in a real plane. Outside the windows, you see computer-generated color video images. When you “take off” in the simulator, you see an identifiable airport and its surroundings. The simulation of Boeing Field, for instance, might show a fuel truck on the runway and Mount Rainier in the distance. You hear the rush of air around wings that aren’t there, the clunk of nonexistent landing gear retracting. Six hydraulic systems under the simulator tilt and shake the cockpit. It’s pretty convincing.
The main purpose of these simulators is to give pilots a chance to gain experience in handling emergencies. When I was using the simulator my friends decided to give me a surprise by having a small plane fly by. While I sat in the pilot’s seat the all-too-real-looking image of a Cessna flashed into view. I wasn’t prepared for the “emergency” and I crashed into it.
A number of companies, from entertainment giants to small start-ups, are planning to put smaller-scale simulator rides into shopping malls and urban sites. As the price of technology comes down, entertainment simulators may become as common as movie theaters are today. And it won’t be too many years until you’ll be able to have a high-quality simulation in your own living room.
Want to explore the surface of Mars? It’s a lot safer to do it via VR. How about visiting somewhere humans never will be able to go? A cardiologist might be able to swim through the heart of a patient to examine it in a way she never would have been able to with conventional instrumentation. A surgeon could practice a tricky operation many times, including simulated catastrophes, before she ever touches a scalpel to a real patient. Or you could use VR to wander through a fantasy of your own design.
In order to work, VR needs two different sets of technology—software that creates the scene and makes it respond to new information, and devices that allow the computer to transmit the information to our senses. The software will have to figure out how to describe the look, sound, and feel of the artificial world down to the smallest detail. That might sound overwhelmingly difficult, but actually it’s the easy part. We could write the software for VR today, but we need a lot more computer power to make it truly believable. At the pace technology is moving, though, that power will be available soon. The really hard part about VR is getting the information to convince the user’s senses.
Hearing is the easiest sense to fool; all you have to do is wear headphones. In real life, your two ears hear slightly different things because of their location on your head and the directions they point. Subconsciously you use those differences to tell where a sound is coming from. Software can re-create this by calculating for a given sound what each ear would be hearing. This works amazingly well. You can put on a set of headphones connected to a computer and hear a whisper in your left ear or footsteps walking up behind you.
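The calculation described above rests on a simple geometric fact: sound from one side travels a slightly longer path to the far ear. A minimal sketch of that one cue, assuming an ear-to-ear distance of about 18 centimeters (real spatial-audio software models much more, including head shadowing and the filtering of the outer ear):

```python
import math

SPEED_OF_SOUND = 343.0   # meters per second, in air
HEAD_WIDTH = 0.18        # assumed ear-to-ear distance in meters

def interaural_delay(angle_degrees):
    """Extra travel time (seconds) to the far ear for a sound source at
    this angle: 0 = straight ahead, 90 = directly to one side."""
    path_difference = HEAD_WIDTH * math.sin(math.radians(angle_degrees))
    return path_difference / SPEED_OF_SOUND

# A source directly to one side arrives about half a millisecond
# earlier at the near ear; the brain reads that gap as direction.
print(f"{interaural_delay(90) * 1000:.2f} ms")
```

Delaying one headphone channel by that computed amount is enough to shift a sound convincingly to the left or right.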
Your eyes are harder to fool than your ears, but vision is still pretty straightforward to simulate. VR equipment almost always includes a special set of goggles with lenses that focus each eye on its own small computer display. A head-tracking sensor allows the computer to figure out which direction your head is facing, so the computer can synthesize what you would be seeing. Turn your head to the right, and the scene portrayed by the goggles is farther to the right. Lift your face, and the goggles show the ceiling or sky. Today’s VR goggles are too heavy, too expensive, and don’t have enough resolution. The computer systems that drive them are still a bit too slow. If you turn your head quickly, the scene lags somewhat behind. This is very disorienting and after a short period of time causes most people to get headaches. The good news is that size, speed, weight, and cost are precisely the kinds of things that technology following Moore’s Law will correct soon.
Other senses are much more difficult to fool, because there are no good ways of connecting a computer to your nose or tongue, or to the surface of your skin. In the case of touch, the prevailing idea is that a full bodysuit could be made lined with tiny sensor and force feedback devices that would be in contact with the whole surface of your skin. I don’t think bodysuits will be common, but they’ll be feasible.
There are between 72 and 120 tiny points of color (called pixels) per inch on a typical computer monitor, for a total of between 300,000 and 1 million. A full bodysuit would presumably be lined with little touch sensor points—each of which could poke one specific tiny spot. Let’s call these little touch elements “tactels.”
If the suit had enough of these tactels, and if they were controlled finely enough, any touch sensation could be duplicated. If a large number of tactels poked all together at precisely the same depth, the resulting “surface” would feel smooth, as if a piece of polished metal were against your skin. If they pushed with a variety of randomly distributed depths, it would feel like a rough texture.
Between 1 million and 10 million tactels—depending on how many different levels of depth a tactel had to convey—would be needed for a VR bodysuit. Studies of the human skin show that a full bodysuit would have to have about 100 tactels per inch—a few more on the fingertips, lips, and a couple of other sensitive spots. Most skin actually has poor touch resolution. I’d guess that 256 levels of depth would be enough for the highest-quality simulation. That’s the same number of colors most computer displays use for each pixel.
The total amount of information a computer would have to calculate to pipe senses into the tactel suit is somewhere between one and ten times the amount required for the video display on a current PC. This really isn’t a lot of computer power. I’m confident that as soon as someone makes the first tactel suit, PCs of that era will have no problem driving them.
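The comparison in that paragraph can be checked with back-of-envelope arithmetic, using only the text's own estimates: 1 to 10 million tactels, 256 depth levels (8 bits) per tactel, and a display of roughly a million 8-bit pixels:

```python
# Back-of-envelope check of the tactel-suit vs. video-display comparison.
BITS_PER_ELEMENT = 8          # 256 levels of depth, or 256 colors per pixel
DISPLAY_PIXELS = 1_000_000    # a high-end monitor of the era, per the text

for tactels in (1_000_000, 10_000_000):
    suit_bits = tactels * BITS_PER_ELEMENT
    display_bits = DISPLAY_PIXELS * BITS_PER_ELEMENT
    ratio = suit_bits / display_bits
    print(f"{tactels:,} tactels -> {ratio:.0f}x the display's data")
```

The result lands between one and ten times the display's data, matching the estimate in the text: a lot of information, but nothing beyond a computer of the same generation.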
Sound like science fiction? The best descriptions of VR actually come from so-called cyberpunk science fiction like that written by William Gibson. Rather than putting on a bodysuit, some of his characters “jack in” by plugging a computer cable directly into their central nervous systems. It will take scientists a while to figure out how this can be done, and when they do, it will be long after the highway is established. Some people are horrified by the notion, whereas others are intrigued. It will probably first be used to help people with physical disabilities.
Inevitably, there has been more speculation (and wishful thinking) about virtual sex than about any other use for VR. Sexually explicit content is as old as information itself. It never takes long to figure out how to apply any new technology to the oldest desire. The Babylonians left erotic poems in cuneiform on clay tablets, and pornography was one of the first things the printing press was used for. When VCRs became common home appliances, they provoked a surge in the sales and rentals of X-rated videos, and today pornographic CD-ROMs are popular. On-line services such as the Internet and the French Minitel system have lots of subscribers for their sexually oriented offerings. If historical patterns are a guide, a big early market for advanced virtual-reality documents will be virtual sex. But again, historically, as each of these markets grew, explicit material became a smaller and smaller factor.
Imagination will be a key element for all new applications. It isn’t enough just to re-create the real world. Great movies are a lot more than just graphic depictions on film of real events. It took a decade or so for such innovators as D. W. Griffith and Sergei Eisenstein to take the Vitascope and the Lumières’ Cinématographe and figure out that motion pictures could do more than record real life or even a play. Moving film was a new and dynamic art form and the way it could engage an audience was very different from the way the theater could. The pioneers saw this and invented movies as we know them today.
Will the next decade bring us the Griffiths and Eisensteins of multi-media? There is every reason to think they are already tinkering with the existing technology to see what it can do and what they can do with it.
I expect multi-media experimentation will continue into the decade after that, and the one after that, and so on indefinitely. At first, the multi-media components appearing in documents on the information highway will be a synthesis of current media—a clever way to enrich communication. But over time we will start to create new forms and formats that will go significantly beyond what we know now. The exponential expansion of computing power will keep changing the tools and opening new possibilities that will seem as remote and farfetched then as some of the things I’ve speculated on here might seem today. Talent and creativity have always shaped advances in unpredictable ways.
How many have the talent to become a Steven Spielberg, a Jane Austen, or an Albert Einstein? We know there was at least one of each, and maybe one is all we’re allotted. I cannot help but believe, though, that there are many talented people whose aspirations and potential have been thwarted by economics and their lack of tools. New technology will offer people a new means with which to express themselves. The information highway will open undreamed-of artistic and scientific opportunities to a new generation of geniuses.