Some people remember time according to the cars they drove or the jobs they held or the places they lived or the sweethearts they dated. My years are marked by computers.
I had only three computers while I was growing up. There was the aforementioned Commodore VIC-20, which I inherited from my grandfather. It was one of the first "home" computers, the predecessors of present-day PCs. The Commodore 64 became sort of the big brother to the VIC-20, followed by the Amiga, which had a particularly strong following in Europe. Those computers never became truly popular the way the PC, or even the Apple II, did; the Apple II was already common around the time I was playing around with the VIC.
In those days before the proliferation of PCs, most of the programming on home computers was done in assembly language. (I can't believe I've taken to starting sentences with "In those days...") Each computer had its own home-brew operating system, the equivalent of what DOS was on a PC. Depending on the computer, it was either rudimentary or slightly more advanced. Like DOS, the OS had a program loader and a Basic language environment. Back then there were no standards, and a number of companies wanted to control the market. Commodore was one of the better known of them.
When I had gotten about as much as I could out of the VIC-20, I started saving up for a next-generation model. This was a big deal in my life. As I mentioned, I've lost track of who in my family was living where at what particular time, and a lot of other things, but the path to my second computer was something that's hard to forget.
I had some Christmas-and-birthday money stashed away (because I was born on December 28th, the two occasions are sort of melded together). I also earned some money one summer working on the clean-up crew in Helsinki's parks. Many of the parks in Helsinki aren't landscaped and well maintained; they're more like recreational green areas, overgrown forests really. What we had to do was saw off overgrown bushes or pick up dead branches -- it was even interesting. I've always liked the outdoors. I also had a newspaper route at one point -- except that it wasn't newspapers, it was junk mail. Actually, I wasn't really into summer jobs, come to think of it. But I did them in those days. On the whole, I probably got more money from school stipends.
In Finland, it's relatively common for people to give endowments to schools, even the public elementary schools. So, starting in fourth grade, money gets distributed to students based on whatever the person setting up the fund had in mind. I remember one of the endowments in my school went to the best-liked kid in class. This was in sixth grade, and we actually voted within the class on who should get the money. It wasn't me who won, I might add. The bounty amounted to only about 200 Finnmarks, which was maybe forty dollars at the time, but it seemed like a lot of money to give a sixth grader just for being popular.
Quite often the money went to the best person in a particular subject or sport. And a lot of the awards were school-specific or funded through the government. In some cases, the funds dwindled over time. I remember one that amounted to about a penny in value. When that was the situation, the school would chip in to make it somewhat more useful, but it still was a fairly small sum of money; more than anything else, this was a way of maintaining the tradition of giving out money every year. Finland takes its academic traditions seriously, which is a good thing.
So I would receive these stipends every year for being the Math Guy. By high school the awards got bigger. The biggest ones were on the order of $500. So that's where most of the money for my second computer came from; my weekly allowance wouldn't have paid for a computer. I also borrowed some money from my dad.
It was 1986 or 1987. I was sixteen or seventeen. My basketball years were behind me. I spent an inordinate amount of time researching the field before deciding which computer to buy. PCs weren't very good back then, so when I fantasized about my new machine I knew it wasn't going to be a PC.
I opted for a Sinclair QL, which many of you are probably too young to remember. Here's the history. The Sinclair was one of the first 32-bit machines on the market for home use. Sir Clive Sinclair, the founder of the company, was the Steve Wozniak of Britain. He made these computer kits that were sold as Timex computers in the United States. That's right, the same company that made Timex watches imported the Sinclair computer stuff and sold it here under the Timex name. The early ones were sold as kits, before he started selling ready-made computers.
The Sinclair had an operating system called QDOS, which I knew by heart back then. It was written especially for that particular computer. It had quite an advanced Basic for the time, and fairly good graphics. One of the things that excited me most about the operating system was that it was multitasking: You could run multiple programs at once. The Basic part wasn't multitasking, though, so you couldn't run more than one Basic program at a time. But if you wrote your own programs in assembly language, you could let the operating system schedule and time-slice them, so you could run many of them at the same time.
The computer contained the 8-megahertz 68008 chip, which was the second and cheaper version of Motorola's 68000 chip. Internally, the first generation of 68000 chips was 32-bit, but externally it had a 16-bit interface to anything outside the CPU (central processing unit) -- such as memory or hardware add-ons. Because it could load only 16 bits at a time from memory, 16-bit operations were often quicker than 32-bit operations. The architecture was hugely popular, and it still exists today in a lot of embedded devices and cars. It's not the same chip, but it's based on the same architecture.
The 68008 chip, the version in my computer, used 8 bits, not 16 bits, for its interface with the world outside the CPU. But even though it interacted with the outside world at 8 bits at a time, internally it was 32 bits. That made it more pleasant to program in many ways.
It had 128 kilobytes of memory -- not megabytes -- which was huge at the time for a home machine. The VIC-20 it replaced had only 3½ kilobytes of memory. And because it was a 32-bit machine it could access all the memory with no problem at all, which was unheard of back then. That was the main reason I wanted to buy the computer. The technology was interesting and I loved the CPU.
I was hoping to get the computer at a discount by buying it at a store where a friend knew the owners. But it would have taken so long for the computer to arrive that I just slogged down to Akademiska Bokhandeln, the largest bookstore in Helsinki, which had a computer section. I bought it from them over the counter.
The computer cost nearly $2,000. There used to be this rule that entry-level computers were always $2,000. It's only in the last couple of years that this has changed. Now you can buy a new PC for $500. It's like cars. Nobody makes cars for under $10,000. At some point, it's not worth it anymore. Sure, companies can build a car that can be sold for $7,000, but the automakers reason that people who could afford $7,000 for a car are happier buying one for $10,000 that has extra stuff, like air conditioning, as standard equipment. If you compare entry-level cars this year with entry-level cars from fifteen years ago, they cost about the same. In fact, adjusted for inflation they might cost slightly less. But they're a lot better.
That's how it used to be with computers. When computers were not something that everybody bought, there was a pain threshold of around $2,000. If the lowest-cost computer is much more expensive, a company isn't going to be able to sell many of them. But they were expensive enough to manufacture that it didn't make sense for a company to make them much cheaper. People would always pay the extra $200 or so to get a better machine.
In the last two years they have become a lot less expensive to make. And even the low-end machines have gotten pretty good. Companies have lost many of the people who would pay the extra $200 for a slightly better machine. Since companies couldn't sell on features alone, they've had to sell on price.
I admit it: Back in 1987, one of the selling points of the QL was that it looked cool.
It was entirely matte black, with a black keyboard. It was fairly angular. This was not a rounded, pretty-boy machine. It tried to be kind of extreme. The keyboard was about an inch thick because it was part of the same unit as the computer. That's the way most of the home computers were designed. On the right-hand side of the keyboard, where you would have a keypad, you had two slots for the revolutionary Sinclair microdrive, which was this endless loop of tape that was used only on a Sinclair machine. It acted and was organized like a disk drive. Because it was one long loop, you could just spin it until you hit what you wanted. It turned out to be a bad idea because it was not as reliable as a disk drive.
So I spent close to $2,000 for the Sinclair QL. Most of what I did with it was one programming project after another. I was always searching for something interesting to do. I had a Forth language interpreter and compiler, just to play around with. Forth was a strange language that nobody uses anymore. It was kind of a fun, niche-market language that was fairly widely used in the 1980s for different things, but it never became very popular because it was difficult for non-techie people to follow. Actually, it was kind of useless.
I wrote programming tools for myself. One of the first things I bought for the machine was an expansion bay with an EEPROM card (Electrically Erasable Programmable Read-Only Memory). It's memory you write yourself with special modules, and it stays around when you turn the power off. That way, I could have the tools easily available to me whenever I wanted, without having to load them into RAM (random access memory) and use precious RAM for programs.
What got me interested in operating systems: I bought a floppy controller so I wouldn't have to use the microdrives, but the driver that came with the floppy controller was bad so I ended up writing my own. In the process of writing that I found some bugs in the operating system -- or at least a discrepancy between what the documentation said the operating system would do and what it actually did. I found it because something I had written didn't work.
My code is always, um, perfect. So I knew it had to be something else, and I went in and disassembled the operating system.
You could buy books that contain partial listings of the operating system. That helps. You also need a disassembler, a tool that takes the machine language and turns it into assembly language. That's important because when you only have a machine language version, it's difficult to follow the instructions. You find that an instruction will jump to a numerical address, which makes it very hard to read. A good disassembler will make up names for the numbers and also allow you to specify names. It also can be used to help you identify particular instruction sequences. I had my own disassembler that I could use to create reasonably nice listings. When something didn't work, I could go in and tell it to find the listing from a particular spot, and I could see everything that the operating system was going to do. Sometimes I used the disassembler not because something was buggy but because I was trying to understand what it was supposed to do.
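Just to make the idea concrete, here is a toy sketch in C -- the two-byte "instruction set" is completely made up (nothing like the 68008's real encoding), and so are the names. A disassembler boils down to two passes: first find every address that some jump points at, then print each instruction, inventing a label for each of those addresses so a human can follow the control flow.

    /* Toy disassembler for a made-up instruction set: each
       instruction is an opcode byte followed by one operand byte. */
    #include <stdio.h>

    enum { OP_NOP = 0x00, OP_LOAD = 0x01, OP_JMP = 0x02 };

    int main(void) {
        unsigned char code[] = {      /* the "machine language" */
            OP_LOAD, 42, OP_JMP, 6, OP_NOP, 0, OP_LOAD, 7
        };
        int n = (int)sizeof code;
        int is_target[256] = {0};

        /* Pass 1: mark every address that some jump points at. */
        for (int pc = 0; pc < n; pc += 2)
            if (code[pc] == OP_JMP)
                is_target[code[pc + 1]] = 1;

        /* Pass 2: print assembly, naming the numeric addresses. */
        for (int pc = 0; pc < n; pc += 2) {
            if (is_target[pc])
                printf("L%02X:\n", (unsigned)pc);
            switch (code[pc]) {
            case OP_NOP:  printf("    nop\n"); break;
            case OP_LOAD: printf("    load #%d\n", code[pc + 1]); break;
            case OP_JMP:  printf("    jmp  L%02X\n", (unsigned)code[pc + 1]); break;
            default:      printf("    db   0x%02X\n", (unsigned)code[pc]); break;
            }
        }
        return 0;
    }

A real disassembler for a real CPU does vastly more work -- variable-length instructions, user-supplied names, recognizing instruction sequences -- but the label-the-jump-targets step is the part that turns an unreadable wall of numbers into something you can actually follow.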
One of the things I hated about the QL was that it had a read-only operating system. You couldn't change things. It did have hooks -- places where you can insert your own code to take over certain functions -- but only at particular places. It's so much nicer to be able to replace your operating system completely. Doing an operating system in ROM (read-only memory) is a bad idea.
Despite what I've said about Finland being such a technology butt-kicker, the Sinclair QL wasn't making big inroads in Europe's seventh-largest nation. Because the market was so small, whenever you wanted to buy upgrades for the iconoclastic, leading-edge machine, you had to do it from England, via postal order. It involved scouring catalogues until you found someone who sold whatever it was you wanted. Then you had to get together certified checks and wait weeks for delivery (this being before the days of Amazon.com and credit cards). That's what I had to do when I wanted to expand my RAM from 128 kilobytes to 640 kilobytes. That was the drill when I bought a new assembler, to translate assembly language into machine code (the ones and zeros), and an editor, which is basically a word-processing program for programming.
Both the new assembler and editor worked fine, but they were on the microdrives and couldn't be put on the EEPROM. So I wrote my own editor and assembler and used them for all my programming. Both were written in assembly language, which is incredibly stupid by today's standards. It's complicated and time-consuming -- I'd guess it takes a hundred times longer to solve a problem in assembly language than in the C language, for example, which was available at the time.
I added a few commands to the Basic interpreter that came with the machine so that when I wanted to edit something, my editor just ran automatically and was instantly there. My editor was faster than the one that came with the machine. I was particularly proud of how fast I could write characters to the screen. Normally, with a machine like that, it would take so long to fill the screen with characters that you could see the text scroll. And I was pleased with the fact that with my editor, you wrote text so fast that when you scrolled quickly down you created a blur. That was important to me. The improvement made the machine feel much snappier, and I knew that I had done a lot of work to make it operate so fast.
At this time, there weren't very many people I knew who were as involved in computers as I was. There was a computer club at school, but I didn't spend much time there. It was basically for kids who wanted to know about computers. There were only about 250 students in my entire high school, and I don't think anybody else had been using one since the age of ten.
One of the big things I liked doing on my Sinclair QL was making clones of games. I wrote clones of the games from the VIC-20 that I had enjoyed, and sometimes I added enhancements. But mostly they were not better: I had a better machine, not a better concept.
My favorite game was probably Asteroids, but I could never make a good clone of it. The reason was that, at the time, all the arcade Asteroids games were done with real vector graphics. Instead of having graphics based on small dots -- pixels -- they drew the display by steering the cathode-ray tube's electron beam directly, the beam being shot from an electron gun behind the CRT and deflected with magnets. They got much higher-resolution graphics that way, but you couldn't reproduce the effect very easily. You could make a clone, but it wouldn't look like the original Asteroids game if you wrote it on a computer that didn't have that special graphics capability.
I remember making a Pac Man clone in assembly language. The first step is to kind of remember what the Pac Man characters are supposed to look like. Then you try to draw them on a sixteen-by-sixteen grid of paper, in color. And if you are artistic, you can do a good job. But if you are Mr. Non-Artistic, like I am, it ends up looking like Pac Man's sick cousin.
Okay, so it wasn't a very good clone. But I was really proud of it. The game was actually playable, and I sent it in to one of the magazines that published computer code. I had sold other programs to magazines and thought this would be a natural.
Not.
One of the problems was that the program had been written in assembly language. That meant that if you made the slightest, slightest mistake copying it from the magazine, it wouldn't work.
I wrote some of my own games, too. But it takes a certain mindset to create games. Because games require a lot of performance, you have to get really low down into the hardware of the computer. I could do that, but I didn't have the game play mentality. What makes a great game is not usually how fast it is or how good the graphics are. There has to be something that makes you play it -- something that keeps you with it. It's just like movies. Special effects are one thing, but you also need a plot. And my games never had a plot. A game has to have a progression, an idea. Often, the progression is just that the game gets faster. That's what Pac Man does. Sometimes the maze changes or the monsters get better at following you.
One of the things that interested me about Pac Man was tackling the problem of making graphics that don't flicker. It's a fairly common problem in older computer games, because without special hardware your characters just flicker. The way you move your characters around is to take away the old copy and write a new copy. If you happen to have bad timing, people can actually see the moment when there's no copy, so it flickers. You can get around this in multiple ways. You can actually draw the new guy first and then remove the old guy, but you must be careful not to remove the part of the old guy that is occluded by the new guy. Instead of seeing an irritating flicker, you get a good effect -- you sometimes see the shadow of the old character on the screen. The brain interprets that in a good way. It doesn't flicker, but it creates a motion blur. The trouble with this solution is that it is fairly expensive and time-consuming to create.
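For what it's worth, here is a minimal sketch of that trick in C -- all of it invented for illustration, with a small character grid standing in for the framebuffer and a square block standing in for the sprite. The point is just the ordering: draw the new copy first, then erase only the cells of the old copy that the new copy doesn't cover.

    /* Flicker-avoidance sketch: draw the new sprite position before
       erasing the old one, and never erase the overlapping cells. */
    #include <stdio.h>

    #define W   16
    #define H   6
    #define SPR 3                     /* sprite is a SPR x SPR block */

    static char screen[H][W];

    static void clear_screen(void) {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                screen[y][x] = '.';
    }

    static void draw_sprite(int x, int y, char c) {
        for (int dy = 0; dy < SPR; dy++)
            for (int dx = 0; dx < SPR; dx++)
                screen[y + dy][x + dx] = c;
    }

    /* Move from (ox,oy) to (nx,ny) without a blank in between:
       step 1 draws the new copy, step 2 erases only the cells of
       the old copy that the new copy doesn't already occupy. */
    static void move_sprite(int ox, int oy, int nx, int ny) {
        draw_sprite(nx, ny, '#');
        for (int dy = 0; dy < SPR; dy++)
            for (int dx = 0; dx < SPR; dx++) {
                int x = ox + dx, y = oy + dy;
                int covered = x >= nx && x < nx + SPR &&
                              y >= ny && y < ny + SPR;
                if (!covered)
                    screen[y][x] = '.';
            }
    }

    static void show(void) {
        for (int y = 0; y < H; y++)
            printf("%.*s\n", W, screen[y]);
        printf("\n");
    }

    int main(void) {
        clear_screen();
        draw_sprite(2, 1, '#');
        show();
        move_sprite(2, 1, 4, 2);      /* one step right and down */
        show();
        return 0;
    }

At no point is the whole character gone from the screen, which is exactly why the eye sees a motion blur instead of a flicker.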
There's a reason that games are always on the cutting edge, and why they often are the first types of programs that programmers create. Partly it has to do with the fact that some of the smartest programmers out there are fifteen-year-old kids playing around in their rooms. (It's what I thought sixteen years ago, and I still suspect it's true.) But there's another reason games are so pioneering: Games tend to push hardware.
If you look at computers today, they're usually fast enough for anything. But the place you test the limits of the hardware is with action games, like some of the 3-D ones that are now popular. Fundamentally, games are one of the few things on computers where you can tell if things aren't happening in real time. In word processing, you don't mind a delay of a second here or a second there. But in a game, it starts to be noticeable at under a tenth of a second. Games used to be fairly simple. These days, programming is actually a fairly small part of any game. There's music, there's the plot. If you compare it to making a movie, the programming component is just the camera work.
So I had the Sinclair QL for three years. It took me from high school to the University of Helsinki to the Finnish Army. It was fine, but we were definitely ready to part ways. In the last year or so I had discovered its shortcomings. The 68008 was a good enough CPU, but I was reading about the next generation 68020, and learning about such virtues as memory management and paging. These new computers could do things that are really important when you are working on low-level stuff.
What irritated me about the Sinclair QL was that while the operating system was capable of multitasking, you could still crash at any time because there was no memory protection. One task that decided to do something bad could just crash the machine.
The Sinclair QL was Sir Clive Sinclair's last foray into designing and making computers. One of the reasons: It wasn't commercially successful. It had interesting technology, but the company had production problems and quality assurance problems and the inevitable bad press. Moreover, the market was beginning to become more competitive.
The late 1980s were the years when you could start to imagine that, yes, maybe someday your average trolley rider would own a computer, if only to perform word processing. And all signs pointed to the PC. Yes, the original IBM PCs had started flooding the shelves and becoming successful despite numerous technical shortcomings. Those ubiquitous beige creatures had the IBM stamp of approval, after all, and that meant a lot. Another attraction: The peripherals were standard and easy to obtain.
I was reading about all these newer CPUs that could do what I wanted. It became clear that the 68020, which looked interesting, wasn't going anywhere. I could have considered buying a CPU upgrade for the QL. In those days that meant basically rebuilding the machine. And the operating system didn't know about memory management, anyway, so I would have had to write my own version. So it was like: Hhhmmm. Doing that will be a big step. And it will be expensive to get a new CPU.
And then there was still the increasing headache of buying things for the computer. It wasn't as if there was a Sears catalogue for the Sinclair QL and you just picked up the phone and ordered more memory. The postal-order-from-England routine was getting old. (I didn't mind that there was no shrink-wrapped software because I was able to write all that myself.)
There was a positive side effect to this pain-in-the-neck. When I was thinking about getting rid of the machine, I decided to sell my peripherals -- the real hard drive I had purchased because I couldn't take the microdrive one second longer, and my expansion RAM. But there weren't people lined up in the streets searching for such stuff, so you had to advertise in a computer magazine and pray. And that's how I met my good friend Jouko Vierumaki, who turned out to be probably the only other person in all of Finland who owned a Sinclair QL. He answered my ad and took the train from Lahti and bought some of my peripherals. Then he introduced me to snooker.
My first year at university, the Sinclair QL sat on a desk against my first-floor bedroom window on Petersgatan, but I didn't do much in the way of programming. Partly it was a matter of wanting to concentrate on my studies. But also, I simply found myself lacking a project to do on my computer. Lack a project and you lack enthusiasm. You're trying to come up with something that might motivate you.
It seemed like the perfect time to join the army, which I knew I would have to do anyway. I was nineteen years old and irritated with my computer's shortcomings and unattached to any interesting computer project. I boarded a train for Lapland.
I've already indicated how clueless I was about, among other things, the physical demands of army service. So after the eleven months of phys ed-with-firearms, I felt perfectly justified in spending the remaining decades of my life in blissful inactivity, with the only exercise coming from tapping code into a keyboard or gripping my fingers around a glass of pilsner. (In fact, the first near-sport activity after leaving the army didn't take place until almost ten years to the day following my discharge, when David coerced me into going boogie-boarding with him in the killer waves at Half Moon Bay. I practically drowned, and my legs were sore for days.)
Army service ended on the 7th of May, 1990. Although Tove would tell you I have trouble remembering our anniversary, I can't possibly forget the date I was discharged.
The first thing I wanted to do was get a cat.
I had a friend whose cat had produced a litter a few weeks earlier, so I bought the sole remaining kitten, which was white, male, beautiful -- and, because he had spent his first few weeks in the outdoors, easily able to live both inside and outside my mother's apartment. I named him Randi, short for Mithrandir, the white wizard in Lord of the Rings. He is now eleven years old and, like his owner, has become totally adjusted to the California lifestyle.
No, I don't think I did anything productive that entire summer. Classes for my second year at the university wouldn't start until fall. My computer was not quite up to snuff. So I sort of hung around in my ratty bathrobe or played with Randi or, occasionally, got together with friends so they could chuckle about my attempts at bowling or snooker. Okay, I did do a little fantasizing about my next computer.
I faced a geek's dilemma. Like any good computer purist raised on a 68008 chip, I despised PCs. But when the 386 chip came out in 1986, PCs started to look, well, attractive. They were able to do everything the 68020 did, and by 1990, mass-market production and the introduction of inexpensive clones would make them a great deal cheaper. I was very money-conscious because I didn't have any. So it was, like, this is the machine I want to get. And because PCs were flourishing, upgrades and add-ons would be easy to obtain. Especially when it came to hardware, I wanted to have something that was standard.
I decided to jump over and cross the divide. And it would be fun getting a new CPU. That's when I started selling off pieces of my Sinclair QL.
Now everybody has a book that has changed his or her life. The Holy Bible. Das Kapital. Tuesdays with Morrie. Everything I Needed to Know I Learned in Kindergarten. Whatever. (I sincerely hope that, having read the preface and my theory on The Meaning of Life, you will decide that this book does the trick for you.) The book that launched me to new heights was Operating Systems: Design and Implementation, by Andrew S. Tanenbaum.
I had already signed up for my fall courses, and the one I was most looking forward to was in the C programming language and the Unix operating system. In anticipation of the course, I bought the aforementioned textbook during the summer in the hope of getting a head start. In the book, Andrew Tanenbaum, a university professor in Amsterdam, discusses Minix, a small Unix clone he wrote as a teaching aid. Soon after reading the introduction, and learning the philosophy behind Unix and what the powerful, clean, beautiful operating system would be capable of doing, I decided to get a machine to run Unix on. I would run Minix, which was the only version I could find that was fairly useful.
As I read and started to understand Unix, I got a big enthusiastic jolt. Frankly, it's never subsided. (I hope you can say the same about something.)
The academic year that began in the fall of 1990 was to be the first time that the University of Helsinki would have Unix, the powerful operating system that had been bred in AT&T's Bell Labs in the late 1960s but had grown up elsewhere. In my first year of studies, we had a VAX running VMS. It was a horrible operating system, certainly not an environment that made you say, "Gee, I'd like to have this at home, too." Instead it made you say, "Hmmm. How do you do that?" It was hard to use. It didn't have many tools. It wasn't suited to easily accessing the Internet, which was running on Unix. You couldn't even easily figure out how large a file was. Admittedly, VMS was very well suited for certain operations, like databases. But it's not the kind of operating system that you get excited about.
The university had realized it was time to move away from all that. Much of the academic world was then growing enamored of Unix, so the university acquired a MicroVAX running Ultrix, which was Digital Equipment Corporation's version of Unix. It was a way of testing the waters of Unix.
I was eager to work with Unix by experimenting with what I was learning in Andrew Tanenbaum's book, excited about all the things I could explore if I had a 386 PC. There was no way I could get together the 18,000 FIM to buy one. I knew that once the fall semester began, I would be able to use my Sinclair QL to access the university's new Unix computer until I could afford to buy a PC on which I could run Unix on my own.
So there were two things I did that summer. Nothing. And read the 719 pages of Operating Systems: Design and Implementation. The red soft-cover textbook sort of lived on my bed.
The University of Helsinki sprang for a sixteen-user license for the MicroVAX. That meant admittance to the "C and Unix" course was limited to thirty-two students -- I guess the thinking was that sixteen people would be using it by day, sixteen by night. Like the rest of us, the teacher was new to Unix. He admitted this up front, so it wasn't really a problem. But he would read the text only one chapter ahead of the students, whereas the students were sometimes skipping ahead by three chapters. So it became something of a game in which people tried to trip up the teacher by asking questions that related to things we would be learning three chapters later, just to see if he had read that far.
We were all babes in the Unix woods, with a course that was being made up as we went along. But what was obvious from this course was that there was a unique philosophy behind Unix. You grasped this in the first hour of the course. The rest was explaining the details.
What is special about Unix is the set of fundamental ideals that it strives for. It is a clean and beautiful operating system. It avoids special cases. Unix has the notion of processes -- a process is anything that does anything. Here's a simple example. In Unix the shell, which is what you type commands into to get the operating system to do things, is not built into the operating system, as it is with DOS. It's just a task. Like any other task. It just happens that this task reads from your keyboard and writes back to your monitor. Everything that does something in Unix is a process. You also have files.
This simple design is what intrigued me, and most people, about Unix (well, at least us geeks). Pretty much everything you do in Unix is done with only six basic operations (called "system calls," because they are the calls you make to the operating system to do things for you). And you can build up pretty much everything from those six basic system calls.
There's the notion of "fork," which is one of the fundamental Unix operations. When a process does a fork, it creates a complete copy of itself. That way, you have two copies that are the same. The child copy most often ends up executing another program, replacing itself with it -- and that's the second basic operation, the exec. Then you have four other basic system calls: open, close, read, and write -- all designed to access files. Those six system calls make up the simple operations that comprise Unix.
Sure, there are tons of other system calls to fill in all the details. But once you understand the six basic ones, you understand Unix. Because one of the beauties of Unix is realizing that you don't need to have complex interfaces to build up something complex. You can build up any amount of complexity from the interactions of simple things. What you do is create channels of communication (called "pipes" in Unix-speak) between simple processes to create complex problem-solving.
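Since all of this is easier to see than to describe, here is a small sketch in modern POSIX C -- today's interfaces rather than anything from that course, with pipe() and dup2() thrown in as plumbing -- that touches all six basic operations: it forks a child, lets the child exec another program, and uses read, write, open, and close to move the child's output around.

    /* The six basic operations in one program: fork, exec,
       open, close, read, write (plus a pipe to connect them). */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();               /* 1. fork: an identical copy */
        if (pid == 0) {
            dup2(fds[1], STDOUT_FILENO);  /* child's stdout -> pipe */
            close(fds[0]);
            close(fds[1]);
            /* 2. exec: the child replaces itself with another program */
            execlp("echo", "echo", "hello from the child", (char *)NULL);
            _exit(1);                     /* only reached if exec fails */
        }

        close(fds[1]);
        char buf[128];
        ssize_t n = read(fds[0], buf, sizeof buf);       /* 3. read  */
        if (n < 0) n = 0;
        close(fds[0]);                                   /* 4. close */

        int fd = open("child.log",                       /* 5. open  */
                      O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd >= 0) {
            write(fd, buf, (size_t)n);                   /* 6. write */
            close(fd);
        }
        write(STDOUT_FILENO, buf, (size_t)n);
        wait(NULL);
        return 0;
    }

That's the shell's whole job in miniature: every command you type is just a fork, an exec, and a few file descriptors wired up before the exec happens.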
An ugly system is one in which there are special interfaces for everything you want to do. Unix is the opposite. It gives you the building blocks that are sufficient for doing everything. That's what having a clean design is all about.
It's the same thing with languages. The English language has twenty-six letters, and you can build up everything from those letters. Or there's the Chinese language, in which you have one character for every single thing you can think of. In Chinese, you start off with complexity, and you can combine complexity in limited ways. That's more of the VMS approach, to have complex things that have interesting meanings but can't be used in any other way. It's also the Windows approach.
Unix, on the other hand, comes with a small-is-beautiful philosophy. It has a small set of simple basic building blocks that can be combined into something that allows for infinite complexity of expression.
This, by the way, is also how physics works. You try to find the fundamental rules, which are supposed to be fairly simple. The complexity comes from the many incredible interactions you get from those simple rules, not from any inherent complexity of the rules themselves.
The simplicity of Unix did not just happen on its own. Unix, with its notion of simple building blocks, was painstakingly designed and written by Dennis Ritchie and Ken Thompson at AT&T's Bell Labs. And you should absolutely not dismiss simplicity as something easy. It takes design and good taste to be simple.
To go back to the example of human languages: Pictorial writing like Chinese characters and hieroglyphics tends to happen first and to be "simpler," whereas the building-block approach requires far more abstract thinking. In the same way, you should not confuse the simplicity of Unix with a lack of sophistication -- quite the reverse.
Which is not to say that the original reasons for Unix were all that sophisticated. Like so many other things in computers, it was all about games. It took somebody who wanted to play computer games on a PDP-11, because that was what Unix started out being developed for -- Dennis and Ken's personal project for playing Space Wars. And because the operating system wasn't considered a serious project, AT&T didn't think of it as a commercial venture. In fact, AT&T was a regulated monopoly, and one of the things it couldn't do was sell computers anyway. So the people who created Unix made it available quite freely, along with source licenses, especially to universities. It wasn't a big deal.
This all led to Unix becoming a big project in academic circles. By the time of the 1984 breakup, when AT&T was finally allowed to get into the computer business, computer scientists at universities -- particularly the University of California at Berkeley -- had been working on and improving Unix for years, under the direction of people like Bill Joy and Marshall Kirk McKusick. People hadn't always necessarily put a lot of effort into documenting what they did.
But by the early 1990s, Unix had become the number-one operating system for all supercomputers and servers. It was huge business. One of the problems was that there were, by now, a host of competing versions of the operating system. Some were derived from the more controlled confines of the AT&T code base (the so-called "System V" flavors), while others were derived from the University of California-Berkeley code-base BSD (Berkeley Software Distribution). Yet others were a mixture of the two.
One BSD derivation in particular is worth mentioning. It was the 386BSD project done by Bill Jolitz based on the BSD code-base, distributed over the Internet. It was later to fragment and become the freely available BSD flavors -- NetBSD, FreeBSD, and OpenBSD -- and it was getting a lot of attention in the Unix community.
That's why AT&T woke up and sued the University of California-Berkeley. The original code had been AT&T's but most of the subsequent work had been done at Berkeley. The University of California regents contended that they had the right to distribute, or sell for a nominal fee, their version of Unix. And they demonstrated that they had done so much work that they essentially rewrote what AT&T had made available. The suit ended up being settled after Novell, Inc., bought Unix from AT&T. Essentially, parts of the system had to be excised from what AT&T had made available.
Meanwhile, all the legal haggling had been instrumental in giving a new kid on the block some time to mature and spread itself. Basically, it gave Linux time to take over the market. But I'm getting ahead of myself.
Since I'm digressing anyway, I'd like to explain something. Unix has this reputation for being a magnet for the eccentric fringe of computing. It's a reputation not worth arguing against. It's true.
Frankly, there are a lot of fairly crazy people in Unix. Not postal-rage crazy. Not poison-the-neighbor's-dog crazy. Just very alternative-lifestyle people.
Remember, much of the initial Unix activity took place in the late 1960s and early 1970s, while I was sleeping in a laundry basket in my grandparents' apartment. These were flower power people -- but technical flower power people. A lot of the Unix-must-be-free philosophy has more to do with the circumstances of the time rather than with the operating system. It was a time of rampant idealism. Revolution. Freedom from authority. Free love (which I missed out on, and probably wouldn't have known what to do with, anyway). And the relative openness of Unix, even if it was mainly due to the lack of commercial interests of the time, made it attractive to this kind of person.
The first time I was introduced to this side of Unix was probably in 1991 or so, when Lars Wirzenius dragged me along to an event at the Polytechnic University of Helsinki (which, as everybody knows, is not actually in Helsinki but right across the border in Espoo; they just want to be associated with glamorous Helsinki, even if only by name). The speaker was Richard Stallman.
Richard Stallman is the God of Free Software. He started to work on an alternative to Unix in 1984, calling it the GNU system. GNU stands for "GNU's Not Unix," making it one of many recursive acronyms, in which one of the letters stands for the acronym itself -- a kind of computer science in-joke that nobody else ever gets. Geeks -- we're just tons of fun to be around.
More importantly, RMS, as he prefers to be called, also wrote the Free Software Manifesto, and the Free Software copyright license -- the GPL (General Public License). Basically, he pioneered the notion of free source-code availability as something intentional, not just an accident, the way it had happened with the original open development of Unix.
I have to admit that I wasn't much aware of the sociopolitical issues that were -- and are -- so dear to RMS. I was not really all that aware of the Free Software Foundation, which he founded, and all that it stood for. Judging from the fact that I don't remember much about the talk back in 1991, it probably didn't make a huge impact on my life at that point. I was interested in the technology, not the politics -- I had had enough politics at home. But Lars was an ideologist, and I tagged along and listened.
In Richard I saw, for the first time in my life, the stereotypical longhaired, bearded hacker type. We don't have many of them in Helsinki.
I may not have seen the light, but I guess something from his speech must have sunk in. After all, I later ended up using the GPL for Linux. There I go, getting ahead of myself again.
January 2, 1991. It was the first day the stores were open after Christmas and my twenty-first birthday, the two biggest cash-generating days on my calendar.
With my Christmas-and-birthday money in hand, I made this huge economic decision to buy a computer that would cost 18,000 FIM, which was about $3,500. I didn't have that kind of money, so the idea was to put down one third of the cost and buy the computer on credit. Actually, the computer itself cost 15,000 FIM; the rest came from the financing charges that would be paid over three years.
It was at one of these small corner shops, sort of a mom-and-pop computer store, only in this case it was just pop. I didn't care about the manufacturer, so I settled on a no-name, white-box computer. The guy showed you a price list and a smorgasbord of what CPU was available, how much memory, what disk size. I wanted power. I wanted to have 4 megabytes of RAM instead of 2 megabytes. I wanted 33 megahertz. Sure, I could have settled for 16 megahertz, but no, I wanted top of the line.
You told them what you wanted and they would put it together for you. It sounds quaint in this era of the Internet and UPS shipments. You came back three days later to pick it up, but those three days felt like a week. On January 5th I got my dad to help me drive the thing home.
Not only was it no-name, it was also nondescript. It was a basic gray block. I didn't buy this computer because it looked cool. It was a very boring-looking machine with a fourteen-inch screen, the cheapest, most reasonably studly box I could find. Incidentally, by "studly" I mean a powerful computer that a few people owned. I don't intend to make it sound so unappealing-yet-functional, sort of like a Volvo station wagon. But the fact is: I wanted something dependable and with easy access to the upgrades I would inevitably require.
The computer came with a cut-down version of DOS. I wanted to run Minix, the Unix variant, so I ordered it, and the operating system took more than a month to make its way to Finland. Oh, you could buy the book on Minix from a computer store, but, since there was so little demand for the operating system itself, you had to order it through the bookstore. The cost was $169 plus taxes, plus conversion factor, plus whatever. I thought it was outrageous at the time. Frankly, I still do. The wasted month felt like about six years. I was even more frustrated by that than I had been during the months I was waiting to buy my PC.
And this was the dead of winter. Every time you left your bedroom for the outside world you risked getting knocked onto the snow by old ladies who should have been home making cabbage soup for their families or watching hockey on television while knitting sweaters, not staggering along Mannerheimintie. I basically spent that month playing Prince of Persia on my new computer. When I wasn't doing that, I would read books that helped me understand the computer I had bought.
Minix finally arrived on a Friday afternoon, and I installed it that night. It required feeding sixteen floppy disks into the computer. The entire weekend was devoted to getting accustomed to the new system. I learned what I liked about the operating system -- and, more importantly, what I didn't like. I tried to compensate for its shortcomings by downloading programs that I had gotten used to from the university computer. In all, it took me a month or more to make this my own system.
Andrew Tanenbaum, the professor in Amsterdam who wrote Minix, wanted to keep the operating system as a teaching aid. So it had been crippled on purpose, in bad ways. There were patches to Minix -- improvements, that is -- including a well-known patch made by a hacker in Australia named Bruce Evans, who was the God of Minix 386. His improvement made Minix much more usable on a 386. Before even getting the computer I had been following the Minix newsgroups online, so I knew from the very beginning that I wanted to run his enhanced version. But because of the licensing situation, you had to buy the real version of Minix and then do a lot of work to bootstrap Evans's patches. It was a fairly major thing to do.
There were a number of features that disappointed me with Minix. The biggest letdown was terminal emulation, which was important because it was the program I used to connect to the university computer. I relied upon terminal emulation whenever I wanted to dial up the university's computer to either work on the powerful Unix computer or just go online.
So I began a project to create my own terminal emulation program. I didn't want to do the project under Minix, but instead to do it at the bare hardware level. This terminal emulation project would also be a great opportunity to learn how the 386 hardware worked. As I mentioned, it was winter in Helsinki. I had a studly computer. The most important part of the project was to just figure out what this machine did and have fun with it.
Because I programmed to the bare metal I had to start off from the BIOS, which is the early ROM code that the computer boots into. The BIOS reads either the floppy or the hard disk, and in this case, I had my program on a floppy. The BIOS reads the first sector of the floppy and starts executing it. This was my first PC, and I had to learn how all this was done. This all happens in what's called "real mode." But in order to take advantage of the whole CPU and get into 32-bit mode, you have to go into "protected mode." There's a lot of complicated setup you have to do to make this happen.
So to create a terminal emulation program this way, you need to know how the CPU works. In fact, part of the reason I wrote in assembly language was just to learn about the CPU. The other things you need to know are how to write to the screen, how to read keyboard input, and how to read and write to the modem. (I hope I'm not losing any of the non-geeks who have steadfastly refused to leap ahead to chapter XII.)
I wanted to have two independent threads. One thread would read from the modem and then display on the screen. The other thread would read from the keyboard and write out to the modem. And there would be two pipes going both ways. This is called task-switching, and a 386 had hardware to support this process. I thought it was a cool idea.
My earliest test program was written to use one thread to write the letter A to the screen. The other thread wrote the letter B. (I know, it sounds unimpressive.) And I programmed this to happen a number of times a second. With the timer interrupt, I wrote it so that the screen would fill with AAAAAAAAAA. Then, all of a sudden, it would switch to BBBBBBBBB. It's a completely useless exercise from any practical standpoint, but it was a good way of showing that my task-switching worked. It took maybe a month to do this because I had to learn everything as I was going along.
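A loose present-day analogue of that test -- POSIX threads instead of bare-metal 386 assembly, so the kernel's scheduler does the time-slicing that my timer-interrupt code had to do by hand -- would look something like this:

    /* Two independent tasks, one printing 'A' and one printing 'B';
       the scheduler's time-slicing interleaves them into runs. */
    #include <pthread.h>
    #include <stdio.h>

    static void *print_letter(void *arg) {
        char c = *(char *)arg;
        for (int i = 0; i < 2000; i++) {
            putchar(c);
            fflush(stdout);
        }
        return NULL;
    }

    int main(void) {
        pthread_t ta, tb;
        char a = 'A', b = 'B';
        pthread_create(&ta, NULL, print_letter, &a);
        pthread_create(&tb, NULL, print_letter, &b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        putchar('\n');
        return 0;
    }

Compile it with the usual -pthread flag and the output comes out in alternating runs of A's and B's -- the visible trace of the scheduler switching between the two tasks, which is all my screenful of AAAAAAAAAA and BBBBBBBBB was ever meant to show.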
So ultimately I was able to change the two threads, the AAAAAAAA and BBBBBBB, so that one read from the modem and wrote to the screen, and the other read from the keyboard and wrote to the modem. I had my own terminal emulation program.
When I wanted to read news, I would put in my floppy and reboot the machine, and I would read news from the university computer using my program. If I wanted to make changes to improve the terminal emulation package, I would boot into Minix and use it for programming.
And I was very proud of it.
My sister Sara knew about my great personal accomplishment. I showed it to her and she looked at the screens of AAAAAAs and BBBBBBs for about five seconds; then she said "Good" and went away, unimpressed. I realized it didn't look like much. It's completely impossible to explain to somebody else that, while something may not look like much, a lot is going on in the background. It's about as impressive as showing somebody a stretch of road you've just filled in with tar. Probably the only other person who saw it was Lars, the other Swedish-speaking computer science major who started the same year I did.
It was March, maybe April, and whether the snow was turning to slush on Petersgatan I didn't know -- or much care. I was spending most of my time in a bathrobe, huddled over my unattractive new computer, with thick black window shades shielding me from the sunlight, not to mention the outside world. I was eking out the monthly payments for my PC, which was scheduled to be paid off in three years. What I didn't know was that I would only be sending in payments for another year. By then, I would have written Linux, which would be seen by many more people than just Sara and Lars. By that time, Peter Anvin, who works with me now at Transmeta, would have started a collection on the Internet to get my computer paid off.
Everybody knew I wasn't making any money on Linux. People just started saying, Let's start a collection to pay off Linus's computer.
It was wonderful.
I had absolutely no money. I always felt it was important to not have asked for money or begged for money, but the fact that it was simply given to me was... I'm getting choked up.
That's how Linux got started. With my test programs turning into a terminal emulation package.
Red Herring magazine sends me to Finland to report on Oulu, the emerging high-tech center that is home to 141 startups despite its forbidding location a few hours' drive from the Arctic Circle. It's a good opportunity to hook up with Linus's parents and his sister, Sara, in Helsinki.
His father, Nils (who goes by the name Nicke), meets me in the lobby of the Sokos Hotel Vaakuna, across the plaza from the Helsinki railway station. He is trim, wears thick glasses, bears Lenin's beard. He has recently ended his four-year assignment in Moscow for the Finnish Broadcasting Company and is now writing a book about Russia and deciding whether or not to take a post in Washington, a place he doesn't find interesting. Months earlier he had won a prestigious national journalism award, a commendation that his ex-wife Anna later would say "mellowed him considerably."
In the early evening he drives me in his Volvo V40 on a tour of Linus's snow-crusted neighborhoods, pointing out the solid building in which both father and son attended elementary school, driving past the grandparents' apartment where Linus lived in his first three months, and then the park-view building in which the family lived for the following seven years. Nicke had spent one of those years in Moscow studying to be a communist, when Linus was five years old. Next he points out the pale yellow apartment building in which Linus and his sister moved following the divorce -- a street-level adult video store has replaced the electronics supply store of Linus's youth -- and finally we drive past the most substantial of the structures, the five-story apartment block in which his maternal grandparents resided, the birthplace of Linux. Linus's mother, Anna, still lives there. This could be Manhattan's Upper East Side in late December.
Nicke is funny, smart, self-deprecating, and shares a host of gestures with his son, like the way he cradles his chin in his hand when he talks. They also share a grin. Unlike his son he is a lifelong athlete -- a socialist jock -- who plays on a basketball team, runs five miles a day, and has taken to swimming distances in an icy lake early each morning. At fifty-five, he walks with the athletic confidence of someone maybe two thirds his age. Another trait he does not share with Linus: Nicke seems to have had a complicated romantic life.
We have dinner in a bustling restaurant in central Helsinki where Nicke talks about the difficulty Linus had growing up as the son of an overactive communist who was a frequent public speaker and at one point held a minor public office. He explains that Linus was often teased about his father's radical politics, and that some parents even refused to let him play with their children. For that reason, explains Nicke, his son made a constant effort to distance himself from the left-wing rhetoric that was the backdrop of his childhood. "He wouldn't let me discuss it. He would leave the room," Nicke says. "Or else he always made a point of having an opposing view. I know Linus was teased in school for having the wrong father. The message to me was, 'Don't put me in this awkward situation.' "
Nicke drives me to his home, where he says we will drink beer in his kitchen. It is north of the central business district, in a collection of buildings that were originally built in the 1920s to house workers. We ascend the steps to his apartment and remove our shoes in the entry. The place recalls the late 1960s counterculture, with woven-basket lampshades, third-world wall hangings, houseplants. We sit at the kitchen table, where Nicke pours beer and we talk about fathering. "A parent shouldn't think that it is he who makes his children what they are," he says, reaching for his mobile phone to dial up the woman with whom he lives. He indicates that Linus is just starting to read the historical books he has been bugging him to read for years, and that Linus probably has never bothered to read his own grandfather's poetry.
I ask Nicke if he has ever expressed an interest in programming, ever asked Linus to show him the fundamentals. He tells me he never has. Fathers and sons are unique individuals, he reasons, explaining that the act of delving into Linus's passion would be akin to "invading his soul." He seems comfortable in the role of father to a famous person. In a recent newspaper profile following his winning of the national journalism award, he was quoted as saying that, even in the days when he picked Linus up from the playground, other kids would point and say, "Look, there's Linus's father!"
Sara Torvalds has traveled by train from her home in a small city west of Helsinki, where the street signs are in Swedish first and Finnish second, and where she can afford an apartment with a claw-foot tub and sauna, and where, to her delight, Swedish -- not Finnish -- is heard on the streets. As she explains, she is in a minority within a minority: as a young adult she converted to Catholicism, an act that relegated her to the 10 percent of Finnish citizens who are non-Lutherans and caused her agnostic father to disown her for a matter of weeks.
Today she has traveled to Helsinki to teach catechism to youngsters under a government-sponsored program. She is pleasant and upbeat, and at twenty-nine she exudes the uncynical spirit of an earnest and busy high schooler. Her fair skin and round face give her a vague resemblance to her older brother, but it is obvious that she is naturally more sociable than he is. She regularly taps the keys of her mobile phone to send text messages to friends she will be meeting later in the day; then, just as frequently, she checks for replies. She has a successful translation business.
It is noon and Sara is taking me to meet her mother for lunch, with stops at various childhood locales: the cat park, the elementary school. "My parents were card-carrying communists, so that's how we were brought up -- to think the Soviet Union was a good thing. We visited Moscow," she explains. "What I remember most was the huge toy store they had, bigger than anything in Helsinki." Her parents divorced when she was six. "I remember when we were told that Dad would be moving out for good. I thought, That's good. Now the fighting will stop. Actually, he had been going to Moscow for long periods, so we were used to him going away," she says. By the time she was ten, Sara opted to move in with her father, who had relocated to the neighboring city of Espoo, rather than live with her mother and Linus. "It wasn't that I didn't want to live with Mom. It was just that I didn't want to live with Linus. We fought all the time. That way we would only fight on weekends. Little by little, we fought less as we grew older."
"We arrive at her mother's first-floor apartment and Anna Torvalds is thrilled to see us. Mikke is her nickname. She refuses tolet me indulge in the Finnish custom of removing one's shoes: "Don't be silly. This place is already dusty. You couldn't possibly make it worse." She is short, dark-haired, and extremely quick-witted. Within seconds of our arrival, the telephone rings. A real estate agent wants to show me the vacant apartment adjacent to Mikke's, so that I could describe it to her son in the United States and hand-deliver literature about it, in the event that he might want to purchase the place as a sort of Helsinki pied-a-terre. We enter the sprawling apartment, where the agent, who bears an eerie resemblance to the Annette Bening character in the film American Beauty, instructs us to slip little blue cloth booties over our shoes before we take the tour. Soon the agent, in an annoyingly cheerful tone, says something like, "Now this room here. It's a perfect room for antiques that you wouldn't want to have damaged by the sun." Mikke shoots me a conspiratorial glance and replies, in a mocking voice: "OH, what a delightful way of telling us this room doesn't get any light. "
Back in her own kitchen, Mikke sits at a rectangular table bearing a colorful tablecloth and pours coffee into an oversized mug. Her apartment, like that of her ex-husband, brims with books and folk art. There are black and white Marimekko curtains. The apartment originally contained three bedrooms and a kitchen. When her children moved out, Mikke moved into the large bedroom that had been occupied by Sara. She then dismantled the walls around Linus's room, and those around her original bedroom, to create a huge living room/kitchen. She points to a vacant spot and says, "That's where his computer was. I guess I should put up some sort of plaque. What do you think?" She chain-smokes. She is an easy conversationalist, with such a solid command of English that there are no pauses when she delivers a phrase like, "He's not some random shmuck you meet on the street." On the wall in her bedroom is a huge Soviet flag. It was a gift to Linus from Jouko Vierumaki, who had bought it during an international ski-jump competition. Linus had kept it in a drawer for years, but Mikke hung it above her bed.
Mikke pulls out an album containing the family's few photographs. There's Linus at the age of two or three, naked on the beach. There's Linus, at the same age, shooting a moon outside a famous castle near Helsinki. There's Linus as an early adolescent, looking thin and awkward. There's Mikke at a sixtieth birthday party for her statistics professor father. She points out her older sister and brother. "She's a New York psychiatrist. He's a nuclear physicist. And me, I'm the black sheep. Right? But I had the first grandchild," she declares, then lights a Gauloises.
We eat lunch at a restaurant named for Wilt Chamberlain. Sara consults her mobile phone while Mikke orders multiple espressos. Mikke recalls the way she and Nicke argued over whether Linus should or should not be forced to give up his pacifier: they wrote notes to each other and left them on the counter. There is talk about Linus's poor memory and his inability to remember faces. "If you're watching a movie with him and the hero changes his shirt from red to yellow, Linus will ask, 'Who is this guy?' " says Sara. There is talk about a family biking/camping vacation to Sweden. Sleeping on the overnight ferry. Having Sara's bicycle stolen the first day. Spending the budget on a new bicycle. Erecting the tent on a cliff. Leaving Linus inside to read all day while mother and daughter swam and fished. And then, after a powerful windstorm blew in, realizing that the only thing preventing the tent from being whisked into the Baltic Sea was Linus, who had been sleeping inside, oblivious to the extreme change in weather.
Mikke laughs as she relives the years in which Linus hid in his room, slaving away on a computer. "Nicke kept saying to me, 'Kick him out, make him get a job,' but Linus wasn't bothering me. He didn't require much. And whatever it was he was doing with his computer, that was his business, his thing, and he had a right to do it. I had no idea what it was all about."
Now she is as current as anyone on her son's activities. Mikke and the other family members are on the receiving end of a continual barrage of media queries. Those requests are forwarded to Linus, who typically responds by telling his mother, father, or sister to use their own judgment when answering. But after they write a response, they generally forward it to Linus for his approval before sending it on to the reporter.
Months earlier, when I emailed Mikke requesting her recollections of Linus's childhood, her response was lengthy and well-crafted. She titled her essay, "On Raising Linus from a Very Small Nerd." In it, she recounted her early observations that her toddler son showed the same signs of scientific determination she saw in her father and older brother:
"When you see a person whose eyes glaze over when a problem presents itself or continues to bug him or her, who then does not hear you talking, who fails to answer any simple question, who becomes totally engrossed in the activity at hand, who is ready to forego food and sleep in the process of working out a solution, and who does not give up. Ever. He --or she, of course -- maybe interrupted, and in the course of daily life often is, but blithely carries on later, single-mindedly. Then you know."
She wrote about the sibling rivalry between Linus and Sara, and about the irreconcilable differences. (Sara: "I don't LIKE the taste of mushrooms/liver/whatever." Linus: "YES YOU DO!") And the grudging respect. "Linus once expressed his awe of his sister very succinctly at an early age. He might have been five or seven or whatever, when he very seriously told me: 'You see. I don't think any new thoughts. I think thoughts that other people have thought, and I rearrange them. But Sara, she thinks thoughts that never were before.' "
These reminiscences may reveal that I still don't think Linus has any 'special' talent and certainly not 'for computers' -- if it weren't that, it would be something else. In another day and age he would focus on some different challenge, and I think he will. (What I mean is, I hope he won't be stuck in Linux maintenance forever). For he is, I think, motivated not by 'computers,' and certainly not by fame or riches, but by honest curiosity and a wish to conquer difficulties as they arise, and to do it *the right way* because that's the way it IS and he won't give up.
I suppose I have already answered the question of what Linus was like as a son -- easy to raise, yes. All he needed was a challenge and he did the rest. When he did start concentrating on computers as a youngster, it was even easier. As Sara and I used to say, just give Linus a spare closet with a good computer in it and feed him some dry pasta and he will be perfectly happy.
Except ... and this is where my heart was in my throat when he was growing up: How on Earth was he going to meet any nice girls that way? I could only once more resort to the tried and true parenting measure of keeping my fingers crossed. And lo and behold: It worked! He met Tove while teaching at the university, and when she made him forget both his cat and his computer for several days, it was immediately obvious that Nature had triumphed, as is her wont.
I only hope the Ghouls of Fame won't distract him too much. (Fame seems not to have changed him, but he has mellowed, and now tends to talk to people when they approach him. He even seems to have difficulty saying no. But I suspect it has more to do with his having become a husband and father than with all the media hullabaloo).
And it's obvious that both mother and daughter stay abreast of that hullabaloo. It is late January 2000, the day following Transmeta's big public announcement of what it has been up to, and early in our lunch, Mikke asks Sara, "Was there anything in the paper today about you-know-who and you-know-what?"
That night, on her way to work, Mikke asks her taxi to wait outside my hotel while she drops off a child's pine chair she'd like me to hand-deliver to Patricia. That, and a floor plan of the available apartment for Linus.
About my first memory of Linus doing something remarkable.
I think it was early 1992. I was visiting Linus at his completely messy home once again -- by bike and with no agenda. While watching MTV, as usual, I asked about Linus's operating-system development. Normally he answered something meaningless. This time, he led me to his computer (from Torvalds' messy kitchen to his totally chaotic room).
Linus gave the computer his username and password and got to a command prompt. He showed some basic functionality of the command interpreter -- nothing special, though. After a while, he turned to me with a Linus grin on his face and asked: "It looks like DOS, doesn't it?"
I was impressed and nodded. I wasn't stunned, because it looked too much like DOS -- nothing new, really. I should have known Linus never grins that way without a good reason. Linus turned back to his computer and pressed some function key combination -- another login screen appeared. A new login and a new command prompt. Linus showed me four individual command prompts and explained that later they could be accessed by four separate users.
That was the moment I knew Linus had created something wonderful. I have no problem with that -- I still dominate the snooker table.
Jouko "Avuton" Vierumaki
For me, it meant mainly that the phone lines were constantly busy and nobody could call us... At some point, postcards began arriving from different corners of the globe. I suppose that's when I realized people in the real world were actually using what he had created.
Sara Torvalds
I don't know how to really explain my fascination with programming, but I'll try. To somebody who does it, it's the most interesting thing in the world. It's a game much more involved than chess, a game where you can make up your own rules and where the end result is whatever you can make of it.
And yet, to the outside, it looks like the most boring thing on Earth.
Part of the initial excitement in programming is easy to explain: just the fact that when you tell the computer to do something, it will do it. Unerringly. Forever. Without a complaint.
And that's interesting in itself.
But blind obedience on its own, while initially fascinating, obviously does not make for a very likable companion. In fact, that part gets pretty boring fairly quickly. What makes programming so engaging is that, while you can make the computer do what you want, you have to figure out how.
I'm personally convinced that computer science has a lot in common with physics. Both are about how the world works at a rather fundamental level. The difference, of course, is that while in physics you're supposed to figure out how the world is made up, in computer science you create the world. Within the confines of the computer, you're the creator. You get to ultimately control everything that happens. If you're good enough, you can be God. On a small scale.
And I've probably offended roughly half the population on Earth by saying so.
But it's true. You get to create your own world, and the only thing that limits what you can do are the capabilities of the machine -- and, more and more often these days, your own abilities.
Think of a treehouse. You can build a treehouse that is functional and has a trapdoor and is stable. But everybody knows the difference between a treehouse that is simply solidly built and one that is beautiful, that takes creative advantage of the tree. It's a matter of combining art and engineering. This is one of the reasons programming can be so captivating and rewarding. The functionality often is second to being interesting, being pretty, or being shocking.
It is an exercise in creativity.
The thing that drew me into programming in the first place was the process of just figuring out how the computer worked. One of the biggest joys was learning that computers are like mathematics: You get to make up your own world with its own rules. In physics, you're constrained by existing rules. But in math, as in programming, anything goes as long as it's self-consistent. Mathematics doesn't have to be constrained by any external logic, but it must be logical in and of itself. As any mathematician knows, you literally can have a set of mathematical equations in which three plus three equals two. You can do anything you want to do, in fact, but as you add complexity, you have to be careful not to create something that is inconsistent within the world you've created. For that world to be beautiful, it can't contain any flaws. That's how programming works.
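(Arithmetic modulo four is a handy example of such a world: there, three plus three is six, and six wraps around to two. The rule sounds crazy, but the little world it defines is perfectly consistent.)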
One of the reasons people have become so enamored with computers is that they enable you to experience the new worlds you can create, and to learn what's possible. In mathematics you can engage in mental gymnastics about what might be. For example, when most people think of geometry, they think of Euclidean geometry. But the computer has helped people visualize different geometries, ones that are not at all Euclidean. With computers, you can take these made-up worlds and actually see what they look like. Remember the Mandelbrot set -- the fractal images based on Benoit Mandelbrot's equations? These were visual representations of a purely mathematical world that could never have been visualized before computers. Mandelbrot just made up these arbitrary rules about this world that doesn't exist, and that has no relevance to reality, but it turned out they created fascinating patterns. With computers and programming you can build new worlds and sometimes the patterns are truly beautiful.
Most of the time you're not doing that. You're simply writing a program to do a certain task. In that case, you're not creating a new world but you are solving a problem within the world of the computer. The problem gets solved by thinking about it. And only a certain kind of person is able to sit and stare at a screen and just think things through. Only a dweeby, geeky person like me.
The operating system is the basis for everything else that will happen in the machine. And creating one is the ultimate challenge. When you create an operating system, you're creating the world in which all the programs running on the computer live -- basically, you're making up the rules of what's acceptable and can be done and what can't be done. Every program does that, but the operating system is the most basic. It's like creating the constitution of the land that you're creating, and all the other programs running on the computer are just common laws.
Sometimes the laws don't make sense. But sense is what you strive for. You want to be able to look at the solution and realize that you came to the right answer in the right way.
Remember the person in school who always got the right answer? That person did it much more quickly than everybody else, and did it without seeming to try. That person didn't learn how the problem was supposed to be done but, instead, just thought about the problem the right way. And once you heard the answer, it made perfect sense.
The same is true in computers. You can do something the brute force way, the stupid, grind-the-problem-down-until-it's-not-a-problem-anymore way, or you can find the right approach and suddenly the problem just goes away. You look at the problem another way, and you have this epiphany: It was only a problem because you were looking at it the wrong way.
Probably the greatest example of this is not from computing but from mathematics. The story goes that the great German mathematician Carl Friedrich Gauss was in school and his teacher was bored, so to keep the students preoccupied he instructed them to add up all the numbers between 1 and 100. The teacher expected the young people to take all day doing that. But the budding mathematician came back five minutes later with the correct answer: 5,050.
The solution is not to actually add up all the numbers, because that would be frustrating and stupid. What he discovered was that by adding 1 and 100 you get 101. Then by adding 2 and 99 you get 101. Then 3 and 98 is 101, and so on, all the way down to 50 and 51. In a matter of seconds he noticed that it's 50 pairs of 101, so the answer is 5,050.
Maybe the story is apocryphal, but the point is clear: A great mathematician doesn't solve a problem the long and boring way because he sees what the real pattern is behind the question, and applies that pattern to find the answer in a much better way. The same is definitely true in computer science, too. Sure, you can just write a program that calculates the sum. On today's computers that would be a snap. But a great programmer would know what the answer is simply by being clever. He would know to write a beautiful program that attacks the problem in a new way that, in the end, is the right way.
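In C, the two approaches sit side by side in a dozen lines. This is only an illustrative sketch, not code from anywhere in particular:

#include <stdio.h>

int main(void)
{
        int i, sum = 0;

        /* The brute-force way: grind through every number. */
        for (i = 1; i <= 100; i++)
                sum += i;
        printf("brute force: %d\n", sum);

        /* Gauss's way: 50 pairs that each add up to 101. */
        printf("the clever way: %d\n", (100 * 101) / 2);

        return 0;
}

Both lines print 5,050 -- and the second, being just the formula n(n+1)/2, would be as instant for a million numbers as for a hundred.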
It's still hard to explain what can be so fascinating about beating your head against the wall for three days, not knowing how to solve something the better way, the beautiful way. But once you find that way, it's the greatest feeling in the world.
My terminal emulator grew legs. I was using it regularly to log onto the university computer and read email or participate in the discussions of the Minix newsgroup. The trouble is, I wanted to download things and upload things. That meant I needed to be able to save things to disk. In order to do that, my terminal emulator would require a disk driver. It also needed to get a file system driver, so that it would be able to look at the organization of the disk and save the stuff I was downloading as files.
That was the point where I almost gave up, thinking it would be too much work and not worth it. But there wasn't much else to do. I was going to classes that spring, and they weren't especially challenging. My sole outside activity was the weekly meeting (party) of Spektrum each Wednesday night. Social non-animal that I was, that became my only occasion to do anything other than program or study. Without those meetings (parties), I would have been a total recluse that spring, instead of a near-total recluse. Spektrum provided a built-in framework for a social life of some sort, and I don't think I ever missed one of their events. They were important to me -- so important, in fact, that I sometimes lost sleep anticipating those meetings, hoping not to feel self-conscious about my lack of social graces or my nose or my obvious absence of a girlfriend. This is standard geek stuff.
What I'm trying to say is that I didn't have a heck of a lot of other interesting things going on. And the disk driver/file system driver project would be interesting. So I said, I'll do this. I wrote a disk driver. And because I wanted to save files to my Minix file system -- and because the Minix file system was well-documented anyway -- I made my file system compatible with the Minix file system. That way, I could read files I created under Minix and write them to the same disk so that Minix would be able to read the files I created from my terminal emulation thing.
This took a lot of work -- a program-sleep-program-sleep-program-eat (pretzels)-program-sleep-program-shower (briefly)-program schedule. By the time I did this it was clear the project was on its way to becoming an operating system. So I shifted from thinking of it as a terminal emulator to thinking of it as an operating system. I think the transition occurred in the hypnosis of one of those marathon programming sessions. Day or night? I can't recall. One moment I'm in my threadbare robe hacking away on a terminal emulator with extra functions. The next moment I realize it's accumulating so many functions that it has metamorphosed into a new operating system in the works.
I called it my "gnu-emacs of terminal emulation programs." Gnu-emacs started out as an editor, but the people who created it built in a host of functions. They intended it to be an editor that can be programmed, but then the programmability part took over and it became the editor from hell. It contains everything but the kitchen sink, which is why sometimes the icon for the editor is actually a kitchen sink. It's known for being a huge piece of programming effort that has more functions than any editor needs. The same thing was happening with my terminal emulator. It was growing to be much more.
From: torvalds@klaava.Helsinki.Fi (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix question
Message-ID: <1991Jul3.100050.9886@klaava.Helsinki.Fi>
Date: 3 Jul 91 10:00:50 GMT
Hello Netlanders,
Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.
Okay, this is the earliest public evidence that a geek in Finland was willing to test the bounds of his computing skill. The POSIX standards are the lengthy rules for each of the hundreds of system calls in Unix -- what you need in order to get the computer to perform its operations, starting with Read, Write, Open, Close. POSIX is a Unix-standards body, an organization made up of representatives from companies that want to agree on common guidelines. Standards are important in order for programmers to be able to write applications to the operating system and have them run on more than one version. The system calls -- particularly the important ones -- would give me a list of the various functions needed for an operating system. I would then write the code to make each of those functions happen in my own way. By writing to the POSIX standards, my code would be usable by others.
I didn't know at the time that I could have bought those rules in hard-copy form directly from POSIX, but it wouldn't have mattered anyway. Even if I could have afforded the cost, it always took a long time to get things shipped to Finland. Hence my appeal for a version that I could download for free from an ftp site.
Nobody responded with a source for the POSIX standards, so I went to Plan B. I tracked down manuals for the Sun Microsystems version of Unix at the university, which was operating a Sun server. The manuals contained a basic version of the system calls that was good enough to help me get by. It was possible to look at the manual pages to see what the system call was supposed to do, and then set about the task of implementing it from there. The manual pages didn't say how to do it, they just said what the end results were. I also gleaned some of the system calls from Andrew Tanenbaum's book and a few others. Eventually somebody sent me the thick volumes containing the POSIX standards.
But my email message did not go unnoticed. Any knowledgeable person (and only knowledgeable people would be reading the Minix site) could tell that my project would have to be an operating system. Why else would I want the POSIX rules? The message aroused the curiosity of Ari Lemke, a teaching assistant at Helsinki University of Technology (where I would have studied had I not been so interested in studying theory). Ari sent me a nice reply, offering to make a subdirectory on the university's ftp site available for when I would be ready to post my operating system for anyone who might be interested in downloading it.
Ari Lemke must have been quite an optimist. He created the subdirectory (ftp.funet.fi) long before I had something I wanted to release. I had the password, and everything was set up for me to just log in and upload stuff to it. But it took about four months for me to feel I had anything I was willing to share with the world, or at least with Ari and the few other operating system freaks with whom I had been exchanging email.
My original goal was to create an operating system that I could eventually use as a replacement for Minix. It didn't have to do more than Minix, but it had to do the things in Minix that I cared about, and some other things I cared about, too. For example, not only was the Minix terminal emulation bad, but there was no way of performing the job-control function -- putting a program in the background while you're not using it. And memory management was done very simplistically, as it still is in the Mac OS, incidentally.
The way you create an operating system is to find out what the system calls are supposed to do, and then write your own program to implement those system calls in your own way. Generally speaking, there are a couple of hundred system calls. Some of them can represent multiple functions. Others are quite simple. Some of the more fundamental system calls are really complicated and depend on a great deal of infrastructure being there. Take the system calls of "Write" and "Read." You need to create a disk driver in order to write something to disk or read something from disk. Take "Open." You have to create the entire file system layer that parses the names and figures out where on the disk everything is. It took months just to write the "Open" system call. But once it was in place, the same code could be used for other functions.
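To make that layering concrete, here is a toy user-space sketch -- a pretend in-memory disk, a made-up namei() helper, and a tiny file table, none of it from any real kernel -- of why "Open" can't work until the layers beneath it exist:

#include <stdio.h>
#include <string.h>

/* A pretend disk with two files already on it. */
struct inode { const char *name; const char *data; };

static struct inode disk[] = {
        { "readme", "hello" },
        { "notes",  "world" },
};

/* The file-system layer: parse a name and figure out where on
 * the (pretend) disk the file lives. */
static struct inode *namei(const char *path)
{
        for (size_t i = 0; i < sizeof disk / sizeof disk[0]; i++)
                if (strcmp(disk[i].name, path) == 0)
                        return &disk[i];
        return NULL;
}

/* A per-process file table, and the "Open" call that wires a
 * descriptor to an inode. Only with both layers in place does
 * "Open" have anything to do. */
static struct inode *fdtable[16];

static int sys_open(const char *path)
{
        struct inode *ip = namei(path);

        if (!ip)
                return -1;
        for (int fd = 0; fd < 16; fd++)
                if (!fdtable[fd]) {
                        fdtable[fd] = ip;
                        return fd;
                }
        return -1;      /* file table full */
}

int main(void)
{
        int fd = sys_open("readme");

        if (fd >= 0)
                printf("fd %d -> %s\n", fd, fdtable[fd]->data);
        return 0;
}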
That's how the early development was done. I was reading the standards from either the Sun OS manual or various books, just picking off system calls one by one and trying to make something that worked. It was really frustrating. The reason: because nothing is happening, you can't really see any progress. You can make small test programs that test whatever it is you just added. But that doesn't really accomplish anything. After a while you get to the point where, instead of just reading through a list of system calls, you give up on that approach. The system is getting complete enough that you want to run a real program. The first program you have to run is a shell because, without a shell, it's pretty hard to run anything else. And besides, the shell itself contains many of the system calls you will need. Get it running and you will be able to print out a running list of the system calls you haven't implemented.
In Unix, the shell is kind of the mother of all programs. It's there to start up other binaries. (A binary is a program in the 1's and 0's that a machine reads. Whenever you write a program in a computer language, you then compile the source code and it becomes a binary.) The shell allows you to log on in the first place.
Okay, traditionally in a real Unix system the first program you run is called init, but init really needs a lot of infrastructure in order to work. It's kind of a controller for what goes on. But when you don't really have anything that works, there isn't any point to having init.
So instead of starting init, the first thing my kernel did was to start the shell. I had implemented about twenty-five system calls and, as I mentioned, this was the first real program I was trying to run. The shell wasn't something I had written myself. I had downloaded onto a disk a clone of the Bourne Shell, which was one of the original Unix shells. It was available over the Internet as free software, and its name was derived from a bad pun. The guy who wrote the original was named Bourne, so the clone was named the Bourne-Again Shell -- commonly referred to as bash.
When you try and load a real program from disk, invariably there's a bug in the disk driver or in the loader because it doesn't understand what it's reading in. So it prints out a running commentary on what it's doing. It's important because that's how you can find out what is going wrong.
I got to the point where my program was loading the shell and generating a printout of every system call that the shell contained that I hadn't yet implemented. I booted, ran the shell, and it would spit back something like: "system call 512 is not done." Day and night I was looking at printouts of system calls, trying to determine which ones I was doing wrong. But this was much more fun than taking a list of calls and just implementing them. You got to see progress being made.
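The mechanism is simple enough to sketch in a few lines of C. This is only a toy illustration with made-up call numbers -- the real kernel's tables were messier:

#include <stdio.h>

#define NR_CALLS 256

typedef int (*syscall_fn)(void);

static int sys_getpid(void) { return 42; }      /* one implemented call */

/* Most slots are empty: those calls are "not done" yet. */
static syscall_fn syscall_table[NR_CALLS] = {
        [20] = sys_getpid,      /* hypothetical call number */
};

static int do_syscall(int nr)
{
        if (nr < 0 || nr >= NR_CALLS || !syscall_table[nr]) {
                printf("system call %d is not done\n", nr);
                return -1;
        }
        return syscall_table[nr]();
}

int main(void)
{
        do_syscall(20);         /* succeeds silently */
        do_syscall(90);         /* prints the complaint */
        return 0;
}

Every complaint the shell triggered was, in effect, a to-do item: implement that call, reboot, and see what it stumbles over next.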
It was late August or early September when I finally got the shell working. From that point, things got a lot easier.
This was a big deal.
When I got the shell working, I was pretty much immediately able to compile a few other programs. The shell was more complicated than the cp (copy) program, for example, or the ls (for getting a directory listing) program. Everything the shell needed had to be there already, so once the shell was working it went from close to zero to 100 in nothing flat, because all these pieces had been in place. At some point there was enough in place that I experienced a Let There Be Light moment, because until then, nothing had really worked.
Yes, I felt a great sense of satisfaction. I think that was particularly important because I hadn't been doing anything that summer except working on the computer. This is not an exaggeration. The April through August period is pretty much the best time of the year in Finland. Folks are sailing in the archipelago, sunning themselves on beaches, sitting in their summer-cottage saunas. But I rarely even knew if it was day or night, weekend or weekday. Those thick black curtains blocked out the near round-the-clock sunshine, and the world. Some days -- nights? -- I'd roll out of bed directly into the chair at my computer, less than two feet away. Apparently my dad was bugging my mom to make me get a summer job. But she didn't mind: I wasn't bothering her. Sara was a bit annoyed that the phone lines were always tied up when I went online. She could probably write that sentence with a little less diplomacy. It's not an exaggeration to say that I had virtually no contact with the world outside my computer. Okay, maybe once a week a friend would knock on my window and if I wasn't scrolling through important code I would invite him in. (It was always a him -- remember, this was before geeks were considered cool.) We would drink tea and maybe watch an hour of MTV in the tiny kitchen. Now that I think of it, yes, I do recall going out for an occasional beer or for some snooker after having my window pounded by someone like Jouko (I call him "Avuton," which means "he who slays dragons," but that's another story). But, in all honesty, nothing else was going on in my life at the time.
And I didn't feel the least bit like some pathetic, pale-skinned, propeller-head loser. The shell was operational, which meant that I had actually built the foundation of a working operating system. And I was having fun.
With the shell working, I started testing its built-in programs. Then I compiled enough new programs to actually do something. I was compiling everything in Minix, but I moved the shell over to a special partition that I had created for the new operating system. Privately I called it Linux.
Honest: I didn't want to ever release it under the name Linux because it was too egotistical. What was the name I reserved for any eventual release? Freax. (Get it? Freaks with the requisite X.) In fact, some of the early make files -- the files that describe how to compile the sources -- included the word "Freax" for about half a year. But it really didn't matter. At that point I didn't need a name for it because I wasn't releasing it to anybody.
From: torvalds@klaava.Helsinki.Fi (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: What would you like to see most in minix?
Summary: small poll for my new operating system
Message-ID: <1991Aug25.205708.9541@klaava.Helsinki.Fi>
Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386 (486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
I've currently ported bash (1.08) and gcc (1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)
Linus (torvalds@kruuna.helsinki.fi)
PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc.), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.
The most hard-core operating system enthusiasts among the Minix crowd felt a spark. Not many suggestions about Minix features came my way, but there were other inquiries.
>Tell us more! Does it need a MMU?
Answer: Yes
>How much of it is in C? What difficulties will there be in porting? Nobody will believe you about nonportability ;-), and I for one would like to port it to my Amiga.
Answer: It's mostly in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. Some of my "C" files are almost as much assembler as C.
As already mentioned, it uses an MMU, for both paging (not to disk yet) and segmentation. It's the segmentation that makes it REALLY 386-dependent (every task has a 64Mb segment for code & data - max 64 tasks in 4Gb. Anybody who needs more than 64Mb/task - tough cookies).
And I even got a few folks offering to be beta testers.
In the end, it wasn't much of a decision to post it. That was how I was accustomed to exchanging programs. So the only real decision was, at what point would I be comfortable enough to dare show this off to people? Or, phrased more accurately: When is it good enough that I won't have to be ashamed of it?
What I ultimately wanted was to have a compiler and a real environment so that you could create programs in Linux itself, without having to use Minix. But I felt so proud when the gnu shell worked that I was ready to let the world see. Also, I wanted feedback.
By the time the shell worked, I had a few rudimentary binaries I'd compiled for the operating system. You really couldn't do anything, but you could see that it was something resembling Unix. In fact, it worked like a very crippled Unix.
So I just decided I would make it available. I wouldn't tell anybody publicly. Instead, I just informed a handful of people by private email, probably between five and ten people in all, that I had uploaded it to the ftp site. Among them were Bruce Evans of Minix fame and Ari Lemke. I uploaded the sources to Linux itself and a few binaries so that you could start something. I told people what they needed to do in order to try and run this thing. They still had to have Minix installed -- the 386 version -- and they still had to have the GCC compiler. In fact they had to have my version of GCC, so I made that available, too.
There's a protocol for numbering releases. It's psychological. When you think a version is truly ready to be released, you number it version 1.0. But before that, you number the earlier versions to indicate how much work you need to accomplish before getting to 1.0. With that in mind, the operating system I posted to the ftp site was numbered version 0.01. That tells everybody it's not ready for much.
And yes, I remember the date: September 17, 1991.
I don't think more than one or two people ever checked it out. They had to go to the trouble of installing the special compiler, getting a clean partition so they could use that to boot, compiling my kernel, and then running just the shell. Running the shell was basically all you could do. You could print out the sources, which amounted to just 10,000 lines -- that's less than 100 pages of paper if you printed with small font. (Now it's something on the order of 10 million lines.)
One of the main reasons I distributed the operating system was to prove that it wasn't all just hot air, that I had actually done something. On the Internet, talk is cheap. Regardless of what you do, whether it be operating systems or sex, too many people are just faking it in cyberspace. So it's nice, after talking to a lot of people about building an operating system, to be able to say, "See, I actually got something done. I wasn't stringing you along. Here's what I've been doing ...."
And Ari Lemke, who ensured that it made its way to the ftp site, hated the name Freax. He preferred the other working name I was using -- Linux -- and named my posting: pub/OS/Linux. I admit that I didn't put up much of a fight. But it was his doing. So I can honestly say I wasn't egotistical, or half-honestly say I wasn't egotistical. But I thought, okay, that's a good name, and I can always blame somebody else for it, which I'm doing now.
As I mentioned, my operating system really wasn't very useful. For one thing, it would crash very easily if you filled up memory or if you did anything nasty. Even if you weren't doing anything nasty, the operating system would crash if you kept it running for any length of time. But it wasn't meant to be run at that stage. It was meant to be looked at. Yes, and admired.
So it wasn't intended to be anything but a specialty for the few people who were interested in creating new operating systems. Very technical people -- and even among technical people, a special-interest group.
Their reaction was invariably positive, but positive in an "It would be nice if it could also do this" sense, or "It looks cool but it really doesn't work on my computer at all."
I remember one email whose writer said he really liked my operating system, and he went on for at least one paragraph to tell me how nice it was. Then he explained that it had just eaten his hard disk, and that my disk driver was flaky or something. He had lost all the work he had done, but he was still very positive. It was fun to read that kind of email. It was a bug report about something that screwed him up.
That was just the sort of feedback I was looking for. I fixed some bugs, like the one that caused it to lock up when it ran out of memory. And I made the big step of porting the GCC compiler to the operating system, so I could compile small programs. That meant users wouldn't need to load my GCC compiler before running the operating system.
Do you pine for the days when men were men and wrote their own device drivers?
-announcement of the posting of Linux version 0.02
Early October saw the release of version 0.02, which included some fixed bugs and a few additional programs. The following month I released version 0.03.
I probably would have stopped by the end of 1991. I had done a lot of things I thought were interesting. Everything didn't really work perfectly, but in a software kind of world I find that once you solve the fundamental problems of a project, it's easy to lose interest. And that's what was happening to me. Trying to debug software is not very engaging. Then two things happened to keep me going. First, I destroyed my Minix partition by mistake. Second, people kept sending me feedback.
Back then I was booting into Linux but used Minix as the main development environment. Most of what I was doing under Linux was reading email and news from the university's computer via the terminal emulator I had written. The university computer was constantly busy, so I had written a program that auto-dialed into it. But in December, I mistakenly auto-dialed my hard disk instead of my modem. I was trying to auto-dial /dev/tty1, which is the serial line. But by mistake I auto-dialed /dev/hda1, which is the hard disk device. The end result was that I inadvertently overwrote some of the most critical parts of the partition where I had Minix. Yes, that meant I couldn't boot Minix anymore.
That was the point where I had a decision to make: I could reinstall Minix, or I could bite the bullet and acknowledge that Linux was good enough that I didn't need Minix. I would write the programs to compile Linux under itself, and whenever I felt I needed Minix I would just add the desired feature to Linux. It's a big conceptual step when you drop the original hosting environment and truly make a program self-hosting, so big that I released the new version as 0.10 in late November. A few weeks later came version 0.11.
That's when there actually started to be a number of people using it and doing things with it. Until then, I had gotten maybe one-line bug fixes. But now, people were sending me new features. I remember going out and upgrading my machine to have 8 megs of RAM instead of 4 megs, to accommodate the need for additional memory. I also went out and bought a floating-point coprocessor because people had started asking me if Linux would support their floating-point coprocessors. The extra hardware would enable my computer to perform floating-point math.
I remember that, in December, there was this guy in Germany who only had 2 megabytes of RAM, and he was trying to compile the kernel and he couldn't run GCC because GCC at the time needed more than a megabyte. He asked me if Linux could be compiled with a smaller compiler that wouldn't need as much memory. So I decided that even though I didn't need the particular feature, I would make it happen for him. It's called page-to-disk, and it means that even though someone has only 2 megs of RAM, he can make it appear to be more by using the disk for memory. This was around Christmas 1991. I remember on December 23rd trying to make the page-to-disk work. By December 24th, it kind of worked but crashed every once in a while. Then on December 25th, it was done. It was basically the first feature I added to serve somebody else's need.
And I was proud of it.
Not that I mentioned anything about it to my family, as we gathered at my paternal grandmother's (Farmor!) to dine on ham and varieties of herring. Each day, the community of Linux users expanded, and I was receiving email from places that I'd dreamed about visiting, like Australia and the United States. Don't ask me why, but I didn't feel the need to discuss any of this with my parents, sister, or any other relatives. They didn't understand computers. I guess I thought they wouldn't understand what was happening.
As far as they were concerned, I was just tying up the phone lines with my modem. In Helsinki it used to be that you had a flat rate during the night, so I tried to do most of the work at home late at night. But occasionally I tied up the phone all day. I tried to get a second line, but the building that housed my mother's apartment was so old that they didn't have any extra lines and weren't interested in adding new ones. Sara was doing nothing but talking on the phone with her friends at the time. At least that's what it seemed like to me. So we had fights, occasionally. Virtual fights. As she talked to her friends, I would force the modem to start dialing so that she would hear dee-dee-dee-dee-dee when I was trying to dial out. It would disturb her but she would know that I really, really needed to read email. I never said I was the world's best older brother.
Page-to-disk was a fairly big thing because it was something Minix had never done. It was included in version 0.12, which was released in the first week of January 1992. Immediately, people started to compare Linux not only to Minix but to Coherent, which was a small Unix clone developed by Mark Williams Company. From the beginning, the act of adding page-to-disk caused Linux to rise above the competition.
That's when Linux took off. Suddenly there were people switching over from Minix to Linux. At the time, Linux didn't do everything Minix did, but it did most of the things people really cared about. And it had this one capability that people really, really cared about: With page-to-disk, you could run bigger programs than you had memory for. It meant that when you ran out of memory you could take an old piece of memory, save it off to disk, remember where you saved it, and reuse that memory for the problem you had to solve. This was a big deal in the opening weeks of 1992.
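Stripped of all the hard parts -- choosing a victim page, the page tables, a real disk driver -- the core idea is small. Here is a toy user-space illustration of it, nothing like the actual kernel code:

#include <stdio.h>
#include <string.h>

#define FRAMES    2     /* pretend RAM holds only two pages */
#define PAGESIZE  64

static char ram[FRAMES][PAGESIZE];

int main(void)
{
        FILE *swap = tmpfile();         /* stands in for the swap area */
        long slot = 0;                  /* remember where the page went */

        /* Two pages of "data" fill up all of RAM. */
        strcpy(ram[0], "page A");
        strcpy(ram[1], "page B");

        /* A third page is needed: save an old frame to disk,
         * remember where it went, and reuse the memory. */
        fseek(swap, slot * PAGESIZE, SEEK_SET);
        fwrite(ram[0], 1, PAGESIZE, swap);
        strcpy(ram[0], "page C");       /* frame 0 reused */

        /* Later, when page A is needed again, read it back in.
         * (Evicting page B first is skipped in this toy.) */
        fseek(swap, slot * PAGESIZE, SEEK_SET);
        fread(ram[1], 1, PAGESIZE, swap);

        printf("%s / %s\n", ram[0], ram[1]);    /* page C / page A */
        return 0;
}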
It was in January that Linux users grew from five, ten, twenty people -- folks who I could email and whose names I knew -- to hundreds of unidentifiable people. I didn't know everybody using Linux, and that was fun.
About this time there was a hoax speeding its way on the Internet. Some poor boy named Craig was dying of cancer and a popular chain letter urged you to show your support by sending him a postcard. It turned out to be somebody's idea of a sick joke; I don't think Craig ever really existed, much less suffered from cancer. But the appeal generated millions of postcards. So I was only half-serious when I asked for postcards instead of money from people who used Linux. It was like an oh-God-not-another-email-that-asks-for-postcards joke. In the PC world at the time, there had been a strong tradition of shareware. You downloaded a program and you were supposed to send in something on the order of ten bucks to the writer. I was getting emails from people asking me if I would like them to send me thirty bucks or so. I had to say something.
Looking back, the money would have been useful, I guess. I had amassed something like $5,000 in student loans, and had to shell out about $50 a month to pay off my computer. My other major expenditures were pizza and beer. But Linux was keeping me so preoccupied that I wasn't going out much at the time, maybe once a week at most. I didn't need money for dates. I could have used it for hardware add-ons, but that wasn't necessary. Probably a different son would have asked for money for his program, if only to fork over some rent to his working single mom. It never occurred to me at the time. Sue me.
I was more interested in seeing where people were using Linux. Instead of cash, I preferred postcards. And they poured in -- from New Zealand, from Japan, from the Netherlands, from the United States. It was Sara who typically picked up the mail, and she was suddenly impressed that her combative older brother was somehow hearing from new friends so far away. It was her first tip-off that I was doing anything potentially useful during those many hours when I had the phone line engaged. The postcards totaled in the hundreds, and I have no idea what happened to them. They must have disappeared in one of my moves. Avuton calls me "the least nostalgic person" he has ever met.
Actually, I didn't want the money for a variety of reasons. When I originally posted Linux, I felt I was following in the footsteps of centuries of scientists and other academics who built their work on the foundations of others -- on the shoulders of giants, in the words of Sir Isaac Newton. Not only was I sharing my work so that others could find it useful, I also wanted feedback (okay, and praise). It didn't make sense to charge people who could potentially help me improve my work. I suppose I would have approached it all differently if I hadn't been raised in Finland, where anyone exhibiting the slightest sign of greediness is viewed with suspicion, if not envy. (This has changed a bit since the days when Nokia phones started making their way into pockets the world over, boosting the bank accounts of numerous Finns.) And, yes, I undoubtedly would have approached the whole no-money thing a lot differently if I had not been brought up under the influence of a diehard academic grandfather and a diehard communist father.
Regardless, I didn't want to sell Linux. And I didn't want to lose control, which meant I didn't want anybody else to sell it, either. I made that clear in the copyright policy I included in the copying file of the first version I had uploaded back in September. Thanks to the Berne Convention in Europe in the 1800s, you own the copyright to anything you create, unless you sell the copyright. As the copyright owner, I got to make up the rules: You can use the operating system for free, as long as you don't sell it, and if you make any changes or improvements you must make them available to everybody in source code (as opposed to binaries, which are inaccessible). If you didn't agree with these rules, you didn't have the right to copy the code or do anything with it.
Think of yourself. You put six months of your life into this thing and you want to make it available and you want to get something out of it, but you don't want people to take advantage of it. I wanted people to be able to see it, and to make changes and improvements to their hearts' content. But I also wanted to make sure that what I got out of it was to see what they were doing. I wanted to always have access to the sources so that if they made improvements I could use those improvements myself. It made sense to me that the way for Linux to develop into the best possible technology was to keep it pure. If money was to get involved, things would get murky. If you don't let money enter the picture, you won't have greedy people.
While I wasn't interested in asking for money for Linux, other people were not shy about requesting donations whenever they gave someone a copy of the operating system they had loaded onto a floppy disk. By February, it was not uncommon for folks to attend Unix users' meetings armed with floppies containing Linux. People started asking me if they could charge, say, five dollars just to cover the cost of the disk and their time. The trouble was, that was a violation of my copyright.
It was time to rethink my Linux-is-not-for-sale stance. By that point, Linux was getting so much online discussion that I felt fairly confident that nobody was going to be in a position to just take it and run with it, which had been my big fear. At least they wouldn't do it without generating a lot of negative reaction. If anybody tried abducting Linux and turning it into a commercial project, there would have been a strong backlash, and a growing community of hacker types who would say "Hey, that's Linux! You can't do that," although not in such polite words.
The momentum had been established: On a daily basis, hackers from around the world were sharing their suggested changes. We were collectively creating the best operating system around, and couldn't possibly veer away from our trajectory. Because of this, and because Linux had become so recognizable, I felt comfortable allowing people to sell it.
But before I make myself sound like Mr. Beneficent, let me mention another critical element of my decision. The fact is, to make Linux usable, I had relied on a lot of tools that had been distributed freely over the Internet -- I had hoisted myself up on the shoulders of giants. The most important of these free software programs was the GCC compiler. It had been copyrighted under the General Public License, universally known as the GPL (or the "copyleft"), which was the brainchild of Richard Stallman. Under terms of the GPL, money is not the issue. You can charge a million bucks if somebody's willing to pay it, but you have to make sources available. And the person you give or sell the source to has to have all the rights you have. It's a brilliant device. But unlike many hard-core GPL freaks, who argue that every new software innovation should be opened up to the universe under the general public license, I believe it should be the right of the individual inventor to decide what to do with his or her invention.
So I dumped my old copyright and adopted the GPL, a document that Stallman had written with lawyers looking it over. (Because lawyers were involved, it runs on for pages.)
The new copyright was included in version 0.12, but I remember lying awake at night after releasing it, nervous about what commercial interests would do to the system. Looking back now, it seems ridiculous to have been so worried because the commercial interest was relatively small. Something made me think that I had to be careful. One of my worries was -- and still is -- that somebody would just take Linux and not honor the copyright. Back then I worried that it would be practically impossible to sue anyone in the United States who broke the copyright. It's still a concern. It's easy to prosecute someone for such violations, but I worry about somebody doing it until they're forced to stop.
And there are nagging fears that companies in places like China won't honor the GPL. Practically nothing in their legal system prevents them from breaking the copyright, and in a real sense it's not worth the trouble to go after people who would try to do something illegal. That's what big software companies and the music industry have tried to do and it hasn't been overwhelmingly successful. My fears are mitigated by reality. Somebody might do it for awhile, but it is the people who actually honor the copyright, who feed back their changes to the kernel and have it improved, who are going to have a leg up. They'll be part of the process of upgrading the kernel. By contrast, people who don't honor the GPL will not be able to take advantage of the upgrades, and their customers will leave them. I hope.
Generally speaking, I view copyrights from two perspectives. Say you have a person who earns $50 a month. Should you expect him or her to pay $250 for software? I don't think it's immoral for that person to illegally copy the software and spend that five months' worth of salary on food. That kind of copyright infringement is morally okay. And it's immoral -- not to mention stupid -- to go after such a "violator." When it comes to Linux, who cares if an individual doesn't really follow the GPL if they're using the program for their own purposes? It's when somebody goes in for the quick money -- that's what I find immoral, whether it happens in the United States or Africa. And even then it's a matter of degree.
Greed is never good.
The attention wasn't all positive. Although confrontation never has been my best sport, I was bullied into defending Linux and my manhood when Andrew Tanenbaum kept making attacks on the operating system that was supplanting his own. We're nerds, so it was all done via email.
Who could blame him for getting hot under the T-shirt? Before any Linux newsgroups had been created, I routinely used Minix newsgroups to make announcements about Linux or find people who were interested in the operating system. Why should Andrew like that?
So, for starters, he was unhappy about my infringing on his newsgroup. And he obviously wasn't too pleased that his operating system was becoming eclipsed by this new creation from the snowy wilds of Finland -- and that so many developers were joining the project. He also had opposing ideas for how operating systems should be built. At the time, Andrew was part of a camp of computer scientists who favored the microkernel approach to operating systems. He had done Minix as a microkernel, and Amoeba, the system he was working on at the time, also involved one.
This was a flourishing movement in the late 1980s and early 1990s. And Linux's success was threatening it. So he kept posting unpleasant little jabs.
The theory behind the microkernel is that operating systems are complicated. So you try to get some of the complexity out by modularizing it a lot. The tenet of the microkernel approach is that the kernel, which is the core of the core of the core, should do as little as possible. Its main function is to communicate. All the different things that the computer offers are services that are available through the microkernel communications channels. In the microkernel approach, you're supposed to split up the problem space so much that none of it is complex.
I thought this was stupid. Yes, it makes every single piece simple. But the interactions make it far more complex than it would be if many of the services were included in the kernel itself, as they are in Linux. Think of your brain. Every single piece is simple, but the interactions between the pieces make for a highly complex system. It's the whole-is-bigger-than-the-parts problem. If you take a problem and split it in half and say that the halves are half as complicated, you're ignoring the fact that you have to add in the complication of communication between the two halves. The theory behind the microkernel was that you split the kernel into fifty independent parts, and each of the parts is a fiftieth of the complexity. But then everybody ignores the fact that the communication among the parts is actually more complicated than the original system was -- never mind the fact that the parts are still not trivial.
That's the biggest argument against microkernels. The simplicity you try to reach is a false simplicity.
Linux started out much smaller and much, much simpler. It didn't enforce modularity, so you could do a lot of things more straightforwardly than you ever could with Minix. One of the original problems I had with Minix was that if you had five different programs running at the same time and they all wanted to read five different files, the tasks would be serialized. In other words, you would have five different processes sending requests to the file system: "Can I please Read From File X?" The file system daemon that handles reading takes one of them and sends it back, then takes the next one and sends it back, and so on.
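In caricature -- and it is only a caricature, not anything from the real Minix sources -- that arrangement looks like this: every request lines up behind a single server, which answers one message at a time:

#include <stdio.h>

/* A toy picture of the message-passing arrangement: user
 * processes don't touch the file system directly; they mail a
 * request to one file-system server. */

struct message {
        int sender;             /* which process is asking */
        const char *file;       /* what it wants to read */
};

static void fs_server(const struct message *queue, int n)
{
        /* One request at a time: everyone else waits in line. */
        for (int i = 0; i < n; i++)
                printf("serving process %d: read %s\n",
                       queue[i].sender, queue[i].file);
}

int main(void)
{
        const struct message queue[] = {
                { 1, "a.txt" }, { 2, "b.txt" }, { 3, "c.txt" },
                { 4, "d.txt" }, { 5, "e.txt" },
        };

        fs_server(queue, 5);
        return 0;
}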
Under Linux, which is a monolithic kernel, you have five different processes that each do a system call to the kernel. The kernel has to be very careful that they don't get confused with each other, but it very naturally scales up to any number of processes doing whatever they want. It makes Linux much faster and more efficient.
Another problem with Minix was that you got the sources but the licenses didn't allow you to do a lot. Take someone like Bruce Evans, who performed major surgery on Minix and made it much more usable. He couldn't just incorporate his improvements. He was restricted to only making patches. From a practical standpoint that's a complete disaster. He couldn't legally make a bootable image available to people so they could easily upgrade. So users had to take a multiple-step process to even get a usable system, which was horribly impractical.
The only time I ended up communicating with Andrew Tanenbaum was in early 1992. Imagine logging on one blizzardy morning and running across the unedited version of this:
From: ast@cs.vu.nl (Andy Tanenbaum)
Newsgroups: comp.os.minix
Subject: LINUX is obsolete
Date: 29 Jan 92 12:12:50 GMT
I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that I would have said much had I been around), but for what it's worth, I have a couple of comments now.
As most of you know, for me MINIX is a hobby, something that I do in the evening when I get bored writing books and there are no major wars, revolutions, or senate hearings being televised live on CNN. My real job is a professor and researcher in the area of operating systems.
As a result of my occupation, I think I know a bit about where operating systems are going in the next decade or so. Two aspects stand out:
1. MICROKERNEL VS MONOLITHIC SYSTEM
Most older operating systems are monolithic, that
is, the whole operating system is a single a.out file
that runs in "kernel mode." This binary contains the
process management, memory management, file system and
the rest. Examples of such systems are UNIX, MS-DOS,
VMS, MVS, OS/360, MULTICS, and many more.
The alternative is a microkernel-based system, in
which most of the OS runs as separate processes,
mostly outside the kernel. They communicate by message
passing. The kernel's job is to handle the message
passing, interrupt handling, low-level process
management, and possibly the I/O. Examples of this
design are the RC4000, Amoeba, Chorus, Mach, and the
not-yet-released Windows/NT.
While I could go into a long story here about the
relative merits of the two designs, suffice it to say
that among the people who actually design operating
systems, the debate is essentially over. Microkernels
have won. MINIX is a microkernel-based system.
The file system and memory management are separate
processes, running outside the kernel. The I/O
drivers are also separate processes. LINUX is a
monolithic style system. This is a giant step back
into the 1970's.
2. PORTABILITY
MINIX was designed to be reasonably portable, and has
been ported from the Intel line to the 680x0 (Atari,
Amiga, Macintosh), SPARC, and NS32016. LINUX is tied
fairly closely to the 80x86. Not the way to go.
Don't get me wrong, I am not unhappy with LINUX. It
will get all the people who want to turn MINIX in BSD
UNIX off my back. But in all honesty, I would suggest
that people who want a **MODERN** *free* OS look
around for a microkernel-based, portable OS, like
maybe GNU or something like that.
Andy Tanenbaum (ast@cs.vu.nl)
I knew I needed to defend my honor, so I wrote back:
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Subject: Re: LINUX is obsolete
Date: 29 Jan 92 23:14:26 GMT
Organization: University of Helsinki
Well, with a subject like this, I'm afraid I'll have to reply. Apologies to minix-users who have heard enough about linux anyway. I'd like to be able to just *ignore the bait*, but ... Time for some serious flamefesting!
In article <12595@star.cs.vu.nl> ast@cs.vu.nl (Andy Tanenbaum) writes:
>I was in the U.S. for a couple of weeks, so I haven't commented much on LINUX (not that
>I would have said much had I been around), but for what it is worth, I have a couple of
>comments now.
>As most of you know, for me MINIX is a hobby, something
>that I do in the evening when I get bored writing books
>and there are no major wars, revolutions, or senate
>hearings being televised live on CNN. My real job is a
>professor and researcher in the area of operating
>systems.
You use this as an excuse for the limitations of
minix? Sorry, but you lose: I've got more excuses
than you have, and linux still beats the pants off
minix in almost all areas. Not to mention the fact
that most of the good code for minix seems to have
been written by Bruce Evans.
Re 1: You doing minix as a hobby -- look at who makes
money off minix, and who gives linux out for free.
Then talk about hobbies. Make minix freely available,
and one of my biggest gripes with it will disappear.
Linux has very much been a hobby (but a serious one;
the best type) for me: I get no money for it, and it's
not even part of any of my studies in the university.
I've done it all on my own time, and on my own
machine.
Re 2: Your job is being a professor and researcher:
That's one hell of a good excuse for some of the
brain damages of minix. I can only hope (and assume)
that Amoeba doesn't suck like minix does.
>1. MICROKERNEL VS MONOLITHIC SYSTEM
True, linux is monolithic, and I agree that microkernels
are nicer. With a less argumentative subject,
I'd probably have agreed with most of what you said.
From a theoretical (and aesthetical) standpoint,
linux loses. If the GNU kernel had been ready last
spring, I'd not have bothered to even start my project:
the fact is that it wasn't and still isn't.
Linux wins heavily on points of being available now.
>MINIX is a microkernel-based system. [deleted, but
>not so that you miss the point] LINUX is a monolithic
>style system.
If this was the only criterion for the "goodness"
of a kernel, you'd be right. What you don't mention
is that minix doesn't do the microkernel thing very
well, and has problems with real multitasking (in the
kernel). If I had made an OS that had problems with a
multithreading file system, I wouldn't be so fast to
condemn others: in fact, I'd do my damnedest to make
others forget about the fiasco.
[yes, i know there are multithreading hacks for
minix, but they are hacks, and bruce evans tells me
there are lots of race conditions.]
>2. PORTABILITY
"Portability is for people who cannot write new programs"
-me, right now (with tongue in cheek)
The fact is that linux is more portable than minix.
What? I hear you say. It's true -- but not in the sense
that ast means: I made linux as conformant to standards
as I knew how (without having any POSIX standard
in front of me). Porting things to linux is
generally /much/ easier than porting them to minix.
I agree that portability is a good thing: but only
where it actually has some meaning. There is no
idea in trying to make an operating system overly
portable: adhering to a portable API is good enough.
The very /idea/ of an operating system is to use the
hardware features, and hide them behind a layer of
high-level calls. That is exactly what linux does:
it just uses a bigger subset of the 386 features than
other kernels seem to do. Of course this makes the
kernel proper unportable, but it also makes for
a /much/ simpler design. An acceptable trade-off, and
one that made linux possible in the first place.
I also agree that linux takes the non-portability to
an extreme: I got my 386 last January, and linux was
partly a project to teach me about it. Many things
should have been done more portably if it would have
been a real project. I'm not making overly many excuses
about it though: it was a design decision, and last
april when I started the thing, I didn't think anybody
would actually want to use it. I'm happy to report I
was wrong, and as my source is freely available, anybody
is free to try to port it, even though it won't be
easy.
PS. I apologise for sometimes sounding too harsh:
minix is nice enough if you have nothing else. Amoeba
might be nice if you have 5-10 spare 386's lying
around, but I certainly don't. I don't usually get
into flames, but I'm touchy when it comes to linux :)
There were a few more installments in this, one of my few flame wars. But you get the point: There were opposing voices, even in the early days. (Or maybe the point is: Be careful when you put yourself out there in an electronic forum. Your typos and errors of grammar will haunt you forever.)
Linus and I leave our families and friends back at the campsite and take an afternoon hike along a clear stream. We're camping at Grover Hot Springs, way up in the Eastern Sierra over the July 4th weekend, at a site that seems to have been lifted from the pages of National Geographic. "This is a Kodak moment," Linus proclaims, pausing to look out over a wildflower-dusted meadow and the dramatic cliffs that provide the backdrop. We settle at a site along the stream, and I ask him to describe his life during those days when Linux's appeal was spreading far beyond its original family of newsgroup enthusiasts, few of whom Linus had even personally met.
"It must have felt great," I say. "For years you were toiling away on your own in your bedroom, with little contact with the world outside your CPU. Suddenly you have people from every corner of the planet acknowledging what great work you're doing. You're the center of this growing community that is looking to you to --"
"I don't have a memory of it being a big deal for me," he replies. "I really don't think it was. It was kind of the thing I was thinking about all the time, but mainly because there was always a problem to be solved. In that sense, I was thinking about it a lot, but it was not, emotionally, a big thing. Intellectually, it was something big.
"I liked the fact that there were a lot of people giving me motivation to do this project. I thought I had seen the end of it, a point where it was almost done. But that point never came because people kept giving me more reasons to continue and more brainteasers to worry about.
"And that kept it interesting. Otherwise, I probably would have just moved on to another project, because that's how I worked, and that was fun. But I suspect I worried more about my nose or something like that," he says.
A few weeks later we are at the Stanford Shopping Center, where Linus is perplexed over the selection of running shoes from which he can choose. "How many miles do you typically run each week?" asks the salesman. Linus smiles; he hasn't run as much as a mile during the past ten years. Exercise hasn't been a major priority. But in his weaker moments, Linus admits that he would like to shed some of his excess poundage.
"Tote must have convinced you to help me get rid of my pouch," he jokes, patting his gut.
"Tell her that her check never arrived this week," I reply.
Soon we are circling the Stanford campus in search of a legal parking space. After maybe half an hour, we do a few stretches, then we start to run over narrow dirt paths past the campus's dried-up lake, into the woods, and in the direction of our goal: the huge hillside satellite dish. We never make it. I set an unfairly swift pace and am surprised that Linus can stick right behind me for about a mile. Then he loses his wind. A few minutes later we spread out on the grass along the lake.
"What was your family's reaction to everything that was happening to Linux?" I ask. "They must have been pretty excited about it."
"I don't think anybody really noticed," he replies. "I won't say that nobody really cared. But I had been doing programming most of my life, and this was not anything different as far as they were concerned."
"Well, you must have said something to your folks. Like if your dad was driving you someplace, didn't you say, 'Hey, you're not going to believe this but you know the stuff I've been doing with my computer? Well, I've got hundreds of people who are using it..."
"No," he answers. "I just didn't feel the need to share this with my family and friends. I never had the feeling that I wanted to push it on people. I remember Lars Wirzenius, around the time I was writing Linux, decided to buy XENIX, SCO's version of Unix, and I think I remember he tried to make excuses, like, 'Don't take this the wrong way.' I personally don't think I ever cared. He eventually switched, but it wasn't a big deal for me. To me the fact that people used it was nice, and it was wonderful that I got comments back, but at the same time it was not that important. I didn't want to spread the gospel. I was proud of having people use my code, but I don't remember ever having the feeling that I wanted to share that with anybody. And I didn't think it was the most important thing on Earth. I didn't think that I was doing something really important because a hundred people were using my software. It was more like it was fun. And that's how I feel about it today. "
"So you didn't even want to tell your parents and family and friends about it. And you really weren't excited by everything that was happening?" I ask, not masking my disbelief.
He waits a few seconds before responding. "I don't remember if I even had feelings back then."
Linus buys a new car, a BMW Z3, a two-seater convertible that he says defines the word "fun." It is metallic blue, the perfect boy's model-car color. He chose that shade because the vehicle doesn't come in bright yellow, his color of choice. BMW yellow, he explains, "looks like pee." For years he parked his Pontiac as close as possible to the entrance to Transmeta's headquarters in a Santa Clara office park. But the BMW is parked outside his office window, allegedly so it can be in the shade. Now when Linus works on his computer he can admire his new car at the same time.
A little more than a year earlier, we had taken our first trip over the mountain to Santa Cruz in a convertible, a white Mustang I had rented for the occasion. And during that excursion, Linus had made a point of stopping to check out the sports cars parked outside the sauna place and brewery we visited. Now we are heading over the mountain in his own sports car. He smiles as he takes the curves on Route 17.
"You deserve this," I say.
I pull a handful of CDs from the glove compartment.
"Pink Floyd?" I ask. "The Who? Janis Joplin?"
"lt's the music I grew up listening to. I never bought music when I was a kid, but we had this around the apartment. I guess my mother was playing it, although I remember she was big on Elvis Costello."
It is Friday afternoon, a sparkling Friday afternoon of California perfection with delights for each of the senses: cobalt skies for the eyes, intense sunshine for the skin, the fragrance of mountain eucalyptus, the sweet taste of pure air, the lull of Pink Floyd on upgraded speakers. Sure, to passing motorists we must have appeared to be some sort of postadolescent cliche, spraying on sunblock and doing the classic rock vocals, but not many cars passed Linus's new BMW Z3.
We park among shoddier vehicles along the side of Highway 1 a bit north of Santa Cruz, and make our way down to a mostly empty beach. We spread out on towels in the warm sun and wait a few minutes before I pull my tape recorder from my backpack. Again, I ask him to describe his life in those early days.
He draws a box in the sand to represent his bedroom, then indicates the location of his bed and computer. "I would roll out of bed and immediately check my email," he says, moving his finger accordingly. "Some days I don't think I ever left the apartment. I wasn't checking my email just to see who was sending me email. It was more a matter of seeing if a particular problem had been fixed. It was more like, What new exciting issue do we have today? Or, if we had a problem, who had a solution?"
Linus tells me that his social life at the time was "pathetic." Then he figures that sounds too pathetic, so he amends it: "Let's say it was one notch above pathetic."
"I didn't become a total complete recluse," he says, "but even though Linux was happening, I was still as antisocial as I had ever been. You noticed that I never contact people by phone. It's always been true. I never call. Most people who are my friends are the kind of people who contact people, and I'm not. You can imagine what that's like for dating, if you never call the woman. So during that time I had a few friends who just came knocking on my window, wanting to come in for a cup of tea. I don't think anybody could really tell the difference at that time -- Oh, he's doing something really big and important and someday he'll change the world. I don't think anybody really thought anything of the sort.
Linus's single regular social event in those days was the weekly Spektrum meeting, where he mingled with other science majors. These social encounters created far more anxiety than anything connected with technology.
"What was I worrying about? Just social life in general. Maybe worry is the wrong word, there was more emotional impact. Just thinking about girls. Linux wasn't that important to me at the time. To some degree, it still isn't. To some degree I can still ignore it.
"In those early years at the university, the social thing was very important. It wasn't as if I worried about my hunchback and people laughing about it. It was more like wanting to have friends and things. One of the reasons I liked Spektrum so much was that it was a framework for being social without having to be social. That was the evening I was social and every other evening I sat in front of the computer. It was much more of an emotional thing than Linux ever was. Linux was never something I got really upset about. I never lost any sleep over Linux.
"The things that I got really upset about, and what still makes me upset, is not the technology per se but the social interactions around it. One of the reasons I got so upset about Andrew Tanenbaum's posting was not so much the technical issues he was raising. If it had been anybody else, I would have just blown it off. The problem was that he was posting it to the mailing list and making me ... I was concerned about my social standing with those people and he was attacking it.
"One of the things that made Linux good and motivational was the feedback I was getting. It meant that Linux mattered and was a sign of my being in a social group. And I was the leader of the social group. There's no question that was important, more important than even telling my Mom and Dad what I was doing. I was more concerned about the people who were using Linux. I had created a social circle and had the respect of those people. That's not how I thought of it at the time, and it's still not how I think of it. But it must be the most important thing. That's why I reacted so strongly to Andrew Tanenbaum.
The sun begins its descent into the Pacific and it's time to leave the beach. Linus insists that I drive his car home -- to see how well it responds -- and that we take the long and winding way, Route 9, back to Silicon Valley.
Linus says the flamefest with the Minix creator eventually moved into private email because it had become too nasty to be public. It was quiet for a few months. Then Tanenbaum emailed Linus to direct him to a five-line ad in the back of Byte magazine for somebody's commercial version of Linux.
"The last email I got from Andrew was him asking me if this is really what I wanted to do, have somebody selling my work. I just sent him an email back saying Yes, and I haven't heard from him since," he says.
Maybe a year later, when Linus was in the Netherlands for his first public speech, he made his way to the university where Tanenbaum taught, hoping to get him to autograph his copy of Operating Systems: Design and Implementation, the book that had changed his life. Linus waited outside Tanenbaum's office, but the professor never emerged. He was out of town at the time, so the two never met.
The hotel room was only slightly above freezing as I lay in bed, shivering, the night before my first speech. In the Netherlands they don't heat places like they do in Finland, and this drafty room even had huge single-pane windows, as if it were meant to be occupied only in the summer. But the coldness wasn't the only thing keeping me awake on the night of November 4, 1993. I was nervous beyond belief.
Public speaking had always been a rough spot for me. In school they made us give presentations about something we had heavily researched -- rats or whatever -- and I always found it impossible to do. I would stand up there, unable to talk, and just start giggling. And trust me, I'm not a giggler. It was even uncomfortable when I had to go up to the blackboard to show the class how I figured out a problem.
But there I was in Ede, Netherlands, an hour's train ride from Amsterdam, because I had been invited to be a speaker at the tenth anniversary of the Netherlands Unix Users Group. I wanted to prove to myself that I could do this. A year earlier I had been asked to speak before a similar organization in Spain, but declined because my fear of public speaking was greater than my desire to travel. And back then, I really loved to travel. (I still like traveling, but it's not nearly the novelty it was for a kid who had barely been out of Finland. The only places I had ever been were Sweden, where we took a few camping vacations, and Moscow, where we visited my dad when I was about six years old.)
It sort of bothered me that I had blown the chance to visit Spain, so I convinced myself that I would accept the next speaking invitation that came along. But I was having second thoughts as I lay in bed, wondering if I would ever overcome my fear of getting up in front of large groups of people, worrying that I would be unable to open my mouth, or, worse, that I would lapse into giggles before the 400 members of the audience.
That's right, I was a mess.
I told myself the usual stuff. That the audience wants you to succeed, that they wouldn't be there in the first place if they didn't like you, and that I certainly knew the topic: the reasons behind the various technical decisions in the writing of the Linux kernel, the reasons for making it open source. Still, I was unconvinced that the speech would be a success, and my mind chugged along like an unstoppable freight-train engine. I literally was shaking in bed -- and the frigid air was the least of it.
The speech? Well, the audience was sympathetic to the obviously frightened soul standing before them, clinging to his PowerPoint slides (thank God for Microsoft) like a life preserver, and then haltingly answering their questions. Actually, the question-and-answer session was the best part. After my speech -- such as it was -- Marshall Kirk McKusick, who was instrumental in BSD Unix, came up to me and told me he found my speech interesting. I was so grateful for the gesture, I felt like getting down on my knees and kissing his feet. There are few people I look up to in computers, and Kirk is one of them. It's because he was so nice to me after that first speech.
My first speech was like shock treatment. So were the ones that followed. But they started making me more self-confident.
David keeps asking me how my stature at the university changed as Linux grew bigger. But I wasn't aware of any professors even mentioning it, or any other students pointing me out to their friends. Nothing like that. People around me at the university knew about Linux, but actually most of the hackers involved in it were from outside of Finland.
In the fall of 1992 I had been made a teaching assistant for Swedish-language classes in the Computer Sciences department. (Here's how that happened: They needed Swedish-speaking TAs for the basic computer courses. There were only two Swedish-speaking computer science majors who had started at the university a few years earlier: Lars and Linus. There wasn't much of a choice.) At first I was afraid to even go up to the blackboard and work on problems, but it didn't take long for me to just concentrate on the material and not worry about embarrassing myself. By the way, three years later I was promoted to research assistant, which meant that instead of getting paid for teaching I was paid to work in the computer lab, which mostly meant doing development work on Linux. It was the start of a trend: having someone else pay me to do Linux. That's basically what happens at Transmeta.
David: "So when did it start becoming a big deal?"
Me: "It's still not a big deal."
Okay, I'll amend that. It started becoming more of a deal when it became clear how many people depend on Linux as something other than a toy operating system. When they started using it for more than just tinkering around, I realized that if something goes wrong, I'm responsible. Or at least I started feeling responsible. (I still do.) During 1992 the operating system graduated from being mostly a game to something that had become integral to people's lives, their livelihoods, commerce.
The shift occurred in the spring of 1992, about a year after I had started terminal emulation, when the first version of the X windowing system ran under Linux. That meant the operating system was capable of supporting a graphical user interface and that users could work in multiple windows simultaneously, thanks to the X windowing project, which had its origins at MIT. It was a big change. I remember that I had joked with Lars about it, around a year before it actually happened, telling him that someday we would run X and be able to do it all. I never thought it would happen that quickly. A hacker named Orest Zborowski was able to port X to Linux.
The X window system works by way of the X server, which does all the graphics. The server talks to the clients, which are the programs that say, "I want a window and I want it this big." The communication goes through a layer called sockets -- more formally, Unix domain sockets. That's how you communicate internally within Unix, but you also use sockets to communicate over the Internet. So Orest wrote the first socket layer for Linux just to port X to it. His socket interface was kind of tacked on and not integrated with the other code. It was a situation in which I agreed to the patch because we needed it, even though it was fairly raw.
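For the curious, here's roughly what that local plumbing looks like from a client's side -- a minimal Unix domain socket sketch of my own, not Orest's code. The socket path and message are made up, though a real X client connects to a path like /tmp/.X11-unix/X0 and speaks the X protocol over it:

    /* Minimal Unix domain socket client: an illustrative sketch,
     * not Orest's code. An X client reaches the X server over
     * exactly this kind of local socket, sending X protocol
     * requests instead of plain text. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/demo.sock", sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");      /* fails unless some server is listening */
            return 1;
        }
        const char *msg = "I want a window and I want it this big";
        write(fd, msg, strlen(msg));/* stand-in for an X request */
        close(fd);
        return 0;
    }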
It took me a while to get used to the notion that we had a graphical user interface. I don't think I even used it on a daily basis for the first year or so. And these days I can't live without it. There are always a ton of windows up when I work.
Orest's contribution not only enabled us to have windows, but it also opened a big door for the future. The domain sockets were used for the local networking that enables the X windowing system to operate. We could build on those same sockets to enable Linux to make the major leap to external networking -- to have the ability to link computers. Without networking, Linux was usable only for people who sat at home and used a modem to dial up somewhere, or who did everything locally. With great optimism, we started developing Linux networking on top of those original sockets, even though they hadn't been meant for networking at all.
I was so confident that we could easily do it that I made a leap in the version-numbering scheme. In March 1992 I had planned to release version 0.13. Instead, with the graphical user interface in place, I felt confident that we were maybe 95 percent of the way to our goal of releasing a full-fledged, reliable operating system, and one with networking. So I named the new release version 0.95.
Boy, was I premature. Not to mention clueless.
Networking is nasty business, and it ended up taking almost exactly two years to get it right, to a form where it could be released. When you add networking, you suddenly introduce a host of new issues. There are security issues. You don't know who's out there and what they want to do. You have to be really careful that people don't crash your machine by sending it bad junk packets. You're not in control of who's trying to contact your machine anymore. Also, a lot of people have very different setups. With TCP/IP, the networking standard, it's difficult to get all the time-outs right. It felt as if the process would drag on forever. By the end of 1993 we had an almost usable networking capability, although some people had serious problems getting it to work. We couldn't handle networks that didn't have 8-bit boundaries.
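To give a flavor of the fiddly details involved -- this is my own toy example, not code from the kernel's TCP/IP stack -- here's how a program can guard against one kind of time-out problem today, telling the kernel to give up on a silent peer instead of waiting forever:

    /* A toy illustration of networking time-outs, not kernel code:
     * set a five-second receive time-out on a TCP socket so a
     * blocking recv() fails instead of hanging on a dead peer. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };
        if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
            perror("setsockopt");
        /* From here on, a blocking recv() on fd returns an error
         * after five seconds instead of waiting forever. */
        return 0;
    }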
Because I had been overly optimistic in the naming of version 0.95, I was caught in a bind. Over the course of the two years it took to get version 1.0 out the door, we were forced to do some crazy things with numbers. There aren't many numbers between 95 and 100, but we continually released new versions based on bug fixes or added functions. By the time we got to version 0.99, we had to start adding numbers to indicate patch levels, and then we relied on the alphabet. At one point we had version 0.99, patch level 15A. Then version 0.99, patch level 15B, and so on. We made it all the way to patch level 15Z. Patch level 16 became version 1.0, the point where it was usable. This was released in March 1994 with great fanfare at the University of Helsinki Computer Sciences Department auditorium.
The period leading up to it had been kind of chaotic, but nothing could put a dent in Linux's popularity. We had our own Internet newsgroup, comp.os.linux, which grew out of the ashes of my flamefest with Andrew Tanenbaum. And it was attracting hordes. Back then the Internet Cabal, the folks who more or less ran the Internet, kept unofficial monthly statistics on how many readers each newsgroup attracted. They weren't reliable statistics, but they were the best information available on the popularity of your site -- in this case, how many people were interested in Linux. Of all the newsgroups, alt.sex was the perennial favorite. (Not my particular favorite. Although I did check it out once or twice to see what the fuss was about, I was pretty much your typical undersexed nerd, more eager to play with my floating point processor than to keep abreast of the latest reports from the sexuality front -- newly discovered lovemaking positions or reports from heavy petters or whatever else it was that so many people were talking about on alt.sex.)
With the Cabal's monthly statistics, I could easily track the popularity of comp.os.linux. And trust me, I kept track. (While I might be somebody's idea of a folk hero, I've never been the selfless, ego-free, techno-lovechild the hallucinating press insists I am.) By the fall of 1992, the estimates for our newsgroup were on the order of tens of thousands of people. That many people followed the newsgroup to see what was going on, but they weren't all using Linux. Every month, when the statistics came out, there was a summary report of the forty most popular newsgroups. If your newsgroup didn't make it to the top forty, you could fetch the full report on other newsgroups' popularity from a special maintenance newsgroup. Usually I had to go find the full report.
The Linux newsgroup kept creeping up the chart. At one point it made the top forty and I was happy. That was pretty cool. I seem to remember having written a fairly gloating article to comp.os.linux in which I basically listed the various os (operating system) newsgroups, including Minix, and said, "Hey look at this, we're more popular than Windows." Remember that, back then, people who liked Windows were not on the Internet. We made it to the top five sometime in 1993. I went to bed that night brimming with self-satisfaction, excited by the fact that Linux had become almost as popular as sex.
There certainly was no such matchup in my own little corner of the world. I truly did not have a life. By this time, as I mentioned earlier, Peter Anvin had organized an online collection that generated $3,000 in donations to help me pay off my computer, which I did at the end of 1993. And for Christmas, I upgraded to a 486 DX2/66, which I used for many years. But this was my life: I ate. I slept. Maybe I went to university. I coded. I read a lot of email. I was kind of aware of friends getting laid more, but that was okay.
Quite frankly, most of my friends were losers, too.
The speech in Ede almost convinced me that I could survive anything, even something as terrifying as standing up before a group of total strangers and being the focus of their attention. My confidence was slowly building in other areas, too. I was being forced to make quick decisions regarding Linux fixes and upgrades, and with each decision I felt increasingly comfortable in my role as leader of a growing community. The technical decisions had never been a problem; the problem was figuring out how to tell one person that I preferred another's suggested changes -- and being diplomatic about it. Sometimes it was as simple as saying, "So-and-so's fixes are working fine. Why don't we just go with those?"
I never saw the point of accepting anything other than what I thought was the best technical solution being presented. It was a way of keeping from taking sides when two or more programmers offered competing patches. Also, although I didn't think of it this way at the time, it was a way of getting people to trust me. And the trust compounds. When people trust you, they take your advice.
Of course you have to establish a foundation for all the trust. I guess that started not so much when I wrote the Linux kernel as when I posted it to the Internet, opening it up to anyone who wanted to join in and add the functions and details they liked, with me making ultimate decisions regarding the guts of the operating system.
Just as I never planned for Linux to have a life outside my own computer, I also never planned to be the leader. It just happened by default. At some point a core group of five developers started generating most of the activity in the key areas of development. It made sense for them to serve as the filters and hold the responsibility for maintaining those areas.
I did learn fairly early that the best and most effective way to lead is by letting people do things because they want to do them, not because you want them to. The best leaders also know when they are wrong, and are capable of pulling themselves out. And the best leaders enable others to make decisions for them.
Let me rephrase that. Much of Linux's success can be attributed to my own personality flaws: 1) I'm lazy; and 2) I like to get credit for the work of others. Otherwise the Linux development model, if that's what people are calling it, would still be limited to daily email messages among a half-dozen geeks, as opposed to an intricate web of hundreds of thousands of participants relying on mailing lists and developers' conventions and corporate sponsorship in maybe 4,000 projects that are taking place at any one time. At the top, arbitrating disputes over the operating system's kernel, is a leader whose instinct is, and has always been, not to lead.
And things work out for the best. I divested myself of things that didn't hold much interest for me. The first of these was the user level, the external parts of the system that end users deal with directly, as opposed to the deep-down, internal code. First somebody volunteers to maintain it. Then the process for maintaining all the subsystems becomes organic. People know who has been active and who they can trust, and it just happens. No voting. No orders. No recounts.
If two people are maintaining similar kinds of software drivers, for example, I'll sometimes accept the work from both of them and see which one ends up getting used. Users tend to lean on one versus the other. Or, if you let both maintainers work it out, they may end up evolving in different directions and their contributions end up having very distinct uses.
What astonishes so many people is that the open source model actually works.
I guess it helps to understand the mentality of hackers in the free software universe. (By the way, I usually try to avoid the term "hacker." In personal conversations with technical people, I would probably call myself a hacker. But lately the term has come to mean something else: underage kids who have nothing better to do than sit around electronically breaking into corporate data centers, when they should be out volunteering at their local libraries or, at the very least, getting themselves laid.)
The hackers -- programmers -- working on Linux and other open source projects forego sleep, Stairmaster workouts, their kids' Little League games, and yes, occasionally, sex, because they love programming. And they love being part of a global collaborative effort -- Linux is the world's largest collaborative project -- dedicated to building the best and most beautiful technology that is available to anyone who wants it. It's that simple. And it's fun.
Okay, I'm starting to sound like a press release with all this shameless self-promotion. Open source hackers aren't the high-tech counterparts of Mother Teresa. They do get their names associated with their contributions in the form of the "credit list" or "history file" that is attached to each project. The most prolific contributors attract the attention of employers who troll the code, hoping to spot, and hire, top programmers. Hackers are also motivated, in large part, by the esteem they can gain in the eyes of their peers by making solid contributions. It's a significant motivating factor. Everybody wants to impress their peers, improve their reputation, elevate their social status. Open source development gives programmers the chance.
Needless to say, I was spending most of the year 1993 like I had spent most of 1992, 1991, et cetera: hunched over a computer. That was about to change.
Following in the academic footsteps of my grandfather, I was a teaching assistant at the University of Helsinki, assigned to the fall semester of the Swedish-language "Introduction to Computer Sciences" course. That's how I met Tove. She had more of an impact on my life than even Andrew Tanenbaum's book, Operating Systems: Design and Implementation. But I won't bore you with too many details.
Tove was one of fifteen students in my course. She had already received a degree in preschool education. She wanted to study computers, too, but wasn't progressing as quickly as the rest of the class. She eventually caught up.
The course was so basic -- this was the fall of 1993, before the popularity of the Internet -- that my homework assignment for the class one day was to send me email. It sounds absurd today, but I said: "For homework, send me email."
Other students' emails contained simple test messages, or unmemorable notes about the class.
Tove asked me out for a date.
I married the first woman to approach me electronically.
Our first date never ended. Tove was a preschool teacher and six-time Finnish karate champion who had emerged from a functional family, although that's how I'd describe any family that was not as quirky as mine. She had a lot of friends. She felt like the right woman for me from the very first moment we got together. (I'll spare you the elaboration.) Within a few months Randi the cat and I had moved into her minuscule apartment.
For the first two weeks, I didn't even bother bringing over my computer. Not counting my army service, those two weeks were the longest span of time that I had been away from a computer since I had been eleven years old and sitting on my grandfather's lap. Not to dwell on this, but it still holds the record for being my biggest stretch -- as a civilian -- without a CPU. Somehow, I managed (again, the details aren't interesting). My mother, the few times I saw her then, would mutter something about "a triumph of Mother Nature." I think my sister and father were just stunned.
Soon, Tove went out and got a cat to keep Randi company. Then we settled into a nice pattern of spending evenings alone or with friends, waking up at 5 A.M. so she could get to her job and I could go to the university early, before anyone would be there to disturb me, and read my Linux email.