25

May 31, 2024

“Brian Delaney — have you been working here all night? You promised it would just be a few minutes more when I left you here last night. And that was at ten o’clock.” Shelly stamped into the lab radiating displeasure.

Brian rubbed his fingers over rough revelatory whiskers, blinked through red-rimmed guilty eyes. Equivocated.

“What makes you think that?”

Shelly flared her nostrils. “Well, just looking at you reveals more than enough evidence. You look terrible. In addition to that, I tried to phone you and there was no answer. As you can imagine, I was more than a little concerned.”

Brian grabbed at his belt where he kept his phone — it was gone. “I must have put it down somewhere, didn’t hear it ring.”

She took out her own phone and hit the memory key to dial his number. There was a distant buzzing. She tracked it down beside the coffeemaker. Returned it to him in stony silence.

“Thanks.”

“It should be near you at all times. I had to go looking for your bodyguards — they told me you were still here.”

“Traitors,” he muttered.

“They’re as concerned as I am. Nothing is so important that you have to ruin your health for it.”

“Something is, Shelly, that’s just the point. You remember when you left last night, the trouble we were having with the new manager program? No matter what we did yesterday the system would simply curl up and die. So then I started it out with a very simple program of sorting out colored blocks, then complicated it with blocks of different shapes as well as colors. The next time I looked, the manager program was still running — but all the other parts of the program seemed to have shut down. So I recorded what happened when I tried it again, and this time installed a natural language trace program to record all the manager’s commands to the other subunits. This slowed things down enough for me to discover what was going on. Let’s look at what happened.”

He turned on the recording he had made during the night. The screen showed the AI rapidly sorting colored blocks, then slowing — then barely moving until it finally stopped completely. The deep bass voice of Robin 3 poured rapidly from the speaker.

“…K-line 8997, response needed to input 10983 — you are too slow — respond immediately — inhibiting. Selecting subproblem 384. Response accepted from K-4093, inhibiting slower responses from K-3724 and K-2314. Selecting subproblem 385. Responses from K-2615 and K-1488 are in conflict — inhibiting both. Selecting…”

Brian switched it off. “Did you understand that?”

“Not really. Except that the program was busy inhibiting things — ”

“Yes, and that was its problem. It was supposed to learn from experience, by rewarding successful subunits and inhibiting the ones that failed. But the manager’s threshold for success had been set so high that it would accept only perfect and instant compliance. So it was rewarding only the units that responded quickly, and disconnecting the slower ones — even if what they were trying to do might have been better in the end.”

“I see. And that started a domino effect because as each subunit was inhibited, that weakened the other units’ connections to it?”

“Exactly. And then the responses of those other units became slower until they got inhibited in turn. Before long the manager program had killed them all off.”
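A minimal Python sketch may make the cascade Brian describes concrete; the class names, the deadline value, and the slowdown factor are all invented for illustration and are not part of the novel’s LAMA-5 system:

```python
import random

random.seed(2)  # reproducible run of the illustrative cascade

class Subunit:
    def __init__(self, name, response_time):
        self.name = name
        self.response_time = response_time  # seconds to answer a subproblem
        self.strength = 1.0                 # connection weight; 0 = inhibited

class Manager:
    # The threshold is set "so high that it would accept only perfect
    # and instant compliance" -- the bug Brian found.
    def __init__(self, subunits, deadline=0.1):
        self.subunits = subunits
        self.deadline = deadline

    def step(self):
        active = [u for u in self.subunits if u.strength > 0]
        for unit in active:
            if unit.response_time > self.deadline:
                # Disconnect the slow unit...
                unit.strength = 0.0
                # ...which weakens the other units' connections and so
                # slows their own responses in turn: the domino effect.
                for other in active:
                    if other is not unit:
                        other.response_time *= 1.5
        return [u for u in self.subunits if u.strength > 0]

units = [Subunit(f"K-{i}", random.uniform(0.01, 0.2)) for i in range(10)]
manager = Manager(units)
for tick in range(20):
    if not manager.step():
        print(f"tick {tick}: all subunits inhibited -- system shut down")
        break
```

With acceptance set this strictly, each inhibition slows every survivor, so the survivors miss the deadline in turn and the whole population winds down to nothing.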

“What a horrible thought! You are saying, really, that it committed suicide.”

“Not at all.” His voice was hoarse, fatigue abraded his temper. “When you say that, you are just being anthropomorphic. A machine is not a person. What on earth is horrible about one circuit disconnecting another circuit? Christ — there’s nothing here but a bunch of electronic components and software. Since there are no human beings involved nothing horrible can possibly occur, that’s pretty obvious—”

“Don’t speak to me that way or use that tone of voice!”

Brian’s face reddened with anger, then he dropped his eyes. “I’m sorry, I take that back. I’m a little tired, I think.”

“You think — I know. Apology accepted. And I agree, I was being anthropomorphic. It wasn’t what you said to me — it was how you said it. Now let’s stop snapping at each other and get some fresh air. And get you to bed.”

“All right — but let me look at this first.”

Brian went straight to the terminal and proceeded to retrace the robot’s internal computations. Chart after chart appeared on the screen. Eventually he nodded gloomily. “Another bug of course. It only showed up after I fixed the last one. You remember, I set things up to suppress excessive inhibition, so that the robot would not spontaneously shut itself down. But now it goes to the opposite extreme. It doesn’t know when it ought to stop.

“This AI seems to be pretty good at answering straightforward questions, but only when the answer can be found with a little shallow reasoning. But you saw what happened when it didn’t know the answer. It began random searching, lost its way, didn’t know when to stop. You might say that it didn’t know what it didn’t know.”

“It seemed to me that it simply went mad.”

“Yes, you could say that. We have lots of words for human-mind bugs — paranoias, catatonias, phobias, neuroses, irrationalities. I suppose we’ll need new sets of words for all the new bugs that our robots will have. And we have no reason to expect that any new version should work the first time it’s turned on. In this case, what happened was that it tried to use all of its Expert Systems together on the same problem. The manager wasn’t strong enough to suppress the inappropriate ones. All those jumbles of words showed that it was grasping at any and every association that might conceivably have guided it toward the problem it needed to solve — no matter how unlikely on the face of it. It also showed that when one approach failed, the thing didn’t know when to give up. Even if this AI worked, there is no rule that it had to be sane on our terms.”

Brian rubbed his bristly jaw and looked at the now silent machine. “Let’s look more closely here.” He pointed to the chart on the screen. “You can see right here what happened this time. In Rob-3.1 there was too much inhibition, so everything shut down. So I changed these parameters and now there’s not enough inhibition.”

“So what’s the solution?”

“The answer is that there is no answer. No, I don’t mean anything mystical. I mean that the manager here has to have more knowledge. Precisely because there’s no magic, no general answer. There’s no simple fix that will work in all cases — because all cases are different. And once you recognize that, everything is much clearer! This manager must be knowledge-based. And then it can learn what to do!”

“Then you’re saying that we must make a manager to learn which strategy to use in each situation, by remembering what worked in the past?”

“Exactly. Instead of trying to find a fixed formula that always works, let’s make it learn from experience, case by case. Because we want a machine that’s intelligent on its own, so that we don’t have to hang around forever, fixing it whenever anything goes wrong. Instead we must give it some ways to learn to fix new bugs as soon as they come up. By itself, without our help.”
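A sketch of the case-by-case learning Brian proposes, assuming a simple success-rate score; the strategy names and situation labels here are hypothetical:

```python
from collections import defaultdict

class LearningManager:
    """Illustrative knowledge-based manager: rather than one fixed
    inhibition formula, it remembers which strategy worked in which
    kind of situation and prefers the best track record."""

    def __init__(self, strategies):
        self.strategies = strategies
        # outcomes[(situation, strategy)] -> [successes, attempts]
        self.outcomes = defaultdict(lambda: [0, 0])

    def choose(self, situation):
        def success_rate(strategy):
            wins, tries = self.outcomes[(situation, strategy)]
            # Untried strategies get an optimistic prior, so they are
            # still explored instead of being inhibited from the start.
            return (wins + 1) / (tries + 2)
        return max(self.strategies, key=success_rate)

    def record(self, situation, strategy, succeeded):
        entry = self.outcomes[(situation, strategy)]
        entry[1] += 1
        if succeeded:
            entry[0] += 1

manager = LearningManager(["sort-by-color", "sort-by-shape", "defer"])
manager.record("blocks:mixed", "sort-by-color", succeeded=False)
manager.record("blocks:mixed", "sort-by-shape", succeeded=True)
print(manager.choose("blocks:mixed"))  # -> sort-by-shape
```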

“So now I know just what to do. Remember when it seemed stuck in a loop, repeating the same things about the color red? It was easy for us to see that it wasn’t making any progress. It couldn’t see that it was stuck, precisely because of being stuck. It couldn’t jump out of that loop to see what it was doing on a larger scale. We can fix that by adding a recorder to remember the history of what it has been doing recently. And also a clock that interrupts the program frequently, so that it can look at that recording to see if it has been repeating itself.”

“Or even better we could add a second processor that is always running at the same time, looking at the first one. A B-brain watching an A-brain.”

“And perhaps even a C-brain to see if the B-brain has got stuck. Damn! I just remembered that one of my old notes said, ‘Use the B-brain here to suppress looping.’ I certainly wish I had written clearer notes the first time around. I better get started on designing that B-brain.”
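One way to sketch the recorder, the clock interrupt, and the B-brain together; the history size, check period, and simple repetition test are all assumptions for illustration:

```python
from collections import deque

HISTORY_SIZE = 12
CHECK_EVERY = 4  # the "clock interrupt" period, in actions

history = deque(maxlen=HISTORY_SIZE)  # the recorder of recent actions

def b_brain_is_stuck(history, period=3):
    # The B-brain's test: stuck if the most recent actions repeat an
    # earlier short cycle, e.g. A,B,C,A,B,C,...
    actions = list(history)
    if len(actions) < 2 * period:
        return False
    return actions[-period:] == actions[-2 * period:-period]

def a_brain_act(step):
    # Stand-in for the real A-brain: after step 5 it falls into a loop,
    # "repeating the same things about the color red."
    if step <= 5:
        return f"sort block {step}"
    loop = ["examine red block", "report red", "examine again"]
    return loop[step % 3]

for step in range(30):
    history.append(a_brain_act(step))
    if step % CHECK_EVERY == 0 and b_brain_is_stuck(history):
        print(f"step {step}: B-brain detects a loop -- interrupting A-brain")
        break
```

The A-brain cannot see its own loop from inside it; the B-brain can, because it examines the recording rather than running the stuck computation.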

“But you’d better not do it now! In your present state, you’ll just make it worse.”

“You’re right. Bedtime. I’ll get there, don’t worry — but I want to get something to eat first.”

“I’ll go with you, have a coffee.”

Brian let them out and blinked at the bright sunshine. “That sounds as though you don’t trust me.”

“I don’t. Not after last night!”

Shelly sipped at her coffee while Brian worked his way through a Texas breakfast — steak, eggs and flapjacks. He couldn’t quite finish it all, sighed and pushed the plate away. Except for two guards just off duty, sitting at a table on the far wall, they were alone in the mess hall.

“I’m feeling slightly less inhuman,” he said. “More coffee?”

“I’ve had more than enough, thank you. Do you think that you can fix your screw-loose AI?”

“No. I was getting so annoyed at the thing that I’ve wiped its memory. We will have to rewrite some of the program before we load it again. Which will take a couple of hours. Even LAMA-5’s assembler takes a long time on a system this large. And this time I’m going to make a backup copy before we run the new version.”

“A backup means a duplicate. When you do get a functioning humanoid artificial intelligence — do you think that you will be able to copy it as well?”

“Of course. Whatever it does — it will still just be a program. Every copy of a program is absolutely identical. Why do you ask?”

“It’s a matter of identity, I guess. Will the second AI be the same as the first?”

“Yes — but only at the instant it is copied. As soon as it begins to run, to think for itself, it will start changing. Remember, we are our memories. When we forget something, or learn something new, we produce a new thought or make a new connection — we change. We are someone different. The same will apply to an AI.”

“Can you be sure of that?” she asked doubtfully.

“Positive. Because that is how mind functions. Which means I have a lot of work to do in weighting memory. It’s the same reason why so many earlier versions of Robin failed. The credit assignment problem that we talked about before. It is really not enough to learn just by short-term stimulus-response-reward methods — because this will solve only simple, short-term problems. Instead, there must be a larger scale reflective analysis, in which you think over your performance on a longer scale, to recognize which strategies really worked, and which of them led to sidetracks, moves that seemed to make progress but eventually led to dead ends.”
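The contrast Brian draws between short-term reward and longer-scale reflective analysis might be sketched like this; the episode format, the decay factor, and the strategy names are assumptions:

```python
def immediate_credit(episode):
    # Short-term stimulus-response method: each step is judged only
    # by the local reward it produced at the time.
    return {step: reward for step, (strategy, reward) in enumerate(episode)}

def reflective_credit(episode, reached_goal):
    # Reflective analysis: look back over the whole performance and
    # assign credit by how each strategy related to the final outcome,
    # discounting moves that were merely local progress.
    credit = {}
    weight = 1.0 if reached_goal else -1.0
    for strategy, _local_reward in reversed(episode):
        credit[strategy] = credit.get(strategy, 0.0) + weight
        weight *= 0.9  # earlier steps earn less of the final credit
    return credit

# One recorded episode: (strategy used, local reward seen at the time).
episode = [("grab nearest block", 1.0),    # looked good locally...
           ("stack on red pile", 1.0),     # ...but was a sidetrack
           ("restart sort by shape", 0.0), # locally unrewarding...
           ("finish sorted stack", 1.0)]   # ...yet led to the goal

print(immediate_credit(episode))
print(reflective_credit(episode, reached_goal=True))
```

The local scoring rewards the sidetrack as highly as the winning move; only the retrospective pass distinguishes them.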

“You make the mind sound like — well — an onion!”

“It is.” He smiled at the thought. “A good analogy. Layer within layer and all interconnected. Human memory is not merely associative, connecting situations, responses and rewards. It is also prospective and reflective. The connections made must also be involved with long-range goals and plans. That is why there is this important separation between short-term and long-term memory. Why does it take about an hour for anything to transfer into long-term memory? Because there must be a buffer period to decide which behaviors actually were beneficial enough to record.”
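The buffer period he describes could be sketched as a holding queue that commits only sufficiently beneficial events; the hold interval, benefit threshold, and record format are assumed for illustration:

```python
import time

HOLD_SECONDS = 3600.0    # "about an hour"; shrink for testing
BENEFIT_THRESHOLD = 0.5  # how beneficial an event must prove to be kept

short_term = []  # (timestamp, event, benefit) awaiting evaluation
long_term = []   # consolidated memories

def remember(event, benefit, now=None):
    stamp = now if now is not None else time.time()
    short_term.append((stamp, event, benefit))

def consolidate(now=None):
    now = now if now is not None else time.time()
    still_pending = []
    for stamp, event, benefit in short_term:
        if now - stamp < HOLD_SECONDS:
            still_pending.append((stamp, event, benefit))  # still buffering
        elif benefit >= BENEFIT_THRESHOLD:
            long_term.append(event)  # proved beneficial: record it
        # otherwise the memory is simply allowed to fade
    short_term[:] = still_pending

remember("sorted blocks by shape", benefit=0.9, now=0.0)
remember("random search on red", benefit=0.1, now=0.0)
consolidate(now=4000.0)
print(long_term)  # -> ['sorted blocks by shape']
```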

Sudden fatigue hit him. The coffee was cold; his head was beginning to ache; depression was closing in. Shelly saw this, lightly touched his hand.

“Time to retire,” she said. He nodded sluggish agreement and struggled to push back the chair.
